Scrapy-based Incremental Housing Rental Information Crawling System Design
Training a housing rental valuation model requires a massive data set, and crawlers built directly on the Scrapy framework typically resort to repeated site-wide crawls and frequent database accesses. To address these problems, a web-controlled incremental crawling system for property rental information is designed. Incremental crawling is achieved by adding a download middleware to the Scrapy framework: when the crawler starts, the system loads the seed pages, the list of visited URLs with their hashes, and the control-page list; it then extracts the URLs of sub-level pages, records them in the database, crawls the sub-level pages in bulk, and parses the property information they contain. The data is cleaned by verifying data formats, completing missing items, removing duplicates, and detecting abnormal data, yielding property data that meets the requirements.
Overview
A large data set of rental amounts is required to train a machine-learning-based rental assessment model so that the assessment results represent the market more objectively and realistically. Rent data is influenced by time, location, property attributes, and price. Besides the specific rental data found in transaction records and advertised prices, the data set can also be obtained by searching or crawling rental websites. The system is designed to crawl data from housing rental websites, extract it, filter it, and integrate the encapsulated data to serve as the data set for the evaluation system. To avoid repeatedly crawling part of a website's content when doing site-wide crawling, the website is crawled incrementally [1]. Incremental crawling means that only the new and changed parts of the content are crawled [2], which significantly reduces the access pressure on the crawled website, shortens crawling time, and reduces the storage-space requirements of the crawler system. Scrapy [4] is a mature Python-based [3] open-source web crawler framework that provides a clean and powerful way to build and deploy web crawlers. Using it to develop web crawler software systems [5] reduces the development of repetitive function modules, such as managing crawl queues and their access and controlling the crawling process. Therefore, designing an incremental housing rental information crawling system based on Scrapy has practical engineering value.
System working principle
2.1 Crawling seed pages
Crawling seed pages is done to get the URLs from the seed page and put them in the database in preparation for obtaining the entity information. The page structure is found by inspecting the HTML source code of the target website's seed page. Scrapy selectors allow elements to be selected by CSS expressions or by XPath expressions; in practice, XPath expressions are the more capable choice, and they are the basis of Scrapy selectors.
Given the common situation where seed pages are paginated, the XPath expression for the "Next" button can be used to extract the URL of the next page and load it into the scheduler after the information on the current page has been obtained. Testing also shows that the "next page" and "last page" buttons share the same HTML code pattern and hence the same XPath expression. The extracted URLs need to be verified and corrected.
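As a minimal illustration of selecting the "Next" link with an XPath expression (the pager markup and the class name `next` are hypothetical; a real site's markup and XPath will differ). Scrapy's own selectors accept full XPath, while this standard-library sketch uses ElementTree's limited XPath subset:

```python
import xml.etree.ElementTree as ET

# Hypothetical pager markup; real rental sites differ.
html = '<div class="pager"><a class="next" href="/rent/pg2.html">Next</a></div>'

root = ET.fromstring(html)
link = root.find(".//a[@class='next']")  # XPath-style selection of the Next button
next_url = link.get("href")
```

In a Scrapy spider the equivalent would be `response.xpath('//a[@class="next"]/@href').get()`.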
Pattern check: detect whether the URL has a ".html" or ".htm" suffix, and whether it is a relative address whose missing part must be filled in.
Repeatability check: query the database to see whether the link already exists.
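A minimal sketch of the two checks, assuming Python; the suffix list and the in-memory `seen` set (standing in for the database query) are illustrative:

```python
from typing import Optional
from urllib.parse import urljoin

def normalize_url(url: str, base: str) -> Optional[str]:
    """Pattern check: keep only .html/.htm targets; complete relative addresses."""
    if not url.endswith((".html", ".htm")):
        return None
    return urljoin(base, url)  # absolute URLs pass through unchanged

def is_new(url: str, seen: set) -> bool:
    """Repeatability check: `seen` stands in for the database lookup."""
    if url in seen:
        return False
    seen.add(url)
    return True
```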
Crawling sub-level pages
The sub-level page is the target page of the crawl, mainly displaying the specific information of the rental property: time of release, district, name of the neighborhood, type of flat, size of the flat, door number, price, orientation, floor/total floors, decoration, lift, age of the property, and so on. As is to be expected, different websites do not display exactly the same fields, and even different listings on the same website may be incomplete, so data cleaning must be carried out at a later stage: missing non-critical information is filled in, while items missing critical information are discarded.
Data cleansing
The purpose of data cleansing is to ensure that the data is consistent and usable for subsequent operations.
(1). Type conversion and validation. HTML is essentially a plain-text language, so every extracted field arrives in string format; converting data types and verifying the validity of each field is therefore essential for outlier handling. For example, a valid time interval is set for the "posting time" field, and a value is considered valid only if it falls within the interval: prematurely old posting times and incorrect future posting times are removed. The "district" field is matched against a pre-set district database and is considered compliant only if it matches; "house size", "price", and "floor/total floors" are checked to be legal numbers. (2). Missing-item processing, according to the specific field meaning. If a key field such as "district", "block name", or "price" is missing, the data item is discarded; if a non-key field such as "release time", "house type", or "house area" is empty, it is set to None. (3). Duplicate processing before the item is packaged into the database. Because the same property is often listed for rent on more than one housing rental website, its key fields must be compared with the records already in the database to confirm whether it is the same property. If the key parameters "district", "block name", and "door number" are the same but the parameter "price" differs, the property is entered into the database; otherwise it is not.
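The three cleaning steps can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the district set, the field names, and the discard rules are assumptions based on the description above.

```python
from typing import Optional

KNOWN_DISTRICTS = {"Haidian", "Chaoyang"}         # stand-in for the district database
KEY_FIELDS = ("district", "block_name", "price")   # missing key fields discard the item
NON_KEY_FIELDS = ("release_time", "house_type", "house_area")

def clean_listing(item: dict) -> Optional[dict]:
    """Return a cleaned record, or None when a key field is missing or invalid."""
    # (1) Validate: key fields present, district in the database, price numeric.
    if any(not item.get(f) for f in KEY_FIELDS):
        return None
    if item["district"] not in KNOWN_DISTRICTS:
        return None
    try:
        price = float(item["price"])
    except (TypeError, ValueError):
        return None
    # (2) Missing non-key fields become None.
    cleaned = {"district": item["district"],
               "block_name": item["block_name"],
               "price": price}
    for f in NON_KEY_FIELDS:
        cleaned[f] = item.get(f) or None
    return cleaned
```

Duplicate handling (step 3) would then compare `district`, `block_name`, and the door number against existing database rows before insertion.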
Existing research
Scrapy already supports a form of incremental crawling: via command-line arguments, the URLs and request parameters of crawled pages are hashed [6] and stored persistently in a specified directory. Starting from the seed page, Scrapy applies the same hash operation to the URL and request parameters of each candidate page, compares the result with the persisted hashes to determine whether there is an update, and continues to download only if there is. However, in real-world engineering applications, finer-grained control is needed when deciding whether page content has changed and when bounding the scale of the crawl. Current research implements incremental crawling in two ways: (1). Using only the URLs crawled from the seed page to decide whether a page has been updated; this fails when the content of a sub-level page has changed but the content of the seed page has not. (2). Performing a site-wide crawl and then deduplicating; this causes data redundancy, significantly increases storage pressure and crawler load, and is only suitable for target sites with little web content.
The established approaches cannot balance the ability to perceive updates against storage redundancy. It is therefore of practical importance to design incremental crawling functionality, and in this paper we design a three-level page-control approach to incremental crawling.
System design
4.1 System flow
The goal of the system is to crawl housing rental information from rental websites, clean and filter the data, and integrate it into the database. The crawl proceeds as follows: (1). Set the URLs of the seed pages as the seed-page list.
(2). Crawl the seed pages and sub-level pages, fingerprint their URLs with the MD5 algorithm, record them in the visited list, and unify them in the hash list so that duplicates can be recognized; (3). Take the URL addresses from the visited list, crawl the corresponding pages, and parse the property details.
System deduplication
One of the key issues in crawl control is determining whether the page about to be crawled is new, which is judged from the form of its URL. However, URL lengths vary widely: URLs carrying access parameters can run to hundreds of characters, while URLs without parameters may be as short as about 20 characters. A good option is therefore to apply the MD5 algorithm, which maps any URL to a hexadecimal string of fixed 32-character length. MD5 results are almost collision-free.
The process is as follows: when a page is to be crawled, its URL is hashed with the MD5 algorithm into a 32-character hexadecimal string, and a Select query against the database list determines whether the page already exists; if it does not, the string is inserted into the database list, and the page's Item properties and data are extracted.
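The fingerprint-then-check flow can be sketched as follows; an in-memory set stands in for the database list that the Select query would consult:

```python
import hashlib

visited = set()  # in-memory cache standing in for the Select-queried database list

def url_fingerprint(url: str) -> str:
    """Map a URL of any length to a fixed 32-character hexadecimal string."""
    return hashlib.md5(url.encode("utf-8")).hexdigest()

def should_crawl(url: str) -> bool:
    fp = url_fingerprint(url)
    if fp in visited:
        return False     # already crawled: skip
    visited.add(fp)      # new page: record its fingerprint
    return True
```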
However, there are two problems with this approach.
(1). The whole site is inevitably crawled: whether URL deduplication is done in the spider class's parse() method or in the Pipeline object, the page has already been downloaded by the spider. (2). Each deduplication check requires a Select query against the database. When the database contents are large, the Select operation takes more time and memory, and because Scrapy crawls with multiple threads, these costs are multiplied. We improve on this by: • turning all crawling into incremental crawling; • using a cache instead of performing Select SQL operations on the database.
The best place for this logic is a download middleware between the engine and the downloader: if it were placed between the engine and the scheduler, the problem of unnecessary downloads would not be solved, because a page's internal content can change even though its URL has not. After the URL-based deduplication check, we therefore decide whether pages are duplicates based on their content, by comparing the length of the newly downloaded page with the recorded length. If the length has changed, the page has been updated; otherwise it has not.
Scale control
Since judging duplicates by the length of the page content requires downloading the page, the scale of the crawl must be controlled while efficiency is considered. The crawl scale is controlled via the seed pages: the list of seed pages of the site to be crawled is used as the control list, and since the number of controlled seed pages is small, the list can be loaded into the cache.
The seed page is the first-level page, its sub-level pages are second-level pages, and so on. When the crawl starts, the page-control list is loaded from the database to obtain the seed pages. The process_request() method of the download middleware runs before each page is crawled, so the crucial step is determining whether the URL of the current page appears in the page-control list, by comparing it with the URL recorded under each element of the list. If the URL is in the control list, the page is downloaded first to check whether its content has changed. If it has, the control list and the database are updated simultaneously and None is returned, because when process_request() returns None, Scrapy continues to download the current page and parse the new pages to be crawled. If the content is unchanged, an empty Response object is returned instead. Why an empty Response object? Because when process_request() returns a Response object, Scrapy does not download the current page; the crawler's parse() receives the empty Response, which stops the crawl from continuing down that branch. If the URL is not in the page-control list, the ordinary deduplication check is performed: a Response object with empty content is returned for duplicates, and None otherwise.
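The decision logic of process_request() can be sketched framework-independently. This is a hedged illustration, not Scrapy's API: `EmptyResponse` stands in for returning `scrapy.http.HtmlResponse` with an empty body, the `fetch` callable is injected in place of a real download, and all names are mine.

```python
from hashlib import md5

class EmptyResponse:
    """Stand-in for Scrapy's HtmlResponse with an empty body."""
    def __init__(self, url):
        self.url, self.body = url, b""

class IncrementalMiddleware:
    """Decision logic of process_request(): None means 'let the downloader
    proceed'; a response object means 'stop, this branch is unchanged'."""
    def __init__(self, fetch):
        self.fetch = fetch            # injected downloader (returns page bytes)
        self.control_list = {}        # seed-page URL -> last recorded body length
        self.visited_fp = set()       # MD5 fingerprints of crawled URLs

    def register_seed(self, url):
        self.control_list[url] = len(self.fetch(url))

    def process_request(self, url):
        if url in self.control_list:                  # control (seed) page
            body = self.fetch(url)                    # download to compare content
            if len(body) != self.control_list[url]:   # content changed
                self.control_list[url] = len(body)    # update list (and database)
                return None                           # continue crawling this page
            return EmptyResponse(url)                 # unchanged: cut this branch
        fp = md5(url.encode("utf-8")).hexdigest()     # ordinary page: URL dedup
        if fp in self.visited_fp:
            return EmptyResponse(url)                 # duplicate
        self.visited_fp.add(fp)
        return None                                   # new page: crawl it
```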
System optimization
When a page is rendered dynamically by JavaScript, downloading it and viewing the HTML source code reveals an "empty shell" with no real content. Two solutions exist: (1) pre-rendering the JavaScript; (2) using a headless browser. A headless browser is more compatible but less efficient, while a pre-rendering solution can process multiple pages in parallel and is therefore more efficient.
Scrapy-Splash is the official Scrapy team's solution to the JavaScript rendering problem. Splash is the module that handles the rendering of web pages; it uses the open-source WebKit browser engine internally and exposes the rendering service via an HTTP API. Page requests in Scrapy are still handled by the downloader, which actually requests the Splash interface and receives the rendered data.
Conclusions
This incremental crawling system was created on demand. Its crawl-control strategy deduplicates the current page before crawling, solving the problems of repeated site-wide crawling and low crawl efficiency. The logic is placed in a download middleware of the Scrapy framework, and by controlling the crawl scale at the same time, it increases efficiency and reduces load. The system also works well for extracting pages dynamically rendered with JavaScript.
"Computer Science"
] |
Comparing Several Gamma Means: An Improved Log-Likelihood Ratio Test
The two-parameter gamma distribution is one of the most commonly used distributions in analyzing environmental, meteorological, medical, and survival data. It has a two-dimensional minimal sufficient statistic, and the two parameters can be taken to be the mean and shape parameters. This makes it closely comparable to the normal model, but it differs substantially in that the exact distribution for the minimal sufficient statistic is not available. A Bartlett-type correction of the log-likelihood ratio statistic is proposed for the one-sample gamma mean problem and extended to testing for homogeneity of k≥2 independent gamma means. The exact correction factor, in general, does not exist in closed form. In this paper, a simulation algorithm is proposed to obtain the correction factor numerically. Real-life examples and simulation studies are used to illustrate the application and the accuracy of the proposed method.
Introduction
Consider a sample (x_1, . . . , x_n) from the two-parameter gamma model with mean µ and shape λ. The joint density is

f(x_1, . . . , x_n; µ, λ) = Γ^{−n}(λ) (λ/µ)^{nλ} exp{−(λ/µ)s + (λ − 1)t},   (1)

where (s, t) = (∑_{i=1}^{n} x_i, ∑_{i=1}^{n} log x_i) is a minimal sufficient statistic. The two-parameter gamma distribution is often used to model non-negative data with a right-skewed distribution. Moreover, depending on the values of the parameters, it can have a decreasing failure rate, a constant failure rate, or an increasing failure rate. This makes it a valuable model for analyzing data arising from engineering, environmental, meteorological, and medical studies.
Similar to the normal distribution, the two-parameter gamma distribution has a two-dimensional minimal sufficient statistic (s, t). Another version of the minimal sufficient statistic is (r, s), where r = log(s/n) − t/n is the log offset of the arithmetic mean from the geometric mean. Notice that the density of log x_i has location-model form for a fixed λ. It follows from [1] that the conditional density for s given r and the marginal density for r take the form

f(r; λ) = Γ(nλ) Γ^{−n}(λ) n^{−nλ+1/2} exp{−nλr} h_n(r),   (2)

where h_n(r) appears in the transformed measure, which requires (n − 2)-dimensional integration. Hence, the joint density for (s, r) is f(s, r; µ, λ) = f(s | r; µ, λ) f(r; λ).
By a change of variable, the joint density for (s, t) can then be obtained. The same result follows from the properties of the exponential transformation model in [2] or the conditional argument in [3]. Note that h_n(·) requires (n − 2)-dimensional integration and is available exactly only for small values of n (see [3,4]). Unlike the normal model, where inference for the normal mean can be obtained explicitly, inference for the gamma mean is a complicated and challenging problem. Many asymptotic inferential methods for the gamma mean exist in the statistical literature, most of them likelihood-based. Some are very simple but not very accurate; others are very accurate but mathematically complicated and computationally intensive. Furthermore, only limited methods can be applied to the problem of comparing the means of k > 2 independent gamma distributions.
Ref. [5] considered the log-likelihood function obtained from (1), which takes the form

ℓ(µ, λ) = −n log Γ(λ) + nλ log(λ/µ) − (λ/µ)s + (λ − 1)t.   (3)

The maximum likelihood estimate (MLE) is (µ̂, λ̂), where µ̂ = s/n and λ̂ satisfies −ψ(λ̂) + log λ̂ − log(s/n) + t/n = 0, with ψ(·) being the digamma function. In addition, the observed information matrix evaluated at the MLE is denoted ĵ. It is well known that the variance-covariance matrix of the maximum likelihood estimators can be approximated by ĵ^{−1}. Hence, the standardized maximum likelihood estimator is

q(µ) = (µ̂ − µ)/√var(µ̂),   (4)

where var(µ̂) can be approximated by the (1, 1) entry of ĵ^{−1}. Under the regularity conditions stated in [6] and in [7], for large n, q(µ) is asymptotically standard normal with first-order accuracy, O(n^{−1/2}). Thus, inference for µ can be obtained based on this limiting distribution. This method is generally known as the Wald method or the asymptotic MLE method.
Another commonly used method to obtain inference for µ is the log-likelihood ratio method. Let λ̂_µ be the constrained MLE, which maximizes ℓ(µ, λ) for a fixed µ; in this case, λ̂_µ must satisfy −ψ(λ̂_µ) + log λ̂_µ + 1 − log µ − s/(nµ) + t/n = 0. Then the log-likelihood ratio statistic is

W(µ) = 2{ℓ(µ̂, λ̂) − ℓ(µ, λ̂_µ)}.   (5)

Again, under the regularity conditions stated in [6] and in [7], for large n, W(µ) is asymptotically χ²_1 with first-order accuracy, O(n^{−1/2}). Hence, inference for µ can be approximated based on the limiting χ²_1 distribution. This method is also known as the Wilks method. To improve the accuracy of the log-likelihood ratio method, ref. [8] applied the Bartlett correction to the log-likelihood ratio statistic (see [9]). The resulting Bartlett corrected log-likelihood ratio statistic takes the form

W*(µ) = W(µ)/{1 + B(µ)/n},   (6)

where B(·) is known as the Bartlett correction factor. Ref. [9] showed that the Bartlett corrected log-likelihood ratio statistic converges to the χ²_1 distribution with fourth-order accuracy. Hence, inference for µ can be approximated based on the limiting χ²_1 distribution. Refs. [3,4] showed that the exact form of h_n(·) in (2) is available only when n is small. Jensen used the fact that the model is an exponential transformation model, applied the saddlepoint method to approximate h_n(·), and derived an inference procedure for µ with third-order accuracy. However, due to the complexity of the method, ref. [3] only provided tables for the 1, 2.5, 97.5, and 99 percentiles of µ for sample sizes 10, 20, 40, and ∞, obtained by extensive iterative calculations. On the other hand, ref. [4] proposed another third-order method to obtain inference for µ; it is asymptotically equivalent to Jensen's method except that it involves direct implementation of the method derived in [10].
Note that Gross and Clark's method is very simple but not very accurate. The log-likelihood ratio method is slightly more complicated because of the calculation of the constrained MLE, and it is still not very accurate. The Bartlett corrected log-likelihood ratio method of [8] gives very accurate results and is relatively straightforward because [8] derived all the necessary equations. The method presented in [4] is also very accurate, but it is computationally intensive. Gross and Clark's method, Jensen's method, and Fraser, Reid and Wong's method are not applicable to the problem of testing homogeneity of k > 2 independent gamma means. The log-likelihood ratio method can be extended to this problem, but it has only first-order accuracy. Ref. [8] also derived the explicit Bartlett correction factor for testing equality of two independent gamma means but, due to the complexity of the method, did not derive the explicit Bartlett correction factor for testing homogeneity of k > 2 independent gamma means.
In this paper, a Bartlett-type correction of the log-likelihood ratio statistic is proposed in Section 2. The proposed Bartlett-type correction factor is obtained numerically by simulation. The proposed method is then applied to the one-sample gamma mean problem and to testing homogeneity of k ≥ 2 independent gamma means in Sections 3 and 4, respectively. Some concluding remarks are given in Section 5. Real-life examples and simulation study results are presented to compare the accuracy of the proposed method with that of the existing methods.
Main Results
Let ℓ(θ) be the log-likelihood function with a p-dimensional parameter θ. Under the regularity conditions stated in [6], the log-likelihood ratio statistic W(θ) = 2{ℓ(θ̂) − ℓ(θ)} is asymptotically distributed as χ²_p with first-order accuracy, where θ̂ is the overall MLE, which maximizes ℓ(θ). Ref. [9] showed that the mean of W(θ) can be expressed as E[W(θ)] = p{1 + B(θ)/n + O(n^{−2})}, where n is the size of the observed sample and B(·) is the Bartlett correction factor. Hence, the Bartlett corrected log-likelihood ratio statistic is W*(θ) = W(θ)/{1 + B(θ)/n}, which has mean p with fourth-order accuracy.
The above method can be generalized to the case where ψ = ψ(θ) is the parameter of interest and the dimension of ψ is m < p. Under the regularity conditions stated in [6], the log-likelihood ratio statistic W(ψ) = 2{ℓ(θ̂) − ℓ(θ̂_ψ)} is asymptotically distributed as χ²_m with first-order accuracy. Note that θ̂ is the overall MLE, which maximizes ℓ(θ), and θ̂_ψ is the constrained MLE, which maximizes ℓ(θ) for a given value of ψ. The Bartlett corrected log-likelihood ratio statistic is then W*(ψ) = W(ψ)/{1 + B(ψ)/n}, and W*(ψ) has mean m with fourth-order accuracy.
Theoretically, the Bartlett correction method gives extremely accurate results, even for small sample sizes. However, obtaining an explicit closed-form expression for the Bartlett correction factor is a very difficult problem, and only a limited number of problems in the statistical literature admit an explicit closed form, or even an asymptotic form, of the Bartlett correction. For example, ref. [8] obtained the Bartlett correction factor for the one-sample gamma mean problem and for the equality of two independent gamma means problem only, and did not discuss the case of testing homogeneity of k > 2 independent gamma means. The aim of this paper is to propose a systematic way of approximating a Bartlett-type correction factor.
Since the log-likelihood ratio statistic W(ψ) has the limiting distribution χ²_m, similar to the Bartlett correction method, we want to find a scale transformation of W(ψ) such that the transformed statistic has exact mean m. An obvious transformation is to rescale W(ψ) by m divided by the mean of W(ψ), with that mean estimated by the sample mean of a sample of log-likelihood ratio statistics. The primary task is to obtain such a sample, and we propose to employ simulation to create it. The main idea is to generate samples from the original model but with the parameters set to the constrained MLE obtained from the original observed sample. We summarize the idea in the following algorithm. Assume: (x_1, · · · , x_n) is a sample from a model with density f(·; θ), and ψ is the parameter of interest. Have: the log-likelihood function ℓ(θ) is given in (3). From the log-likelihood function, we can obtain θ̂, θ̂_ψ, and W(ψ).
Step 1: Generate a sample of size n from the density f(·; θ̂_ψ).
Step 2: From the simulated sample, obtain the log-likelihood ratio statistic and denote it as W_s(ψ).
Step 3: Repeat Steps 1 and 2 N times to obtain W_1(ψ), . . . , W_N(ψ).
Step 4: Obtain the sample mean W̄(ψ) = (1/N) ∑_{s=1}^{N} W_s(ψ).
Step 5: By the method of moments, W̄(ψ) is a consistent estimate of E[W(ψ)]; hence the rescaled statistic W*(ψ) = m W(ψ)/W̄(ψ) has mean m, and its limiting distribution is χ²_m.
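As a concrete sketch, the simulation steps can be implemented for the one-sample gamma mean problem. This is an illustrative reconstruction, not the authors' code: the function names, the root-finding bracket, and the default simulation size N are my assumptions; the log-likelihood and score equation follow the formulas above.

```python
import numpy as np
from scipy.special import gammaln, digamma
from scipy.optimize import brentq
from scipy.stats import chi2

def loglik(mu, lam, x):
    # Gamma log-likelihood: n[lam*log(lam/mu) - log Gamma(lam)] + (lam-1)t - lam*s/mu
    n, s, t = len(x), x.sum(), np.log(x).sum()
    return n * (lam * np.log(lam / mu) - gammaln(lam)) + (lam - 1) * t - lam * s / mu

def lam_given_mu(mu, x):
    # Constrained MLE of the shape for fixed mu (score equation in lam).
    n, s, t = len(x), x.sum(), np.log(x).sum()
    score = lambda lam: n * (np.log(lam / mu) + 1 - digamma(lam)) + t - s / mu
    return brentq(score, 1e-8, 1e8)

def W(mu0, x):
    # Log-likelihood ratio statistic; the overall MLE of mu is the sample mean.
    mu_hat = x.mean()
    return 2 * (loglik(mu_hat, lam_given_mu(mu_hat, x), x)
                - loglik(mu0, lam_given_mu(mu0, x), x))

def bartlett_type_pvalue(mu0, x, N=500, seed=0):
    # Simulate from the constrained fit, rescale W so its mean matches df m = 1.
    rng = np.random.default_rng(seed)
    lam_c = lam_given_mu(mu0, x)
    sims = [W(mu0, rng.gamma(lam_c, mu0 / lam_c, size=len(x))) for _ in range(N)]
    w_star = W(mu0, x) / np.mean(sims)
    return chi2.sf(w_star, df=1)
```

Calling `bartlett_type_pvalue(133.0, x)` on a data vector `x` would perform the kind of test reported for the mice survival-time example.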
One-Sample Gamma Mean Problem
Consider the one-sample gamma mean problem with the log-likelihood given in (3). Ref. [5] proposed using the Wald statistic given in (4) to obtain inference for µ, whereas [8] recommended the Bartlett corrected log-likelihood ratio statistic given in (6). In this paper, a Bartlett-type corrected log-likelihood ratio statistic based on the algorithm given in Section 2 is proposed as an alternative approach. To compare the results obtained by these methods, we consider the data set given in [5], the survival times of 20 mice exposed to 240 rad of gamma radiation. Table 1 records the 95% confidence intervals for µ and the p-values for testing H_0 : µ = 133 vs. H_a : µ ≠ 133 obtained by the methods discussed in this paper. From Table 1, we observe that the results obtained by Jensen and Kristensen's method and by the proposed method are almost identical, but very different from those obtained by Gross and Clark's method and the standard log-likelihood ratio method. Simulation studies were performed to compare the accuracy of the methods discussed in this paper. In particular, 5000 simulated samples were obtained for each combination of µ, λ, and n, and N = 100 was used for each simulated sample to approximate the mean of the log-likelihood ratio statistic. The proportion of samples rejected at the 5% significance level is recorded in Table 2. Theoretically, the true percentage of samples that will be rejected is 5%, with a standard deviation of 0.31%. Extensive simulation studies were performed; all results are very similar, so only a subset is presented in Table 2, and results from other combinations of the parameters are available from the authors upon request. We observed that results by Gross and Clark's method are significantly different from the nominal 5% level, although the accuracy improves slowly as the sample size n increases.
Results by the log-likelihood ratio method are slightly better, and the accuracy improves much faster as the sample size increases. Results by both Jensen and Kristensen's method and the proposed method are very accurate and always within 3 standard deviations of the nominal 5% level, even when the sample size is as small as 5.
Testing Homogeneity of k Independent Gamma Means
In this section, the proposed method is extended to the problem of testing homogeneity of k ≥ 2 independent gamma means. Let (x_i1, . . . , x_in_i) be a sample from the two-parameter gamma model with mean µ_i and shape λ_i, where i = 1, . . . , k and k ≥ 2, and assume the k two-parameter gamma models are independent. Let ℓ_i(µ_i, λ_i) be the log-likelihood function from the ith model; the joint log-likelihood function is then ℓ(µ_1, . . . , µ_k, λ_1, . . . , λ_k) = ∑_{i=1}^{k} ℓ_i(µ_i, λ_i). Since the k models are independent, the overall MLE (µ̂_1, . . . , µ̂_k, λ̂_1, . . . , λ̂_k) is obtained by maximizing each ℓ_i separately, exactly as in the one-sample case, and the joint log-likelihood function is evaluated at this MLE.
To test homogeneity of the k gamma means, the null and alternative hypotheses are H_0 : µ_1 = µ_2 = · · · = µ_k vs. H_a : not all equal.
To illustrate the application of the log-likelihood ratio method and the proposed method for this problem, we consider the intervals in service hours between failures of the air-conditioning equipment in 10 Boeing 720 jet aircraft reported in "Example T" of [11]. It is assumed that the reported times for each aircraft are distributed as a two-parameter gamma distribution. The question of interest is whether the ten aircraft have the same mean interval in service hours between failures. In other words, we are testing H_0 : µ_1 = · · · = µ_10 vs. H_a : not all equal.
The log-likelihood ratio method gives a p-value of 0.0871, whereas the proposed method gives a p-value of 0.1295 (using N = 100,000). At 10% level of significance, the two methods give contradictory results with the log-likelihood ratio method rejecting H 0 , and the proposed method failing to reject H 0 .
As in the one-sample case, extensive simulation studies were performed to compare the accuracy of the two methods. For each combination of k, n_1, · · · , n_k, (µ_1, λ_1), · · · , (µ_k, λ_k), 5000 simulated samples were obtained, and for each simulated sample N = 100 was used to estimate the mean of the log-likelihood ratio statistic. The proportion of samples that reject the null hypothesis of homogeneity of the k means at the 5% significance level was recorded. Since all results are very similar, only results from the 8 cases listed in Table 3 are reported. Table 3. Various combinations of k, (µ_1, λ_1), . . . , (µ_k, λ_k). Results in Tables 4 and 5 demonstrate that the log-likelihood ratio method gives unsatisfactory results, especially when the sample sizes are small, although its accuracy improves as the sample sizes increase. In comparison, the results from the proposed method are very accurate and always within 3 standard deviations of the nominal 5% level, regardless of the sample sizes.
Conclusions
In this paper, a Bartlett-type corrected log-likelihood ratio method for comparing the means of several independent gamma distributions is proposed. The method can easily be implemented in statistical software, such as R. Simulation results demonstrate that the log-likelihood ratio method does not give satisfactory results, especially when the sample sizes are small, whereas the proposed method is extremely accurate even for small samples. One advantage of the proposed method is that it is not restricted to the gamma means problem: it is applicable to any parametric model.
"Mathematics"
] |
Peacocke on magnitudes and numbers
Peacocke’s recent The Primacy of Metaphysics covers a wide range of topics. This critical discussion focuses on the book’s novel account of extensive magnitudes and numbers. First, I further develop and defend Peacocke’s argument against nominalistic approaches to magnitudes and numbers. Then, I argue that his view is more Aristotelian than Platonist because reified magnitudes and numbers are accounted for via corresponding properties and these properties’ application conditions, and because the mentioned objects have a “shallow nature” relative to the corresponding properties. The result is an asymmetric conception of abstraction, which contrasts with the neo-Fregeans’ but has important tenets in common with an approach that I have recently developed.
Introduction
The Primacy of Metaphysics (Peacocke 2019) is about the relation between representation and reality, in particular about their relative explanatory priority. Although this is "a timeless, ur-issue in philosophy" (p. 10), the book develops a distinctive and novel position, which is often plausible and always interesting. As one might expect, given the topic and the author, the book covers a lot of ground: from magnitudes and numbers, through space and time, to the self, as well as our language and thought about each of these domains. I will here focus on magnitudes and numbers, the discussion of which comprises roughly a third of the book, where Christopher Peacocke (henceforth CP) makes a number of important contributions.
First, however, I wish to make some remarks about the project as a whole. According to meaning-first views, theories of meaning and intentional content concerning a domain are always explanatorily prior to the metaphysics of the domain. This approach was pioneered by Michael Dummett and has been sympathetically discussed by Robert Brandom and Crispin Wright, with Kant lurking in the shadows. Although Peacocke's framing of the discussion has a clear and acknowledged debt to Dummett, he strongly rejects the latter's meaning-first view. Instead, CP defends:
Primary Thesis
The metaphysics of a domain is involved in the philosophical explanation of the nature of the meanings of sentences about that domain; and the metaphysics of a domain is involved in the philosophical explanation of the nature of intentional contents (ways of representing) concerning that domain. (p. 4)

Instead of meaning-first, we are thus given a choice between metaphysics-first (which speaks for itself) and no-priority (which holds that the metaphysics of a domain and our representation of it are explanatorily on a par). According to CP, each of these two choices has important implications.
Let's take a closer look at how CP understands the two relata, namely 'the metaphysics of a domain' and our representation of this domain. First, what is 'the metaphysics of a domain'? CP writes:

By 'the metaphysics of a domain' I mean a theory that states truly what is constitutive of the objects, properties, and relations of that domain - a theory of what makes them the objects, properties, and relations they are. (p. 16)

Notice the centrality of questions of individuation to this conception of metaphysics. We are inquiring into what is 'constitutive' of various entities, which is glossed as a question of what ''makes these entities the entities they are''. This is a purely metaphysical notion of individuation, which is distinct from some semantic or metasemantic notions that are also prominent in the literature.
Concerning the second relatum, CP writes that we want

an explanation that does not merely specify the meanings of expressions in the relevant language, but rather a theory that says, substantively, what it is to understand those expressions. (p. 17)

That is, we want not only a semantics for the relevant language but also a metasemantic account, which explains what endows linguistic expressions and mental representations with their semantic values and, relatedly, what it is to understand an expression or have a representation. Here we are concerned with a metasemantic notion of individuation: what is it to pick out, or refer to, a particular entity in language or thought? CP's Primary Thesis is thus at heart a thesis about the relation between two notions of individuation. The thesis states that the metaphysical notion is prior to, or on a par with, the metasemantic one. Or, with CP's Sartre-inspired slogan: individuation is prior to representation.
Why accept the Primary Thesis? CP's principal argument is fairly straightforward.
Which relations a thinker can stand in to an entity depends on the correct metaphysics of that entity. It follows that the metaphysics of a domain constrains the theory of concepts of entities of that domain. (p. 27)

I find this argument quite compelling. So as far as I am concerned, the more pressing question concerns the choice between metaphysics-first and no-priority. While CP tends to favor the former, I see a more extensive role for the latter, for reasons I explain towards the end of this note.
Towards a metaphysics of extensive magnitudes
The book mounts an impressive defense of the philosophical importance of extensive magnitudes, defined as magnitudes for which there is a natural operation of addition. Examples include lengths, durations, and masses. This contrasts with temperature or a material's hardness, which are classified as intensive magnitudes.
Chapter 2 distinguishes three notions of extensive magnitude:
(i) magnitude types, e.g. length, duration, mass;
(ii) magnitudes themselves, e.g. the length 1 m, the duration 1 s, the mass 1 kg;
(iii) magnitude tropes, e.g. the length of this stick.
As I expect CP would agree, two further notions are important as well:
(iv) quantities, defined as the entities that have magnitudes, e.g. this ruler, this process, this bronze weight;
(v) the property of having a certain magnitude, e.g. being 1 m long, lasting 1 s, having mass 1 kg.
What is the relation between all these notions? In particular, can some of them be reduced to, or eliminated in favor of, the others? Following the ancient debate about the status of universals, we can distinguish three broad orientations towards the metaphysics of magnitudes. At one extreme, we find nominalism, which holds that there are just concrete objects and thus no magnitudes or magnitude properties. At the opposite extreme, we find Platonism, which affirms the existence of the mentioned entities and insists that these can be made sense of regardless of how things stand with the concrete world. Somewhere in between these two extremes we find the Aristotelian view that magnitude properties (and perhaps also magnitudes) exist provided that the property is instantiated, or at least possibly instantiated. Moreover, an Aristotelian insists that in order to make sense of, or individuate, these entities, it is necessary to appeal to concrete instantiations of this magnitude, or at least the possibility of such instantiations.
Where does CP belong on this rough map of the metaphysical terrain? He is certainly no nominalist. Among the forms of realism that remain, I will argue that he is more of an Aristotelian than a Platonist. 1 There is a further distinction too within the family of realist conceptions of magnitudes. I have in mind the distinction between magnitudes themselves, i.e. notion (ii), and the corresponding magnitude properties, i.e. notion (v). CP regards magnitudes as objects, in the sense that they are ''entities referred to by singular terms'' (p. 69). Magnitude properties, by contrast, are most naturally understood as the semantic values of predicates. What, then, is the relation between a magnitude m and the property P_m of having that magnitude? The following principle, which I call the Reification Link, plays a central role in CP's account:

□∀x(P_m(x) ↔ x has m)

Of course, this principle forms only the beginning of an answer to our question of the relation between a magnitude proper and the corresponding magnitude property - of which more shortly.
The refutation of nominalism?
Before trying to determine the correct metaphysics of magnitudes, it is useful to review some basic measurement theory. Let us follow CP and consider Patrick Suppes' influential account of extensive quantities. 2 We begin with the language. First, there is a primitive predicate '⪯', which we use to express that one object is no more massive, or long, or whatever, than another. Thus, 'x ⪯ y' means that the quantity x is less than or equal to y in the relevant respect. Next, we define a predicate 'x ∼ y' as 'x ⪯ y ∧ y ⪯ x'. There is also a primitive summation operation ⊕.
Suppes' theory of extensive quantities has the following axioms. Notice that, when this theory holds for certain quantities, it also holds for the corresponding magnitudes. In fact, this is CP's preferred interpretation. 3 This theory has a number of philosophically important theorems. Let me mention one now and another in the next section. The first theorem states that ∼ is an equivalence relation. The question of nominalism about magnitudes can thus be put more sharply: what is the relation between a magnitude and the corresponding ∼-equivalence class, which the nominalist can construe as just a plurality of concrete objects? 4 For example, what is the relation between the magnitude 1 m and the class of objects of that length? Can we, as the nominalist proposes, eliminate magnitudes and magnitude properties in favor of just the objects which, loosely speaking, have this magnitude? That is, can we dispense with notions (ii) and (v) in favor of just notion (iv)?
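The axioms themselves are elided in this transcription. One standard axiomatization of extensive quantities in this vocabulary, following Suppes (1951), runs as follows; this is my reconstruction from the standard literature, so details may differ from the book's presentation. For all x, y, z in the domain D:

```latex
\begin{align*}
&\text{(transitivity)}  && x \preceq y \wedge y \preceq z \;\rightarrow\; x \preceq z \\
&\text{(associativity)} && (x \oplus y) \oplus z \preceq x \oplus (y \oplus z) \\
&\text{(monotonicity)}  && x \preceq y \;\leftrightarrow\; x \oplus z \preceq z \oplus y \\
&\text{(solvability)}   && \neg(x \preceq y) \;\rightarrow\; \exists z\,(x \sim y \oplus z) \\
&\text{(positivity)}    && \neg(x \oplus y \preceq x) \\
&\text{(Archimedean)}   && x \preceq y \;\rightarrow\; \exists n\,(y \preceq nx)
\end{align*}
```

where $nx$ is defined recursively by $1x = x$ and $(n+1)x = nx \oplus x$. Axioms of roughly this shape suffice for the theorems discussed in the text.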
CP develops an interesting argument against any such elimination or reduction, inspired by (Putnam 1969). The argument turns on the modal profile of the magnitude (or magnitude property) and the class of entities that have this magnitude. Let a be a rod of length 1 m. It is merely contingent that a has this magnitude (or magnitude property). Next, let C be the class of objects of length 1 m-or perhaps, for reasons of nominalistic hygiene, the plurality of such objects. Then it is necessary that a is a member of C. Thus, we have: Magnitudes and their corresponding equivalence classes have different modal profiles. This precludes any reduction of the former to the latter. Our analysis of magnitudes needs an intensional element and cannot be given in fully extensional terms.
Does the argument succeed? Although it has substantial force, it raises some interesting questions. First, are all magnitudes had only contingently? If not, then CP's argument has restricted scope: magnitudes that are had by necessity escape its clutches. And in fact, the answer is negative. Consider the cardinality of a plurality of objects, which satisfies the axioms of Suppes' theory and thus qualifies as an extensive magnitude. 5 But the cardinality of some objects is essential to these objects: some objects could not be those very objects unless they had that cardinality.
Where does this leave us? CP's argument certainly gives the realist a foot in the door. Perhaps this is enough. Once some magnitudes are accepted, why be squeamish about accepting more?
Second, how should we understand transworld comparisons of magnitudes? It seems straightforward that a particular rod might have been 10% longer than it is. But could all physical objects have been 10% longer in all directions? For familiar Leibnizian reasons, it is tempting to deny that this scenario is genuinely different from the way things actually are. But if so, what sense can be made of transworld comparisons of length and other magnitudes? I will return to this question shortly.
The representation theorem and its significance
The second philosophically important theorem of Suppes' theory is a useful representation theorem, which explains why extensive quantities can be measured by positive real numbers.

4 On the appeal to plurals, see Boolos (1984). 5 On CP's analysis, numbers are ascribed, in the first instance, to concepts, not pluralities. This seems the wrong way round: being n in number is intrinsic to a plurality xx, whereas it is only extrinsic to a concept F, namely in virtue of there being some xx that are all and only the Fs and that are n in number.
Theorem 1 (Representation theorem) Suppose 𝔈 = ⟨D, ⪯, ⊕⟩ satisfies the theory. Then there is a homomorphism f : 𝔈 → ⟨ℝ+, ≤, +⟩; that is, for all x and y: x ⪯ y iff f(x) ≤ f(y), and f(x ⊕ y) = f(x) + f(y). Moreover, the homomorphism f is unique up to a multiplicative constant.

Underneath this somewhat abstract statement lies an important and easily understood lesson. Extensive quantities can be measured by means of positive real numbers. All we need to do is choose some quantity as a unit, whose measure is therefore 1. Relative to this unit, there is a unique way to assign a measure to every other quantity of this magnitude type. The theorem is important for several reasons. First, since our only choice is that of a unit, it means that ratios of two magnitudes of one and the same type are absolute. For example, being twice as long, or massive, as some other object is absolute. This absoluteness of ratios plays an important role in the book, both in some of CP's reflections on our representation of magnitudes and in his metaphysical account of the positive real numbers. 6 Second, the theorem reveals the limited scope of this particular analysis. Only positive reals figure as measurements on this particular analysis, not zero or negative reals. Thus, this analysis excludes magnitudes such as electric charge, which can have both positive and negative values relative to some unit. Nor is any provision made for the complex numbers, infinitesimals, or angular measure in the interval [0, 2π). 7 So the particular analysis just outlined is only a beginning.
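To make the theorem's content concrete, here is a small illustrative sketch (my own example, not from the book): measurement functions obtained from different unit choices differ only by a multiplicative constant, so ratios of magnitudes are unit-independent.

```python
# Measurement of extensive quantities: choosing a unit fixes the scale.
# For illustration we represent each quantity by a hidden positive real;
# in the formal theory only the ordering and the summation operation matter.
quantities = {"rod_a": 2.0, "rod_b": 3.0, "rod_c": 5.0}

def measure(unit: str) -> dict:
    """Assign a measure to every quantity relative to the chosen unit (measure 1)."""
    u = quantities[unit]
    return {name: v / u for name, v in quantities.items()}

f = measure("rod_a")   # measurement function with rod_a as unit
g = measure("rod_b")   # measurement function with rod_b as unit

# f and g differ only by a multiplicative constant ...
c = g["rod_a"] / f["rod_a"]
assert all(abs(g[n] - c * f[n]) < 1e-12 for n in quantities)

# ... so ratios of magnitudes are absolute (the same under either unit choice):
print(f["rod_c"] / f["rod_b"], g["rod_c"] / g["rod_b"])  # both 5/3
```

The uniqueness clause of the theorem is exactly what the assertion checks: any two admissible measurement functions are related by a positive scaling factor.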
Finally, the theorem pinpoints what is required to make sense of transworld comparisons of magnitudes, namely to correlate the units chosen in each of the two possible worlds. Does one of these arbitrarily chosen units represent the same magnitude as the other one or merely some ratio of it? The Leibnizian challenge is that there is no objective answer to this question. Although the challenge is of profound theoretical importance, in practice its force can be blunted. In most ordinary modal theorizing, there is a unique salient correlation. Consider a world in which my meter stick is 10% larger relative to every other object, but where every other ratio of spatial size remains unchanged. (This characterization is meaningful because ratios are absolute.) Then it is far more natural to choose a unit, in each of the two worlds, among the objects that undergo no relative change, say, someone else's meter stick.
I conclude that the Leibnizian challenge can be met, and that CP's antinominalist argument therefore succeeds.
Aristotelian realism and the question of reification
As adumbrated, CP rejects not only nominalism about magnitudes but also Platonism, at least in the traditional sense mentioned above. 8 Magnitudes are not transcendent entities, as Plato would have it, but enter into causal and scientific explanations and figure in causal explanatory laws.
CP's Aristotelianism is particularly clear in connection with a novel account of numbers, which are closely related to magnitudes. The central idea of the account, called ''applicationist individuationism'', is that numbers are individuated in terms of their application conditions. For example:

What makes something the number 1 is that it is the number n such that for an arbitrary concept F, for there to be precisely n Fs is for [it to be the case that ∃!xFx] (p. 210)

More generally, let ∃_n xF(x) be the first-order formalization of the claim that there are precisely n Fs. This is a magnitude property of concepts. Then n is individuated as the number such that, for there to be precisely n Fs is for it to be the case that ∃_n xFx. The view that numbers and numerical properties are individuated in terms of their application conditions is distinctly Aristotelian. An obvious advantage of this view is that it removes the sense of mystery about how numbers can be relevant to our study of the physical world. 9 What about Aristotle's stringent requirement that a property be instantiated in order to exist? Since the natural numbers are understood as cardinality properties of concepts, this requirement threatens to saddle arithmetic with a commitment to an actual infinity of nonmathematical objects. But as CP observes, it is ''quite implausible'' that arithmetic should be committed to ''infinity in the non-abstract world'' (p. 217).
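The numerically definite quantifiers ∃_n can be defined recursively in first-order logic; this is the standard textbook definition, not CP's own formulation:

```latex
\begin{align*}
\exists_0\, x\, F(x) \;&:\equiv\; \neg\exists x\, F(x) \\
\exists_{n+1}\, x\, F(x) \;&:\equiv\; \exists y\,\bigl(F(y) \wedge \exists_n\, x\,(F(x) \wedge x \neq y)\bigr)
\end{align*}
```

In particular, $\exists_1 x\,F(x)$ is equivalent to $\exists! x\,F(x)$, the condition that individuates the number 1 in the quotation above.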
Thankfully, this unpalatable commitment can be avoided. Aristotle himself suggests one option, namely to regard the sequence of natural numbers as merely potentially infinite. Suppose the non-abstract world contains some finite number M of objects. Then only the numbers 0 through M actually exist. But potentially there are more numbers. For necessarily, given any number N, possibly there exists a successor N + 1. This Aristotelian conception of the natural numbers can be proven to succeed in the precise sense that it allows us to interpret all of first-order Dedekind-Peano arithmetic. 10 CP's preferred way to avoid the unpalatable commitment is to adopt a more relaxed form of Aristotelianism which individuates properties in terms of their possible instances, not only their actual ones. We can thus allow two uninstantiated cardinality properties to be distinct, provided that the two properties differ with regard to their possible instances. The corresponding reified numbers will thus be individuated as distinct objects via the Reification Link. 11 Further options exist as well, including one described in Sect. 7, which yields an actual infinity of numbers without relaxing the requirement that properties be instantiated.
I turn now to a different aspect of CP's realism, namely his acceptance of reified magnitudes and numbers. His reasons for this acceptance are much like Frege's (see p. 42). Natural language contains singular terms that refer to magnitudes and numbers. This is also borne out in mathematical practice, which regards numbers and other magnitudes as objects, e.g. by allowing them to be counted and to figure as elements of sets.
Is this reification of magnitudes and numbers defensible? The answer will depend on our ability to answer some more specific questions, which will occupy us in the remainder of this note.
The metaphysical question
What are reified magnitudes and numbers? In particular, to what extent does the Reification Link shed light on the nature of the object m in terms of the corresponding property P_m?

The question of permissible reification

Can every magnitude property be reified? Should we adopt a comprehension axiom stating that for every extensive magnitude property P_m there is a magnitude proper, m, such that the Reification Link holds? Analogous questions arise for numbers.

The metasemantic question

What is it to refer to a reified magnitude m, as opposed to using a predicate with the corresponding magnitude property P_m as its semantic value? Analogous questions arise for numbers.
As will transpire, I find CP's answers to the first two questions very congenial and indeed broadly similar to views I have recently defended in Linnebo (2018). We appear to differ, however, concerning the final question.
Shallow nature

CP claims that the Reification Link provides a complete account of a magnitude in terms of the corresponding magnitude property. The link individuates the magnitude m in terms of the corresponding property P_m; that is, it provides an account of ''what makes m the object it is''. The magnitude m has, as I will put it, a shallow nature relative to the property P_m. 12 13 Again, CP's view becomes particularly clear in the case of numbers, which are said to have no nature beyond what is contained in their individuation: ''there is nothing more to being any given number than is given in the individuating condition'' (p. 141). More generally, ''the very nature of abstract objects is explained by their application conditions.'' The idea of shallow nature applies not only to numbers and other abstract objects but also to relations between them. Consider the successor relation S that holds between any natural number and its immediate successor. S holds between two numbers m and n just in case n applies to concepts with precisely one more instance than concepts to which m applies. This is in effect just Frege's famous characterization of the successor relation. 14 Again, CP makes a claim about shallow nature: ''There is no more for two natural numbers to stand in the successor-of relation than the displayed condition's holding'' (p. 215). The result is an attractive and broadly neo-Fregean metaphysics of the abstract. While there are numbers and other abstract objects, there is nothing more to these objects and the relations in which they stand than what is contained in certain corresponding magnitude properties.
Asymmetric abstraction
The question of permissible reification requires a bit of background. Reification is dangerous, as Frege painfully discovered. His claim that every property F can be reified as a corresponding extension ˆu.Fu was famously refuted by Russell's paradox. The problem concerns Frege's Basic Law V, which can be formulated so as to highlight the structure it shares with the Reification Link. Of course, extensions of concepts aren't magnitudes or numbers. But CP recognizes that there is a notion of ordinality, which is measured by an ordinal number. And the Reification Link for ordinality gives rise to the Burali-Forti paradox (that is, the paradox of the ordinal of the well-ordering of all ordinals). 15 Thus, even the reification of numbers is dangerous unless constrained in some way. Simultaneously, the constraints imposed must not be too severe, as CP clearly recognizes. In fact, although his approach to philosophy differs markedly from Carnap's, CP goes out of his way to commend Carnap's liberal and permissive attitude towards the existence of mathematical objects.

12 For some closely related ideas, see Hale and Wright (2007) and Linnebo (2018, §11.3). 13 This view plays a crucial role in CP's response to the notorious Julius Caesar problem: ''A natural number is individuated by its application conditions, in numerical quantifications. Any object that is not individuated by its application conditions in numerical quantifications is not a natural number. Julius Caesar is not so individuated. So no natural number is identical with Julius Caesar.'' (p. 214) This response is closely related to the neo-Fregeans' (Hale and Wright, 2001b), which seeks to distinguish Caesar from any natural number on the grounds that the two objects are subject to different criteria of identity. 14 Instead of this characterization, CP gives a more complicated, but ultimately equivalent, characterization.
Is there a way to balance these two conflicting pressures-for safety against paradox, on the one hand, and a liberal approach to mathematical ontology, on the other? This has become known as the bad company problem. We want to excise all ''bad companions'', such as Basic Law V and the problematic form of ordinal abstraction, while retaining all ''good'' cases of abstraction or reification. Ideally, the line of demarcation should also be well motivated and suitably integrated with our philosophical account of abstract objects. This is obviously a tall order.
Although CP, like most commentators, doesn't provide a worked-out answer to the bad company problem, he makes some tantalizing suggestions, which point in the same direction as an attempted solution recently proposed by myself (Linnebo 2018, ch. 3). Consider the celebrated Hume's Principle, which describes how cardinal numbers are obtained by abstraction on concepts. A question of great philosophical importance arises: what is the relation between the two sides of this abstraction principle? The prevailing neo-Fregean view has been that the two sides are symmetrically related: these are just two different ways to ''carve up'' one and the same fact. 16 In Linnebo (2012), which CP quotes with approval, I complained that on this symmetrical conception any problematic features attaching to one side would be inherited by the other side. Instead, both of us emphasize certain asymmetric features of (HP) and other acceptable abstraction principles, namely that matching instances of the two sides differ with respect to (i) which objects they refer to; (ii) their ontological commitment; and (iii) metaphysical explanation (which flows right-to-left, not left-to-right). The resulting asymmetric conception of abstraction holds great promise with respect to the bad company problem. It suggests, as CP puts it, a ''hierarchy of individuation'' (p. 228). Abstract objects are individuated successively, starting with material made available by the physical world, perhaps including its modal aspects. Once a certain stock of abstract objects has been individuated, they can be used to individuate yet further abstract objects. Paradox is avoided by ensuring that, throughout this stepwise individuation, we only ever draw on objects and truths that are available at that stage.
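For reference, the two principles just mentioned, whose displayed formulations are elided in this transcription, are standardly rendered as follows (my rendering of the standard formulations):

```latex
\begin{align*}
\text{(V)}\quad  & \hat{u}.Fu = \hat{u}.Gu \;\leftrightarrow\; \forall x\,(Fx \leftrightarrow Gx) \\
\text{(HP)}\quad & \#F = \#G \;\leftrightarrow\; F \approx G
\end{align*}
```

where $F \approx G$ says that there is a one-to-one correspondence between the Fs and the Gs. Both principles share the same biconditional form, with an identity of abstracta on the left and an equivalence on concepts on the right; yet Russell's paradox shows (V) to be inconsistent, while (HP) is consistent. The bad company problem is to give a principled account of this difference.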
For example, we must require that in the individuation of any particular natural number, the individuating condition not involve quantification over, or involve reference to, that very number whose individuation is in question. (p. 224) More generally, the asymmetric conception of abstraction suggests a grounded, or broadly predicative, approach to abstraction, where at any stage we can only appeal to entities and truths available at that stage. The details are subtle but need not detain us here. 17 Notice that the asymmetric conception of abstraction permits Fregean ''bootstrapping''. Suppose we have established the existence of the numbers 0, 1, …, N. Since numbers are bona fide objects, they can figure in further instances of abstraction. This enables us to establish the existence of N + 1 by cardinality abstraction applied to the mentioned list. We now repeat the argument, only this time starting with the longer list of numbers 0, 1, …, N + 1. By iterating further, we establish the existence of infinitely many numbers - without any unpalatable assumptions about the cardinality of the non-abstract world and (if desired) without lifting the Aristotelian requirement that properties be instantiated.
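The bootstrapping step can be displayed explicitly (my formulation of the argument in the text): given the numbers $0, 1, \dots, N$, cardinality abstraction applied to the concept of being one of them yields

```latex
N + 1 \;=\; \#\bigl[\,x : x = 0 \vee x = 1 \vee \dots \vee x = N\,\bigr],
```

since that concept has exactly $N + 1$ instances, each of which is an already-individuated number, so the individuating condition for $N + 1$ draws only on entities available at the previous stage.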
The representation of magnitudes and numbers
The book develops a rich and detailed account of how animals (including humans) represent magnitudes. The central idea is that magnitudes external to the mind are represented by magnitudes in the mind or brain. This account nicely explains how perception of magnitudes is unit-free, how we sometimes represent ratios of magnitudes, and how analogue computation works (namely by law-governed operations on the representing magnitudes which generate further representing magnitudes).
I have little to add here, other than to observe that the account is most plausibly understood as an account of the representation of magnitude properties, not as an account of singular reference to reified magnitudes. Indeed, much of the attraction of the account is that it applies also to simpler animals (such as birds) and to largely innate cognitive capacities (what Carey (2009) calls ''core cognition'').
What is it, then, for a thinker to explicitly represent a number or some other abstract object? Some philosophers have argued that such representation is impossible, on the grounds that representation is always based on some causal interaction with the object represented, which is impossible when that object is abstract. CP dismisses these concerns and proposes an entirely non-causal explanation of how a thinker represents an abstract object. ''In cases where the principle 'Individuation Precedes Representation' holds good'', he writes, the representation of an abstract object ''involves only drawing in the right way on the metaphysics of the entities in question'' (p. 209).
To put some flesh on the bone, let us consider a representative example of this explanatory strategy: to think of a natural number as 1, for instance, is to have tacit knowledge that for there to be 1 F is for it to be the case that there is something that is F, and nothing else is. (p. 229) Notice that, whereas the explanandum involves a cardinal number, the explanans involves only a cardinality property.
This may strike you as metasemantic alchemy. How can tacit knowledge of a cardinality property be transmuted into an explicit representation of the corresponding cardinal number, which is an abstract object? CP's response appeals to the shallow nature of the object vis-à-vis the property. This is an account in which Individuation Precedes Representation, […] because the condition for thinking of a natural number n mentions the condition, constitutive of n, for there being n Fs. If there were more to being n than that relation to numerical quantification holding, no doubt we would require more for thinking of or representing n. (pp. 230-231) The appeal to shallow nature shows that a response to the alchemy charge may well be possible. But by itself, this hardly constitutes a worked-out response. As Frege himself points out, knowledge of a cardinality property does not by itself suffice for representation of the corresponding cardinal number. 18 How can we bridge the gap between tacit knowledge of a cardinality property and explicit representation of the corresponding cardinal number? Frege's brilliant proposal from the famous §62 of his Foundations of Arithmetic is to base the representation of a cardinal number-and of other objects as well-on criteria of identity. To represent an object, Frege argues, it suffices to possess a criterion of identity for the would-be referent, where this criterion doesn't itself presuppose the very relation of reference we are trying to explain. Admittedly, Frege's proposal too is fairly programmatic, and much philosophical work remains. The neo-Fregeans have had a go at this, and so have I. 19 My parting question to CP is whether he too intends to appeal to criteria of identity to account for the constitution of reference to numbers and other abstract objects, and if not, how else he intends to bridge the mentioned gap-shallow though it may be.
Funding Open Access funding provided by University of Oslo (incl Oslo University Hospital). Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Catalytic Cleavage of the C-O Bond in 2,6-dimethoxyphenol Without External Hydrogen or Organic Solvent Using Catalytic Vanadium Metal
Hydrogenolysis of the C-O bonds in lignin, which promises to enable the generation of fuels and chemical feedstocks from biomass, is a particularly challenging and important area of investigation. Herein, we demonstrate a vanadium-catalyzed cleavage of a lignin model compound (2,6-dimethoxyphenol). The impact of temperature, reaction time, and solvent on the catalytic cleavage of the methyl ethers in 2,6-dimethoxyphenol was examined. In contrast to traditional catalytic transfer hydrogenolysis, which requires high-pressure hydrogen gas or reductive organic molecules such as alcohols and formic acid, the vanadium catalyst shows superior catalytic activity for the cleavage of the C-O bonds using water as the solvent. For example, the conversion of 2,6-dimethoxyphenol is 89.5% at 280°C after 48 h in distilled water. Notably, the vanadium-catalyzed cleavage of the C-O bond linkage in 2,6-dimethoxyphenol affords 3-methoxycatechol, which undergoes further cleavage to afford pyrogallol. This work is expected to provide an alternative method for the hydrogenolysis of lignin and related compounds into valuable chemicals in the absence of external hydrogen and organic solvents.
INTRODUCTION
Converting renewable lignocellulosic biomass into value-added chemicals and biofuels by catalysis provides an alternative approach to securing renewable fuel sources and addressing the associated environmental issues (Son and Toste, 2010; Ma et al., 2018; Rinesch and Bolm, 2018; Gao et al., 2019; Liu et al., 2019). In general, lignocellulosic biomass is mainly composed of cellulose, lignin, and hemicellulose (Son and Toste, 2010; Yamaguchi et al., 2017, 2020; Rinesch and Bolm, 2018). Although the degradation and utilization of lignin is attractive, reliable methods for doing so remain underdeveloped (Yamaguchi et al., 2020). For instance, most of the lignin obtained from the pulp and paper industry is either discharged into rivers as black liquor or incinerated (Xu et al., 2014; Jiang et al., 2016; Wang et al., 2017). The recalcitrant and complex molecular structure of lignin accounts for the lack of enthusiasm for utilizing it further (Pineda and Lee, 2016; Wang et al., 2017). Hence, an efficient and reliable method for catalytically degrading lignin into high-value products is urgently required.
Lignin is an amorphous three-dimensional hetero-polymer composed of three phenyl-propane units (sinapyl, p-coumaryl, and coniferyl alcohols) linked by relatively stable C-O and C-C bonds (Dai et al., 2016; Chen et al., 2018; Ji et al., 2018). Among these, the C-O bond is the most abundant, accounting for 67-75% of the total linkages in lignin (Guadix-Montero and Sankar, 2018; Dong et al., 2019). Consequently, the catalytic cleavage of C-O bonds in lignin is vital to accessing high-value intermediates from the polymer; however, the selective cleavage of the C-O bonds in lignin is challenging because of the high C-O bond strength (209-348 kJ mol−1) (Dong et al., 2019). Over the past few decades, many strategies, such as hydrogenolysis, oxidation, hydrolysis, and pyrolysis (Chu et al., 2013; Wang et al., 2013; Dai et al., 2016; Besse et al., 2017; Lin et al., 2018), have been examined for the cleavage of the C-O bond in lignin as well as in several model compounds. Among these methods, hydrogenolysis has gained increasing attention for the degradation of lignin because of its relatively high yield and selectivity. For example, one study investigated the hydrogenolysis of C-O bonds using Ni@ZIF-8 as the catalyst, demonstrating that the C-O bonds could be cleaved in the presence of hydrogen gas under high pressure. In another variation, Jiang et al. (2019) reported the cleavage of C-O bonds in lignin model compounds using a Ni/Al2O3-T catalyst with isopropanol as the hydrogen source. Here, benzyl phenyl ether was converted into toluene and phenol, in addition to cyclohexanol from the exhaustive reduction of phenol.
Despite significant advances in the field of transfer hydrogenolytic cleavage of C-O bonds, most hydrogenolysis reactions require either high-pressure hydrogen gas or reductive organic molecules (such as alcohols and formic acid) as hydrogen donors (Hanson et al., 2010; Zhang et al., 2012; Rahimi et al., 2014; Díaz-Urrutia et al., 2016; Gomez-Monedero et al., 2017; Wang et al., 2017; Rinesch and Bolm, 2018; Kang et al., 2019; Liu et al., 2019; Yang et al., 2019). Nevertheless, hydrogen is challenging to handle and use under very high pressures, and organic solvents are expensive and very often not environmentally friendly. Hence, from an environmental, economic, and practical standpoint, it is still desirable to develop new transfer hydrogenation catalytic systems that could efficiently convert lignin and the associated model compounds into valuable chemicals and thereby circumvent some of the associated limitations. Herein, we describe the ability to employ vanadium metal as a catalyst for the cleavage of C-O bonds in 2,6-dimethoxyphenol, which is a lignin model compound, in the absence of high-pressure hydrogen gas. The impact of the reaction temperature, time, and solvent on the transfer hydrogenation activity was studied. We determined that catalytic vanadium metal has excellent activity for the hydrogenation of C-O bonds in water. Moreover, catalytic vanadium metal is effective for the transfer hydrogenation of benzyl phenyl ether to furnish 4-benzylphenol and 2-benzylphenol.
Materials
All the reagents in this work were used as received without further purification.
Experimental Procedure
All the catalytic reactions were carried out in a stainless steel autoclave reactor. The general procedure is as follows. 2,6-Dimethoxyphenol (3 g, X mmol) and vanadium powder (0.3 g, X mmol) were weighed into an autoclave reactor and suspended in 50 mL of solvent (methanol and distilled water at different volume ratios). The reactor was sealed and the atmosphere purged five times with nitrogen in order to discharge air from the reactor. The catalytic reactions were conducted at a range of reaction temperatures for a specific time course. After the designated time, the autoclave was cooled to ambient temperature and depressurized carefully.
Characterization of Catalysts
The phase structures of the samples were determined with a Shimadzu XRD-6000 X-ray diffractometer using Cu-Kα radiation. The tube voltage was 40 kV, the tube current was 30 mA, and the scan speed was set to 8°/min.
Extraction and Identification of Degradation Products
The reaction mixture was transferred into a separatory funnel and partitioned with ethyl acetate. The organic phases were combined, dried (anhydrous CaCl2), filtered, and concentrated in vacuo using a rotary evaporator with the bath temperature set to 35 °C to afford the crude material. The crude material was diluted with ethyl acetate and then passed through an organic filter membrane (0.45 µm) to permit the qualitative and quantitative GC-MS analysis of the degradation products.
RESULTS AND DISCUSSION
Transfer Hydrogenation of 2,6-dimethoxyphenol at Different Reaction Time

Preliminary experiments focused on the examination of the influence of the reaction time on the hydrolysis of 2,6-dimethoxyphenol with catalytic vanadium metal (Figure 1), which showed that vanadium could catalyze the cleavage of the C-O bond in 2,6-dimethoxyphenol. Interestingly, a small quantity of 2,6-dimethoxyphenol was methylated to afford 1,2,3-trimethoxybenzene after 10 h, which is presumably the result of the activation of the C-O bond in methanol and the nucleophilic alkylation of 2,6-dimethoxyphenol. The proportion of 3-methoxycatechol increased to 22% when the reaction time was increased to 48 h, which also resulted in more alkylation. Interestingly, this is a rather benign method for methylation, which generally employs toxic alkylating agents. In addition, the prolonged reaction time also led to the formation of a new degradation product, namely pyrogallol (Table 1). Hence, 3-methoxycatechol is an intermediate product, which readily undergoes further cleavage; however, extending the reaction time further to 72.5 h led to a decrease in the amount of 3-methoxycatechol to 15%, which may be ascribed to carbonization and the reaggregation of decomposable fragments over a long period of time.
Transfer Hydrogenation of 2,6-dimethoxyphenol at Different Temperatures
The next phase of the study examined the influence of the reaction temperature on the transfer hydrogenolysis of 2,6-dimethoxyphenol with vanadium metal as a catalyst, as outlined in Table 2. The reaction scheme for 2,6-dimethoxyphenol catalyzed by vanadium at 280 °C is shown in Figure 3. Given sufficient energy, besides 3-methoxycatechol, 1,2,3-trimethoxybenzene, and pyrogallol, a new product, pyrocatechol, was generated. Gratifyingly, the conversion of 2,6-dimethoxyphenol increased from 5 to 80% when the reaction temperature was increased from 220 to 280 °C over 48 h. Interestingly, traces of catechol appeared at 270 °C, which may arise from the vanadium-catalyzed reduction of pyrogallol. Figure 4 illustrates the proportions of the degradation products, showing that the cleavage of 2,6-dimethoxyphenol is sensitive to the reaction temperature. Increasing the temperature to 280 °C furnished 3-methoxycatechol and pyrogallol as the main products, whose proportions increased to 48 and 29%, respectively. The improved yield of pyrogallol illustrates that increased temperature improves the efficiency of C-O bond cleavage in 3-methoxycatechol. Based on the experimental results, the vanadium-catalyzed cleavage of the C-O bond is proposed to follow the pathway outlined in Figure 5. Initially, the vanadium-catalyzed cleavage of the C-O bond in 2,6-dimethoxyphenol affords 3-methoxycatechol, which then undergoes further cleavage to generate pyrogallol. While the origin of the formation of catechol is unclear, it could be formed from direct aryl C-O cleavage, which means that vanadium-catalyzed dehydroxylation of pyrogallol may be feasible. Specific verification experiments on this hypothesis are outlined below.
Catalytic Degradation of 2,6-dimethoxyphenol by Vanadium in Different Solvent
In the next phase of this study, the nature and impact of the solvent were examined using distilled water/methanol mixtures at different volume ratios for the transfer hydrogenolysis of 2,6-dimethoxyphenol at 280 °C. The proportions of the main products are illustrated in Figure 6. Notably, the ratio of distilled water to methanol impacts the efficiency of the cleavage, and distilled water alone as the solvent is optimal for the formation of 3-methoxycatechol and pyrogallol. Hence, the amount of water impacts the cleavage of the C-O bond in 3-methoxycatechol, because the proportion of pyrogallol also increased from 14 to 43%. Figure 7 delineates a comparison of the degradation of 2,6-dimethoxyphenol in pure water and in alcoholic solvents. Interestingly, the degradation in pure water is significantly more efficient than the conversions in methanol and ethanol. Moreover, pyrogallol is not produced in any of the alcoholic solvents and the proportion of 3-methoxycatechol was <7%. Hence, methanol and ethanol are significantly less efficient in generating the hydrogen necessary for C-O bond cleavage.
Influence of Catalyst for the Transfer Hydrogenolysis of 2,6-dimethoxyphenol
In order to reveal the role of the catalyst in the reaction, a control experiment was conducted with distilled water as the only solvent at 280 °C for 48 h under a nitrogen atmosphere, in the presence and absence of the vanadium.
FIGURE 9 | X-ray diffraction pattern before and after the V-catalyzed reaction at 280 °C: (A) before the catalytic reaction; (B) after the catalytic reaction (distilled water as solvent).
FIGURE 10 | The product proportions and degradation rate of 3-methoxycatechol (distilled water as solvent).
The results are shown in Figure 8, which indicates that there is a background reaction, because the C-O linkage in 2,6-dimethoxyphenol is cleaved in the absence of vanadium, albeit the conversion of 3-methoxycatechol to pyrogallol was not evident and the efficiency of the cleavage was lower. On this basis, we concluded that vanadium is necessary for the conversion of 3-methoxycatechol to pyrogallol. As Figure 9 shows, a comparison of Figures 9A,B reveals that the vanadium catalyst maintains its basic phase structure. However, Figure 9B shows an additional broad amorphous peak around 23°, which indicates that coking of the vanadium catalyst occurred at 280 °C.
Reaction Pathways of the Transfer Hydrogenolysis
To verify the reaction path mentioned in section Transfer Hydrogenation of 2,6-Dimethoxyphenol at Different Reaction Time, namely that 3-methoxycatechol is an intermediate product and that vanadium can catalyze the cleavage of the C-O linkage in 3-methoxycatechol to produce pyrogallol, 3-methoxycatechol was selected as the substrate for an experiment with distilled water as the only solvent. After 48 h at 280 °C, pyrogallol and catechol were formed. Figure 10 indicates that the conversion of 3-methoxycatechol was 93%, which resulted in 89% pyrogallol, with only a trace of catechol formed. Hence, the vanadium-catalyzed cleavage of the C-O bond in 3-methoxycatechol is delineated in Figure 11, which confirms that 3-methoxycatechol is the intermediate product of the vanadium-catalyzed degradation of 2,6-dimethoxyphenol. Vanadium can catalyze the breaking of the C-O linkage in 2,6-dimethoxyphenol or 3-methoxycatechol in distilled water, which provides a low-cost and environmentally friendly process.
The Transfer Hydrogenolysis Activity for Other Lignin Model Compounds
In order to verify the effect of catalytic vanadium metal on the breaking of the α-O-4 bond, benzyl phenyl ether was selected as the model compound. As illustrated in Table 3 and Figure 12, vanadium catalyzed the breaking of the α-O-4 bond with a 98% conversion of benzyl phenyl ether. The main products were 4-benzylphenol and 2-benzylphenol. Unfortunately, the vanadium catalyst does not cleave the C-C bond, so benzyl phenyl ether was not converted into monomeric phenol. Meanwhile, the selectivity of the products was not high, and many other side-products were produced. In this regard, further studies are required to improve the selectivity of the hydrogenation of benzyl phenyl ether and to achieve cleavage of the C-C bond to obtain monomeric phenol with the vanadium catalyst.
CONCLUSION
In summary, vanadium metal was demonstrated to be a catalyst for the cleavage of the C-O bonds in lignin model compounds, such as 2,6-dimethoxyphenol and benzyl phenyl ether. Detailed investigations indicate that the catalyst-promoted cleavage of the C-O bonds depends on the reaction temperature, time, and solvent. The catalyst can efficiently catalyze the cleavage of C-O bonds with water as the solvent, in the absence of high-pressure hydrogen gas and organic solvents. This work represents a promising perspective on the utilization of vanadium metal for cleaving lignin model compounds, and ultimately lignin itself, into value-added chemicals using an economical and environmentally friendly method.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material. | 3,245.2 | 2020-07-28T00:00:00.000 | [
"Chemistry"
] |
Generation of two induced pluripotent stem cell (iPSC) lines from an ALS patient with simultaneous mutations in KIF5A and MATR3 genes
Fibroblasts from an amyotrophic lateral sclerosis patient with simultaneous mutations in the MATR3 gene and KIF5A gene were isolated and reprogrammed into induced pluripotent stem cells via a non-integrating Sendai viral vector. The generated iPSC clones demonstrated a normal karyotype, expression of pluripotency markers, and the capacity to differentiate into the three germ layers. The unique presence of two simultaneous mutations in ALS-associated genes represents a novel tool for the study of ALS disease mechanisms.
Resource details
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease resulting from the loss of upper and lower motor neurons, leading to progressive paralysis and eventual death, typically between 2 and 5 years after diagnosis. Novel models to study disease mechanisms are essential for the development of improved treatments. Here we report novel patient-derived iPSC lines generated from a patient identified as having simultaneous mutations in two ALS-related genes (MATR3 and KIF5A). These cell lines provide an opportunity to study the individual and interacting effects of multiple mutations.
To generate the iPSCs, fibroblasts were isolated from a 50-year-old male ALS patient carrying the p.F115C mutation in MATR3. This patient served as the proband of the family known as USALS#3 (Johnson et al., 2014). Subsequent to the initial identification of the MATR3 mutation in this patient, further investigations also demonstrated the presence of an intronic mutation in the KIF5A gene (Saez-Atienzar et al., 2020). KIF5A had previously been identified as an ALS-causing gene (Brenner et al., 2018; Nicolas et al., 2018). Fibroblasts were converted to iPSCs using the Cytotune 2 Sendai Kit (Life Technologies) according to the manufacturer's instructions. Resulting colonies were subsequently treated as separate clones, with clones 6 (cell line BNIi001-A) and 12 (cell line BNIi001-B) being chosen for further characterization because of their speed of growth and ability to form colonies (Fig. 1A). Karyotyping revealed no chromosomal abnormalities resulting from the reprogramming (Fig. 1B). Sequencing indicated retention of both mutations in the MATR3 and KIF5A genes (Fig. 1C) previously described in the patient (Johnson et al., 2014; Saez-Atienzar et al., 2020). Further, RT-qPCR was used to demonstrate significantly elevated mRNA levels of the pluripotency markers NANOG, OCT4, and SOX2 in the iPSCs, including the stem cell line CS25 obtained from the Cedars-Sinai iPSC core. Expression levels were compared to fibroblast cells from the patient and normalized to actin mRNA (Fig. 1D). Immunofluorescence was used to demonstrate protein expression of NANOG, OCT4, and SOX2 in the iPSCs (Fig. 1E). To confirm the ability of the iPSCs to generate all three germ layers, the iPSCs were allowed to spontaneously differentiate in culture. Differentiation potential was assessed using the TaqMan hPSC Scorecard (Thermo Fisher) (Fig. 1F), and both clones demonstrated the ability to differentiate into all three germ layers.
Materials and methods
Cell culture
Small pieces of a 2-3-mm forearm skin biopsy were plated on gelatin-coated tissue culture dishes. Fourteen to 28 days after initial plating, intense outgrowth of fibroblasts from the skin fragments was observed. Patient-derived fibroblasts were grown in DMEM medium with 10% FBS and Pen/Strep. Early (p2-p9) passages were used for reprogramming. iPS cell lines were generated by reprogramming fibroblasts with the Cytotune 2 Sendai Kit from Life Technologies, using a variation of the recommended protocol as described in Beers et al. (2015). iPSCs were plated and grown as a monolayer on Matrigel-coated plates in mTeSR medium (Stem Cell Technologies) at 37 °C, 5% CO2, and 95% humidity. Cells were passaged at 1:10 every 4-7 days.
Immunofluorescence assay
iPSCs were plated on coverslips, fixed in 4% paraformaldehyde for 5 min, and permeabilized using 0.03% v/v Triton-X in 1× Phosphate Buffered Saline (PBS). Cells were then blocked using Superblock at room temperature for 1 h. Cells were incubated in primary antibodies at 4 °C overnight, washed, then incubated in secondary antibody for 1 h at room temperature. Coverslips were mounted with Vectashield with DAPI (Vector Laboratories), and an Observer Z1 (Zeiss) confocal microscope was used to image the cells.
RNA extraction and qPCR
RNA for quantitative PCR was extracted from iPSCs at passage 8 using the PureLink™ RNA Mini Kit (Invitrogen) following the manufacturer's instructions. Following extraction, cDNA was generated using SuperScript IV VILO Master Mix (Thermo Fisher Scientific) following the manufacturer's instructions. Finally, qPCR was performed on a StepOnePlus Real-Time PCR machine (Applied Biosystems). qPCR was performed with SYBR Green (Applied Biosystems) at 95 °C for 2 min, followed by 40 cycles of 95 °C for 3 s and 60 °C for 30 s. The comparative ΔΔCt method was applied using the StepOne Software to calculate relative fold change.
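For readers unfamiliar with the comparative ΔΔCt calculation mentioned above, a minimal sketch of the arithmetic follows. This is not the StepOne Software itself; the function name and all Ct values are hypothetical illustrations.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative fold change of a target gene by the comparative 2^-ddCt method.

    dCt normalizes the target to a reference gene (here, actin); ddCt then
    compares the sample (iPSC) dCt with the control (fibroblast) dCt.
    """
    dct_sample = ct_target_sample - ct_ref_sample
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: NANOG in iPSCs vs. patient fibroblasts,
# both normalized to actin.
fc = fold_change(ct_target_sample=22.0, ct_ref_sample=16.0,
                 ct_target_control=30.0, ct_ref_control=16.0)
print(fc)  # 2^-((22-16) - (30-16)) = 2^8 = 256.0
```

A lower Ct means earlier amplification, so a sample dCt well below the control dCt yields a large fold change, as in the elevated pluripotency-marker expression reported above.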
Sequencing
Genomic DNA obtained from the iPSCs using the Wizard® SV Genomic DNA Purification System (Promega) was subjected to PCR amplification using the primers shown in Table 3. The PCR product was then used as the template for a standard sequencing reaction at the ASU Genomics Facility (MATR3) and at the Laboratory of Neurogenetics, NIA (KIF5A). Sanger sequencing was performed with capillary electrophoresis.
Karyotyping
Karyotyping of cells was performed by the Molecular Medicine Laboratory at St. Joseph's Hospital and Medical Center (Phoenix, Arizona). Cells were analysed at passage 10, and 20 cells were assessed at a resolution of 400-425 bands per haploid set.
Mycoplasma test
Mycoplasma contamination was analysed with the LookOut® Mycoplasma PCR Detection Kit.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material. | 1,317.4 | 2020-12-24T00:00:00.000 | [
"Biology"
] |
Exploring Data-Reflection Technique in Nonparametric Regression Estimation of Finite Population Total: An Empirical Study
In survey sampling, statisticians often estimate population parameters. This can be done using a number of available approaches, which include the design-based, model-based, model-assisted, and randomization-assisted model-based approaches. In this paper, regression estimation under the model-based approach has been studied. In regression estimation, researchers can opt to use parametric or nonparametric estimation techniques. Because of the challenges that one can encounter as a result of model misspecification in parametric regression, nonparametric regression has become popular, especially in the recent past. This paper explores this type of regression estimation. Kernel estimation usually forms an integral part of this type of regression, and a number of kernel functions are available for such use. The goal of this study is to compare the performance of different nonparametric regression estimators (the finite population total estimator due to Dorfman (1992) and the proposed finite population total estimator that incorporates the reflection technique in modifying the kernel smoother), the ratio estimator, and the design-based Horvitz-Thompson estimator. To achieve this, data were simulated using a number of commonly used models. From these data, the assessment of the estimators mentioned above has been done using the conditional biases. Confidence intervals have also been constructed with a view to determining the better estimator among those studied. The findings indicate that the proposed nonparametric estimator of the finite population total, which uses the data reflection technique, is better in the context of the analysis done.
Introduction
Many non-parametric techniques have in the recent past been used in regression estimation. They include techniques such as the k-nearest neighbors, local polynomial regression, spline regression, and orthogonal series [9, 19]. Besides this, and in an attempt to correct the unpleasant boundary bias induced by the conventional Nadaraya-Watson estimator, many statisticians have endeavoured to modify it. Some of these include Gasser-Müller [13] and Priestley-Chao (1972). The drawback of these techniques is that their bias components were managed but at the expense of higher variability. In the framework of the model-based approach, regression estimation is paramount in obtaining estimates for the non-sample part of the population. The flexible nature of the non-parametric technique has made it an attractive option in statistical research [6]. The technique entails the use of kernel smoothers that assign weights to the observations used in estimation. In this paper we explore yet another technique, reflection, as a way of modifying the kernel smoothers with a view to minimizing the boundary bias, the shortcoming of the Nadaraya-Watson estimator.
This paper has been organized as follows: in section 2, we give a brief review of the literature regarding non-parametric regression, in section 3; a new nonparametric regression estimator for finite population total is proposed. The estimator whose properties have been stated makes use of a modified kernel smoother obtained through reflection of data technique. Empirical analysis has been done in section 4 using some artificially simulated datasets. Discussion of results and conclusion is given in section 5.
Literature Review
A model-based non-parametric model (ξ) is conventionally of the form:

Y_i = m(X_i) + e_i

where Y_i is the variable of interest, X_i is the auxiliary variable, m is an unknown function to be determined using sample data, and e_i is the error term, assumed to be N(0, σ²). In nonparametric regression estimation, m(X_i) is an unknown function and can therefore be determined from the sampled data. Since this is a sample statistic, many estimators have been developed by statisticians. They include the famous Nadaraya-Watson estimator, which many have attempted to modify because of its weakness at the boundary. These can be found in Eubank [11] and Gasser and Müller [13].
A simple kernel estimator at an arbitrary point x, as presented by Priestley and Chao (1972), can be written as:

m̂(x) = (1/h) Σ_{i=1}^{n} (X_i − X_{i−1}) K((x − X_i)/h) Y_i

where h is the bandwidth, sometimes referred to as the tuning parameter or window width, and K(·) denotes a kernel function which is twice continuously differentiable, symmetric, and has support within the bounded interval [−1, 1], with ∫ K(u) du = 1, ∫ u K(u) du = 0, and ∫ u² K(u) du < ∞. For the derivation of the asymptotic bias term and the variance term, one can see Kyung-Joon and Shucany [15]. They are respectively given by:

Bias(m̂(x)) ≈ (h²/2) m''(x) ∫ u² K(u) du and Var(m̂(x)) ≈ (σ²/(nh)) ∫ K²(u) du.

The direct proportionality of the bias to the bandwidth means a small bandwidth will reduce it. While this is true for the bias, decreasing the bandwidth increases the variance, making the regression curve wiggly. The implication of this scenario is that an optimal bandwidth that minimizes the mean square error (MSE) is necessary. Although such a bandwidth can be obtained using calculus, it has never provided a solution to the boundary problem. Following this, Gasser and Müller [13] proposed optimal boundary kernels to address the problem. They suggested multiplying the truncated kernel at the boundary by a linear function. A generalized jackknife approach was proposed by Rice [16]. Eubank and Speckman [12] suggested the use of a "bias reduction theorem" to remove the boundary effects. Schuster [18] gave another technique for correcting the boundary bias by using the reflection-of-data method in density estimation. The same idea has also been reviewed by Albert and Karunamuni [1], among others, but notably within density estimation. This technique is examined further in this paper, but in the context of regression estimation. The technique is applied in estimating the finite population total, and its performance has been analysed against other known estimators, such as the ratio estimator

T̂_R = (Σ_{i∈s} y_i / Σ_{i∈s} x_i) Σ_{i∈U} x_i,

which is the Best Linear Unbiased Predictor (BLUP) under the ratio model; see Cochran [7], Cox [8] and Brewer [5].
Another approach to estimation is the design-based estimator suggested by Horvitz and Thompson [14], given by:

T̂_HT = Σ_{i∈s} y_i / π_i,

where π_i is the inclusion probability of unit i. The nonparametric regression estimator proposed by Dorfman [10] for the finite population total is:

T̂ = Σ_{i∈s} y_i + Σ_{i∉s} m̂(x_i),

where m̂(x) is the Nadaraya-Watson estimator. As noted above, this estimator suffers from boundary effects. But even with that weakness, nonparametric techniques in regression estimation have been known to outperform their counterparts, the fully parametric and semiparametric techniques. Dorfman [10] did a comparison between the population total estimators constructed from the famous design-based Horvitz-Thompson estimator and the Nadaraya-Watson estimator (the nonparametric regression estimator), where he found that the nonparametric regression estimator better reflects the structure of the data and hence yields greater efficiency. This regression estimator, however, suffered the so-called boundary bias, besides facing bandwidth selection challenges. Breidt and Opsomer [3] did a similar study on nonparametric regression estimation of the finite population total under two-stage sampling. Their study also reveals that nonparametric regression with the application of the local polynomial regression technique dominated the Horvitz-Thompson estimator and improved greatly on the Nadaraya-Watson estimator. Breidt et al. [4] carried out estimation of the finite population total under a two-stage sampling procedure, and their results also show that nonparametric regression estimation is superior to the standard parametric estimators when the model regression function is incorrectly specified, while being nearly as efficient when the parametric specification is correct.
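To make the comparison concrete, the following illustrative sketch contrasts the Horvitz-Thompson estimator (under simple random sampling without replacement, so π_i = n/N) with a Dorfman-style model-based total built on a Nadaraya-Watson smoother. The population, the Gaussian kernel, and the bandwidth are hypothetical choices for illustration, not those of the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: Y roughly quadratic in the auxiliary variable X.
N = 1000
X = rng.uniform(0.0, 1.0, N)
Y = 2.0 + 5.0 * X ** 2 + rng.normal(0.0, 0.2, N)

# Simple random sample without replacement, so pi_i = n / N for every unit.
n = 200
mask = np.zeros(N, dtype=bool)
mask[rng.choice(N, size=n, replace=False)] = True

# Horvitz-Thompson estimator: sum of y_i / pi_i over the sample.
t_ht = float(np.sum(Y[mask] / (n / N)))

def nadaraya_watson(x, Xs, Ys, h):
    """Nadaraya-Watson smoother with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x[:, None] - Xs[None, :]) / h) ** 2)
    return (w @ Ys) / np.sum(w, axis=1)

# Dorfman-style model-based total: observed sample y's plus smoothed
# predictions m_hat(x_i) for every non-sample unit.
t_np = float(np.sum(Y[mask])
             + np.sum(nadaraya_watson(X[~mask], X[mask], Y[mask], 0.05)))

print(round(t_ht), round(t_np), round(float(np.sum(Y))))
```

Both estimates land near the true total here; the model-based version exploits the auxiliary variable for all non-sample units, which is the source of the efficiency gains Dorfman [10] reports.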
We also propose an estimator under this nonparametric regression in the model-based framework.
Proposed Estimator
The proposed estimator takes the form:

T̂_ref = Σ_{i∉s} m̂_ref(x_i) + Σ_{i∈s} y_i,

where the first term, Σ_{i∉s} m̂_ref(x_i), is the non-sample total term that is to be estimated nonparametrically using the reflection technique. The data-reflection technique supplies additional data through the reflection method, so that this information is placed on the negative axis, thereby providing the kernel with the information required in this region.
Data Reflection Procedure
The following simple steps give the procedure for how the reflection of data is done. Let {(X_1, Y_1), (X_2, Y_2), …, (X_n, Y_n)} be the set of n observations in the sample. The data are augmented by adding the reflections of all the points in the boundary, to give the set {(X_1, Y_1), (−X_1, Y_1), (X_2, Y_2), (−X_2, Y_2), …, (X_n, Y_n), (−X_n, Y_n)}. If a kernel estimate m*(x) is constructed from this data set of size 2n, then an estimate based on the original data can be given by putting m̂(x) = 2m*(x) for x ≥ 0, and zero otherwise. This gives the modified general weight function

K_ref((x − X_i)/h) = K((x − X_i)/h) + K((x + X_i)/h).

It can be shown that the estimate will always have zero derivative at the boundary, provided the kernel is symmetric and differentiable. It has also been shown, under the section on the properties of the data-reflected technique, that the estimate is a p.d.f. for a symmetric kernel. In practice it will not usually be necessary to reflect the whole data set, since if X_i/h is sufficiently large, the reflected point −X_i/h will not be felt in the calculation of m*(x) for x > 0; hence reflection of points near 0 is all that is needed. Silverman [17], in his example, states that if K is the Gaussian kernel there is no practical need to reflect points beyond X_i > 4h.
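The reflection steps above can be sketched as follows. This illustration uses a Nadaraya-Watson smoother with a Gaussian kernel (an assumption made for the sketch; the paper's proposed estimator modifies the kernel smoother in its own way), in which case the factor of 2 in m̂(x) = 2m*(x) cancels in the weight normalization.

```python
import numpy as np

def nw_reflected(x, X, Y, h):
    """Nadaraya-Watson estimate at points x >= 0 using the data-reflection trick.

    The sample (X_i, Y_i) is augmented with its mirror image (-X_i, Y_i), so
    the kernel also 'sees' mass on the negative axis near the boundary at 0.
    For a self-normalizing smoother like Nadaraya-Watson, the factor of 2 in
    m_hat(x) = 2 m*(x) cancels in the weight normalization.
    """
    Xa = np.concatenate([X, -X])      # reflected design points
    Ya = np.concatenate([Y, Y])       # responses are copied unchanged
    u = (np.asarray(x, dtype=float)[:, None] - Xa[None, :]) / h
    w = np.exp(-0.5 * u ** 2)         # Gaussian kernel
    return (w @ Ya) / w.sum(axis=1)

# With a flat response the smoother reproduces it exactly, including at the
# boundary x = 0, and by symmetry the estimate has zero derivative there.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 100)
print(nw_reflected(np.array([0.0, 0.5]), X, np.full(100, 3.0), h=0.1))  # [3. 3.]
```

As Silverman's remark above suggests, with a Gaussian kernel only the points with X_i < 4h actually need reflecting; the full augmentation here is just the simplest correct implementation.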
Asymptotic Properties of the Proposed Estimator
It can be shown (one can see Albers [2] for a similar derivation under density estimation) that the asymptotic bias and the variance of the proposed estimator can be derived along the same lines as for the standard kernel estimator above.
Empirical Study
To examine the performance of the proposed estimator, simulation was done from various common distributions, and analysis was carried out to compare the estimators based on their confidence interval lengths and conditional biases. Table 1 gives the models used in the simulation.
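Since Table 1 is not reproduced here, the following sketch simulates from hypothetical stand-in models (a linear and a quadratic mean function with Gaussian errors) purely to illustrate the kind of data generation described; the model forms, coefficients, and error variance are assumptions, not the entries of Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the Table 1 models: a linear and a quadratic
# mean function with N(0, 0.1^2) errors on a uniform auxiliary variable.
models = {
    "linear": lambda x: 1.0 + 2.0 * x,
    "quadratic": lambda x: 1.0 + 2.0 * (x - 0.5) ** 2,
}

def simulate(model, N=2000, sigma=0.1):
    """Draw one artificial population (x, y) from the named mean model."""
    x = rng.uniform(0.0, 1.0, N)
    y = models[model](x) + rng.normal(0.0, sigma, N)
    return x, y

x, y = simulate("linear")
print(x.shape, round(float(y.mean()), 2))  # mean of 1 + 2X is close to 2
```

Each simulated population can then be sampled repeatedly and fed to the competing estimators, which is how the confidence lengths and conditional biases in the following subsections are obtained.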
Unconditional 95% C.I for the Respective Population Total Estimators
The 95% confidence interval for each of the estimators was computed using the formula

T̂ ± Z_{α/2} √(Var(T̂)),

and the interval length is therefore the difference between the upper limit and the lower limit. The results are presented in Table 2. Notice that the confidence lengths given by the proposed estimator in the first column are the smallest of all, except relative to the ratio estimator under the linear model.
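The interval computation above amounts to the following; the point estimate and variance used in the example are hypothetical numbers, not values from Table 2.

```python
import math

def confidence_interval(t_hat, var_t, z=1.96):
    """95% CI T_hat +/- z * sqrt(Var(T_hat)) and its length (z = Z_{alpha/2})."""
    half = z * math.sqrt(var_t)
    return t_hat - half, t_hat + half, 2.0 * half

# Hypothetical estimated total and variance.
lo, hi, length = confidence_interval(t_hat=3667.0, var_t=9025.0)
print(round(lo, 1), round(hi, 1), round(length, 1))  # 3480.8 3853.2 372.4
```

Because the length is 2·Z_{α/2}·√(Var(T̂)), comparing interval lengths across estimators at a fixed confidence level is equivalent to comparing their estimated variances, which is why a narrower interval signals a more precise estimator.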
Conditional Performance of the Respective Population Total Estimators
To study the conditional performance of the estimators, the estimates were examined against the sample means of the auxiliary variable. The figures portray that the proposed estimator is better placed than the other estimators examined, in terms of posting a smaller conditional bias.
Conclusion
The proposed estimator of the finite population total that uses the reflection technique shows narrower confidence lengths than the others considered in the study. Smaller 95% confidence lengths are a characteristic of a better estimator, one that is more precise and accurate.
Further, the graphs given in the figures above show that the proposed estimator outperforms the others and is almost conditionally unbiased.
It can therefore be concluded that, based on the analysis done in this study, the reflection technique can be of benefit in correcting the boundary bias usually experienced with the use of kernel estimators in regression estimation.
"Mathematics"
] |
Green and Mechanochemical One-Pot Multicomponent Synthesis of Bioactive 2-amino-4H-benzo[b]pyrans via Highly Efficient Amine-Functionalized SiO2@Fe3O4 Nanoparticles
An ecofriendly, magnetically retrievable amine-functionalized SiO2@Fe3O4 catalyst was successfully synthesized and affirmed by several physicochemical characterization tools, such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier-transform infrared spectroscopy (FT-IR), vibrating sample magnetometry (VSM), energy-dispersive X-ray spectroscopy (EDX), and powder X-ray diffraction. Thereafter, the catalytic performance of this environmentally benign NH2@SiO2@Fe3O4 catalyst was investigated in the one-pot multicomponent synthesis of 2-amino-4H-benzo[b]pyran derivatives. The reaction was simply achieved by grinding of various substituted aromatic aldehydes, dimedone, and malononitrile at room temperature under solvent and waste-free conditions with excellent yields and high purity. Moreover, the developed catalyst not only possesses immense potential to accelerate the synthesis of bioactive pyran derivatives but also exhibits several remarkable attributes like broad functional group tolerance, durability, improved yield, reusability, and recyclability. Besides, various other fascinating advantages of this protocol are milder reaction conditions, cost effectiveness, short reaction time, and simple work up procedures.
■ INTRODUCTION
Environmentally benign methods offering high efficiency, selectivity, high yields, and simple reaction procedures have become the most important targets in the field of organic chemistry. To achieve these, multicomponent reactions (MCRs) have recently emerged as among the most powerful tools in the synthesis of organic compounds and chemotherapeutic drugs, forming carbon−carbon and carbon−heteroatom bonds in a one-pot procedure. 1,2 In MCRs, a number of different starting materials (for example, three or more components) are allowed to react to give a desired product in a one-pot synthesis. 3,4 These reactions have great impact in organic synthesis as they provide various advantages, such as reduced reaction time, simple separation steps, and cost effectiveness, and they eventually provide better yields compared with multistep synthesis. 5 In addition, the solvent-free approach is also a widely accepted greener methodology, especially from an economic as well as a synthetic point of view, as the use of organic solvents has several disadvantages, including toxicity, tedious work-up procedures, and expense. 6,7 Further, such reactions are performed under environment-friendly conditions without using strong acids like HCl, H2SO4, etc., which can in turn cause corrosion, safety issues, and pollution problems. In this context, one-pot mechanochemical reactions, i.e., reactions achieved by grinding the reactants together using a mortar and pestle (also known as "grindstone chemistry"), offer significant advantages over multistep reactions, such as no column chromatography, no tedious work up, cost effectiveness, and short reaction times. 8a Nowadays, nanoparticles are considered the building blocks for various nanotechnology applications, and they frequently display unique size-dependent physical and chemical properties.
8b Sometimes, nanoparticles cannot be used directly as they are associated with certain limitations, such as toxicity, hydrophobicity, and unnecessary interactions. These problems can often be overcome by introducing an intermediate (layers or shells). Therefore, derivatization of nanoparticles is a prerequisite for any application, achieved either by stabilizing the functional cores or by activating the surfaces. In this context, silica-coated magnetic nanoparticles have attracted great attention owing to their remarkable properties, such as ease of synthesis, facile functionalization, thermal stability, low toxicity, and effortless separation from the reaction medium using an external magnet. Silica is considered one of the most flexible and robust surfaces known, 8c offering advantages such as chemical inertness and optical transparency (so that chemical reactions can be monitored spectroscopically). Hence, the modified silica shell increases mechanical stability as well as enables functionalization and thus has the potential for many new applications. Keeping this background in mind, we propose an ecofriendly grinding technology for the synthesis of 2-amino-4H-benzo[b]pyrans using amine-functionalized silica magnetic nanoparticles (NH 2 @SiO 2 @Fe 3 O 4 ). In the last few years, tetrahydrobenzo[b]pyrans and their analogues have attracted great attention as they are part and parcel of various heterocyclic natural products and drugs that exhibit anticoagulant, antitumor, anticancer, antiallergic, diuretic, and antibacterial properties. 9−13 Additionally, they exhibit a broad spectrum of applications as cognitive enhancers used for treating neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, acquired immune deficiency syndrome (AIDS), and Down's syndrome.
14,15 The 4H-benzo[b]pyran or chromene scaffold is found in several pharmacologically active drugs, for example, 2,7,8-triamino-4-(3-bromo-4,5-dimethoxyphenyl)-4H-chromene-3-carbonitrile (A), 16 2-amino-4-(furan-3-yl)-6,6-dimethyl-5-oxo-5,6,7,8-tetrahydro-4H-chromene-3-carbonitrile (E), 19 and 2-amino-6,6-dimethyl-5-oxo-4-(thiophen-3-yl)-5,6,7,8-tetrahydro-4H-chromene-3-carbonitrile (F), 19 shown in Figure 1. Because of the versatile utilization of substituted pyran analogues in medicinal chemistry, there is an upsurge of interest in developing simple, inexpensive, and high-yielding methods for their synthesis. 20,21 Our research group is focused on the design and synthesis of newer antimalarial drugs, single-crystal structure analysis, and catalysis of small molecules. 22−24 In continuation of this work, we synthesized the new, efficient, and economically benign catalyst NH 2 @SiO 2 @Fe 3 O 4 and optimized its efficacy in the synthesis of tetrahydrobenzo[b]pyrans. Adopting the fascinating advantages of MCRs, herein we report a library of 2-amino-4H-benzopyran derivatives via one-pot three-component condensation of aromatic aldehydes, malononitrile, and dimedone catalyzed by amine-functionalized silica magnetic nanoparticles (ASMNPs). A large number of derivatives can be rapidly synthesized in high yield using this solvent-free grinding multicomponent technique at room temperature.
■ RESULTS AND DISCUSSION Catalyst Preparation. The procedure for synthesis of Fe 3 O 4 and SMNPs is provided in the Experimental Section. To obtain the amine-functionalized SiO 2 @Fe 3 O 4 , 3-aminopropyltriethoxysilane (APTES) (0.5 mL) was slowly added to 100 mL of the ethanolic solution of SMNPs (0.1 g) and then the resulting mixture was allowed to stir at room temperature for 24 h. 25 The resulting NH 2 @SiO 2 @Fe 3 O 4 (ASMNPs) was separated magnetically and washed several times with ethanol to remove any unreacted silylating agent and dried under vacuum. The overall synthesis is depicted in Figure 2.
Catalytic Activity Test. The catalytic efficiency of NH 2 @ SiO 2 @Fe 3 O 4 (ASMNPs) was investigated in the synthesis of tetrahydrobenzo[b]pyran analogues, and the reaction conditions were optimized in terms of the amount of catalyst, reaction time, and yields. A model reaction between 4-bromobenzaldehyde, malononitrile, and dimedone was monitored for optimization of various parameters, as demonstrated in Table 1.
Effect of the Amount of Catalyst and Solvent. The effect of the amount of catalyst on the selected model reaction was examined and is summarized in Figure 3. It is evident from Figure 3 that there was no product formation in the absence of the NH 2 @SiO 2 @Fe 3 O 4 catalyst. The percentage yield of the product increased as the amount of catalyst was raised from 2 to 10 mg. On further increasing the amount of catalyst to 15 mg, no additional augmentation in yield was observed. Moreover, varying the solvent among water, ethanol, and N,N-dimethylformamide (DMF) produced no appreciable improvement in the percentage yield, which demonstrates that the catalyst, rather than the solvent, is responsible for the short reaction times and excellent yields.
Quantification of the Active Amine Sites. The number of amine sites on the surface of the SiO 2 nanoparticles can be easily determined by a simple acid−base back titration method, as reported elsewhere. 26 In brief, the amine-modified silica nanoparticles (10 mg) were dispersed in 1.0 mM HCl (20 mL) and the contents were stirred for around 45 min. The nanoparticles were then separated by centrifugation at 15 000 rpm for 15 min, and 10 mL of the supernatant was collected and titrated with standardized 1.0 mM NaOH solution until the neutralization point was reached, using phenolphthalein as an indicator, to evaluate the total number of active amine sites. The number of amine sites calculated by this acid−base back titration was found to be 2.62 sites/nm 2 .
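The back-titration arithmetic above can be sketched as follows. The NaOH volume consumed (7.0 mL) and the specific surface area (140 m²/g) used below are hypothetical illustration values, not measurements reported in this work; only the HCl/NaOH concentrations, volumes, and sample mass are taken from the procedure.

```python
# Acid-base back titration: estimate amine site density on silica NPs.
# Inputs marked "assumed" are hypothetical illustration values.

AVOGADRO = 6.02214076e23

def amine_site_density(v_hcl_l, c_hcl, v_aliquot_l, v_total_l,
                       v_naoh_l, c_naoh, mass_g, ssa_m2_per_g):
    """Return amine sites per nm^2 from back-titration data."""
    n_hcl_initial = v_hcl_l * c_hcl               # mol HCl added to the NPs
    n_hcl_aliquot = v_naoh_l * c_naoh             # unreacted HCl in the aliquot
    n_hcl_left = n_hcl_aliquot * (v_total_l / v_aliquot_l)  # scale to full volume
    n_amine = n_hcl_initial - n_hcl_left          # HCl consumed by surface -NH2
    area_nm2 = mass_g * ssa_m2_per_g * 1e18       # total surface area in nm^2
    return n_amine * AVOGADRO / area_nm2

# From the text: 10 mg NPs, 20 mL of 1.0 mM HCl, 10 mL aliquot titrated
# with 1.0 mM NaOH.  NaOH volume (7.0 mL) and specific surface area
# (140 m^2/g) are assumed for illustration.
density = amine_site_density(v_hcl_l=0.020, c_hcl=1.0e-3,
                             v_aliquot_l=0.010, v_total_l=0.020,
                             v_naoh_l=0.0070, c_naoh=1.0e-3,
                             mass_g=0.010, ssa_m2_per_g=140.0)
print(f"{density:.2f} amine sites per nm^2")
```

With these assumed inputs the sketch yields a density of the same order as the reported 2.62 sites/nm²; the actual value depends on the measured titration volume and surface area.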
Green Chemistry Metrics. Next, we examined the green chemistry parameters for compound 4c. Table 2 outlines several metrics used for evaluation of the green approach in organic synthesis under optimized conditions. It can be seen clearly from Table 2 that the calculated values of the green metrics, namely, E-factor, process mass intensity, reaction mass efficiency, carbon efficiency, and atom economy, are close to the ideal values. All calculations are provided in the Supporting Information.
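For reference, the conventional textbook definitions of the metrics listed in Table 2 are as follows (these are the standard forms, not equations reproduced from this article):

```latex
\begin{align*}
\text{Atom economy (AE)} &= \frac{M_{\text{product}}}{\sum M_{\text{reactants}}} \times 100\% \\
\text{E-factor} &= \frac{m_{\text{total waste}}}{m_{\text{product}}} \\
\text{Process mass intensity (PMI)} &= \frac{m_{\text{total input, incl. solvents}}}{m_{\text{product}}} \\
\text{Reaction mass efficiency (RME)} &= \frac{m_{\text{product}}}{\sum m_{\text{reactants}}} \times 100\% \\
\text{Carbon efficiency (CE)} &= \frac{n_{\text{C in product}}}{\sum n_{\text{C in reactants}}} \times 100\%
\end{align*}
```

The ideal values are AE, RME, and CE approaching 100%, E-factor approaching 0, and PMI approaching 1, which is the sense in which the Table 2 values are close to ideal.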
Further, on the basis of a literature survey, we investigated the efficacy of ASMNPs in comparison with previously reported catalysts (Table 3). It is evident from Table 3 that ASMNPs give high-yielding products. Additionally, ASMNPs provide milder reaction conditions and shorter reaction times without the use of solvents.
General Method for Synthesis of 2-Amino-4H-benzopyrans (4a−4p). Using the optimized reaction conditions, we investigated various electron-releasing and electron-withdrawing benzaldehydes, keeping malononitrile and dimedone constant. Equivalent amounts of benzaldehyde, dimedone, and malononitrile were taken, catalyst ASMNP (10 mg) was added, and all ingredients were ground for about 2−10 min at room temperature, which resulted in a wax- or jelly-like reaction mixture. Subsequently, 3−4 mL of 95% ethanol was added to dissolve the reaction mixture (see the Experimental Section).

Mechanistic Pathway. The mechanism of the formation of tetrahydrobenzo[b]pyran derivatives in the presence of the ASMNP catalyst via a three-component coupling strategy is driven specifically by the basic amine sites. It is suggested to occur through the formation of an arylidene malononitrile via Knoevenagel condensation between malononitrile and the aromatic aldehyde in the first step. Michael addition of dimedone to the arylidene malononitrile forms the intermediate in the second step. Finally, intramolecular cyclization followed by protonation of the intermediate gives the desired product and regenerates the catalyst in the reaction mixture. The plausible mechanism for the synthesis of the desired tetrahydrobenzo[b]pyran product is shown in Figure 4.
Further, the molecular structure of compound 4m was established by single-crystal X-ray diffraction (XRD) analysis. Single-crystal X-ray crystallography is one of the most comprehensive techniques for determining the chemical structure of compounds. One of the most important requirements for obtaining high-accuracy crystallographic structures is that a "good crystal" of the synthesized compound must be found for mounting on the Oxford Diffraction Xcalibur diffractometer. Although all synthesized 2-amino-4H-benzopyrans (4a−4p) were crystallized, the crystal of one compound in the series, viz., 4m, had well-formed faces and edges with no cracks, striations, or bubbles in the diffractometer. Hence, we selected compound 4m for single-crystal X-ray analysis to confirm the structure of the synthesized 2-amino-4H-benzopyrans. Figure 5 represents the crystal packing arrangement of compound 4m, which further proves the stability of the tetrahydrobenzo[b]pyran compounds. The crystal structure was solved by using Olex2 1.2 38 and WinGX software (Shelx86 method). 39 All parameters, such as the crystal data and structure refinement table (Table S1), fractional atomic coordinates (×10 4 ) and equivalent isotropic displacement parameters (Å 2 × 10 3 ) (Table S2), anisotropic displacement parameters (Å 2 × 10 3 ) for compound 4m (Table S3), bond lengths (Table S4), and bond angles (Table S5), are provided in the Supporting Information of this article.
ACS Omega
http://pubs.acs.org/journal/acsodf Article

Recycling Procedure of the Catalyst. After completion of the first reaction using NH 2 @SiO 2 @Fe 3 O 4 (ASMNPs), the reaction mixture was diluted with ethanol and then the catalyst was separated by using an external magnet. The recovered catalyst was washed several times with ethanol to ensure no contamination, dried, and reused in the second reaction. Similar steps were carried out after each subsequent reaction. The recovered catalyst can be used for up to eight cycles with no significant loss of catalytic activity. To further verify this, the model reaction was tested three times. The plot of conversion percentage versus the number of runs for eight cycles, repeated three times, is shown in Figure 6. We observed no significant deviation in the conversion percentage across the three runs.

Fourier Transform Infrared Spectroscopy (FT-IR). FT-IR spectroscopy was employed for qualitative detection and confirmation of the different functional groups present in all three nanocomposites. The comparative FT-IR spectra of the three powdered samples were recorded using KBr pellets in the range of ν̅ = 400−4000 cm −1 (Figure 7a).

Powder X-ray Diffraction (PXRD) Analysis. The PXRD pattern of SMNPs (Figure 8b) retained the diffraction peaks of the magnetic core, indicating the retention of the crystalline magnetic core, along with a weak broad hump centered at 2θ = 20−25°, which confirmed the presence of an amorphous silica coating around the magnetic core. 42 Apart from these, no extra peaks were observed, demonstrating highly pure magnetic nanoparticles. Figure 8c,d represents the powder X-ray diffraction patterns of ASMNPs and recovered ASMNPs, respectively.
Transmission Electron Microscopy (TEM) Analysis. To examine the surface morphology and size of the synthesized nanoparticles, TEM analysis of MNPs, SMNPs, ASMNPs, and recovered ASMNPs was conducted. As the silica coating was applied to the iron oxide nanoparticles, the particle size increased, and after the incorporation of amine sites, the size of the nanoparticles was approximately 28 nm. Further, the size of the recycled nanoparticles decreased (to approximately 20 nm after eight runs) owing to the grinding process. The high-resolution TEM micrograph (Figure 9a) of the Fe 3 O 4 NPs showed that they are composed of tiny spherical particles with an average diameter of 20 nm. Furthermore, the HRTEM of ASMNPs also confirmed the spherical morphology of these nanoparticles (Figure 9c). HRTEM of the recovered catalyst after eight consecutive runs (Figure 9d) showed no significant changes in morphology.
Scanning Electron Microscopy (SEM) Analysis. For particle morphology and texture elucidation, SEM images of the synthesized MNPs, SMNPs, ASMNPs, and recovered ASMNPs were also obtained and are shown in Figure 10. The rougher structures of SMNPs and ASMNPs could be attributed to successful surface coating. It can also be concluded that the size of the nanoparticles after coating with silica and anchoring of the amine groups did not change significantly, revealing that the MNPs were coated by a thin layer of silica. The thickness of the silica layer on the surface of the MNPs can be increased by varying the molar ratio of H 2 O/tetraethyl orthosilicate (TEOS); 43 on increasing this molar ratio, the thickness of the silica coating also increases. The SEM images supported the formation of spherically shaped Fe 3 O 4 NPs, in accordance with the TEM analysis.
Elemental and Compositional Analysis. Energy-dispersive X-ray (EDX) spectroscopy represents a powerful tool for confirming the elemental composition of the synthesized nanoparticles.

Vibrating Sample Magnetometric Analysis (VSM). Magnetization measurements were carried out at room temperature using a vibrating sample magnetometer (VSM) over an external magnetic field range of −10 000 to +10 000 Oe. The magnetic hysteresis curves of MNPs, SMNPs, and ASMNPs (Figure 12a) indicated the superparamagnetic behavior of these nanoparticles. 44
■ CONCLUSIONS
In conclusion, we have successfully synthesized a variety of tetrahydrobenzo[b]pyrans in good to excellent yields using efficient and economic amine-functionalized magnetic nanoparticles under solvent- and waste-free reaction conditions. The high tolerance of this procedure toward different functional groups, easy work-up of the desired products, high reusability of the catalyst, and short reaction time are additional advantages for its application in academic and industrial settings.
■ EXPERIMENTAL SECTION
General Remarks. Ferric sulfate and ferrous sulfate were purchased from Sisco Research Laboratory (SRL). Tetraethyl orthosilicate (TEOS) and APTES were obtained from Sigma-Aldrich. All other reagents used were of analytical grade and obtained from Spectrochem and Merck. Double-distilled water was used throughout the experiments. Thin-layer chromatography was performed on Merck precoated silica gel aluminum plates with 60 F 254 indicator. The structural assignments of the synthesized compounds were based on 1 H NMR, 13 C NMR, mass spectrometry, and single-crystal X-ray diffraction analysis. Nuclear magnetic resonance (NMR) spectra were acquired at 400 and 100 MHz for 1 H NMR and 13 C NMR, respectively, on a JEOL JNM-ECS 400 spectrometer using CDCl 3 and dimethyl sulfoxide (DMSO)-d 6 as solvents. Tetramethylsilane (TMS) was used as the reference in NMR, and data were processed with the instrument's Delta software. Coupling constants (J) are reported in hertz, and chemical shift values are reported in ppm for 1 H NMR, with multiplicities: s (singlet), d (doublet), and m (multiplet). High-resolution mass spectra were generated by an Agilent ESI-TOF mass spectrometer. X-ray analysis was carried out on an Oxford Diffraction Xcalibur four-circle diffractometer with an Eos CCD detector using graphite-monochromatized Mo-Kα radiation (λ = 0.71073 Å).
The morphology of the synthesized MNPs and their derivatives obtained after modification was examined using a TECNAI 200 kV transmission electron microscope (FEI, Electron Optics) equipped with digital imaging and a 35 mm photography system, and scanning electron microscopy (SEM) (JEOL Japan, model JSM 6610LV). The X-ray diffraction patterns of Fe 3 O 4 NPs and SiO 2 @Fe 3 O 4 NPs were recorded using Cu Kα radiation (λ = 1.5406 Å) on a powder X-ray diffractometer (Bruker D8 Advanced, Germany) at room temperature over a 2θ range of 10−80°. The FT-IR spectra were recorded using a PerkinElmer 2000 FT-IR spectrophotometer in the range of 400−4000 cm −1 at room temperature using KBr pellets. The magnetic properties of the bare and immobilized nanoparticles were determined with a vibrating sample magnetometer (EV-9, MicroSense, ADE) with the magnetic field swept between −10 000 and +10 000 Oe at room temperature.
Experimental Procedure for the Synthesis of Fe 3 O 4 (MNPs). Fe 3 O 4 NPs were prepared by the coprecipitation method, as reported elsewhere. 25 Briefly, Fe 2 (SO 4 ) 3 (6.0 g) and FeSO 4 (4.2 g) were dissolved in 250 mL of deionized water and the reaction mixture was stirred at 60 °C until a yellowish-orange solution appeared. Then, ammonium hydroxide (25%) was added slowly to adjust the pH of the solution to 10, and the reaction mixture was stirred continuously for 1 h at 60 °C. The NPs precipitated as a black substance, which was separated by an external magnet, washed with deionized water and ethanol several times until the washings showed pH 7, and finally dried under vacuum.
Experimental Procedure for the Synthesis of SMNPs. The coating of silica over the prepared MNPs was achieved using the sol−gel approach. 43 Briefly, a suspension of 0.5 g of MNPs and 0.1 M HCl (2.2 mL) in a mixture of ethanol (200 mL) and water (50 mL) was prepared under sonication for 1 h at room temperature. After this period, 25% NH 4 OH (5 mL) was added to this solution, followed by the dropwise addition of TEOS (1 mL), and the resulting solution was stirred at 60 °C for 6 h. The resulting SMNPs were then separated magnetically, washed several times with ethanol, and dried under vacuum.
Procedure for Catalytic Activity Test for Compound 4c. A mixture of 4-bromobenzaldehyde (1 mmol), dimedone (1 mmol), malononitrile (1 mmol), and ASMNPs (10 mg) was taken in a mortar and ground at room temperature until it converted into a thick paste-like reaction mixture. After completion of the reaction, 3−4 mL of 95% ethanol was added to the reaction mixture to dissolve the thick paste. Subsequently, the ASMNPs were recovered with the help of a magnet, washed thoroughly with ethanol, dried overnight, and reused. The product was purified by simple recrystallization from ethanol.

Supporting Information. Calculation of green chemistry parameters; single-crystal X-ray crystallographic parameters; 1 H NMR and 13 C NMR chemical shift values; 1 H NMR, 13 C NMR, and ESI−MS spectra of all compounds (PDF)
MsDD: A novel NDN producer mobility support scheme based on multi-satellite data depot
Named Data Networking (NDN) is an important future network framework, and producer mobility within NDN is a primary challenge. In environments characterized by frequent producer mobility, traditional producer mobility support schemes still suffer from issues such as excessive consumer delay and interest packet loss. With the development of sixth-generation communication technology (6G), integrating ground networks with satellites has emerged as a potential solution to these problems. In this paper, we propose an NDN producer mobility support scheme based on a multi-satellite data depot, named MsDD. The proposed scheme proactively caches producer data in a data depot built on a low-earth-orbit satellite constellation to minimize the impact of NDN producer mobility on network performance. We design a data depot construction strategy, an in-network caching strategy, and a routing strategy based on forwarding hints to facilitate effective communication in satellite networks. Experimental results using ndnSIM demonstrate that, compared with other existing schemes, MsDD can effectively shield the impact of producer mobility on consumer delay, delivery ratio, and signaling overhead, with a clear advantage in terms of consumer delay and delivery ratio.
Introduction
With the rapid development of network communication technology and hardware, the connectivity provided by the Internet and its low storage costs have made a vast amount of new content accessible. As a result, the volume of data on the network is growing at an astonishing rate. Statistics show that the amount of information on the Internet reached 500 exabytes in 2008, surpassed a zettabyte in 2010, grew to 1.8 ZB in 2011, and reached 44 ZB in 2020 [1]. Consequently, network users are increasingly focusing on the content itself rather than its storage location. Additionally, the growth rate of global mobile data traffic is nearly twice that of fixed IP traffic, and consumer Video-on-Demand (VoD) traffic is expected to nearly double, indicating that mobile multimedia communication will gradually become mainstream. With the explosive growth in content volume and the rise of video-on-demand and live streaming, future network architectures will face higher requirements for bandwidth, latency, and other aspects of content transmission [2].
Named Data Networking (NDN) is one of the future network architectures that meets these higher requirements [3]. Unlike traditional IP networks, NDN is content-centric rather than host-centric, placing content at the center of network communication rather than relying solely on host addresses [4]. As a popular new-generation network architecture in recent years, NDN has many unique advantages and much potential. At the same time, it also brings corresponding challenges for researchers, such as mobility support [5]. Mobility support has been a hot topic in NDN research, involving both consumer and producer mobility issues [6]. Because NDN inherently supports consumer mobility through the retransmission mechanism of interest packets, the current focus of NDN mobility support research is on supporting producer mobility [7].
Most existing NDN producer mobility support schemes use reactive techniques to restore the network after producer mobility. It is therefore crucial for caching-based schemes to move the produced data to an easily accessible location to support seamless producer mobility [8]. Currently, caching-based schemes use ground routers and other devices as cache points or aggregation points for content data. However, with the rise of 6G communication technology and advancements in satellite communication hardware, supporting seamless producer mobility in an integrated space-ground environment has become achievable [9]. Therefore, in this paper, to minimize the impact of producer mobility on the NDN network and ensure communication quality, we propose an NDN producer mobility support scheme based on a multi-satellite data depot, called MsDD. MsDD exploits the characteristics of low-earth-orbit satellites, such as low latency, strong signal, low cost, wide coverage, and easy access by ground devices, to form a distributed data depot from multiple low-earth-orbit satellite nodes. The data packets generated by producers can be cached in this data depot, allowing interest packets and data packets to aggregate within the depot to maintain communication between consumers and mobile producers. Specifically, the main contributions of this paper are as follows: • We construct a multi-satellite data depot model consisting of a Walker constellation and GEO (Geostationary Orbit) satellites. This data depot aggregates interest packets and data packets within the depot to effectively shield the impact of producer mobility on network performance.
• We design a forwarding-hint-based routing strategy that takes into account the unique attributes of the Walker constellation.This routing strategy ensures the effective transmission of data packets and interest packets within the satellite network.
• We design a probability-based in-network caching strategy for MsDD.This strategy caches data packets with different probabilities based on the popularity of the data packets and the priority of the satellite nodes.It reduces cache redundancy and decreases the retrieval time of interest packets within the data depot.
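The probability-based caching idea described in the last bullet can be sketched as follows. The linear weighting of packet popularity and node priority, the weight values, and the function names are illustrative assumptions; the exact MsDD formula is defined later in the paper and is not reproduced here.

```python
import random

def cache_probability(popularity, node_priority, w_pop=0.6, w_pri=0.4):
    """Combine normalized content popularity (0..1) and satellite node
    priority (0..1) into a caching probability.  The linear weighting
    is a hypothetical choice for illustration only."""
    return w_pop * popularity + w_pri * node_priority

def should_cache(popularity, node_priority, rng):
    """Bernoulli caching decision: cache with the computed probability."""
    return rng.random() < cache_probability(popularity, node_priority)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
# A popular packet at a high-priority (manager) node is cached far more
# often than an unpopular packet at an ordinary node.
hits_hot = sum(should_cache(0.9, 1.0, rng) for _ in range(1000))
hits_cold = sum(should_cache(0.1, 0.2, rng) for _ in range(1000))
print(hits_hot, hits_cold)
```

The design intent matches the bullet: hot content concentrates at high-priority nodes, reducing cache redundancy and shortening retrieval paths inside the depot.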
The remaining sections of this paper are organized as follows. Section 2 provides an overview of the relevant background technologies of NDN and related work on producer mobility support. Section 3 introduces the proposed MsDD producer mobility support scheme. Section 4 explains the results of the simulation experiments. Finally, Section 5 summarizes our work.
Background and related work
Named Data Networking (NDN), as an implementation architecture of Information-Centric Networking (ICN), is a highly promising future network architecture. In NDN, there are mainly two types of packets: Interest packets and Data packets, which form the basis of all communication [10]. In NDN, the requester of content is called the consumer, and the provider of content is called the producer. Interest packets and data packets carry a common name indicating the content required for a given exchange. Consumers request content by sending interest packets toward producers, and these interest packets are routed based on the longest prefix match of the content name, using the Forwarding Information Base (FIB) to guide them toward the producers. When producers receive interest packets, they respond with data packets containing the requested content [11].
In NDN, although content names are decoupled from host addresses, name prefixes remain bound to the topological location of the producer for routing purposes [11]. When producers move, interest packets may still be routed to their old locations, resulting in packet loss because the desired content cannot be retrieved [12]. After such packet loss, relying solely on the interest packet retransmission mechanism to restore communication is not feasible in large-scale networks. Therefore, designing a more reliable NDN producer mobility support scheme is a crucial topic in current NDN research.
Articles [13][14][15] proposed anchor-based approaches, which are designed after the Mobile IP protocol [16] used in the current Internet. Article [13] proposed a classic scheme, KITE, which allocates a non-mobile anchor point for all mobile nodes to support producer mobility, but the anchor point may cause problems such as a single point of failure and long forwarding paths. The scheme proposed in article [14] uses an intermediate content router (CR) and a home domain content router (CRH) to build a home domain router, and transmits interest and data packets through tunnels. However, like Mobile IPv6, the home domain router tunnel uses encapsulation and de-encapsulation, which increases the interest retransmission rate and packet drop rate. The scheme proposed in article [15] uses an anchor node to forward interest packets and an update packet to refresh the prefix information of the anchor node. Compared with KITE [13], this scheme shows better performance and reduces retransmission of interest packets. However, it does not specify how anchor nodes are selected and suffers from problems such as inefficient data forwarding paths.
Scheme [17] enhances seamless mobility support in NDN by modifying the FIB and NDN packets at access routers. Its advantage is that it can provide uninterrupted content delivery and reduce handoff latency, but its communication overhead in the wide area network is high, and it increases the distance and hop count of communication, which readily causes loss and retransmission of interest packets. Scheme [18] uses a naming server to track the location of producers. The naming server facilitates communication between producers and consumers to a certain extent, but in environments where producers move frequently, the naming server may provide outdated locations, leading to the loss of interest packets and interest retransmissions. Article [19] proposed an anchorless mobility support scheme named MAP-ME, which supports real-time communication during producer mobility by dynamically updating the FIB tables of a minimal set of routers in the network. To address packet loss caused by FIB update delays, the scheme includes an additional protocol named "Notification/Discovery". The most notable drawback of MAP-ME is the triangular routing problem, which can result in unnecessary delays.
Most caching-based schemes [20][21][22][23] employ various methods and techniques to mitigate the impact of producer mobility. Article [20] designed a scheme named PNPCCN, in which the producer caches requested content on neighboring routers, based on popularity and rarity, before moving. This allows consumers to retrieve these contents easily from the neighboring routers. However, this scheme only supports specific content during producer mobility, and requests for unpopular content might be lost. The scheme proposed in article [21] not only pushes content packets but also maintains content availability by placing copies of the data packets, which can lead to significant unnecessary overhead due to excessive redundant copies. Article [22] introduced a scheme named T-Move, which supports producer mobility by caching selected content on an edge router within the network. This scheme enhances router functionality by adding content names, trendiness, and frequency information. Additionally, it introduces the control messages GETT (GET Trendiness) and REPT (REPort Trendiness) to obtain recent router information. However, T-Move requires broadcasting these messages to update FIB and cache information, which inevitably increases signaling overhead during handovers. Article [23] designed a proactive caching scheme based on predictive techniques, utilizing location prediction and user access patterns to proactively cache potential data in real time at an optimal location near the consumer. This allows consumers to retrieve the required content without their interest packets reaching the mobile producer, thereby avoiding unnecessary delays. Nonetheless, this scheme generates additional signaling and computational overhead, and requests for unpopular content may be lost, triggering interest retransmissions.
Caching-based producer mobility support schemes need to prioritize the design of content caching strategies. Our proposed MsDD focuses primarily on in-network caching. LCE (Leave Copy Everywhere) [24] is the default in-network caching scheme in NDN, which allows routers to cache all incoming data packets. However, this leads to high cache redundancy and low cache utilization. LCD (Leave Copy Down) [25], originally proposed for hierarchical web caching systems, caches content only at the next hop below the current serving node along the path to the requester whenever a content request is served from a cache or the content source. In Prob [26], cache nodes decide whether to cache incoming data with a fixed probability. ProbCache [27] is a dynamic probabilistic caching mechanism that calculates the caching probability based on the cache capacity of the remaining routers along the transmission path and the hop count from the current cache node to the server. Betw/EgoBetw [28] considers the centrality of cache nodes; however, this strategy has high complexity and requires global node information and inter-node communication before network operation. CCS/CES [29] is a lightweight, reactive, NDN-compliant caching scheme that applies two different caching strategies by dividing the network into edge and core segments. These strategies jointly consider content popularity and freshness and do not require global node information. The PTF scheme proposed in article [30] caches content by calculating a cache benefit that comprehensively considers content popularity, cache location, and content freshness; PTF predicts the cache benefit of new content using a grey model.
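The three baseline policies above (LCE, LCD, and Prob) can be contrasted with a minimal sketch. We assume a delivery path of nodes indexed 0 (serving node) through path_len − 1 (consumer side); the function names and the probability value are illustrative, not taken from the cited papers.

```python
import random

def lce(path_len, hit_index=None, rng=None):
    """Leave Copy Everywhere: every node on the delivery path caches."""
    return set(range(path_len))

def lcd(path_len, hit_index, rng=None):
    """Leave Copy Down: only the next hop below the serving node caches."""
    return {hit_index + 1} if hit_index + 1 < path_len else set()

def prob(path_len, hit_index=None, rng=None, p=0.3):
    """Prob(p): each node caches independently with probability p."""
    return {i for i in range(path_len) if rng.random() < p}

rng = random.Random(7)
# Content served from node 0 travels through nodes 1..4 to the consumer.
print(sorted(lce(5)))            # all five nodes cache a copy
print(sorted(lcd(5, 0)))         # only node 1 caches
print(sorted(prob(5, rng=rng)))  # a random subset caches
```

The sketch makes the redundancy trade-off concrete: LCE fills every cache on the path, LCD moves content one hop per request, and Prob thins out copies at the cost of occasional cache misses.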
Multi-satellite data depot
In this section, we systematically introduce how MsDD operates to address the mobility issues of NDN producers. The structure of this section is illustrated in Fig 1.
The model of MsDD
The MsDD model consists of three layers: the ground layer, the LEO (Low Earth Orbit) layer, and the GEO (Geostationary Orbit) layer, as illustrated in Fig 2. In the ground layer, consumers and mobile producers within the coverage of satellites can communicate directly with LEO-layer satellites using hardware devices. The LEO and GEO layers collectively form a distributed data depot. The LEO layer comprises m ordered polar orbit planes, forming a Walker constellation, with each plane uniformly distributing n ordered LEO satellites. Each LEO satellite is uniquely identified by the prefix /sat/OP h /SP i , where /sat denotes a satellite node, /OP h denotes its orbit, and /SP i denotes its sequence in the orbit; the LEO satellites thus form a well-defined set of nodes. The GEO layer consists of three GEO satellites evenly distributed at intervals above the equator, forming a GEO constellation whose orbital plane coincides with the equatorial plane. These three GEO satellites completely cover the entire LEO layer and are responsible for sending update information to the LEO satellites.
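The naming convention above can be illustrated with a short sketch. This is a minimal example (the function name and the specific constellation size are our own illustration, not part of MsDD); the 6 × 11 layout corresponds to an Iridium-like constellation:

```python
def leo_prefixes(m, n):
    """Enumerate the unique name prefixes of an m-plane Walker constellation
    with n satellites per plane, following the /sat/OP_h/SP_i naming."""
    return [f"/sat/OP{h}/SP{i}" for h in range(1, m + 1) for i in range(1, n + 1)]

# Iridium-like LEO layer: 6 polar orbit planes, 11 satellites per plane
prefixes = leo_prefixes(6, 11)
print(len(prefixes))              # 66
print(prefixes[0], prefixes[-1])  # /sat/OP1/SP1 /sat/OP6/SP11
```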
In addition to the FIB (Forwarding Information Base), PIT (Pending Interest Table), and CS (Content Store) tables inherent to NDN nodes, we introduce a Data List table, denoted DL. Within each orbit, several LEO satellites are selected as managers according to the MsDD manager rule. Each manager and the three GEO satellites carry an identical DL. Each DL entry comprises the location (node prefix) where a data packet is first cached and the prefix of the data packet name.
Manager rule of MsDD
In MsDD, we specify that each orbit is managed by M managers, spaced at intervals of ⌈n/M⌉ − 1 nodes within the orbit. Additionally, in MsDD, we define that if a manager on orbit /OP h is S h,i , then the corresponding manager on the adjacent orbit /OP h+1 is S h+1,i+1 , and this pattern continues for the other orbits. This arrangement ensures a roughly equal number of managers across different latitudes.
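As a rough sketch of this rule (our own illustration; the `manager_indices` helper and the shift handling are assumptions, not code from the paper), managers can be placed every ⌈n/M⌉ positions around an orbit, with the whole pattern shifted by one on each adjacent orbit:

```python
import math

def manager_indices(n, M, shift=0):
    """Sequence numbers of the M managers on an orbit of n satellites.

    Consecutive managers are separated by ceil(n/M) - 1 ordinary nodes,
    i.e. their sequence numbers differ by ceil(n/M); `shift` moves the
    pattern by one per adjacent orbit, as the MsDD rule specifies.
    """
    step = math.ceil(n / M)
    return [(shift + k * step) % n + 1 for k in range(M)]

print(manager_indices(11, 4))           # [1, 4, 7, 10]  (orbit OP_h)
print(manager_indices(11, 4, shift=1))  # [2, 5, 8, 11]  (adjacent orbit OP_h+1)
```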
Establishment of SFIB in MsDD
In MsDD, in addition to the FIB carried by ground devices and LEO satellites, LEO satellites also maintain an SFIB (Satellite FIB). The distinction between FIB and SFIB in MsDD is as follows:
• FIB: Responsible for satellite-to-ground communication. FIB entries on LEO satellites include different faces defined by frequency bands in the downlink.
• SFIB: Responsible for inter-satellite communication. An SFIB entry includes a satellite prefix, faces, and a manager identifier.
The FIB is inherent to NDN nodes, so this section mainly explains the process of establishing the SFIB, which is crucial preparation for data communication. Note that once the constellation structure is determined, completed SFIB entries are not subject to a lifetime and are never deleted on expiry.
In constructing SFIB entries for satellites on the same orbit in MsDD, the process follows these steps:
Step 1: Satellite S h,i sends a Pub-A message from each of its two relay faces communicating with other satellites on the same orbit. This Pub-A message includes the prefix information, hop count, and manager identifier of S h,i .
Step 2: Face f a of satellite node S h,i+ receives the Pub-A message sent by S h,i and processes it as follows:
• If an SFIB entry with prefix /sat/OP h /SP i exists in node S h,i+ and the hop count of the Pub-A is lower than the existing one, update the SFIB entry.
• If an SFIB entry with prefix /sat/OP h /SP i exists in node S h,i+ but the hop count of the Pub-A is not lower than the existing one, make no adjustment.
• If no SFIB entry with prefix /sat/OP h /SP i exists in node S h,i+ , then S h,i+ creates a new SFIB entry with prefix /sat/OP h /SP i and face f a , and records the manager identifier.
Step 3: End of processing.
Since the sequence of satellites on the same orbit remains unchanged, once SFIB entries for satellites on the same orbit are established, there is no need for nodes to resend Pub-A messages to establish these entries again.
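The Pub-A handling in Step 2 amounts to a shortest-hop update of a routing table. A minimal sketch follows (the dictionary layout and the function name are our own; MsDD does not prescribe this representation):

```python
def process_pub_a(sfib, prefix, face, hops, manager_id):
    """Apply the Pub-A handling rules (Step 2) to a node's SFIB.

    sfib maps a satellite prefix to a dict {face, hops, manager}.
    Returns "updated", "ignored", or "created".
    """
    entry = sfib.get(prefix)
    if entry is not None:
        if hops < entry["hops"]:  # shorter path discovered: update the entry
            sfib[prefix] = {"face": face, "hops": hops, "manager": manager_id}
            return "updated"
        return "ignored"          # existing path is at least as short
    # no entry yet: create one and record the manager identifier
    sfib[prefix] = {"face": face, "hops": hops, "manager": manager_id}
    return "created"

sfib = {}
print(process_pub_a(sfib, "/sat/OP1/SP1", "f_a", 3, False))  # created
print(process_pub_a(sfib, "/sat/OP1/SP1", "f_b", 2, False))  # updated (lower hop count)
print(process_pub_a(sfib, "/sat/OP1/SP1", "f_a", 4, False))  # ignored
```

The Pub-B handling for inter-orbit entries follows the same pattern, keyed on the face instead of the hop count.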
Due to the characteristics of polar-orbit satellite constellations, when a satellite passes over the North or South Pole, its adjacent orbits swap left and right, so the SFIB entries for different orbits must be adjusted dynamically. MsDD constructs SFIB entries for different orbits according to the following steps:
Step 1: Satellite S h,i sends a Pub-B message from each of its two relay faces communicating with satellites on different orbits at a regular time interval τ, where τ is determined by the characteristics of the satellite constellation. This Pub-B message includes the prefix information of S h,i .
Step 2: Face f b of satellite node S h+,i+ receives the Pub-B message sent by S h,i and processes it as follows:
• If an SFIB entry with prefix /sat/OP h exists in node S h+,i+ but its face is not f b , update the SFIB entry.
• If an SFIB entry with prefix /sat/OP h exists in node S h+,i+ and its face is f b , make no adjustment.
• If no SFIB entry with prefix /sat/OP h exists in node S h+,i+ , then S h+,i+ creates a new SFIB entry with prefix /sat/OP h /SP i and face f b .
Step 3: End of processing.
By completing these steps, each LEO satellite node in MsDD creates an SFIB. Each satellite node can then use its SFIB to communicate with other satellites in the LEO layer. Fig 4 illustrates the situation after satellite S 1,1 completes the construction of its SFIB.
Routing strategy of MsDD
Due to the dynamic characteristics of satellite constellations, traditional NDN routing strategies are not applicable there. Our idea is to first forward the interest packet to the orbit where the destination node is located, and then forward it within that orbit, in order to reduce the impact of the instability of inter-satellite links between different orbits on the transmission path. To achieve this, we use a forwarding hint. The forwarding hint is a locator carried in the interest packet, indicating where to forward it. With forwarding hints, the core network of NDN need only announce its location in the form of a prefix, which is more scalable than announcing data name prefixes [31]. Since we assigned a unique prefix name to each satellite in the LEO layer when constructing it, we use the prefix of the destination node as the forwarding hint to route interest packets. Since ordinary nodes do not carry a DL, the prefix name of the target node must be obtained through the DL carried by a manager. Therefore, ordinary nodes first forward interest packets to the nearest manager on the same orbit through FIB entries, and the manager then adds the forwarding hint to the interest packet. Once the forwarding hint has been added, the interest packet is routed by the manager according to Algorithm 1.
Algorithm 1: Routing algorithm for interest packet D int .
Caching strategy of MsDD
Because the data depot of MsDD is composed of multiple LEO satellites, an in-network caching strategy is needed to enhance cache hit rates, reduce cache redundancy, and lower data retrieval delay.
The in-network caching strategy specifies which satellite nodes along the reverse path should cache a data packet as it returns to the consumer. Our goal is to incorporate content popularity and node priority into mathematical formulas that give the probability of caching the data packet at a given node, so as to reduce the retrieval time of interests in the data depot and improve cache hit rates. The manager rule and routing strategy of MsDD facilitate this objective because interests are routed through managers during each forwarding, leading to a concentration of interests at these nodes. Consequently, we conclude that manager nodes have the highest priority, and that nodes closer to manager nodes have higher priority. Therefore, we devise the following design for MsDD:
• A TLV element named ISLhop is introduced into the interest packet, responsible for recording the number of hops the interest packet has been forwarded between different orbits. When the interest packet is forwarded within the same orbit, ISLhop = 0.
• Each node records and updates the ISLhop of interest packets received from the faces that communicate with satellites on different orbits.
Based on these conclusions and designs, we define the probability P (h,i),D of caching a data packet D at a LEO satellite node S h,i . Here, P dif (h,i),D is the probability of S h,i caching data packet D when the packet is forwarded across different orbits, and P same (h,i),D is the probability of S h,i caching D when the packet is forwarded within the same orbit. P D is the probability of packet D being cached at any node; our scheme uses the method proposed in reference [29] to calculate this value, which jointly accounts for the content popularity and freshness of D. ε is a reduction weight with ε ∈ (0, 1); the larger the value of ε, the more strongly P (h,i),D decreases. ISLhop h,i is the ISLhop recorded at S h,i , hop max is the maximum number of hops from a regular node to a manager within the same orbit, and hop h,i is the number of hops between S h,i and the nearest manager within the same orbit. We assume that after querying the SFIB, S h,i identifies its nearest manager as S h,j ; hop max , hop h,i , and ISLhop h,i are computed from these quantities, where ISLhop (h,i),f a and ISLhop (h,i),f b are the current ISLhop values recorded by node S h,i for faces f a and f b , the two relay faces communicating with satellites on different orbits. When a node must cache a data packet but its Content Store is full, cache replacement is necessary. We adopt Least Recently Used (LRU) as the cache replacement strategy; although very simple, it ensures good performance [32].
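LRU replacement itself is a standard algorithm; a compact sketch of an LRU-evicting Content Store (the class name and interface are our own illustration, not MsDD code) is:

```python
from collections import OrderedDict

class LRUContentStore:
    """Minimal Content Store sketch: on overflow, evict the least
    recently used data packet, the replacement policy adopted by MsDD."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # name -> data, ordered by recency

    def get(self, name):
        if name not in self.store:
            return None               # cache miss
        self.store.move_to_end(name)  # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cs = LRUContentStore(2)
cs.put("/video/a", b"A")
cs.put("/video/b", b"B")
cs.get("/video/a")            # touch /video/a
cs.put("/video/c", b"C")      # evicts /video/b, the least recently used
print(cs.get("/video/b"))     # None: evicted
print(cs.get("/video/a"))     # b'A': still cached
```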
Communication process of MsDD
This section elucidates how interest packets request data packets within the MsDD framework. Initially, the LEO satellite that covers the producer needs to collect data packets from the producer. This process is driven by a request-based "PULL" method, which does not contravene the fundamental paradigm of NDN. As depicted in Fig 5, the satellite S h,i undertakes the following actions to collect data packet D from a ground producer P within its coverage:
Step 1: When the mobile producer P in the ground layer hands over to a satellite S h,i , P sends a Collect message to update the FIB of S h,i . The Collect message contains the name prefixes of the data objects that P possesses.
Step 2: S h,i then sends an update message to the GEO controller. The GEO controller performs a global update of the DL, adding the prefix of S h,i and the name prefixes of the data objects possessed by P.
Step 3: When S h,i receives an interest packet requesting data packet D, S h,i sends an SReq request for D to P according to its FIB. After receiving D, S h,i caches it.
Step 4: End of processing.
When a consumer on the ground forwards an interest packet D int requesting data packet D to the LEO satellite S h+,i+ , the data packet retrieval process in MsDD begins:
Step 1: S h+,i+ forwards D int to the nearest manager S h+,j according to the SFIB entries.
Step 2: S h+,j checks its DL entries and takes the following actions:
• If there is an entry in the DL with the name of D, and the cached location is S h,i , proceed to Step 3.
• If there is no entry in the DL with the name of D, D int waits at S h+,j and Step 2 is repeated.
Step 3: D int uses the prefix of S h,i as the forwarding hint, and S h+,j then forwards D int according to Algorithm 1.
Step 4: D int attempts to hit D in the cache. If a cache miss occurs at S h,i , S h,i starts collecting D.
Step 5: End of processing.
The data packet retrieval process is illustrated in Fig 6. The left side of Fig 6 shows the process of sending the interest packet, while the right side shows the process of returning the data packet.
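The retrieval steps above can be condensed into a small sketch (the data structures and the `retrieve` helper are our own illustration of Steps 1–4, not MsDD code):

```python
def retrieve(dl, caches, fib, name):
    """Illustrative walk-through of the MsDD retrieval steps.

    dl: a manager's Data List, name -> satellite prefix of the first cache.
    caches: satellite prefix -> set of data names cached there.
    fib: data name -> producer reachable via the covering satellite.
    Returns (serving satellite, "cache-hit" | "collected"), or None when
    there is no DL entry (the interest would wait at the manager, Step 2).
    """
    hint = dl.get(name)              # Step 2: DL lookup at the manager
    if hint is None:
        return None                  # no DL entry: interest waits
    # Step 3: hint is attached as forwarding hint; interest routed to S_{h,i}
    if name in caches.get(hint, set()):
        return hint, "cache-hit"     # Step 4: cache hit at S_{h,i}
    if name in fib:                  # cache miss: satellite collects D (SReq)
        caches.setdefault(hint, set()).add(name)
        return hint, "collected"
    return None

dl = {"/video/a": "/sat/OP1/SP3"}
caches = {"/sat/OP1/SP3": {"/video/a"}}
fib = {"/video/b": "producer-P"}
print(retrieve(dl, caches, fib, "/video/a"))  # ('/sat/OP1/SP3', 'cache-hit')
```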
To prevent packet loss due to link switching at the cross-seam of the Walker constellation on the reverse path of data packets, MsDD proposes the following method: when a satellite node needs to send an interest packet through an inter-satellite link of the cross-seam, the satellite node also sends an Auxiliary Interest packet (A-Interest) to its neighboring satellite nodes on the same orbit. The difference between the A-Interest and an ordinary interest packet is that the hop limit (RemHops) of the A-Interest is 2. The purpose of sending the A-Interest packet is to reconstruct a reverse path for the data packet. The process of sending the A-Interest packet D A-int is described in Algorithm 2 (fragment):
 5: if there is a table entry named D in the PIT of S h,i then
 6:   finish forwarding
 7: end if
 8: if RemHops == 0 then
 9:   finish forwarding
10: else if RemHops == 2 then
11:   forward D A-int to the two adjacent nodes on the same orbit according to the SFIB entries of S h,i+1 and S h,i−1
12:   RemHops−−
13: else
14:   while S h,i+1 or S h,i−1 reestablish a new link with …
16:     RemHops−−
17:   end while
18: end if
19: end for
Simulation and analysis
Simulation environment
To evaluate the performance of MsDD, we conducted simulation experiments using ndnSIM, a simulation package specifically designed for NDN based on ns-3 [33]. We built the required experimental environment according to the network model of MsDD, which consists of an Iridium constellation, 3 GEO satellites, and ground equipment. The topology of the ground-layer equipment is shown in Fig 8 and is composed of 4 equally sized interconnected network areas, each covered by a different LEO satellite. Moreover, each network area contains a stationary consumer and a mobile producer, both of which can communicate directly with the LEO satellites. To simulate a realistic environment, we selected four suitable global locations for the 4 network areas and used the Satellite Tool Kit (STK) to simulate the dynamics of the Iridium constellation. Specific parameters are listed in Table 1.
Schemes and evaluation metrics
To evaluate the performance of MsDD, we first conduct experiments varying the number of managers per orbit in the LEO layer to evaluate the impact of the number of managers on the performance of MsDD. We then verify the advantages of the proposed in-network caching scheme in MsDD. Finally, we compare MsDD with other mobility support schemes on three main indicators that reflect consumer satisfaction [34]:
(1) Consumer delay: the time difference between the initial attempt to send an Interest and the successful reception of the data at the consumer. This metric is crucial as it measures the responsiveness of the system from the consumer's perspective.
(2) Delivery ratio: the proportion of successfully received data packets to the total number of Interest packets sent by the consumer. This ratio indicates the efficiency of the system in delivering data to the consumer.
(3) Signaling overhead: the number of messages required to ensure that the consumer's Interest packets can reach the mobile producer during a handover event. This metric reflects the overhead imposed on the network to maintain continuous communication as the producer moves.
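Given a per-interest trace, the three metrics can be computed as follows (the field names are illustrative, not ndnSIM output):

```python
def consumer_metrics(events):
    """Compute the three evaluation metrics from a per-interest event log.

    events: list of dicts with 'sent' (time of the first send attempt),
    'received' (data arrival time, or None if lost), and 'signals'
    (handover messages counted against that interest).
    """
    delays = [e["received"] - e["sent"] for e in events if e["received"] is not None]
    delivered = sum(1 for e in events if e["received"] is not None)
    return {
        "avg_consumer_delay": sum(delays) / len(delays) if delays else None,
        "delivery_ratio": delivered / len(events),
        "signaling_overhead": sum(e["signals"] for e in events),
    }

log = [
    {"sent": 0.0, "received": 0.08, "signals": 2},
    {"sent": 1.0, "received": 1.12, "signals": 0},
    {"sent": 2.0, "received": None, "signals": 3},  # lost interest
]
m = consumer_metrics(log)
print(round(m["avg_consumer_delay"], 2), m["delivery_ratio"], m["signaling_overhead"])
```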
Simulation result
The first scenario evaluates the impact of the number of managers on the performance of MsDD. The results depicted in Fig 9 indicate a clear downward trend in consumer delay as the number of managers increases. This is because the more managers there are on an orbit, the fewer additional hops interest and data packets need to reach their nearest manager. When the number of managers on each orbit reaches 11, the additional hops become 0, hence the consumer delay is at its minimum.
The results from Fig 10 indicate that as the number of managers increases, the signaling overhead for each update of the DL during the data collection phase also grows. This is because during the data collection phase, update information must be exchanged with the GEO satellites, and as the number of managers increases, the GEO satellites must send update information to more managers. Comparing with Fig 9, we can see that with 1 manager per orbit the signaling overhead is minimal but the consumer delay is large; conversely, with 11 managers per orbit the consumer delay is reduced but the signaling overhead is significant. Therefore, placing 1 or 11 managers per orbit does not optimize the performance of MsDD, and we exclude these two configurations from the subsequent experiments.
We name the schemes with 2, 3, and 4 managers per orbit MsDD-2, MsDD-3, and MsDD-4, and compare their delivery ratios as the request rate increases. When the request rate increases, the delivery ratio of all three schemes tends to decline due to the increase in data volume within the network, with MsDD-2 and MsDD-3 showing a more pronounced decrease. This is because in MsDD the manager nodes carry a large amount of network traffic, and the fewer managers there are, the greater the load on each manager and on the inter-satellite links between different orbits. Therefore, the performance of MsDD with 2 or 3 managers per orbit is also not optimal, so in the subsequent experiments we exclude MsDD-2 and MsDD-3 and select MsDD-4.
The second scenario assesses how the in-network caching strategy of MsDD-4 performs compared to LCE [24], LCD [25], and CCS [29] in terms of cache hit rate and consumer delay as the request rate changes, as depicted in Fig 12. From Fig 12, it can be seen that as the request rate increases, the cache hit rate and consumer delay of MsDD-4 are significantly better than those of the other schemes. This is because the in-network caching strategy of MsDD-4 prioritizes caching more popular content in the network, reducing cache redundancy and also reducing the probability of requested content being replaced. At the same time, MsDD-4 increases the probability of data packets being cached near a manager, allowing interest packets to hit the cache within fewer hops.
The final scenario evaluates the performance of MsDD-4 as the producer's mobility speed changes, comparing it with several other NDN producer mobility support schemes. We selected the inherent forwarding strategy of NDN (Pure NDN), two popular schemes from the references (KITE [13] and MAP-ME [19]), and a caching-based scheme from the references (T-Move [22]) for comparative analysis. We first evaluate the changes in consumer delay for these mobility support schemes as the producer mobility speed changes. The results in Fig 13 indicate that as the producer mobility speed increases, all schemes except Pure NDN perform well on consumer delay and remain stable within a reasonable range; MsDD-4 performs the most stably and improves consumer delay by about 5% relative to the other schemes. The main reasons are threefold:
• MsDD converges consumers and data packets on low Earth orbit satellites, so the consumer delay of MsDD is affected only by the satellite network, not the terrestrial network.
• Apart from satellite handover, MsDD does not have other handoff latency, whether it is layer 2 latency or mobility management latency, which greatly reduces the consumer delay of MsDD.
• The superior cache hit rate of MsDD's in-network caching strategy within the data depot allows data packets to be obtained within fewer hops, reducing consumer delay.
The results from Fig 14 demonstrate that the delivery ratio and average interest packet loss ratio of MsDD-4 are significantly superior to those of the other schemes, and this advantage becomes more pronounced as the producer's mobility speed increases. This is because while other schemes can mitigate packet loss during handover to a certain extent, an increase in handover events inevitably reduces the delivery ratio. For instance, with KITE, as the producer's mobility speed increases, the frequency of switches between APs also rises, leading to stale-path issues and packet loss. In MsDD, however, packet loss typically occurs only during satellite handover and inter-satellite link switching; the former is a low-probability and acceptable event, and for the latter our proposed routing strategy and cross-seam strategy provide effective solutions.
Fig 15 illustrates the changes in signaling overhead as the producer mobility speed increases, where we calculate the signaling overhead of MsDD during satellite handover.It can be observed that the signaling overhead of MsDD remains stable with changes in the producer's mobility speed and is superior to that of KITE and T-Move.This is because when the producer moves, KITE needs to frequently send TI/TD packets to the producer to update the tracking path, while T-Move requires sending messages to update the FIB before and after the handover.In contrast, the signaling overhead of MsDD is only related to the number of managers, as the GEO controller only sends update information to the managers.
Conclusion and future work
To address the seamless communication issue in NDN under producer mobility, we have proposed MsDD. MsDD leverages the characteristics of a LEO satellite constellation, proactively caching the data packets generated by producers onto LEO satellites and guiding interest packets to retrieve data through the satellites. We designed the basic model of the data depot, a routing strategy based on forwarding hints, and a probability-based in-network caching strategy to support our approach, and validated the rationality and advantages of the proposed scheme through simulation. The simulation results show that, compared with several other mobility support schemes, MsDD maintains stable consumer delay, delivery ratio, and signaling overhead under frequent producer mobility, and its consumer delay and delivery ratio are superior to those of the other schemes in the scenarios we evaluated. The results demonstrate that MsDD can effectively shield network performance from the impact of frequent producer mobility.
Our future work includes: (1) further improving the feasibility and practical relevance of MsDD in more real-world scenarios, such as desert and marine environments; (2) researching the impact of satellite capacity and energy consumption on MsDD as network traffic increases; and (3) resolving consumer mobility issues caused by the dynamic nature of LEO constellations in MsDD.
"Computer Science",
"Engineering"
] |
Effect of Co and Gd Additions on Microstructures and Properties of FeSiBAlNi High Entropy Alloys
FeSiBAlNi (W5), FeSiBAlNiCo (W6-Co), and FeSiBAlNiGd (W6-Gd) high entropy alloys (HEAs) were prepared by copper-mold casting. The effects of Co and Gd additions, combined with subsequent annealing, on microstructures and magnetism were investigated. The as-cast W5 consists of a BCC solid solution and a FeSi-rich phase. The Gd addition induces the formation of body-centered cubic (BCC) and face-centered cubic (FCC) solid solutions in the W6-Gd HEA, whereas the as-cast W6-Co is composed of the FeSi-rich phase. During annealing, no new phases arise in the W6-Co HEA, indicating good phase stability. The as-cast W5 has the highest hardness (1210 HV), mainly attributed to the strengthening effect of the FeSi-rich phase evenly distributed in the solid-solution matrix. The tested FeSiBAlNi-based HEAs possess soft magnetism. After the annealing treatment, the saturation magnetization and remanence ratio of W6-Gd are distinctly enhanced, from 10.93 emu/g to 62.78 emu/g and from 1.44% to 15.50%, respectively. The good magnetism of the as-annealed W6-Gd can be ascribed to the formation of Gd oxides.
Introduction
Recently, the new concept of high entropy alloys (HEAs) has aroused wide attention and interest [1][2][3]. Generally, HEAs with equiatomic or near-equiatomic alloying elements mainly consist of face-centered cubic (FCC), body-centered cubic (BCC), or hexagonal close-packed (HCP) solid solutions, together with some intermetallic or amorphous phases. Owing to this special phase structure, HEAs usually possess excellent mechanical properties [4,5] and corrosion resistance [6], and especially notable magnetic properties [7][8][9]. Several studies have reported that additions of certain elements to HEAs can induce transformations of the crystalline structure and further affect the related properties of HEAs [9][10][11]. The addition of Al, Ga, and Sn to the CoFeMnNi HEA induced a phase transition from FCC to ordered BCC phases, and further led to a significant improvement of the saturation magnetization (M s ) [9]. The microstructure of the (FeCoNiCrMn) 100−x Al x HEA system transformed from an initial single FCC structure to a final single BCC structure as the Al concentration increased from 0 to 20 at.% [10]; both the tensile fracture strength and yield strength were enhanced with increasing Al concentration. The HEA phases are thermodynamically metastable and therefore transform to the stable microstructure after subsequent annealing, which affects the properties of HEAs to some degree [3,8,12-15]. Annealing under a given condition can lead to a phase transition from FCC to BCC in FeCoNi(CuAl) 0.8 HEAs, resulting in a substantial increase in M s from 78.9 Am²/kg to 93.1 Am²/kg [15].
Recently, our group prepared a series of as-milled FeSiBAlNi-based HEA powders by mechanical alloying (MA) [11,14,16]; these displayed interesting microstructural evolution and magnetic properties. In the present study, equiatomic FeSiBAlNiM (M = Co, Gd) HEAs were fabricated by copper-mold spray casting. The effects of Co and Gd additions and subsequent annealing on the microstructures, microhardness, and magnetism of the FeSiBAlNi HEAs were systematically investigated.
Experimental
Ingots of FeSiBAlNi, FeSiBAlNiCo, and FeSiBAlNiGd HEAs (denoted W5, W6-Co, and W6-Gd, respectively) were prepared by arc melting. Melting was repeated at least five times to ensure compositional homogeneity in a Ti-gettered high-purity argon atmosphere. The ingots were then remelted and cast into 8 mm diameter rods by copper-mold spray casting in an argon atmosphere. The rods were annealed at given temperatures for two hours and furnace-cooled in the argon atmosphere. The annealing temperatures were set in two segments, denoted T I and T II , in a low and a high temperature region, respectively: 600 and 1000 °C for the W5 HEA, 600 and 1000 °C for the W6-Co HEA, and 650 and 1050 °C for the W6-Gd HEA. The relatively high annealing temperatures selected for W6-Gd are attributed to its higher melting point (T m ) compared with the other two samples, as shown in the differential scanning calorimetry (DSC) curves.
Microstructural characterization of the as-cast and as-annealed HEAs was conducted by X-ray diffraction (XRD, Rigaku D8 Advance, Bruker, Germany) using Cu Kα radiation and by field-emission scanning electron microscopy (FESEM, QUANTA FEG 250 operated at 15 kV, Japan) coupled with energy-dispersive spectrometry (EDS). The working distance used in the SEM measurements was less than 10 mm. Thermal properties were analyzed by differential scanning calorimetry (DSC, TGA/DSC1, Mettler-Toledo, Greifensee, Switzerland) under a continuous flow (30 mL/min) of high-purity argon at a heating rate of 10 K/min, scanned from room temperature to 1400 °C. The microhardness of the tested HEAs was determined with a Vickers hardness tester (HV-10B) under a load of 200 g and a dwell time of 15 s; the measurement was repeated ten times per sample to obtain average values. The coercive force (H c ), M s , and remanence ratio (M r /M s , where M r is the remanence) were determined by an alternating gradient magnetometer (AGM) at room temperature with a maximum applied field of 14,000 Oe.
Results and Discussion
The XRD patterns of the as-cast W5, W6-Co, and W6-Gd HEAs are shown in Figure 1a. The as-cast W5 HEA consists of a BCC1 (a = 4.475 Å) solid solution and a FeSi-rich phase. The XRD pattern of the as-cast W6-Co HEA mainly displays a FeSi-rich phase in solid solution with the other principal elements; in addition, other phase peaks may overlap with the FeSi-rich phase peaks. Compared with the W5 and W6-Co HEAs, the effect of the Gd addition on the phase composition is markedly different: the as-cast W6-Gd HEA exhibits new BCC2 (a = 4.484 Å) and FCC solid solutions, while the FeSi-rich phase does not appear. The phase products of the as-cast W5, W6-Co, and W6-Gd HEAs are listed in Table 1. Figure 1b shows the DSC curves of the as-cast HEAs. The T m value of the W6-Co HEA is 1129 °C, lower than that of W5 (1152 °C), whereas the Gd addition increases T m to 1185 °C. To further investigate the differences in morphology and composition caused by the Co and Gd additions, FESEM coupled with EDS analysis was carried out, as presented in Figure 2 and Table 2. As shown in Figure 2a, a large number of polygonal light-grey phases are dispersed in the matrix, together with small irregular black phases. The inset of Figure 2a-1 reveals a rhombic grain with edge sizes less than 8 μm. It should be noted that the metalloid B, as a light element, cannot be measured accurately; moreover, the B content of some samples is very small in most regions, so it is omitted in the present study. According to the EDS results, the matrix (A) contains more Al and Ni and a certain amount of Fe and Si; a partial FeSi-rich phase exists in region (A), and the black region (B) is enriched in Fe and Si. This indicates that the FeSi-rich phase mainly exists in region (B), presenting an even distribution in the matrix.
The rhombic grain (C) is mainly composed of Fe and B (region (C) contains more than 10 at.% B). The Co addition induces refinement of the precipitated grains (Figure 2b). The inset in Figure 2b-1 shows that a large number of small dark regions (D) are uniformly distributed and enriched in Fe and Si. The grey region (E) is rich in Ni, and the bright-grey region (F) is poor in Al; moreover, the ratio of Fe to Si in regions (E) and (F) is close to 1:1. Although several phases appear in the SEM images, no peaks other than those of the FeSi-rich phase can be seen in the XRD results of the as-cast W6-Co HEA (Figure 1). This suggests that the precipitates probably have a lattice constant and crystal structure similar to those of the matrix [17]; moreover, there could be complex compositional fluctuation in the as-cast W6-Co HEA [18]. Figure 2c and its inset (Figure 2c-1) show that the as-cast W6-Gd HEA consists of coarse rod-like dendrites forming the FCC matrix phase, which are rich in each principal element in a near-equiatomic ratio except for Al (region (G)). The Al segregates in the interdendritic grains ((H): dark- and deep-grey regions) corresponding to the BCC2 phase with lower contents. The precipitation pathways in HEAs can be very complex and remain a particularly challenging topic to be studied [19]. Table 1. Phase products of the as-cast and as-annealed W5, W6-Co, and W6-Gd high entropy alloys (HEAs) at T I and T II , identified from the XRD patterns.
Figure 3 shows the XRD patterns of the as-annealed W5, W6-Co, and W6-Gd HEAs at different temperatures; their annealing products are also listed in Table 1. After annealing at TI, the products of the W5 HEA consist of a new BCC3 (a = 4.033 Å) solid solution together with the FeSi-rich phase; the content of the FeSi-rich phase, however, decreases markedly compared with the as-cast state, and the BCC1 solid solution disappears (Figure 3a). After annealing at TII the two phases persist, but the diffraction peaks become stronger, indicating further growth and coarsening of the grains. In contrast to the W5 HEA, no new phase transformation occurs in the W6-Co HEA after annealing at TI and TII, indicating that the W6-Co HEA possesses good thermal stability (Figure 3b).
This suggests that the Co addition transforms the metastable W5 HEA into a thermodynamically more stable state. The inset, however, shows that the main diffraction peak of the W6-Co HEA shifts to lower angle with increasing annealing temperature, suggesting a severe lattice distortion caused by lattice expansion. Figure 3c reveals the formation of new AlNi, AlGd, and Gd-oxide phases, besides the primary BCC2 and FCC solid solutions, in the W6-Gd HEA annealed at TI. The annealing products are unchanged at TII, except for an increased amount of Gd-oxides. Moreover, compared with the W5 and W6-Co HEAs, the highest Tm value of the W6-Gd HEA may be attributed to the high melting points of the precipitated intermetallic compounds AlNi (1638 °C) and AlGd (1200 °C).
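The peak-shift reasoning above can be checked numerically. The following is a minimal sketch, assuming Cu Kα radiation (λ = 1.5406 Å; the wavelength is not stated in the text) and the (110) reflection of a BCC cell: a larger lattice constant yields a lower diffraction angle, consistent with the reported shift upon lattice expansion.

```python
import math

# Assumed Cu K-alpha wavelength in angstroms (not specified in the paper).
WAVELENGTH = 1.5406

def two_theta_110(a: float) -> float:
    """2-theta angle (degrees) of the (110) reflection of a BCC cell
    with lattice constant a, from Bragg's law: lambda = 2 d sin(theta)."""
    d = a / math.sqrt(2.0)  # interplanar spacing of (110)
    theta = math.asin(WAVELENGTH / (2.0 * d))
    return 2.0 * math.degrees(theta)

print(two_theta_110(4.475))  # BCC1 of as-cast W5, ~28.2 degrees
print(two_theta_110(4.484))  # BCC2 of as-cast W6-Gd: larger a -> lower angle
```

The same relation explains why an expanded lattice shifts the main diffraction peak to lower angle.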
Entropy 2018, 20, x FOR PEER REVIEW 5 of 9

Figure 4 shows the Vickers hardness (HV) of the as-cast and as-annealed HEAs. The W5 HEA displays the highest HV among the tested HEAs, with the as-cast W5 HEA reaching 1210 HV. The additions of Co and Gd lower the HV values, and the as-annealed W6-Gd HEA (TII) shows the largest decline, to 738 HV. This suggests that the annealing treatment has a negative effect on the HV of the as-cast samples, in agreement with Salishchev's results [20]. With increasing annealing temperature, the internal stress of the as-cast HEAs gradually decreases and the microstructure coarsens; the solid-solution strengthening effect becomes smaller, and strain softening is revealed in the HEAs [21].
Compared with the W6-Co and W6-Gd HEAs, the FeSi-rich phase, acting as a secondary strengthening phase in the W5 HEAs (especially the as-cast one), is evenly distributed in the BCC solid-solution matrix, which contributes to the high HV values.
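A quick arithmetic check of the hardness values quoted above (both values taken from the text):

```python
# Reported Vickers hardness values.
hv_w5_cast = 1210   # as-cast W5, the highest among the tested HEAs
hv_w6gd_tii = 738   # as-annealed W6-Gd at T_II, the largest decline

drop = hv_w5_cast - hv_w6gd_tii
rel_drop = drop / hv_w5_cast
print(f"absolute drop: {drop} HV, relative drop: {rel_drop:.1%}")  # 472 HV, 39.0%
```

So the combined effect of alloying and annealing amounts to roughly a 39% reduction relative to the hardest sample.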
The mass magnetization (M) as a function of the magnetic field intensity (H) was measured for the as-cast and as-annealed samples. The Hc, Ms, and Mr/Ms of these HEAs are shown in Figure 5. All Hc values of the tested HEAs lie in the range 10-180 Oe (Figure 5a), indicating the soft magnetic nature of these HEAs. The annealing treatment induces a weak decrease of the Hc values for the W5 and W6-Co HEAs, but the Hc of the W6-Gd HEA becomes larger after annealing. Hc is mainly affected by impurities, deformation, crystallite size, stress, and the subsequent heat-treatment process [22]. The Hc values of the as-annealed W5 and W6-Co HEAs are slightly lower than those of the as-cast samples, suggesting that the former possess a somewhat larger average crystallite size according to the well-known coercivity-crystallite size relationship [8]. Moreover, the origin of the lower Hc can be attributed to a low number density of domain-wall pinning sites [23].
The annealed products of the W6-Gd HEA contain complex phase compositions and display inhomogeneous characteristics, which clearly promote the pinning of domain-wall movement; therefore its Hc values are enhanced after the annealing treatment.
The variations of Ms are exhibited in Figure 5b. In the as-cast state, there is no distinct difference in Ms among the tested samples, with the W5 HEA showing a slightly higher Ms of 12.91 emu/g. After annealing at TI, the Ms of the W5 HEA remains nearly unchanged, whereas it declines by 27.7% at TII. No obvious change in magnetism is revealed for the as-cast and as-annealed W6-Co HEAs, whose Ms values stabilize at about 11 emu/g; this stability results from the stable phase characteristic of the W6-Co HEA during annealing. Unlike the W5 and W6-Co HEAs, the magnetism of the W6-Gd HEA is enhanced by the annealing treatment: its Ms increases from 10.93 emu/g to 31.91 emu/g at TI, and further to 62.78 emu/g at TII, indicating improved soft magnetic properties.
From Figure 5c, the Mr/Ms values of the as-cast W5 and W6-Co HEAs are similar to those of their as-annealed states, reflecting their similar phase compositions. The as-annealed products of the W6-Gd HEA are significantly different from the as-cast one, and the Mr/Ms values are enhanced from 1.44% (as-cast) to 15.5% (at TI). Moreover, the as-annealed W6-Gd HEAs show the highest Mr/Ms values among the tested samples, indicating better soft magnetism.
Residual stress exists in the as-cast HEAs and can deteriorate the soft magnetic properties. Appropriate heat treatment induces stress relief, which is beneficial for the soft magnetic properties [24]. Therefore, except for the W5 HEA annealed at TII, the soft magnetic properties of the tested HEAs are improved by structural relaxation through stress-relief annealing [25]. Notably, the Ms of the W6-Gd HEA annealed at TII is about five times higher than that of the as-cast alloy. Moreover, the magnetic properties depend strongly on the microstructure of the material: morphological factors such as magnetic anisotropy, magnetostriction, coercivity, and the volume fraction of the precipitates all contribute. The decrease in Ms for W5 annealed at TII can be related to the enhanced density of grain boundaries and the increased volume fraction of BCC3 solid solution around the FeSi-rich phases, which reduce the magnetic moment. Considering the effect of phase composition on the magnetic properties, the increase in Ms for the as-annealed W6-Gd HEA can be ascribed to the formation of Gd-oxides; Ms is further enhanced as the Gd-oxide content increases with annealing temperature.
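The quoted magnetization figures can be checked against each other (all values taken from the text):

```python
# Reported saturation magnetization values in emu/g.
ms_w6gd_cast, ms_w6gd_t1, ms_w6gd_t2 = 10.93, 31.91, 62.78
ms_w5_cast = 12.91

# "About five times higher" for W6-Gd annealed at T_II vs as-cast:
ratio = ms_w6gd_t2 / ms_w6gd_cast
print(f"W6-Gd Ms enhancement: x{ratio:.2f}")  # x5.74

# The 27.7% reduction reported for W5 at T_II implies:
ms_w5_tii = ms_w5_cast * (1 - 0.277)
print(f"implied W5 Ms at T_II: {ms_w5_tii:.2f} emu/g")
```

The enhancement factor of about 5.7 is consistent with the "about five times" statement in the text.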
Conclusions
The phase compositions, microstructures, microhardness, and magnetic properties of the as-cast and as-annealed W5, W6-Co, and W6-Gd HEAs have been investigated. The as-cast and as-annealed W6-Co HEAs maintain the same phase composition, consisting of a single FeSi-rich phase, indicating a stable phase characteristic. The addition of Gd markedly enhances Tm (1185 °C) compared with W5, owing to the formation of AlNi and AlGd with high melting points. The as-cast W5 possesses the highest hardness of 1210 HV, attributed to the uniform distribution of the FeSi-rich phase in the matrix. All the tested HEAs display soft magnetic properties. Moreover, the Ms and Mr/Ms values of W6-Gd were enhanced from 10.93 emu/g to 62.78 emu/g and from 1.44% to 15.50%, respectively, via the annealing process, suggesting that Gd-oxides are beneficial to the enhancement of the magnetic properties of W6-Gd.
Conflicts of Interest:
The authors declare no conflict of interest.
Resource Theory of Imaginarity: New Distributed Scenarios
The resource theory of imaginarity studies the operational value of imaginary parts in quantum states, operations, and measurements. Here we introduce and study the distillation and conversion of imaginarity in distributed scenarios. These arise naturally in bipartite systems where both parties work together to generate the maximum possible imaginarity on one of the subsystems. We give exact solutions to this problem for general qubit states and for pure states of arbitrary dimension. We also present a scenario that demonstrates the operational advantage of imaginarity: the discrimination of quantum channels without the aid of an ancillary system. We then link this scenario to the LOCC discrimination of bipartite states. Finally, we experimentally demonstrate the relevant assisted distillation protocol and show the usefulness of imaginarity in the two aforementioned tasks.
The development of quantum information science over the last two decades has led to a reassessment of quantum properties such as entanglement [30,31] and coherence [32,33] as resources, prompting the development of quantitative theories that capture these phenomena in a mathematically rigorous fashion [34,35]. Nevertheless, imaginarity had not been studied in this framework until the last few years [16,18,20,21]. In this setting, imaginarity is regarded as a valuable resource that cannot be generated or increased under a restricted class of operations known as real operations (RO). Quantum states whose density matrices (in a fixed basis) contain imaginary parts are viewed as resource states, and thus cannot be created freely by RO. In this Letter, we study the resource theory of imaginarity in distributed scenarios. (At least) two parties, Alice (A) and Bob (B), are involved, who share a bipartite state ρAB. In this setting, imaginarity is considered a resource only in Bob's system, while Alice can perform arbitrary quantum operations on her system. The duo is further allowed to communicate classically. Overall, we refer to the allowed set of operations in this protocol as Local Quantum-Real operations and Classical Communication (LQRCC), borrowing the notion from the theories of entanglement [30] and quantum coherence [32]. This framework leads to a variety of problems, which we address and solve in this Letter. In particular, we consider assisted imaginarity distillation, where Alice assists Bob in extracting local imaginarity. If only one-way classical communication is used, we provide a solution to this problem for arbitrary two-qubit states. We also study assisted state conversion, where the goal is to obtain a specific target state on Bob's side. We solve this problem for any target state when Alice and Bob initially share a pure state. Furthermore, we study the role of imaginarity in ancilla-free channel discrimination, showing two real channels that
are perfectly distinguishable in the ancilla-free scenario once we allow imaginarity, but become completely indistinguishable if we have access only to real states and real measurements. Additionally, we prove how this task is related to LOCC (Local Operations and Classical Communication) discrimination of quantum states, specifically to the LOCC discrimination of their normalized Choi matrices. Finally, we experimentally implement the above protocols in a quantum photonic setup, performing a proof-of-principle experiment testing the usefulness of imaginarity in such quantum tasks. Our work opens new avenues towards both the theoretical and experimental exploration of imaginarity as a quantum resource.
RESOURCE THEORY OF IMAGINARITY
The starting point of our work is the resource theory of imaginarity, introduced very recently in Refs. [16,18,20]. The free states in imaginarity theory are identified as real states, i.e., density matrices that are real in a given basis {|j⟩}. The set of all real states is denoted by R and can be described as R = {ρ : ⟨j|ρ|k⟩ ∈ ℝ for all j, k}. A quantum operation specified by Kraus operators {K_j} satisfying Σ_j K_j†K_j = 1 is considered free, i.e., real, if it contains only real elements in the chosen basis: ⟨m|K_j|n⟩ ∈ ℝ for all j, m, n [16,18]. It is known that the set RO coincides with the set of completely non-imaginarity-creating operations [16]; moreover, RO coincides with the set of operations which admit a real dilation [16]. The golden unit, i.e., the maximally resourceful state, is the same in any Hilbert space, regardless of its dimension. In particular, the maximally imaginary states are the two eigenstates of the Pauli matrix σ_y,

|±i⟩ = (|0⟩ ± i|1⟩)/√2. (1)

One maximally imaginary qubit is referred to as an imbit in the following. Within the framework of quantum resource distillation [35-38], general quantum states can be used for single-shot or asymptotic distillation of imbits via ROs. In the single-shot regime, the answer was already given in Refs. [18,20]. In particular, the fidelity of imaginarity F_I, which quantifies the maximum fidelity achievable between a state ρ and the imbit under real operations,

F_I(ρ) = max_{Λ∈RO} ⟨+i|Λ(ρ)|+i⟩,

was used as the figure of merit for single-shot distillation.
The exact value of the fidelity of imaginarity for general ρ was shown to be F_I(ρ) = [1 + I_R(ρ)]/2, where I_R(ρ) = min_τ {s ≥ 0 : (ρ + sτ)/(1 + s) ∈ R} is the robustness of imaginarity [18]. In the asymptotic setting, for large n, the fidelity of imaginarity converges exponentially to 1 for any non-real state, with an exponent given by −log Tr(ρρ^T). For real states, the fidelity of imaginarity is independent of n and equals 1/2 [39]. Details of the proof can be found in the Appendix. One of the key motivations for studying the resource theory of imaginarity is that one imbit suffices to simulate arbitrary operations or measurements, even if all devices in the lab are restricted to real ones, as we show explicitly in the Appendix. In entanglement theory, one maximally entangled qubit state (ebit) has a clear operational meaning: it can be used to teleport the state of an unknown qubit deterministically to a remote lab. In imaginarity theory, if all devices are restricted to implementing ROs, e.g., only half-wave plates are available in an optical setup [18,20], we can still prepare arbitrary states or implement arbitrary measurements given one imbit. We refer to the Appendix for more details.
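The single-qubit case admits a minimal numerical sketch. We use the fact that for a qubit the robustness of imaginarity reduces to |Tr(ρσ_y)| (the absolute y-component of the Bloch vector), so that F_I(ρ) = [1 + |Tr(ρσ_y)|]/2; the asymptotic exponent −log Tr(ρρ^T) is also easy to evaluate.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def rho_from_bloch(r):
    """Qubit density matrix from a Bloch vector r = (rx, ry, rz)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

def fidelity_of_imaginarity(rho):
    # Qubit case: F_I = (1 + |Tr(rho sigma_y)|)/2.
    return 0.5 * (1 + abs(np.trace(rho @ sy).real))

imbit = rho_from_bloch([0, 1, 0])           # |+i><+i|, maximally imaginary
real_state = rho_from_bloch([0.6, 0, 0.3])  # a real density matrix

print(fidelity_of_imaginarity(imbit))       # 1.0
print(fidelity_of_imaginarity(real_state))  # 0.5

# Exponent governing the asymptotic convergence of the fidelity:
rho = rho_from_bloch([0.3, 0.5, 0.2])
print(-np.log(np.trace(rho @ rho.T).real))  # positive for this non-real state
```

The imbit saturates the fidelity at 1, while any real state is stuck at 1/2, matching the statements above.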
BIPARTITE IMAGINARITY THEORY
The results discussed so far concern imaginarity as a resource in a single physical system. We now extend our considerations to the bipartite setting. As mentioned earlier, the task involves a bipartite state ρAB shared by Alice and Bob, and the goal is to maximize the imaginarity on Bob's side under LQRCC. If both parties are restricted to real operations, the corresponding set is called local real operations and classical communication (LRCC) [40]. It is clear that via LQRCC it is possible to create only states of the form Σ_j p_j ρ_j^A ⊗ σ_j^B, where ρ_j^A is an arbitrary state on Alice's side and σ_j^B is a real state on Bob's side. States of this form will be called Quantum-Real (QR). In the Appendix, we show that the Choi matrices corresponding to LQRCC operations are invariant under partial transposition of Bob's systems (Bob being restricted to real operations). This also holds for more general LQRCC maps which are trace non-increasing (similar to SLOCC in entanglement theory). Using this, we show that, for an arbitrary initial state ρAB and target pure state |ψ⟩_{A'B'}, the optimal achievable fidelity F_p for a given probability of success p can be upper bounded by an SDP
whose constraints are given in the Appendix. In the case of LRCC operations, one has to add the additional constraint X_{ABA'B'}^{T_{AA'}} = X_{ABA'B'}. For the details of the proof, we refer to the Appendix. In the special case when the target state is a local pure state |ψ⟩_{B'} of Bob, one can replace |ψ⟩_{A'B'} by |0⟩_{A'} ⊗ |ψ⟩_{B'} in the objective function.
ASSISTED IMAGINARITY DISTILLATION
Having extended the theory of imaginarity to multipartite systems, we are now ready to present assisted imaginarity distillation. In this task, Alice and Bob aim to extract imaginarity on Bob's side by applying LQRCC operations, in analogy to assisted entanglement distillation [41-43] and assisted distillation of quantum coherence [44]. We assume that Alice and Bob share an arbitrary mixed state ρAB, that the process is performed on a single copy of the state, and that only one-way classical communication from Alice to Bob is used. If Alice performs a general measurement {M_j^A} on her side, the probability p_j and the corresponding post-measurement state ρ_j^B of Bob are given respectively by p_j = Tr[(M_j^A ⊗ 1_B) ρAB] and ρ_j^B = Tr_A[(M_j^A ⊗ 1_B) ρAB]/p_j. As a figure of merit we now introduce the assisted fidelity of imaginarity, quantifying the maximal single-shot fidelity between Bob's final state and the maximally imaginary state |+i⟩: F_a(ρAB) = max Σ_j p_j ⟨+i|Λ_j(ρ_j^B)|+i⟩, where the maximum is taken over all POVMs on Alice's side and all real operations Λ_j on Bob's side. For two-qubit states, we can derive an exact analytic expression. Consider a two-qubit state ρAB, which can be written as ρ = (1/4) Σ_{i,j=0}^{3} E_ij σ_i ⊗ σ_j, with σ_0 = 1, the Pauli matrices σ_1, σ_2, σ_3, and E_ij = Tr[ρ (σ_i ⊗ σ_j)]. Equipped with these tools, we are now ready to give a closed expression for the assisted fidelity of imaginarity for all two-qubit states.
Theorem 2. For any two-qubit state ρAB the assisted fidelity of imaginarity is given by F_a(ρAB) = [1 + max{|E_02|, ‖s‖}]/2, where ‖·‖ denotes the Euclidean norm and s = (E_12, E_22, E_32).
The proof is presented in the Appendix.
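The one-way assisted protocol can also be explored numerically without any closed formula. The brute-force sketch below (our own construction, not the paper's method) scans randomly sampled projective measurements for Alice and applies the qubit fidelity [1 + |Tr(ρ_B σ_y)|]/2 on each of Bob's branches; random sampling yields a lower bound on the assisted fidelity. For a product state, assistance cannot help, so the result is simply [1 + |r_y^B|]/2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def assisted_fidelity(rho_ab, n_samples=2000, seed=0):
    """Lower bound on the one-way assisted fidelity of imaginarity,
    scanning random projective measurements on Alice's qubit."""
    rng = np.random.default_rng(seed)
    R = rho_ab.reshape(2, 2, 2, 2)  # indices: (a, b, a', b')
    best = 0.0
    for _ in range(n_samples):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        proj = 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)
        value = 0.0
        for P in (proj, I2 - proj):
            # Unnormalized branch state of Bob: Tr_A[(P x 1) rho_AB].
            unnorm_b = np.einsum('ij,ikjl->kl', P.T, R)
            p = np.trace(unnorm_b).real
            if p > 1e-12:
                ry = np.trace(unnorm_b @ sy).real  # p * <sigma_y> of the branch
                value += 0.5 * (p + abs(ry))       # p * (1 + |r_y|)/2
        best = max(best, value)
    return best

# Product state: the answer must reduce to (1 + |r_y^B|)/2 = 0.7.
rho_a = 0.5 * (I2 + 0.7 * sz)
rho_b = 0.5 * (I2 + 0.4 * sy)
print(assisted_fidelity(np.kron(rho_a, rho_b)))
```

For correlated states the scan can strictly beat the unassisted local fidelity of Bob's reduced state, which is the point of the assisted protocol.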
We now extend our results to stochastic state transformations, where the goal is to achieve a transformation with the maximum possible probability. To this end, we employ the geometric measure of imaginarity I_g and the concurrence of imaginarity I_c, introduced in Refs. [40,45] respectively, both defined in terms of the eigenvalues {λ_1, λ_2, ...} (in decreasing order) of a matrix associated with the state. With these quantities in place, we extend the scenario to the bipartite regime and show how Alice can assist Bob (holding ρ^B) in obtaining the target state σ^B with optimal probability. We use the parameterization sin²α = [1 − I_c(ρ^B)]/2 and sin²β = I_g(σ^B), with α, β ∈ (0, π/2).
Lemma 3. For any bipartite pure state ψAB, the optimal probability of Bob preparing a local state σ^B with assistance from Alice is a simple function of the angles α and β defined above. The proof of Lemma 3 is presented in the Appendix. In Ref. [40] the authors provided tight continuity bounds for the geometric measure. Using these bounds, together with Lemma 3, we can give an analytical expression for the optimal probability of Bob preparing a local state with an allowed error, with assistance from Alice. Similarly, we can find a closed expression for the optimal achievable fidelity for a given probability of success. The following theorem collects these results.
Theorem 4. For any bipartite pure state ψAB, the optimal probability P_f of Bob preparing a local state σ^B with fidelity f via assistance from Alice can be expressed in closed form in terms of α, β, and γ = cos⁻¹ f; likewise, the optimal achievable fidelity for a given probability of success p admits a closed expression. Details of the proof of the above theorem can be found in the Appendix.
IMAGINARITY IN CHANNEL DISCRIMINATION

We now discuss the role of imaginarity in channel discrimination. Specifically, we focus on the variant of channel discrimination which we call ancilla-free, in that it does not involve an ancillary system (cf. Refs. [46,47]). It can be regarded as a game in which one has access to a "black box" with the promise that it implements a quantum channel Λ_j with probability p_j. The goal of the game is to guess Λ_j by choosing an optimal initial state ρ and a positive operator-valued measure (POVM) {M_j}, which is used to distinguish the outputs Λ_j(ρ). The probability of guessing the channel correctly is p_succ = Σ_j p_j Tr[M_j Λ_j(ρ)]. Recently, it has been shown that any quantum resource provides an operational advantage in channel discrimination [46,47]; namely, a resource state ρ (i.e., a quantum state that is not free) outperforms any free state σ in a specific channel discrimination task.
We now place the above protocol within imaginarity theory by considering the task of discriminating real channels. To see an advantage, we need imaginarity both in the probe state and in the measurement since, as we show in the Appendix, this task is equivalent to the LOCC discrimination of the corresponding normalized Choi states, for which imaginarity is needed in the measurements on both particles. To illustrate this idea, we provide an example of two real channels that cannot be distinguished in the ancilla-free scenario using only real states and measurements, but become perfectly distinguishable once we have access to imaginarity in both states and measurements. To this end, consider the two real qubit channels, prepared with equal probability,

N(ρ) = (1/2)(ρ + σ_x σ_z ρ σ_z σ_x), M(ρ) = (1/2)(σ_x ρ σ_x + σ_z ρ σ_z),

where σ_x and σ_z are Pauli matrices. If we input a real state ρ into either of these two channels, they produce exactly the same output 1/2, so we cannot distinguish them better than by a random guess, even if imaginarity is allowed in our measurements. On the other hand, if imaginarity is forbidden in the measurements, then no matter how we choose the probe state (even a non-real one), we still cannot distinguish them at all, because the only way to discriminate the outputs of the two channels would be to perform a measurement associated with the Pauli matrix σ_y. Indeed, if the probe state has an off-diagonal entry ρ_01 with non-zero imaginary part, wherever the output of N has Im ρ_01, the output of M shows −Im ρ_01 in its place. Only by implementing a projective measurement of σ_y can we perfectly distinguish these two channels. Therefore, the only way to achieve a success probability better than random guessing is to introduce imaginarity into both the initial state ρ and the measurement.
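This argument is easy to verify numerically. The sketch below uses a concrete Kraus form consistent with the behavior described in the text (the specific form is our assumption): N keeps only the σ_y Bloch component of the input, while M flips its sign.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def N(rho):
    # Bloch action (rx, ry, rz) -> (0, ry, 0); sigma_x sigma_z is proportional to sigma_y.
    return 0.5 * (rho + sy @ rho @ sy)

def M(rho):
    # Bloch action (rx, ry, rz) -> (0, -ry, 0).
    return 0.5 * (sx @ rho @ sx + sz @ rho @ sz)

real_probe = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # real state
imbit = 0.5 * np.array([[1, -1j], [1j, 1]])                     # |+i><+i|

# Any real probe is mapped to 1/2 by both channels: no information at all.
print(np.allclose(N(real_probe), M(real_probe)))  # True

# The imbit probe plus a sigma_y (eigenbasis) measurement distinguishes perfectly:
p_plus_N = np.trace(imbit @ N(imbit)).real  # probability of outcome |+i> under N
p_plus_M = np.trace(imbit @ M(imbit)).real  # probability of outcome |+i> under M
print(p_plus_N, p_plus_M)                   # 1.0 0.0
```

With an imaginary probe and an imaginary measurement the guess is deterministic, while real probes (or real measurements alone) give no better than a coin flip.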
It is worth noting that the same two channels N and M become perfectly distinguishable, even with no imaginarity in the probe state and in the measurement, if we remove the requirement of ancilla-free discrimination. If we allow an ancilla R, we need to consider a bipartite input state ρ_RA and a bipartite POVM {M_1^RA, M_2^RA}, with success probability (1/2) Tr[M_1^RA (id_R ⊗ N)(ρ_RA)] + (1/2) Tr[M_2^RA (id_R ⊗ M)(ρ_RA)]. If we feed the maximally entangled state φ+ to both channels, we obtain the normalized Choi states (id ⊗ N)(φ+) = (|ψ+⟩⟨ψ+| + |φ−⟩⟨φ−|)/2 and (id ⊗ M)(φ+) = (|φ+⟩⟨φ+| + |ψ−⟩⟨ψ−|)/2, which have orthogonal supports. As noted in Ref. [18], these two output states can be perfectly distinguished by the real POVM {M_1, M_2}, where M_1 = |ψ+⟩⟨ψ+| + |φ−⟩⟨φ−| and M_2 = 1 − M_1; all Bell-state projectors are real. This shows that the two real channels can be distinguished perfectly with the aid of an ancilla, using only real states and real measurements.
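Under the same channel assumptions as above (N(ρ) = (σ_x ρ σ_x + σ_z ρ σ_z)/2 and M(ρ) = (ρ + σ_x σ_z ρ σ_z σ_x)/2), the ancilla-assisted protocol can be verified numerically: the two Choi states have orthogonal supports, and a real projective POVM built from the support of one of them wins with certainty.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def choi(kraus):
    """Normalized Choi state (id x Lambda)(phi+) of a qubit channel."""
    phi = np.zeros((4, 1), dtype=complex)
    phi[0], phi[3] = 1 / np.sqrt(2), 1 / np.sqrt(2)     # |phi+>
    Phi = phi @ phi.conj().T
    return sum(np.kron(I2, K) @ Phi @ np.kron(I2, K).conj().T for K in kraus)

# Kraus decompositions of the two channels
C_N = choi([sx / np.sqrt(2), sz / np.sqrt(2)])
C_M = choi([I2 / np.sqrt(2), (sx @ sz) / np.sqrt(2)])

overlap = np.trace(C_N @ C_M).real       # orthogonal supports -> 0
M1 = (2 * C_N).real                      # real projector onto supp(C_N)
p_guess = 0.5 * np.trace(M1 @ C_N).real \
        + 0.5 * np.trace((np.eye(4) - M1) @ C_M).real
```

Since 2·C_N is a rank-two projector with real entries, the discriminating POVM uses no imaginarity, in line with the text.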
EXPERIMENTS
We experimentally implement the aforementioned assisted imaginarity distillation and channel discrimination protocols. The whole experimental setup is illustrated in Fig. 1 and consists of three modules. Module A enables us to prepare a two-qubit entangled state |ψ⟩_AB = a|00⟩ + b|11⟩ via a spontaneous parametric down-conversion (SPDC) process, with arbitrary a and b satisfying |a|^2 + |b|^2 = 1, which can be tuned by changing the angles of the 404 nm HWP and QWP. Note that we have conventionally set |0⟩ := |H⟩ and |1⟩ := |V⟩. Module B utilizes an unbalanced Mach-Zehnder interferometer together with module A to prepare a class of Werner states, where p denotes the purity of the two-qubit state. Module B also allows us to implement single-qubit channels in the ancilla-free scenario. Module C allows us to perform quantum-state tomography (QST) to identify the final two-qubit polarization-encoded states of interest, or to perform assisted imaginarity distillation by performing a local measurement on Alice's photon and identifying the exact amount of imaginarity by QST of Bob's state. Moreover, this module allows us to implement channel discrimination by performing a local measurement on the polarization state of a single photon while the other photon is used as a trigger. We refer to the Appendix for more details.
We then perform proof-of-principle experiments of the one-shot assisted imaginarity distillation and the ancilla-free channel discrimination tasks. The results are shown in Figs. 2 and 3, respectively.
For assisted imaginarity distillation, we experimentally prepare two classes of two-qubit states. The first class consists of the pure states of Eq. (18). Theoretically, the upper bound for single-shot assisted imaginarity distillation can be calculated from Theorem 2 as F_I(|ψ⟩_AB) = 2|ab|. From Fig. 2(a), we see that the experimentally obtained average imaginarity after assistance (blue disks) approximately equals the experimentally obtained upper bound (red disks) within reasonable experimental imperfections. The second class of states are the Werner states of Eq. (19). Theoretically, the maximum average fidelity of imaginarity after assistance is calculated as F_I(ρ_AB) = p. Fig. 2(b) details the relevant experimental results. From both results we see that the experimentally obtained average fidelity of imaginarity and the upper bound obtained from two-qubit state tomography agree well with the theoretical predictions.
We then show the usefulness of imaginarity in channel discrimination for various discrimination tasks. Fig. 3 details these results for two discrimination tasks. The first discrimination task involves the two channels M_p(ρ) = pρ + (1 − p)σ_x σ_z ρ σ_z σ_x and N(ρ) = (σ_x ρ σ_x + σ_z ρ σ_z)/2. Note that both channels preserve real density matrices. The experimental results of this discrimination task are shown in Fig. 3(a). If we can use imaginarity in measurements and initial states, we can perfectly distinguish the two channels [orange disks in Fig. 3(a)]. However, if we allow only real density matrices as initial states or real measurement operators, we get a theoretical optimal guessing probability of 1/2 + |2p − 1|/4 for ancilla-free channel discrimination.
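Taking the channels M_p and N at face value, one can verify that the σ_y component of the Bloch vector is preserved by M_p but flipped in sign by N, so the imaginary probe |+i⟩ with a σ_y measurement discriminates perfectly for every value of p (a sketch; variable names are ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def M_p(rho, p):
    return p * rho + (1 - p) * sx @ sz @ rho @ sz @ sx

def N(rho):
    return (sx @ rho @ sx + sz @ rho @ sz) / 2

plus_i = (I2 + sy) / 2     # |+i><+i|, the +1 eigenprojector of sigma_y

# Guessing probability with the |+i> probe and sigma_y measurement,
# minimized over a grid of p: it stays exactly 1 for all p.
worst = min(
    0.5 * np.trace(plus_i @ M_p(plus_i, p)).real
    + 0.5 * np.trace((I2 - plus_i) @ N(plus_i)).real
    for p in np.linspace(0, 1, 11)
)
```

This matches the claim that with imaginarity in both state and measurement the two channels are perfectly distinguishable for every p.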
Experimental data are in agreement with the theoretical predictions [see green disks in Fig. 3(a)]. Here we note that the two channels are exactly the same as in Eqs. (14) when p = 1/2. For the second discrimination task, we consider a second pair of real channels, parametrized by w; the results are shown in Fig. 3(b). If non-real states and measurement operators are allowed, then the theoretical optimal distinguishing probability is 3/4 + w/4, plotted as the upper orange line in Fig. 3(b). The relevant experimentally obtained distinguishing probabilities are shown as orange disks. If imaginarity is prohibited in this task, the optimal distinguishing probability reads 1/2 + w/4, plotted as the lower green line, together with experimental values represented by green disks. We draw a similar conclusion to that of the first discrimination task.
DISCUSSION
The results presented above are mainly based on the new set of LQRCC operations introduced and studied in this article. We considered assisted imaginarity distillation in this setting, and completely solved the problem for general two-qubit states. Moreover, we discussed the task of single-shot assisted imaginarity distillation for arbitrary pure states in higher dimensions. The usefulness of imaginarity in channel discrimination is shown both theoretically and experimentally for a class of real channels.
There are in fact many scenarios of practical relevance where the task of assisted imaginarity distillation can play a central role. For instance, consider a remote or inaccessible system on which imaginarity is needed as a resource (e.g., in the task of local discrimination of quantum states): our results give optimal prescriptions for injecting such imaginarity into the remote target by acting on an ancilla. The results provide insight into both the operational characterization and the mathematical formalism of the resource theory of imaginarity, contributing to a better understanding of this fundamental resource.
Implementing general quantum operations
Here, we show that one imbit is necessary and sufficient to implement an arbitrary quantum operation. To see this, suppose we want to implement a quantum operation Λ on ρ with Kraus operators {K_j}, such that Σ_j K_j† K_j = P ≤ 1.
To implement this, we construct a real quantum operation Λ_r with suitably chosen Kraus operators. The last inequality follows from the fact stated above. This shows that one imbit is sufficient to implement general quantum operations. Now we show that there exists a quantum channel which necessarily requires one imbit to be implemented via real operations. As an example, consider the map Λ_+ given below. We show, by contradiction, that this quantum map requires one imbit to implement. Suppose there were an implementation with a real operation Λ_r; then, if σ is not an imbit and ρ = |0⟩⟨0|, it is easy to see that the state transformation in Eq. (25) is not possible. This is because I_g(|0⟩⟨0| ⊗ σ) = I_g(σ) < I_g(|+⟩⟨+|).
Properties of LRCC operations
For any real CP map Λ : R → R′, the corresponding Choi matrix Γ^Λ_RR′ is given below. Any LQRCC map Λ can be represented in the following way: here, Λ_i is a CP (trace non-increasing) map acting locally on Alice's Hilbert space and Λ^r_i is a local real CP map on Bob's Hilbert space. The Choi matrix of Λ_i ⊗ Λ^r_i is given accordingly. Let us now take the transpose of this Choi matrix over BB′. In the second line we used the fact that real operations commute with transposition. Since any LQRCC operation can be represented as in Eq. (27), the Choi matrix of any LQRCC operation is invariant under partial transpose over Bob's systems. For LRCC operations, additionally, the Choi matrix is always real.
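The commutation of real operations with transposition can be checked directly: for a channel with real Kraus operators, the Choi matrix is real and invariant under simultaneous transposition of its input and output indices (which, for the single-system Choi matrix below, is the full matrix transpose). A sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def choi(kraus, d):
    """(Unnormalized) Choi matrix sum_ij |i><j| (x) Lambda(|i><j|)."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1
            C += np.kron(Eij, sum(K @ Eij @ K.conj().T for K in kraus))
    return C

# A random real CPTP map on a qubit: real Kraus operators, rescaled so
# that sum_k K_k^T K_k = 1 holds.
A = [rng.standard_normal((2, 2)) for _ in range(3)]
S = sum(K.T @ K for K in A)
L = np.linalg.cholesky(np.linalg.inv(S))   # real, since S is real pos. def.
kraus = [K @ L for K in A]

C = choi(kraus, 2)
is_real = np.allclose(C.imag, 0)
is_transpose_invariant = np.allclose(C, C.T)   # transpose over in+out together
```

Both properties hold for every real channel, which is what makes the Choi matrix of an LQRCC operation invariant under partial transpose on Bob's side.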
Proof of Theorem 1
In the following, we assume that A and B are qubits. A general two-qubit state ρ_AB can be written in the Bloch representation. A general single-qubit POVM element on Alice's side can be written with probabilities 0 ≤ q_n ≤ 1, Σ_n q_n = 1, and vectors α_n such that |α_n| ≤ 1 and Σ_n q_n α_n = 0. The measurement {M^A_n} gives outcome n with a probability determined by these quantities, and Bob's post-measurement state has a corresponding Bloch vector. After Alice communicates her measurement outcome n to Bob, he applies a real operation Λ_n to his post-measurement state ρ^B_n. For each measurement outcome n, Bob aims to maximize the fidelity between Λ_n[ρ^B_n] and the maximally imaginary state |+⟩. The maximum is given by the fidelity of imaginarity F_I, which takes a simple form for single-qubit states ρ^B_n. Using this result together with Eqs. (31) and (32) we can express our figure of merit F_a as in Eq. (34), where the maximization in the last expression is performed over all vectors α_n and probabilities 0 ≤ q_n ≤ 1 such that Σ_n q_n = 1, |α_n| ≤ 1 and Σ_n q_n α_n = 0. If |b_2| ≥ |s|, then using the conditions |α_n| ≤ 1 and Σ_n q_n α_n = 0 we immediately obtain an upper bound for any choice of q_n and α_n. This directly implies that F_a(ρ_AB) = 1/2 + |b_2|/2 in this case, in accordance with Eq. (8).
We now consider the case |b_2| < |s|. We will show that in the maximization in Eq. (34) it is enough to consider POVMs consisting of two elements. For a given set of vectors α_n and probabilities q_n we introduce two sets S_0 and S_1, depending on whether b_2 + s·α_n is positive or negative. Using these sets, we split the sum Σ_n q_n |b_2 + s·α_n| accordingly. In the next step, we introduce the probabilities q̃_0 = Σ_{n∈S_0} q_n and q̃_1 = Σ_{j∈S_1} q_j, and the vectors α̃_0 = Σ_{n∈S_0} q_n α_n / Σ_{n∈S_0} q_n (38a) and, analogously, α̃_1. Noting this, we further obtain the following result: the vectors α̃_n and probabilities q̃_n fulfill the conditions Σ_n q̃_n = 1, |α̃_n| ≤ 1, and Σ_n q̃_n α̃_n = 0. This implies that they correspond to a two-element POVM on Alice's side via the relation in Eq. (30).
The arguments just presented show that the maximum in Eq. (34) can be achieved with two vectors α_0 and α_1 and two probabilities q_0 and q_1 having the properties 0 ≤ q_0 ≤ 1, q_1 = 1 − q_0, |α_n| ≤ 1, Σ_n q_n α_n = 0. To complete the proof, we will show that the optimal solution is obtained for the choice in Eq. (41). Recalling that |b_2| ≤ |s|, the values in Eq. (41) immediately give a lower bound on the assisted fidelity of imaginarity, Eq. (42). Let now q_n and α_n be optimal probabilities and vectors [not necessarily coinciding with Eq. (41)]. Without loss of generality we can assume that b_2 + s·α_n changes sign between the two outcomes.¹ For the assisted fidelity of imaginarity we thus obtain Eq. (44). Since q_0 + q_1 = 1, it must be that either q_0 ≤ 1/2 or q_1 ≤ 1/2. In the first case we rewrite Eq. (44) accordingly; in the second case (q_1 ≤ 1/2), we rewrite Eq. (44) analogously. Thus, for |b_2| < |s| the assisted fidelity of imaginarity is bounded above as claimed.
¹ Otherwise, if b_2 + s·α_n is positive (or negative) for all n, we will not be able to reach the maximal value.
Together with Eq. (42) this proves that F_a(ρ_AB) = 1/2 + |s|/2 in this case, and the proof of the theorem is complete.
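The two-outcome optimization at the heart of the proof reduces to the scalar identity (|u + v| + |u − v|)/2 = max(|u|, |v|): for a symmetric measurement ±α with q_0 = q_1 = 1/2, the figure of merit becomes max(|b_2|, |s·α|), which is maximized by α = s/|s|. A quick numerical confirmation (b_2 and s chosen at random):

```python
import numpy as np

rng = np.random.default_rng(1)
b2 = rng.uniform(-1, 1)
s = rng.uniform(-1, 1, size=3)
s /= 2 * np.linalg.norm(s)          # some Bloch-compatible correlation vector

def merit(alpha):
    # q_0 = q_1 = 1/2, alpha_0 = alpha, alpha_1 = -alpha
    return (abs(b2 + s @ alpha) + abs(b2 - s @ alpha)) / 2

# Random unit vectors never beat max(|b2|, |s|) ...
alphas = rng.standard_normal((2000, 3))
alphas /= np.linalg.norm(alphas, axis=1, keepdims=True)
best_random = max(merit(a) for a in alphas)
bound = max(abs(b2), np.linalg.norm(s))

# ... and alpha = s/|s| attains the bound exactly.
attained = merit(s / np.linalg.norm(s))
```

This is only a spot check of the scalar step, not of the full measurement optimization handled in the proof.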
Theorem 2 has a few surprising consequences. If a two-qubit state has the property |b_2| ≥ |s|, then the assisted fidelity of imaginarity coincides with the fidelity of imaginarity of Bob's local state: F_a(ρ_AB) = (1 + |b_2|)/2. Thus, in this case Bob gains no advantage from assistance, as he can obtain the maximal fidelity by performing a local real operation without any communication. As an example, consider a quantum state shared by Alice and Bob for which b_2 = p and s = (0, p − 1, 0). If p = 1, then ρ_AB is a pure product state, and no matter what Alice does, Bob can always obtain the maximally imaginary state |+⟩. If 1/2 < p < 1, the state ρ_AB has nonzero entanglement, but we have |b_2| > |s|. If Alice chooses a projective measurement along α, Bob obtains states with Bloch vectors b ± E^T α with equal probability, and the average fidelity with the maximally imaginary state reads p. For all other two-qubit states the proof of Theorem 2 provides an optimal procedure for obtaining the maximal fidelity of imaginarity on Bob's side. For this, Alice needs to perform a von Neumann measurement in the basis {|ψ_0⟩, |ψ_1⟩}, where |ψ_0⟩ has the Bloch vector s/|s|. The outcome of the measurement is communicated to Bob, who leaves his state untouched if the outcome was 0, and otherwise applies the real unitary iσ_2.
Proof of Lemma 1
Note that the geometric measure of imaginarity and the concurrence of imaginarity are given in Refs. [40,45]. In their definitions, max_e and min_e denote maximisation and minimisation over pure-state ensembles of ρ, and {λ_1, λ_2, ...} are the eigenvalues (in decreasing order) of the associated matrix. In general, for probabilistic transformations, the corresponding inequality holds. It was further shown in [40] that the optimal probability of converting a pure state ψ into an arbitrary quantum state ρ is given by Eq. (52). In a one-way LQRCC procedure, Alice performs a general quantum measurement and, corresponding to her outcomes (with probabilities {p_j}), Bob's local state is found in the state ρ_j, such that {p_j, ρ_j} is an ensemble of ρ_B. Conditioned on Alice's outcome j, Bob can perform a local stochastic real operation on ρ_j, probabilistically converting it into σ_B. Using Eq. (51) and Eq. (52), the bound (53) follows. The second inequality follows from Eq. (49); G(ρ_j) is calculated by minimising over all pure-state ensembles of ρ_j, so the second inequality holds for any pure-state decomposition of ρ_j, say {q_k, ψ_jk}. Note that {p_j q_k, ψ_jk} is then a pure-state decomposition of ρ_B, and that any pure-state decomposition of ρ_B can be realised by a suitable local measurement by Alice. Using this fact, along with Eq. (49) and Eq. (52), the claimed bound follows, where min_e is the minimisation over pure-state ensembles of ρ_B. This completes the proof.
Proof of Theorem 4
From Lemma 1, we know that the optimal probability for Bob to locally achieve σ_B from a shared bipartite pure state ψ_AB with unit fidelity via LQRCC is given by Eq. (55). If we want to achieve σ_B with fidelity at least f, the best strategy is to go to a state σ̃_B within the fidelity ball around σ_B with a minimal geometric measure of imaginarity. From [40], we know the corresponding expression. We now define the quantity m. First, consider the case m ≥ 0. Using the stated identities, we obtain the quoted bounds; for the case 1 − I_c(ρ_B)² > 0, the above inequality implies that P_f(ψ_AB → σ_B) = 1 when m ≥ 0. Now we look at the other case, m < 0. From the corresponding inequality and Lemma 1, we obtain the stated result. Using this result, a closed expression can also be found for F_p. Let us first consider the case p ≤ (1 − I_c(ψ_AB))/(2 I_g(σ_B)) < 1; in this case F_p(ψ → σ_B) = 1 (this follows from Lemma 1). When 1 ≥ p > I_g(ψ)/I_g(σ_B), the optimal achievable fidelity can be obtained by solving Eq. (11) for f. This completes the proof.
SDP upper bounds for state transformations
As we already mentioned, for any real CP map Λ : R → R′, the corresponding Choi matrix Γ^Λ_RR′ is given below. It follows that (see Eq. (4.2.12) of [48]) the stated identity holds for any pure state |ψ⟩. Using the fact that Choi matrices of LQRCC operations are invariant under partial transpose, one can give an SDP-computable upper bound for the optimal achievable fidelity for a given probability, F_p(ρ_AB → |ψ⟩_AB): maximise the stated objective under the stated constraints.
Quantum Chernoff divergence and scaling of asymptotic imaginarity distillation
The fidelity of imaginarity F_I quantifies the maximum achievable fidelity between a state ρ and the maximally imaginary state. It can be expressed as a maximisation over all real CPTP maps.
If we have n copies of ρ, we can write the corresponding expression. If ρ is a pure state, i.e., ρ = |ψ⟩⟨ψ|, then we can calculate the fidelity of imaginarity of multiple copies explicitly. For general states, to see the behaviour of F_I(ρ^⊗n) with increasing n, consider the quantity P = 1 − F_I(ρ^⊗n). From Ref. [49], it follows that the limit lim_{n→∞} −(1/n) log P exists and is equal to the quantum Chernoff divergence between ρ and ρ^T, χ(ρ, ρ^T) = −log min_{0≤s≤1} Tr[ρ^s (ρ^T)^{1−s}]. One can perform this minimisation analytically and show that the minimum value is attained at s = 1/2. To show this, let the spectral decomposition of ρ be ρ = Σ_j p_j |ψ_j⟩⟨ψ_j|, so that ρ^T = Σ_k p_k |ψ_k^*⟩⟨ψ_k^*|. The resulting sum of terms p_j^s p_k^{1−s} is bounded below by the corresponding sum of √(p_j p_k); this follows from the AM-GM inequality, which says (a + b)/2 ≥ √(ab) for all a, b ≥ 0. This lower bound (the minimum value) is attained at s = 1/2, which proves Eq. (78), χ(ρ, ρ^T) = −log(Tr ρρ^T). Therefore, from Eq. (73), it follows that asymptotically the fidelity of imaginarity behaves as stated.
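The location of the minimum can be confirmed numerically: Q(s) = Tr[ρ^s (ρ^T)^{1−s}] is symmetric under s ↔ 1 − s and convex, so its minimum over [0, 1] sits at s = 1/2. A sketch for a random qubit state (matrix powers via the eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(2)

def mpow(rho, s):
    """Matrix power of a PSD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(rho)
    return (V * np.clip(w, 0, None) ** s) @ V.conj().T

# Random full-rank qubit density matrix (generically non-real)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real

grid = np.linspace(0.01, 0.99, 99)
Q = [np.trace(mpow(rho, s) @ mpow(rho.T, 1 - s)).real for s in grid]
s_min = float(grid[int(np.argmin(Q))])
```

For a real ρ the curve Q(s) is flat (ρ^T carries the same information as ρ), so the check is only meaningful for states with non-zero imaginarity.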
Proof of the relation between channel discrimination and state discrimination
Here we demonstrate a clear link between the task of ancilla-free channel discrimination and the task of LOCC discrimination of bipartite states, the latter studied in Refs. [4,18,20]. Specifically, we consider the following two scenarios: 1. Let N and M be two real channels from A to B, chosen with equal probability 1/2. If we want to discriminate between them in an ancilla-free scenario better than with a random guess, we must find a real state ρ of A and a real POVM element E of B such that Tr[E N(ρ)] ≠ Tr[E M(ρ)]. Notice that this protocol does not involve any bipartite input states or bipartite effects.
2. Let N and M be two real channels from A to B. This time, we bring in the maximally entangled state φ+ = |φ+⟩⟨φ+| between system A and a copy of A, where |φ+⟩ = Σ_j |jj⟩/√d_A and d_A is the dimension of A. We apply N and M only to one part of this maximally entangled state. This results in two bipartite states, N_AB and M_AB, respectively, which are the normalized Choi states of the two channels N and M. Now consider the task of discriminating between these two bipartite states using only local real measurements. Again, if we want to discriminate between them better than with a random guess, we must find a real POVM element E of system A and a real POVM element F of system B such that Tr[(E ⊗ F) N_AB] ≠ Tr[(E ⊗ F) M_AB].
In the following we show that these two scenarios produce the same probabilities when the POVMs are applied to the states. Note that we can reconstruct the action of a channel on a state from its normalized Choi state: if N is a channel from A to B and ρ is a state of A, then N(ρ) can be written in terms of the normalized Choi state N_AB as N(ρ) = d_A Tr_A[(ρ^T ⊗ 1_B) N_AB], where d_A is the dimension of the input system A. Thus, if E is a (real) POVM element on B, omitting system superscripts for simplicity, we have Tr[E N(ρ)] = d_A Tr[(ρ^T ⊗ E) N_AB], and ρ^T and E are both valid (real) POVM elements on A and B, respectively. So now we have an LOCC discrimination scenario on the normalized Choi state N_AB that yields exactly the same probability as the original ancilla-free channel discrimination scenario.
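The reconstruction identity used here, N(ρ) = d_A Tr_A[(ρ^T ⊗ 1_B) N_AB], can be verified directly for a random channel (the channel below is an arbitrary CPTP map built from rescaled random Kraus operators; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 2

# Random qubit channel via rescaled Kraus operators (sum K^dag K = 1)
A = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(3)]
S = sum(K.conj().T @ K for K in A)
w, V = np.linalg.eigh(S)
S_inv_sqrt = (V / np.sqrt(w)) @ V.conj().T
kraus = [K @ S_inv_sqrt for K in A]

def channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

# Normalized Choi state N_AB = (id x N)(phi+)
phi = np.eye(d).reshape(d * d, 1) / np.sqrt(d)      # vectorized |phi+>
N_AB = sum(np.kron(np.eye(d), K) @ (phi @ phi.conj().T)
           @ np.kron(np.eye(d), K).conj().T for K in kraus)

# Check N(rho) = d_A * Tr_A[(rho^T (x) 1) N_AB] on a random state
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = B @ B.conj().T
rho /= np.trace(rho)

X = (np.kron(rho.T, np.eye(d)) @ N_AB).reshape(d, d, d, d)
recovered = d * np.einsum('aiaj->ij', X)            # partial trace over A
ok = np.allclose(recovered, channel(rho))
```

Tracing over the first tensor factor with `einsum('aiaj->ij', ...)` implements Tr_A.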
Conversely, let us consider the LOCC discrimination scenario for the normalized Choi states. Let N_AB be the normalized Choi state of a channel N from A to B. If E and F are POVM elements on A and B, respectively, we want to calculate the probability Tr[(E ⊗ F) N_AB]. Note that, assuming E ≠ 0, ρ := E/Tr E is a valid quantum state, so Tr[(E ⊗ F) N_AB] = Tr E · Tr[(ρ ⊗ F) N_AB]. Then we have Tr[(E ⊗ F) N_AB] = Tr[F̃ N(ρ^T)], where we have used Eq. (80), and we have defined F̃ := (Tr E/d_A) F. Now, ρ^T is still a valid quantum state of A, and F̃ is still a valid POVM element on B because Tr E/d_A ≤ 1. So now we have an ancilla-free discrimination scenario on the channels associated with the bipartite normalized state that yields exactly the same probability as the original bipartite LOCC discrimination scenario. In this way, we have proven that all probabilities arising in one of the two scenarios can be completely reproduced by the other, so the two scenarios are equivalent in terms of the probabilities they can generate.
Having established the relation between channel discrimination and local discrimination of the corresponding Choi states, we can see that the advantage of imaginarity in real channel discrimination shows up when both the initial probe state and the measurement contain imaginarity. We accomplish this by mapping the ancilla-free channel discrimination scenario into the LOCC state discrimination scenario, using (normalized) Choi matrices, as discussed above. Let us consider the example of a qubit channel N. Its (normalized) Choi state can be expanded in the two-qubit Pauli basis, with i, j ∈ {x, y, z} and the σ_j's the Pauli matrices. If N is a real operation, then the only term containing σ_y must be σ_y ⊗ σ_y. Recall that Tr[S σ_y] = 0 for any real symmetric 2 × 2 matrix S (cf. Ref. [18]). For this reason, any POVM element M_AB = E_A ⊗ F_B with real symmetric matrices E and F cannot be used to detect the presence of the σ_y ⊗ σ_y term in the Choi matrix of a real operation. Consequently, there are real operations that are perfectly distinguishable, but become indistinguishable in an ancilla-free protocol if we only use real states and measurements. However, if we are still restricted to real probe states and measurements, but we allow an ancilla, then the same real operations become perfectly distinguishable again. To understand why, notice that when we allow an ancilla, we can use the state φ+ as the probe state for all real operations, thus producing their normalized Choi states. The task then becomes distinguishing between their Choi states, but without any LOCC constraint (recall that the LOCC constraint comes from the ancilla-free scenario). Removing the LOCC constraint from the discrimination of the Choi states makes the advantage provided by imaginarity disappear. Consequently, with an ancilla, we can perform as well with just real states and measurements as we do with non-real ones.
Experimental details
In Module A, two type-I phase-matched β-barium borate (BBO) crystals, whose optical axes are normal to each other, are pumped by a continuous laser at 404 nm with a power of 80 mW, generating photon pairs with a central wavelength of λ = 808 nm via a spontaneous parametric down-conversion (SPDC) process. A half-wave plate (HWP) and a quarter-wave plate (QWP) working at 404 nm, set before the lens and BBO crystals, are used to control the polarization of the pump laser. Two polarization-entangled photons are generated and then distributed through two single-mode fibers (SMF), one representing Bob and the other Alice. Two interference filters (IF) with a 3 nm full width at half maximum (FWHM) are placed to filter out the proper transmission peaks. HWPs at both ends of the SMFs are used to control the polarization of both photons.
In Module B, for preparing Werner states, two 50/50 beam splitters (BSs) are inserted into one branch. In the transmission path, the two-photon state is still a Bell state. In the reflected path, three 400λ quartz crystals and a HWP with its angle set to 22.5° are used to dephase the two-photon state into the completely mixed state 1_AB/4. The ratio of the two states mixed at the output port of the second BS can be changed by the two adjustable apertures (AA) for the generation of Werner states. This setup also allows us to implement the class of quantum channels specified in the main text.
Theorem 1. The achievable fidelity for a given probability of success, F_p(ρ_AB → |ψ⟩_A′B′), of transforming ρ_AB into |ψ⟩_A′B′ via LQRCC operations can be upper bounded by the following semidefinite programme. Maximise the stated objective, where the σ_k's are Pauli matrices, and a = (a_1, a_2, a_3) and b = (b_1, b_2, b_3) describe the local Bloch vectors of Alice and Bob, respectively.
FIG. 2. Experimental results for assisted imaginarity distillation. (a) Initial pure states |ψ⟩_AB = a|00⟩ + b|11⟩; (b) initial Werner states.
FIG. 3. Experimental results for discrimination tasks. Two channel discrimination tasks are tested: (a) M_p(ρ) = pρ + (1 − p)σ_x σ_z ρ σ_z σ_x, N(ρ) = (σ_x ρ σ_x + σ_z ρ σ_z)/2. Using imaginarity one can perfectly distinguish the two channels; if only real operators are allowed, the optimal guessing probability is reduced.
ACKNOWLEDGMENTS. The work at the University of Science and Technology of China is supported by the National Key Research and Development Program of China (No. 2018YFA0306400), the National Natural Science Foundation of China (Grants Nos. 12134014, 12104439, 61905234, 11974335, 11574291, and 11774334), the Key Research Program of Frontier Sciences, CAS (Grant No. QYZDYSSW-SLH003), USTC Research Funds of the Double First-Class Initiative (Grant No. YD2030002007), and the Fundamental Research Funds for the Central Universities (Grants Nos. WK2470000035 and WK2030000063). The work in Poland was supported by the National Science Centre, Poland, within the QuantERA II Programme (No. 2021/03/Y/ST2/00178, acronym ExTRaQT) that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 101017733, and the "Quantum Optical Technologies" project, carried out within the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund. CMS acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant "The power of quantum resources" RGPIN-2022-03025 and the Discovery Launch Supplement DGECR-2022-00119.
Flavon Inflation
We propose an entirely new class of particle physics models of inflation based on the phase transition associated with the spontaneous breaking of the family symmetry responsible for the generation of the effective quark and lepton Yukawa couplings. We show that the Higgs fields responsible for the breaking of family symmetry, called flavons, are natural candidates for the inflaton field in new inflation, or for the waterfall fields in hybrid inflation. This opens up a rich vein of possibilities for inflation, all linked to the physics of flavour, with interesting cosmological and phenomenological implications. Out of these, we discuss two examples which realise flavon inflation: a model of new inflation based on the discrete non-Abelian family symmetry group A_4 or Delta_27, and a model of hybrid inflation embedded in an existing flavour model with a continuous SU(3) family symmetry. With the inflation scale and family symmetry breaking scale below the Grand Unified Theory (GUT) scale, these classes of models are free of the monopole (and similar) problems which are often associated with the GUT phase transition.
Introduction
Although the inflationary paradigm was proposed nearly thirty years ago to solve the horizon and flatness problems [1], it is only relatively recently that its main predictions of flatness and density perturbations have been firmly demonstrated to be consistent with observation [2,3]. However the precise nature of the inflation mechanism remains obscure and several versions of inflation have been proposed, bearing names such as old inflation, new inflation, natural inflation, supernatural inflation, chaotic inflation, hybrid inflation, hilltop inflation, and so on [4]. Furthermore, the relation of any of these mechanisms for inflation to particle physics remains unclear, despite much effort in this direction [4]. Indeed it is not even clear if the "inflaton" (the scalar field responsible for inflation) resides in the visible sector of the theory or the hidden sector.
On the other hand, one of the great problems facing particle physics is the flavour problem, i.e. the origin of the three families of quarks and leptons and their Yukawa couplings responsible for their masses and mixings. In the past decade, the flavour problem has been enriched by the discovery of neutrino mass and mixing, leading to an explosion of interest in this area [5]. A common approach is to suppose that the quarks and leptons are described by some family symmetry which is spontaneously broken at a high energy scale by new Higgs fields called "flavons" [6]. In particular, the approximately tri-bimaximal nature of lepton mixing provides a renewed motivation for the idea that the Yukawa couplings are marshalled by a spontaneously broken non-Abelian family symmetry which spans all three families, for example SU (3) [7], SO(3) [8], or one of their discrete subgroups such as ∆ 27 or A 4 [9]. Furthermore such family symmetries provide a possible solution to the supersymmetric (SUSY) flavour and CP problems [10].
In this paper we suggest that the phase transition associated with the spontaneous breaking of family symmetry is also responsible for cosmological inflation, a possibility we refer to as flavon inflation. We emphasise that flavon inflation does not represent a new mechanism for inflation, but rather a whole class of inflation models associated with the spontaneous breaking of family symmetry. For example, the flavons themselves are natural candidates for inflaton fields in new inflation. Since most family symmetry models rely on SUSY, we shall work in the framework of SUSY inflation, with supergravity (SUGRA) effects also taken into account. In family symmetry models there may also be other fields associated with the vacuum alignment of the flavons, often called "driving" superfields, and these can alternatively be considered as candidates for the inflaton, with the flavons being identified as the "waterfall fields" of SUSY hybrid inflation. 1 As we shall show, flavon inflation is exceptionally well suited for driving cosmological inflation. In new and hybrid inflation, the inflationary scale is well known to lie below the GUT scale, causing some tension in models based on GUT symmetry breaking. One advantage of flavon inflation is that the breaking of the family symmetry, and hence inflation, can occur below the GUT scale. Of course an earlier stage of inflation may also have occurred at the GUT scale, but it is the lowest scale of inflation that is relevant for determining the density perturbations. Another advantage of flavon inflation is that inflationary models associated with the breaking of a GUT symmetry are often plagued by the presence of magnetic monopoles, which tend to overclose the Universe. In the case of flavon inflation, since the family symmetry is completely broken 2 , no monopoles result and the monopole problem is therefore absent; in addition, any unwanted relics associated with GUT-scale breaking are inflated away.
In Section 2 we discuss in general terms how the idea of flavon inflation opens up a rich vein of possible inflationary scenarios, all linked to the physics of flavour, and discuss some of their interesting cosmological and phenomenological implications. In Section 2.1 we briefly review the motivation for family symmetry and flavons. In Section 2.2 and 2.3 we consider two concrete examples of flavon inflation using flavour models. In the first example we show how new inflation can arise with the flavons in fundamental representations of an A 4 family symmetry group playing the role of inflatons. In the second example, we show how the driving superfields responsible for the vacuum alignment of the flavons can play the role of inflatons, with the flavons being the waterfall fields of hybrid inflation. In Section 3 we discuss possible implications of flavon inflation for cosmology and particle physics. Section 4 concludes the paper.
Family symmetry breaking and inflation
2.1 Family Symmetry and Flavons
One of the greatest mysteries facing modern particle physics is the origin of quark and lepton masses and mixings. In the Standard Model (SM) they arise from Yukawa matrices and (in the see-saw extended SM) right-handed neutrino Majorana masses. In order to understand the origin of fermion masses and mixing, a common approach is to assume that the SM is extended by some horizontal family symmetry G_F, which may be continuous or discrete, and gauged or global. It must be broken completely, apart from possibly remaining discrete symmetries, at some high energy scale in order to be phenomenologically consistent, and such a symmetry breaking requires the introduction of new Higgs fields called flavons, φ, whose vacuum expectation values (vevs) ⟨φ⟩ ≠ 0 break the family symmetry.
The Yukawa couplings are forbidden by the family symmetry G_F, but once it is broken, effective Yukawa couplings may be generated by non-renormalisable operators involving powers of flavon fields, for example (φ/M_c)^n ψψ^c H, leading to an effective Yukawa coupling ε^n ψψ^c H, where ε = ⟨φ⟩/M_c, ψ and ψ^c are SM fermion fields, H is a SM Higgs field, and M_c is a high energy mass scale associated with the exchange of massive particles called messengers. Phenomenology typically requires ε ∼ 0.1.
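Schematically (in the notation of the operator quoted above), once φ acquires its vev the non-renormalisable operator collapses to a suppressed effective Yukawa coupling:

```latex
\left(\frac{\phi}{M_c}\right)^{n}\psi\,\psi^{c}H
\;\longrightarrow\;
\varepsilon^{n}\,\psi\,\psi^{c}H ,
\qquad
\varepsilon=\frac{\langle\phi\rangle}{M_c}\sim 0.1 ,
```

so operators with larger powers n, dictated by the G_F charges of ψ and ψ^c, yield smaller effective couplings ε^n ∼ 10^(−n), which is how such models generate the observed fermion mass hierarchies.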
If in addition to the family symmetry, the SM gauge group is unified into some GUT gauge group G GUT (for example SU (5), SO(10), etc.) then the high energy theory has the symmetry structure G F × G GUT . In such frameworks, the theory has additional constraints arising from the fact that the messenger sector must not spoil unification. This implies that either the messenger sector scale M c has to be very close to the GUT scale M GUT (thus pushing also the family symmetry breakdown close to the GUT scale) or the messengers must come in complete GUT representations, leading to consequences for low energy phenomenology. Assuming that the flavon sector is responsible for inflation provides additional information on the scale of family symmetry breaking, as we now discuss in the framework of two examples.
Example 1: Flavon(s) as inflaton(s)
The first example we discuss is one where the flavon plays the role of the inflaton in a new inflation model, similar to the one discussed in [11]. However, we make use of the fact that when the inflatons are representations of family symmetry groups instead of GUT groups, new possibilities for inflation models arise. To be more explicit, in the considered class of inflation models, in addition to the invariant combination of fields (φφ̄)^n/M_*^(2n−2) (with φ and φ̄ being two fields in conjugate representations) studied in [11], we can now write any family-symmetry-invariant combination of fields. For example, with a non-Abelian discrete family symmetry A4 or Δ27, new types of invariant superpotentials can be written down. Without loss of generality the Yukawa coupling κ can be set equal to unity as in [11]. At the global minimum of the potential the flavons acquire vevs of order M. In the following we analyse the phenomenological predictions of this class of models. We assume a Kähler potential of a non-minimal form (analogous to [11]) and study the supergravity F-term scalar potential 3 . In order to study this potential, we assume S ≪ M_Pl and φ_i ≪ M ≪ M_Pl, where M_Pl is the reduced Planck mass M_Pl = (8πG_N)^(−1/2) ∼ 10^18 GeV. During inflation, we consider the case in which the driving field S acquires a large mass and therefore goes rapidly to a zero field value (which can be achieved by choosing κ_3 < −1/3 such that it is heavier than the Hubble scale [11]). Furthermore, we focus on the situation in which the component fields φ_i start moving from close to zero (the local maximum of the potential) and roll slowly towards the true minimum of the potential where φ_i ∼ M (vacuum dominated inflation [13]). It is possible to show that a generic inflationary trajectory occurs when all components φ_i of the triplet field are equal; therefore, we concentrate on this trajectory in what follows 4 .
Defining the real field components as |φ_i| ≡ ϕ/√2 and β = (κ_2 − 1), λ = β(β + 1) + 1/2 + κ_1/12 and γ = 2/6^(3n/2) (numerically ≈ 0.14 for n = 1), we obtain the potential during inflation [14]. In the following we consider the situation where the γ ϕ^(3n) term dominates, so that the quartic term ∝ ϕ^4/M_Pl^4 can be neglected 5 , and we find that the spectral index n_s can be expressed in terms of the parameters of the potential and the number of e-folds N 6 . The results are illustrated in figure 1. The predictions for n_s are close to the WMAP 5 year data [3], n_s = 0.960 ± 0.014, for n ≥ 2 and β ≲ 0.03. In all cases we have taken N = 60.
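For reference, the standard slow-roll quantities referred to in the text (footnote 6 quotes n_s − 1 = 2η − 6ε with ε ≪ η) are, in terms of the inflaton potential V(ϕ) and its derivatives:

```latex
\epsilon \equiv \frac{M_{Pl}^{2}}{2}\left(\frac{V'}{V}\right)^{2},
\qquad
\eta \equiv M_{Pl}^{2}\,\frac{V''}{V},
\qquad
n_s - 1 = 2\eta - 6\epsilon \;\simeq\; 2\eta
\quad (\epsilon \ll \eta).
```

In vacuum-dominated inflation of the type considered here, ε is strongly suppressed relative to η, so the spectral index is controlled essentially by the curvature of the potential.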
The scale M, which governs the size of the flavon vev ⟨φ⟩, and the inflation scale µ are determined by the temperature fluctuations δT/T of the CMB (assuming that inflation and δT originate from φ) for a given generation scale M_* of the effective operator in Eq. (2.1). Specifically, we can relate the scale M_* to M and µ via the amplitude of the density perturbation when it re-enters the horizon. If we write this equation explicitly in terms of µ^2 and M and relate it to M_*, as defined below Eq. (2.1), we get: for n = 1 and N ∼ 60 (with β in the relevant range), M_* is no longer a free parameter and is in fact determined to be around 10^24 GeV. In this case we are free to choose µ, and thus we could in principle have low scale inflation under this condition. Also, in order to have M < M_GUT, µ would have to be below about 10^10 − 10^11 GeV. However, since M_* is found to be larger than the Planck scale, it cannot be regarded as a fundamental generation scale of the effective operator but must itself emerge as an effective scale.

Footnote 4: When this is not the case, we have in general a multifield flavon inflationary scenario. This can arise in family symmetry models as considered for instance in [12], as part of a multistage inflationary model. A detailed analysis of this situation is left for a future publication. Footnote 5: When the γ ϕ^(3n) term is subdominant, no useful information about the family symmetry breaking scale is obtained; inflation can still be realised but requires some fine tuning of the parameter λ [17]. Footnote 6: In standard slow roll inflation n_s − 1 = 2η − 6ε, where η, ε are the standard slow roll parameters; in the present case ε ≪ η.
When (at least part of) the family symmetry breaking takes place below M GUT , this has interesting phenomenological consequences as we discuss in Sec. 3.
Example 2: Driving superfield(s) as inflaton
In supersymmetric theories the superpotentials which determine the flavons' vevs contain another class of fields in addition to the flavons, the so-called driving superfields. The driving superfields are singlets under the family symmetry, in contrast to the flavons.
As an example of how inflation may be realised from the driving superfields, we consider a vacuum alignment potential as studied in the SU(3) family symmetry model of [12]. We assume the situation that ⟨φ_23⟩ ∝ (0, 1, 1)^T and ⟨Σ⟩ = diag(a, a, −2a) are already at their minima, and that the relevant part of the superpotential which governs the final step of family symmetry breaking is given by [12] W = κ S (φ_123 φ̄_123 − M^2) + κ′ Y_123 φ̄_23 φ_123 + κ″ Z_123 φ̄_123 Σ φ_123 + ... . (2.12) S is the driving superfield for the flavon φ_123, i.e. the contribution to the scalar potential from |F_S|^2 governs the vev ⟨φ_123⟩. In addition we assume a non-minimal Kähler potential. Although the theory is rather complicated, we emphasise that it is taken from the existing family symmetry literature. For the purposes of inflation, we are interested in the epoch where the fields with larger vevs Y_123, φ_123, φ̄_123 do not evolve, and inflation is provided by the fields with smaller vevs. We note that due to the vevs of φ_23 and Σ, SU(3) is already broken. In order to proceed, we analyse the supergravity scalar potential (2.3), focusing on the D-flat directions which are potentially promising for inflation. Setting Y_123 = φ_123 = φ̄_123 = 0, since these fields obtain large masses from the superpotential, the tree level scalar potential takes a simple form when expanded in powers of fields over M_Pl 7 , where we have defined |S| = σ/√2, |Z_123| = ξ/√2 and γ = κ_SZ − 1. From this expression, we see that if the coefficients in front of the mass terms for σ and/or ξ are sufficiently small, both or one of them can drive inflation.
If we assume that σ acts as the inflaton (choosing for instance γ < −1/3 such that the mass of ξ exceeds the Hubble scale) and taking into account loop corrections to the potential, it has been shown in [15,16] that for κ_S ≈ 0.005 − 0.01 and κ ≈ 0.001 − 0.05, a spectral index consistent with the WMAP 5 year data [3], n_s = 0.96 ± 0.014, is obtained. Finally, the scale M of family symmetry breaking along the φ_123-direction is determined from the temperature fluctuations δT/T of the CMB to be M ≈ 10^15 GeV, (2.15) about an order of magnitude below the GUT scale. After having analysed two example scenarios, let us now turn to a general qualitative discussion of possible consequences of flavon inflation.

Footnote 7: Here we show only the terms relevant for inflation. However, one should keep in mind that quartic terms are present, such that the fields evolve from large values to small ones. The details of this model have been presented in [16].
Discussion and Implications of Flavon Inflation
The connection between family symmetry breaking and inflation has several implications for theories of inflation as well as for theories of flavour. In this section we discuss some of the cosmological and particle physics consequences. Many important implications are related to the fact that the scale of family symmetry breaking (which is connected to the scale of inflation) is determined by the temperature fluctuations of the CMB, i.e. inflation predicts the scale of family symmetry breaking. In the two examples presented in sections 2.2 and 2.3, we have found that (at least the relevant part of) the family symmetry breaking takes place at about 10^15 to 10^16 GeV, that is, below the GUT scale. Another intriguing possibility would be flavon inflation at TeV energies, such that both the flavour sector and the inflationary dynamics might be observable at the LHC.
One attractive feature of having inflation after a possible GUT phase transition is that unwanted relics from the GUT phase transition, such as monopoles, are diluted. Furthermore, after spontaneous family symmetry breaking the symmetry is commonly completely broken, meaning in particular that no continuous symmetry remains. Domain wall networks possibly created from remaining discrete symmetries are in general unproblematic, because they are effectively blown away by the pressure generated by higher-dimensional operators which break the discrete symmetries (c.f. comment in footnote 2).
The fact that in some cases (as discussed in the text) inflation can predict the scale of family symmetry breaking to be below the GUT scale, can give rise to other additional consequences. For example, the renormalisation group (RG) evolution of the SM quantities from low energies to the GUT scale is modified by the new physics at intermediate energies.
Thus, the predicted GUT scale ratios of Yukawa couplings from low energy data will be modified due to the intermediate family symmetry breaking scale, which would affect, for instance, the possibility of third family Yukawa unification. Furthermore, the predictions for the fermion masses and mixings emerge at the family symmetry breaking scale. The knowledge of this scale is important for precision tests of these predictions. In particular, the renormalisation group running between this scale and low energy has to be taken into account.
The idea of flavon inflation yields new possibilities for inflation model building, and in general predicts a rich variety of possible inflationary trajectories for single-field and also multi-field inflation. In the example studied in section 2.2 we found that if the flavon(s) in fundamental representations of an A4 or Δ27 family symmetry act as the inflaton(s), there are new invariant field combinations in the (super)potential which have not been considered for inflation model building so far. In the example in section 2.3 we have seen that in addition to types of models similar to standard SUSY hybrid inflation, it is also generic (depending on the parameters of the Kähler potential) that the scalar components of more than one driving superfield participate in inflation. In addition, it is typical that family symmetry breaking proceeds in several steps, which would mean that before the observable inflation there could have been earlier stages of inflation.
Besides the rich structure of the potentials during inflation, there is also an interesting dynamics of the flavon fields after inflation. The potentials are usually such that not only the moduli of the vevs of the flavons are determined, but also that they point into specific directions in flavour space. When the flavons are moving towards their true values after inflation, the dynamics of the field evolution often has a much larger complexity and diversity than conventional inflation models. This is due to the fact that typically several flavon components are moving, and that the potentials have special shapes in order to force the flavon vevs to point in the desired directions in flavour space in the true minimum.
This can have consequences for the density perturbations (non adiabaticities, non-Gaussianities, etc.) as well as for baryogenesis during preheating and during (and after) reheating. Non-thermal leptogenesis, for example, would be connected to the physics of family symmetry breaking and new possibilities for generating the baryon asymmetry will appear. Since flavons can play the role of either the inflaton or the waterfall fields, their decays into leptons will be determined by couplings which are associated with some particular flavour model. Thus successful non-thermal leptogenesis following flavon inflation can provide further constraints on flavon inflation models, leading to possible bounds on right-handed neutrino masses and so on. There could also be further constraints coming from the proton decay in the unified flavon inflation models. For instance, as the scale of family symmetry breaking M approaches the GUT scale M GUT , the efficiency of monopole dilution is expected to fall. If, on the other hand, M is significantly below M GUT , the monopoles will be inflated away, but the flavour messenger sector will affect gauge coupling unification, with implications for proton decay. It is also interesting to note that when global family symmetries break, there could be pseudo-Goldstone bosons appearing with interesting phenomenology.
Conclusion
In conclusion, we have shown that existing models based on a spontaneously broken family symmetry, proposed to resolve the flavour problem, are naturally linked to cosmology. They introduce new and promising possibilities for cosmological inflation, which we have referred to generically as flavon inflation. In flavon inflation, the inflaton can be identified with either one of the flavon fields introduced to break the family symmetry, or with one of the driving fields used to align the flavon vevs. In either case, the scales of inflation and family symmetry breaking turn out to be typically below the GUT scale in the presented examples. Since the family symmetry is broken completely (c.f. comment in footnote 2), this provides a natural resolution of GUT scale cosmological relic problems, without introducing further relics. Moreover, flavon inflation has a large number of interesting consequences for particle physics as well as for cosmology, which we have only briefly touched on here but which are worth exploring in more detail in future studies [17].
Channel estimation using variational Bayesian learning for multi-user mmWave MIMO systems
This paper presents a novel variational Bayesian learning-based channel estimation scheme for hybrid pre-coding-employed wideband multiuser millimetre wave multiple-input multiple-output communication systems. We first propose a frequency variational Bayesian algorithm, which leverages common sparsity of different sub-carriers in the frequency domain. The algorithm shares all the information of the support sets from the measurement matrices, significantly improving channel estimation accuracy. To enhance robustness of the frequency variational Bayesian algorithm, we develop a hierarchical Gaussian prior channel model, which employs an identify-and-reject strategy to deal with random outliers imposed by hardware impairments. A support selection frequency variational Bayesian channel estimation algorithm is also proposed, which adaptively selects support sets from the measurement matrices. As a result, the overall computational complexity can be reduced. Validated by the Bayesian Cramér-Rao bound, simulation results show that, both frequency variational Bayesian and support selection-frequency variational Bayesian algorithms can achieve higher channel estimation accuracy than existing methods. Furthermore, compared with frequency variational Bayesian, support selection-frequency variational Bayesian requires significantly lower computational complexity, and hence, it is more practical for channel estimation applications.
INTRODUCTION
In recent years, millimetre wave (mmWave) communication, which can provide gigabit-per-second data rates, has received considerable attention [1]. Combined with hybrid large-array techniques, mmWave has become crucial for the successful development of the forthcoming fifth-generation (5G) mobile networks [2]. Nevertheless, estimating channel state information (CSI) is challenging for hybrid pre-coding-employed wideband multi-user mmWave systems due to large and compressed channel matrices [3]. Random outliers imposed by hardware impairments also need to be considered, because they will degrade the channel estimation (CE) 1 performance of CE algorithms [4]. Therefore, it is necessary to devise innovative CE algorithms to address the challenges in mmWave communication systems.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2020 The Authors. IET Communications published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology 1 Note that the meaning of term "CE" is identical with the estimation of the CSI. Therefore, throughout this paper these two terms will be used interchangeably.
Prior work
Most CE algorithms in mmWave systems concentrate on exploiting the channel sparsity [5]. By using compressed sensing theory, a sparse CE problem can be formulated as a sparse signal recovery problem. The required pilot overhead can then be significantly reduced compared to conventional algorithms, which use least squares (LS) [6] or minimum mean squared error methods [7]. Assuming that the estimated amplitudes of the non-zero coefficients are the channel gains of each path, sparse recovery amounts to estimating the pair of angle of departure (AoD) and angle of arrival (AoA) of each path in mmWave systems [8]. Most existing CE methods use narrow-band flat channel models [8][9][10]. However, in practice, mmWave channels generally have wideband and frequency-selective fading characteristics.
There are some publications that propose CE algorithms for wideband mmWave channels [11][12][13][14][15][16]. By leveraging the joint sparsity of the angular and delay domains, a time-domain CE algorithm using quantised sparse recovery is proposed in [11]. However, this algorithm is only applicable to single-carrier systems. Based on a multi-resolution codebook, some alternative CE algorithms have been presented. A distributed grid matching pursuit-based CE algorithm is proposed in [12] to estimate the channel parameters of frequency-selective fading channels. By exploiting the codebook structure, a combined time-frequency approach is proposed, which can estimate the wideband CSI with low pilot overhead [13]. The authors in [14] propose a support detection (SD)-based CE algorithm. By observing the structural characteristics of the mmWave beamspace channel, this algorithm can detect the support of the sparse beamspace [14]. Moreover, a simultaneous weighted orthogonal matching pursuit (SW-OMP) algorithm is proposed in [15]. Since this algorithm considers realistic frequency-selective fading channel models, it can exploit the spatially common sparsity of mmWave channels. Nevertheless, all these algorithms [12][13][14][15] assume that there is no multi-user interference. Therefore, they can only estimate the CSI of one user at a time. In addition, when applying these algorithms, grid errors are induced in the channel coefficients, because multi-resolution codebook-based CE strategies are employed [17,18].
In order to improve the overall CE accuracy, sparse Bayesian learning [19][20][21] is widely used to estimate the channel parameters. This approach avoids grid errors and hence improves the estimation performance. Specifically, an expectation maximisation-based sparse Bayesian learning method is proposed in [19]. By using Gaussian mixture prior models, this method can achieve high estimation accuracy with low pilot overhead. The works [20,21] use generalised approximate message passing-based methods to estimate the CSI of mmWave multiple-input multiple-output (MIMO) systems. However, none of these methods considers hardware impairments [4], which affect the performance of CE [22]. To solve this problem, [23] proposes a Bayesian compressed sensing-based CE algorithm for mmWave channels with random outliers. However, this algorithm [23] is only applicable to point-to-point narrow-band flat channels.
Contributions
Motivated by the above, we propose a novel CE scheme based on variational Bayesian inference in the frequency domain. This scheme is designed for wideband multi-user mmWave MIMO communication systems, where hybrid pre-coding architectures and frequency-selective fading channels are considered. A frequency variational Bayesian (FVB) algorithm is presented, which exploits the common sparsity of different sub-carriers. The algorithm formulates the CE problem as a sparse Bayesian problem, which can thus be solved by using variational Bayesian inference. By simplifying the inverse derivations, a support selection-FVB (SS-FVB) algorithm is also proposed to reduce the computational complexity. In addition, we provide a Bayesian Cramér-Rao bound (BCRB) to evaluate the estimation performance of the proposed algorithms. To the best of our knowledge, our paper is the first publication that considers the effect of random outliers imposed by hardware impairments for multi-user wideband CE with hybrid architectures. We summarise the specific contributions as follows.
1. We propose a method that incorporates machine learning techniques into the CE of wideband multi-user mmWave MIMO systems. Specifically, the sparse signal recovery problem is formulated as a sparse Bayesian learning problem, which is extensible and robust to inference. Considering the random outliers, a hierarchical Gaussian prior channel model is developed based on the identify-and-reject strategy, and thus the sparsity of the vectorised CSI is encouraged by hyperparameters. As a result, the robustness against random outliers can be significantly enhanced. 2. The variational Bayesian inference is employed to solve the sparse Bayesian learning problem. The Kullback-Leibler divergence is introduced to measure how closely the approximate posterior tracks the true posterior of the CSI, and we calculate the posterior distributions of all the latent variables. Hence, the estimation accuracy of the proposed CE algorithms is significantly improved. 3. The proposed scheme achieves an excellent performance in terms of CE accuracy. Specifically, the FVB algorithm shares all the information of the support sets from the measurement matrices, and provides a better estimation performance. For the SS-FVB algorithm, by setting some columns of the measurement matrices to zeros, the computational complexity can be reduced.
The work in this paper substantially extends our previous work [25], where a robust frequency-distributed variational Bayesian estimation algorithm is presented and its performance is assessed by comparing the estimation accuracy with some existing algorithms. However, [25] does not provide an analysis of the algorithm complexity, and does not address the high computational complexity incurred by the proposed algorithm. The computation of the BCRB is also omitted in [25]. To make our work complete and self-contained, in this paper we propose an SS-FVB algorithm to reduce the overall computational complexity. We analyse the computational complexity of the two proposed CE algorithms in detail and summarise it. We also provide the detailed derivation of the BCRB in Section IV.
The rest of this paper is organised as follows. Section II presents the wideband multi-user mmWave MIMO system model including a frequency-selective fading geometric channel model and a hybrid pre-coding architecture. In Section III, the CE problem in the frequency domain is further formulated as a sparse Bayesian learning framework. Section IV proposes a CE scheme based on variational Bayesian learning in detail, along with the derivations of the computational complexity and the BCRB. The numerical results in Section V are presented to compare the estimation performance of the two proposed algorithms. Finally, Section VI concludes the main conclusions derived from the simulation results with some ideas for further research work.
Notation
The notations used throughout this paper are as follows. A, a and a denote a matrix, a column vector and a scalar value, respectively. A_ij denotes the (i, j)-th element of the matrix A; A_:,j represents the j-th column and A_i,: the i-th row of A. ℂ^(M×N) represents the space of M × N complex-valued matrices. (·)^T, (·)^H, ‖·‖_2, ‖·‖_F and ⟨·⟩ denote the transpose, conjugate transpose, l2-norm, Frobenius norm and expectation, respectively. ⊗ is the Kronecker product and vec(·) vectorises a matrix by stacking its columns. ⟨x⟩_p(x) denotes the expectation of x with respect to the distribution p(x).
SYSTEM MODEL
As illustrated in Figure 1, we consider an orthogonal frequency division multiplexing (OFDM)-based multi-user mmWave MIMO communication system, which employs hybrid precoding architectures and frequency-selective fading geometric channel models. In this system, we assume that there are K user equipments (UEs) equipped with N UE antennas and M UE radio frequency (RF) chains. The UEs transmit pilot sequences to a base station (BS) with N BS antennas and M BS RF chains.
Channel model
We consider a frequency-selective fading geometric channel model according to the IEEE 802.11ad wireless standard [24]. The channel matrix H_d,k ∈ ℂ^(N_BS × N_UE) of the d-th delay tap and the k-th user can be expressed as in [26], where L_k represents the number of paths; α_l,k ∈ ℂ and τ_l,k are the complex gain and the time-domain delay of the l-th path, respectively; and a_BS(φ_l,k) and a_UE(θ_l,k) are antenna steering vectors of the BS and the UEs, with φ_l,k ∈ [0, 2π) and θ_l,k ∈ [0, 2π) being the AoD and AoA, respectively. B(τ_l,k) = p_rc(dT_s − τ_l,k), where T_s is the sampling period and p_rc(τ) denotes the pulse-shaping filter evaluated at time τ. Typical uniform linear arrays (ULA) with half-wavelength separation are deployed at the BS and the UEs, which fixes the form of the elements of a_BS(φ_l,k) and a_UE(θ_l,k). An extended virtual channel model [24] is employed to approximate the channel matrix H_d,k using dictionary matrices of sizes G_BS and G_UE for the AoA and AoD grids, respectively. To exploit the common sparsity of different sub-carriers, the frequency-domain channel matrix with N_C delay taps at sub-carrier p (0 ≤ p ≤ P − 1) can be written in terms of a sparse matrix for the k-th user at the p-th sub-carrier, where P is the number of sub-carriers.
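For concreteness, a half-wavelength ULA steering vector and a one-path narrowband channel snapshot can be sketched as follows. This is the standard construction consistent with the ULA assumption above; the angles, array sizes and unit-norm normalisation are illustrative choices, not values taken from the paper.

```python
import numpy as np

def ula_steering(theta, n_antennas):
    # Half-wavelength ULA: the phase increment between adjacent elements is
    # pi * sin(theta); the 1/sqrt(N) factor gives a unit-norm vector.
    n = np.arange(n_antennas)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(n_antennas)

# One-path narrowband snapshot H = alpha * a_BS(phi) a_UE(theta)^H
a_bs = ula_steering(0.3, 8)    # AoD phi = 0.3 rad, 8 BS antennas (illustrative)
a_ue = ula_steering(-0.5, 4)   # AoA theta = -0.5 rad, 4 UE antennas
H = 1.0 * np.outer(a_bs, a_ue.conj())
```

Each propagation path contributes one such rank-one term; summing L_k of them (weighted by the path gains and delay-tap filter) reproduces the geometric channel structure described in the text.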
Hybrid pre-coding
Considering that the BS uses a hybrid combining scheme, let the analog and digital combining matrices at the t-th time frame be denoted W_RF^(t) and W_BB^(t), respectively [11]; we employ zero padding (ZP) [13] instead of the cyclic prefix to avoid corrupting the pilot at the symbol edges. (Figure 1 shows the hybrid architecture, composed of antennas, phase shifters and adders; Figure 2 gives the graphical model of the hierarchical Gaussian prior channel model using the identify-and-reject strategy.) The received signal at the p-th sub-carrier can then be written as the combined pilot term plus three contributions: the effective noise from the hybrid combining procedure; w_p^(t), the vector of random outliers containing G ≤ M_BS non-zero entries; and n_p^(t), the additive multi-variate Gaussian noise vector. Here x_p,k^(t) is the effective pilot signal transmitted by the k-th user at the p-th sub-carrier, which is expressed in terms of the analog and digital pre-coding matrices F_RF,k^(t) and F_BB,p,k^(t), respectively.
PROBLEM FORMULATION
In this section, we propose a two-step approach to formulate the CE problem. We first formulate the CE process as a sparse signal recovery problem in the frequency domain. Then, we propose a sparse Bayesian learning problem based on a hierarchical Gaussian prior channel model, which employs the identify-and-reject strategy. Using this model for CE, the robustness and estimation accuracy of the proposed algorithm can be improved.
CE formulation
As the scatterers seen by the BS are much fewer than those seen by the UEs in the single-cell mmWave uplink MIMO system, a small angular spread will appear at the receiver, resulting in channel sparsity [28]. In particular, the number of effective paths is generally less than 30 [28]. Therefore, in this system the vectorised channel vector is sparse when the sizes of the dictionary matrices G_BS and G_UE are larger than 64 [29]. According to the sparsity of mmWave channels, Equation (5) can be rewritten so that h_p is a sparse angular vector with KL non-zero values. Since the available signal-to-noise ratio (SNR) of mmWave communication systems is generally very low, M successive OFDM symbols within the channel coherence time are required to estimate the CSI. By stacking the M measurements into a vector, Equation (7) can be expressed as Equation (8). Note that the number of required pilots dramatically increases when the traditional LS algorithm is used. In order to reduce the required pilot overhead, compressed sensing-based methods, i.e., orthogonal matching pursuit (OMP) [30], compressive sampling matching pursuit (CoSaMP) [31] and the iterative re-weighted l1 and l2 algorithms [32], are used to recover the sparse vector h_p from y_p [see Equation (8)].
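The compressed sensing baselines cited above can be illustrated with a minimal orthogonal matching pursuit sketch. This is the generic textbook OMP [30], not the paper's FVB/SS-FVB algorithms; the matrix sizes, random seed and support set below are illustrative.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Greedy OMP: recover a sparsity-sparse h from measurements y = Phi @ h."""
    residual = y.copy()
    support = []
    coef = np.zeros(0, dtype=complex)
    for _ in range(sparsity):
        # Greedy step: pick the column most correlated with the residual.
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit of the coefficients on the enlarged support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    h_hat = np.zeros(Phi.shape[1], dtype=complex)
    h_hat[support] = coef
    return h_hat

# Noiseless toy problem: 40 measurements of a 3-sparse length-100 vector.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)) + 1j * rng.standard_normal((40, 100))
h_true = np.zeros(100, dtype=complex)
h_true[[3, 57, 91]] = [1.0, -2.0, 0.5j]
h_hat = omp(Phi, Phi @ h_true, sparsity=3)
```

In the CE setting, Phi plays the role of the measurement matrix built from the pilots and the hybrid pre-coding/combining, and h_true is the sparse angular channel vector.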
On the other hand, we can observe that the frequency-domain channel model in (4) exhibits the same sparse feature for any sub-carrier p [24]. Therefore, the vectors h_p share a common support across all sub-carriers. Moreover, the number of required training frames can be substantially reduced thanks to this common sparse feature of the different sub-carriers in mmWave channels.
Sparse Bayesian learning
Compared to the signal recovery algorithms based on compressed sensing theory, a robust and extensible inference framework based on sparse Bayesian learning, can provide better estimation performance for sparse signal recovery problems [33]. By using this method, the sparsity of the estimated signal is encouraged by employing a sparsity-promoting prior, and all the corresponding components, such as hybrid pre-coding, additive Gaussian noise and random outliers, are presented following a sparse Bayesian learning framework. As a result, a hierarchical Gaussian prior model [34] based on two layers can be constructed.
We assume that the non-zero values of the channel vector h in the first layer are independent and identically distributed (i.i.d.) and follow a complex Gaussian distribution [34], governed by a vector of per-entry prior precisions that controls the sparsity of the channel h.
In the second layer, the prior precisions are also treated as i.i.d. random variables following the Gamma distribution, with non-negative support, where Γ(·) denotes the Gamma function. The hyperparameters a and b are set to small constant values to make these priors non-informative over the precisions. In the same way as in Equation (11), the noise variance is also assigned a Gamma prior with small hyperparameters c and d [34].
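The sparsity-promoting effect of this two-layer prior can be sketched numerically: draw per-coefficient precisions from a Gamma hyperprior, then draw channel entries from zero-mean complex Gaussians with those precisions. The hyperparameter values (a = b = 0.5) and the vector length are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, N = 0.5, 0.5, 1000

# Layer 2: per-coefficient precisions alpha_n ~ Gamma(a, 1/b).
alpha = rng.gamma(shape=a, scale=1.0 / b, size=N)

# Layer 1: h_n ~ CN(0, 1/alpha_n), i.e. circular complex Gaussian whose
# variance is the inverse of the sampled precision.
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * alpha)

# The marginal on |h_n| is heavy-tailed: most entries are small while a few
# are very large, which is what encourages sparse estimates.
ratio = np.median(np.abs(h)) / np.mean(np.abs(h))
```

The median-to-mean ratio below one reflects the heavy tail: a handful of large-variance entries dominate the mean while the bulk of the entries stay near zero.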
Considering the random outliers caused by hardware impairments, a set of binary indicator variables is assigned to detect the outliers by employing the identify-and-reject strategy [35]. The received signal can then be expressed entry-wise, with the m-th row of the measurement matrix and the m-th entries of the auxiliary vectors determining the m-th observation, and the probability density function (PDF) of the received signal rewritten accordingly. This operation ensures that, in the presence of a random outlier, the received entry y_m is made independent of the likelihood of the clean model. Since each indicator takes values in {0, 1}, a beta-Bernoulli hierarchical prior is placed on it, with hyperparameters e and f following the Beta distribution. For clarity, Figure 2 gives the graphical model of the hierarchical prior channel model based on the identify-and-reject strategy, where the constants, the hidden variables and the observation are denoted as squares, circles and a shaded circle, respectively. Now the considered CE problem is formulated as a sparse Bayesian problem, which can be solved by the proposed CE scheme in Section IV.
CE SCHEME DEVELOPMENT
In this section, we propose and analyse the novel CE scheme to solve the sparse Bayesian learning problem. Generally, h can be estimated from the posterior distribution p(h|y) [36]. As will be shown in Section V, by employing variational Bayesian inference, the FVB CE algorithm provides excellent estimation accuracy, because it takes advantage of the overall support sets of the measurement matrices. Nevertheless, the computational complexity of the FVB algorithm is relatively high, since it requires inverting the full measurement matrix. To reduce the computational complexity, the SS-FVB algorithm is proposed, using an adaptive selection operation based on the correlations between different support sets. We also derive the computational complexity and the BCRB in this section to analyse the performance of the proposed algorithms.
FVB CE algorithm
We apply variational Bayesian inference to solve the sparse Bayesian learning problem; thus, convergence to a local optimum can be guaranteed [37]. For simplicity, the set of unobserved variables {α, h, β, θ, π} is denoted as {z}. Since p(z|y) is difficult to compute directly [36], we decompose the log-marginal probability of the received signal into two terms, ln p(y) = L(q) + KL(q||p) (18), where L(q) = ∫ q(z) ln(p(y, z)/q(z)) dz, KL(q||p) = −∫ q(z) ln(p(z|y)/q(z)) dz is the Kullback-Leibler divergence, which describes the distance between p(z|y) and q(z) [38], and q(z) can be any PDF close to p(z|y). Since KL(q||p) ≥ 0, L(q) is a lower bound on ln p(y), and the closest analytical approximation to the posterior p(z|y) is obtained by minimising the Kullback-Leibler divergence. As ln p(y) depends only on y, minimising the Kullback-Leibler divergence is equivalent to maximising L(q). Consequently, our objective becomes approximating p(z|y) by maximising L(q) with respect to the factors of q(z), i.e., q(h), q(α), q(β), q(θ) and q(π) [36]. Details of the derivations are given as follows.
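The decomposition in Equation (18) can be verified numerically on a toy conjugate model. The example below is a scalar Gaussian model of our own construction (not the paper's system): h ~ N(0, 1), y|h ~ N(h, σ²), with an arbitrary Gaussian q(h). The evidence, ELBO and KL divergence are all available in closed form, and ln p(y) = L(q) + KL(q||p) holds exactly:

```python
import numpy as np

# Toy conjugate model: h ~ N(0,1), y|h ~ N(h, sig2). Identity: ln p(y) = L(q) + KL(q||p(h|y))
y, sig2 = 1.3, 0.5
m, s2 = 0.2, 0.7                  # an arbitrary Gaussian q(h) = N(m, s2)

# Exact posterior p(h|y) of the conjugate model
vp = sig2 / (1.0 + sig2)          # posterior variance
mp = y / (1.0 + sig2)             # posterior mean

# Evidence ln p(y), since marginally y ~ N(0, 1 + sig2)
logpy = -0.5 * (np.log(2 * np.pi * (1 + sig2)) + y**2 / (1 + sig2))

# ELBO L(q) = E_q[ln p(y|h)] + E_q[ln p(h)] + H[q]
Elik = -0.5 * (np.log(2 * np.pi * sig2) + ((y - m)**2 + s2) / sig2)
Eprior = -0.5 * (np.log(2 * np.pi) + (m**2 + s2))
H = 0.5 * np.log(2 * np.pi * np.e * s2)
elbo = Elik + Eprior + H

# KL(q || posterior) between two univariate Gaussians
kl = 0.5 * (np.log(vp / s2) + (s2 + (m - mp)**2) / vp - 1.0)
```

Maximising the ELBO over (m, s2) is therefore equivalent to driving the KL term to zero, which is the logic behind the variational updates that follow.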
Derivations of q(h) and q(α)
Based on Equations (11) and (15), ln q(h) can be derived; since it is quadratic in h, q(h) is a complex Gaussian distribution with mean μ and covariance Σ given in Equation (22). We can also obtain ln q(α). As Equation (20) is the exponent of a Gamma distribution, q(α) can be expressed as a product of Gamma distributions with updated parameters ã and b̃ n , where Σ n,n is the n-th diagonal element of the covariance Σ in Equation (22).
ALGORITHM 1 Proposed FVB Algorithm
Input: y, x K , W, A BS , A UE,K and tolerance ε.
Output: The channel estimator ĥ(k) of the k-th user.
12: End for
Derivations of q(θ) and q(β)
According to Equations (15) and (16), the derivation of ln q(θ) is obtained, and θ m still obeys a Bernoulli distribution, where C is a normalisation constant. Then, q(π) can be derived; Equation (29) can be rewritten to show that the posterior q(π) follows a Beta distribution. Using the derived expressions for the variational distributions q(h), q(θ) and q(β), the posterior distributions p(h|y), p(θ|y) and p(β|y) can be approximated in the same way [39]. Then, the channel vector h can be estimated by the posterior mean, i.e., ĥ = μ. The details of the algorithmic implementation of the proposed FVB CE algorithm can be found in Algorithm 1.
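For the beta-Bernoulli layer, the conjugate posterior update takes the standard form Beta(e + Σ θ m , f + M − Σ θ m ). The sketch below uses hypothetical hyper-parameters and indicator values of our own choosing, purely to illustrate the update:

```python
import numpy as np

# Conjugate beta-Bernoulli update: pi ~ Beta(e, f), theta_m | pi ~ Bernoulli(pi).
e, f = 1.0, 1.0                              # hypothetical prior hyper-parameters
theta = np.array([1, 1, 0, 1, 0, 1, 1, 1])   # hypothetical indicators (1 = inlier)

M, k = theta.size, int(theta.sum())
e_post, f_post = e + k, f + M - k            # posterior q(pi) = Beta(e + sum, f + M - sum)
pi_mean = e_post / (e_post + f_post)         # posterior mean fed back into the updates
```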
SS-FVB CE algorithm
The FVB CE algorithm requires a high computational complexity because of the necessary matrix inversion. In addition, the large dimension of the measurement matrix further increases the overall computational complexity of the algorithm.
To reduce the algorithmic complexity, here we propose the reduced complexity CE algorithm, SS-FVB, which is based on the property that a small number of the relevant support sets occupies most of the energy of the measurement matrices [18]. This algorithm exploits the correlation between the residuals and the columns of the measurement matrices.
We first propose a convenient reconstruction of the measurement matrix by making a proper adaptive selection operation of the support set. We define the column of the measurement matrix that has the strongest correlation with the received signal as a "support set". The index number of the corresponding support set, λ 1 , can be obtained as λ 1 = arg max n |φ n H y|, where φ n is the n-th column of the measurement matrix Φ. Then, a new measurement matrix Φ 1 is constructed from the selected column. By subtracting the contribution of λ 1 from y, the residual r 1 can be obtained. In the same way, the correlation between the residual and the columns of the original measurement matrix can be exploited to find the new index number λ 2 = arg max n |φ n H r 1 |. Without loss of generality, we assume λ 1 < λ 2 , and then the new measurement matrix Φ 2 is constructed accordingly; the residual r 2 is calculated analogously. Equation (36) can be employed to learn the latent parameters. Recalling Equation (9), the CE problem can be expressed with the reduced measurement matrix. As in the FVB CE algorithm, the sparse channel vector h can be estimated by computing the posterior distribution p(h|y). Since the procedure for approximating p(h|y) with q(h) is identical to the one presented for the FVB algorithm, it is omitted. Following Equation (33), the sparse channel vector h can be estimated as ĥ = μ.
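The adaptive selection operation described above follows the same correlate-select-project pattern as greedy pursuit. The following is a simplified sketch of our own (it illustrates the loop structure, not the exact SS-FVB update):

```python
import numpy as np

def select_support(Phi, y, KL):
    """Greedy support selection: repeatedly pick the column most correlated with
    the current residual, then recompute the residual after a least-squares fit."""
    r, idx = y.copy(), []
    for _ in range(KL):
        corr = np.abs(Phi.conj().T @ r)      # correlation with every column
        corr[idx] = -np.inf                  # never reselect a chosen column
        idx.append(int(np.argmax(corr)))
        Phi_S = Phi[:, sorted(idx)]          # reduced measurement matrix
        r = y - Phi_S @ np.linalg.pinv(Phi_S) @ y   # residual after LS projection
    return sorted(idx), r
```

Keeping only the KL selected columns is what later allows the covariance inversion to act on a much smaller matrix.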
ALGORITHM 2 SS-FVB CE Algorithm
Input: y, x K , W, A BS , A UE,K and tolerance ε.
Output: The channel vector ĥ(k) of the k-th user.
10: Compute the estimator ĥ of the channel vector.
11: Compute ã/b̃ n and c̃/d̃ to update the hyper-parameters α n and β, respectively.
12: Detect the indicators and reject the outliers.
16: Find ĥ(k) of the k-th user by detecting the non-zero values.
18: End for
Algorithm 2 presents the detailed algorithmic implementation of the proposed SS-FVB algorithm. The following theorem shows that the proposed adaptive selection operation always converges to a local optimum.
Theorem 1. After KL iterations, the selection operation of the support sets converges to a local optimum.
Proof. See Appendix A. □
Computational complexity
Considering that the inverse operations of Φ and Φ KL require substantial computational overhead in Algorithms 1 and 2, we derive the computational complexity of the inverses of Φ and Φ KL at every step. More specifically, the measurement matrices are singular, and therefore we discuss the pseudoinverses of Φ and Φ KL . The computational complexities of the different mathematical operations are summarised in Table 1. From this table, it can be observed that the pseudoinverse of Φ KL requires a much lower computational complexity than that of Φ in Equation (42). The SS-FVB CE algorithm simplifies the inverse derivations in the covariances, because (N − KL) columns in Φ are set to zeros. Therefore, the low computational complexity of the SS-FVB algorithm is achieved at the cost of reduced estimation accuracy. We plot the running time versus the number of pilot frames for the two proposed algorithms in Figure 3, where K = 4, L = 4, M BS = 4, G BS = 128 and G UE = 128. As the number of pilot frames increases, the running times of both algorithms increase. The SS-FVB algorithm has a much smaller running time than the FVB algorithm because of its lower computational complexity.
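The complexity gap can be made concrete with rough flop counts for forming a pseudoinverse. The counts and matrix sizes below are illustrative assumptions of ours, not the paper's exact operation counts from Table 1:

```python
# Rough flop count for the pseudoinverse (Phi^H Phi)^-1 Phi^H of an M x N matrix:
# Phi^H Phi costs ~ 2*M*N^2 flops and the N x N inverse ~ N^3 flops (leading terms).
def pinv_flops(M, N):
    return 2 * M * N**2 + N**3

M, N, KL = 64, 1024, 32       # illustrative sizes; KL = columns kept by SS-FVB
full = pinv_flops(M, N)       # FVB: pseudoinverse of the full measurement matrix
reduced = pinv_flops(M, KL)   # SS-FVB: only the KL selected columns
ratio = full / reduced        # ~ (N/KL)^2 or better, a several-orders-of-magnitude saving
```

Since the cost grows with the square and cube of the column count, shrinking N down to KL selected columns dominates the overall saving reported for SS-FVB.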
BCRB
The BCRB [40] is conveniently used to assess the estimation error of CE algorithms. The BCRB can be calculated from the inverse of the Fisher information matrix (FIM) J, where E[⋅] represents the expectation operation. Assume that the derivatives of the logarithm of the joint PDF p(y, h) exist; the FIM can then be expressed as in [41]. Since Equation (45) is difficult to compute directly, we propose the following theorem to obtain the expression of the BCRB.
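In standard form, consistent with the decomposition J = J y|h + J h used below, the bound and the FIM can be written as:

```latex
\mathrm{BCRB} \;=\; \mathbb{E}\!\left[(\hat{\mathbf h}-\mathbf h)(\hat{\mathbf h}-\mathbf h)^{H}\right] \;\succeq\; \mathbf J^{-1},
\qquad
\mathbf J \;=\; \mathbb{E}\!\left[-\frac{\partial^{2}\ln p(\mathbf y,\mathbf h)}{\partial \mathbf h\,\partial \mathbf h^{H}}\right]
\;=\; \mathbf J_{\mathbf y|\mathbf h} + \mathbf J_{\mathbf h}.
```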
Theorem 2. It is assumed that p(y|h) satisfies the regularity condition E[∂ ln p(y|h)/∂h] = 0.
Then, the complex FIM is given by the matrix J = J y|h + J h . Proof. See Appendix B. □ According to Equations (11) and (12), it is clear that the true prior distribution p(h; a, b) of the sparse channel vector h reduces to a product of Student-t distributions, where St(h n | μ, λ, ν) is a Student-t distribution with μ = 0, λ = a/b and ν = 2a. Moreover, the prior distribution of h n can be obtained accordingly. Taking the log-likelihood function on both sides of Equation (47), we obtain L(h n ); the first-order partial derivative of L(h n ), denoted U(h n ), then follows.
Based on Equation (48), the first-order partial derivative of ln p(h) is the sum of the U(h n ); thus, U(h) can be calculated accordingly. Using Theorem 2, J h can be calculated for both the FVB and SS-FVB algorithms. In the same way as [42], the measurement matrix is added as an additional observation, because it impacts the estimation of h. Then, J y|h for the two proposed algorithms can be expressed as in [43]. It can be observed that the derivations of J y|h and J h use the same variables for both proposed CE algorithms, so their lower bounds are equal, denoted as J. The bound J is used in the next section to analyse the CE performance of the algorithms.
NUMERICAL RESULTS
In this paper, the wideband multi-user mmWave communication system has been implemented in software and its performance has been obtained by means of computer simulations. The simulation parameters are set according to references [3], [23], [24]. There are K = 4 UEs, each with N UE = 32 antennas, and a BS with N BS = 128 antennas. We set M BS = 4 at the receiver and M UE = 1 at the transmitter. The dictionary matrices are constructed with G BS = 128 and G UE = 128 for the receiver and transmitter, respectively. Both FVB and SS-FVB have been implemented, and their performance is compared with other well-known CE algorithms, including OMP [30], CoSaMP [31], SW-OMP [15] and the SD-based CE algorithms [14]. In addition, the performance evaluation results for all these algorithms have been obtained by using Monte Carlo error counting techniques. The BCRB is also provided.
In accordance with the IEEE 802.11ad wireless standard, we also generate the channel (4) with the following parameters. To compare the performance of the various CE algorithms, the normalised mean squared error (NMSE) of ĥ(k) is introduced, defined as NMSE = E[‖ĥ(k) − h(k)‖ 2 /‖h(k)‖ 2 ]. The performance of the different CE algorithms versus the required pilot overhead is shown in Figure 4, which demonstrates the superiority of the proposed FVB and SS-FVB CE algorithms. The poor performance of the OMP algorithm is caused by its inability to exploit the joint sparsity shared by the delay and frequency domains. The CoSaMP method, which extends OMP, exhibits a slightly lower error when M takes small values. Although the SD-based algorithm takes advantage of the structural characteristics of the mmWave beamspace channel, estimating only one path at each iteration degrades its estimation performance. The FVB and SS-FVB algorithms achieve the best CE performance because both make full use of the common sparsity of different sub-carriers in the frequency domain. Figure 5 shows the NMSE performance versus SNR for the various CE algorithms. The SNR ranges from -15 to 5 dB, in accordance with realistic mmWave systems. The OMP and CoSaMP algorithms perform poorly over the whole SNR range as they ignore the joint sparse vector basis. In particular, the SD-based algorithm is the worst and fails when the SNR drops to -10 dB. Compared with the other four CE schemes, both proposed algorithms achieve much higher estimation accuracy even in the low SNR range, because they effectively exploit variational Bayesian inference. Figure 6 shows the performance of the different competing algorithms versus the number of random outliers. It can be observed that the compared algorithms are very sensitive to random outliers and their performance degrades substantially. More specifically, the OMP, CoSaMP and SD-based algorithms cannot guarantee the estimation accuracy when the number of random outliers increases to 10. Owing to the identify-and-reject strategy, the NMSE of the proposed algorithms increases very slowly as the number of random outliers increases. Therefore, variational Bayesian inference and the identify-and-reject strategy are proper means to improve the estimation accuracy and overcome random outliers in wideband mmWave systems. In Figure 7, we compare the NMSE versus the number of pilots for the proposed CE algorithms as well as the BCRB. (Figure caption: Comparison of the NMSE versus the number of random outliers for the proposed CE algorithms and the BCRB; the SNR is assumed as -5 dB, the number of training frames is set as 80, and the number of outliers ranges from 0 to 10.) It can be seen that the NMSE of the FVB algorithm is closer to the BCRB than that of the SS-FVB algorithm. This is because the SS-FVB algorithm does not exploit all the information from the support sets to estimate the common sparse channel vectors. Employing a larger number of pilots can increase the estimation performance at the expense of both higher training overhead and computational complexity. Therefore, future research should consider ways of narrowing this gap and optimising the trade-off between computational complexity and estimation performance by exploiting better recovery algorithms. Figures 8 and 9 show the NMSE of the proposed CE algorithms versus the SNR and the random outliers, respectively. Figure 8 shows that the NMSE values obtained by the FVB and SS-FVB algorithms closely approach the BCRB over the whole SNR range. As shown in Figure 9, the identify-and-reject strategy enhances the robustness of the proposed scheme against hardware impairments, so that its performance degrades only slightly with the variation of the random outliers.
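The NMSE metric reported in the figures can be computed as follows; this is a standard definition, and the dB conversion is our own convention:

```python
import numpy as np

def nmse_db(h_hat, h):
    """Normalised mean squared error ||h_hat - h||^2 / ||h||^2, reported in dB."""
    err = np.linalg.norm(h_hat - h) ** 2 / np.linalg.norm(h) ** 2
    return 10 * np.log10(err)
```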
CONCLUSION
In this paper, we have proposed a novel CE scheme for wideband multi-user mmWave MIMO-OFDM systems. We have first proposed the FVB CE algorithm by exploiting the common sparsity of different sub-carriers. Considering the random outliers imposed by hardware impairments, a hierarchical channel model based on sparse Bayesian learning has been developed to improve the estimation accuracy. Besides, the identify-and-reject scheme has significantly enhanced the robustness of the FVB CE algorithm. Then, the SS-FVB CE algorithm, which develops an adaptive selection operation of the measurement matrices, has been proposed to reduce the computational complexity. This algorithm provides a trade-off between computational complexity and estimation accuracy. Simulation results have verified that the proposed CE scheme is capable of achieving high estimation accuracy even at low SNR and pilot overhead. Future research can focus on balancing the trade-off between computational complexity and estimation performance. It is also interesting to develop an inverse-free algorithm to reduce the computational complexity imposed by sparse Bayesian learning. | 7,219.8 | 2020-12-26T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Excited State Dynamics of a Self‐Doped Conjugated Polyelectrolyte
The growing number of applications of doped organic semiconductors drives the development of highly conductive and stable materials. Lack of understanding about the formation and properties of mobile charges limits the ability to improve material design. Thus the largely unexplored photophysics of doped systems are addressed here to gain insights about the characteristics of doping‐induced polarons and their interactions with their surroundings. The study of the ultrafast optical processes in a self‐doped conjugated polyelectrolyte reveals that polarons not only affect their environment via Coulomb effects but also strongly couple electronically to nearby neutral sites. This is unambiguously demonstrated by the simultaneous depletion of both the neutral and polaronic transitions, as well as by correlated excited state dynamics, when either transition is targeted during ultrafast experiments. The results contrast with the conventional picture of localized intragap polaron states but agree with revised models for the optical transitions in doped organic materials, which predict a common ground level for polarons and neighboring neutral sites. Such delocalization of polarons into the frontier transport levels of their surroundings could enhance the electronic connectivity between doped and undoped sites, contributing to the formation of conductive charges.
Introduction
Doping of organic semiconductors is required to achieve high conductivities. Significant efforts have thus been invested to this end. (Figure 1 caption, partial: …and dark red (1100 nm, near-infrared) dotted lines, where photoexcitation is resonant with the N 1 and P 2 transitions, respectively. The inset shows the molecular structure of CPE-K. b) On the left, the recently proposed energy level diagram for doped conjugated systems and the corresponding origin of the optical N 1 and P 2 transitions are shown with solid arrows. [17] On the right side, the energy levels and optical transitions based on the conventional model (not in agreement with our experimental data) are shown. c) TA spectra at an early time delay (0.1 ps) of doped CPE-K at a similar absorbed photon density of ≈1.5 × 10 13 cm −2 for 600 and 1100 nm excitation. Comparable spectral signatures including the N 1 GSB (ground state bleaching of the N 1 transition), P 2 GSB (ground state bleaching of the P 2 transition), and PA (photoinduced absorption) are seen. d) Temporal evolution of the normalized dynamics probed at the N 1 GSB and PA band maxima for doped CPE-K with 600 and 1100 nm excitation at low absorbed photon density (≈1.5 × 10 13 cm −2 ).)
Optical Transitions in the Doped Materials
The absorption spectra of doped CPE-K, heavily doped CPE-K (achieved upon H 2 SO 4 treatment) and undoped CPE-K are shown in Figure 1a (films and solutions). All systems exhibit two main absorption bands at wavelengths shorter than 900 nm, similar to the PCPDTBT polymer without ionic side chains. [22] The main visible (vis) absorption band, peaking at ≈703 nm in the films, is associated with the S 0 -S 1 transitions of neutral (undoped) sites, which we will refer to as the N 1 transition for simplicity. In the doped systems, an additional band in the near-infrared (NIR) is evident in the wavelength range studied here, related to the P 2 transition due to cationic polarons on the backbone. [2,10,11] The intensity of the P 2 band increases with the addition of acid, indicating that the polaron concentration is higher in CPE-K/H 2 SO 4 . This is further supported by electron paramagnetic resonance (EPR) measurements that we have previously reported. [11] The EPR signal scales with the polaron concentration and the strength of the P 2 absorption band, so that a higher EPR signal is measured after the acid treatment. Also, the dc conductivity of 3 × 10 −2 S cm −1 for the doped CPE-K film increases significantly after the addition of H 2 SO 4 (to 1.4 × 10 −1 S cm −1 ), due to the higher doping level, which affects both the number and the mobility of the conductive charges. [23] To obtain quantitative doping concentrations, we have determined the absorption cross-section of the P 2 polaron band from TA measurements on photoexcited PCPDTBT:fullerene films (Figure S1, Supporting Information). From this, we find a polaron concentration of 1.5 × 10 20 cm −3 in the doped CPE-K film and of 2.4 × 10 20 cm −3 in the doped CPE-K/H 2 SO 4 film, based on the intensity of their P 2 absorption bands (see details in the Supporting Information).
This means that respectively ≈17% and ≈27% of monomers are charged, which, for an average chain length of 7-14 monomers as used here, corresponds to 1-2 charged monomers per chain in CPE-K and 2-3 charged monomers per chain in CPE-K/H 2 SO 4 . Some de-doping occurs during dissolution, so that we find that only 8% and 14% of monomers are charged in doped CPE-K and CPE-K/H 2 SO 4 solution, respectively ( Figure S2, Supporting Information).
In general, the presence of polarons on the conjugated backbone leads to the emergence of new characteristic absorption bands at energies below the bandgap of the neutral semiconductor, [20,24] associated with the P 2 and P 1 transitions. The consensus is that the addition/extraction of electrons from the neutral semiconductor introduces intragap localized energy levels, due to local nuclear reorganization of the doped site (usually from a benzoid to a more quinoid structure), while the energy levels of the neutral sites remain unaffected. [20,21] For cationic polarons, it is stipulated that a singly occupied energy level located above the highest occupied molecular orbital (HOMO) level and an unoccupied energy level below the lowest unoccupied molecular orbital (LUMO) level of the neutral semiconductor are formed, and that the polaronic P 2 transition occurs between localized intragap levels, while the P 1 transition takes place between the HOMO and the singly occupied level, as shown on the right of Figure 1b. [20,21] This approach neglects electron-electron interactions on the polaronic site and with neighboring neutral sites, so that modifications to the conventional model have recently been proposed (left of Figure 1b). [5,[17][18][19] Due to on-site Coulomb repulsions, the lower intragap polaronic level splits so that the lone electron level is located below the HOMO level of the neutral organic semiconductor, and both frontier energy levels are stabilized on the doped site. Moreover, intersite Coulomb interactions down-shift the energy levels of neutral sites located near the positive charge. Within this framework, and according to time-dependent density functional theory (TD-DFT) calculations on doped poly-(para-phenylene), the P 2 band is assigned to a charge-transfer-type transition from the HOMO level of neutral sites adjacent to the polaron to the unoccupied intragap level of the polaronic site.
[17] This is in contrast to the intragap P 2 transition predicted by the conventional model. The N 1 transition, however, occurs between the HOMO and LUMO levels of the neutral sites in both approaches. Lastly, the P 1 transition involves a transition from the HOMO of the neutral sites to the unoccupied level of the doped sites (new model), but it will not be discussed further since it is out of the spectral range accessible by our TA measurements.
For the purpose of this study, TA measurements were performed at different excitation wavelengths by selectively pumping the N 1 transition at 600 nm or the P 2 transition at 1100 nm (vertical dotted lines in Figure 1a). The early TA spectra, recorded 0.1 ps after photoexcitation of doped CPE-K film, are compared for the two excitation wavelengths in Figure 1c and have surprisingly similar features. Both show a negative signal attributed to ground state bleaching of the N 1 transition (N 1 GSB) at vis wavelengths, a broad positive band related to photoinduced absorption (PA) at NIR wavelengths, and an overlapping negative band in the NIR that we assign to the GSB of the P 2 transition (P 2 GSB), as it matches the polaron absorption band. This is surprising, since it unambiguously shows that both 600 and 1100 nm excitation result in simultaneous depletion of the N 1 and P 2 transitions, within the ≈70 fs time resolution of our experiment. Moreover, the excited state dynamics, probed either in the N 1 GSB or overlapping P 2 GSB/PA band, is also comparable for both excitation wavelengths ( Figure 1d). This finding is in disagreement with the conventional description of the transitions for doped organic semiconductors (right of Figure 1b), since here the N 1 and P 2 transitions involve completely independent energy levels. [20,21] According to the conventional model, resonant excitation of the N 1 transition should lead to only the N 1 GSB and PA features similar as observed in undoped CPE-K (see below), while photoexcitation of the P 2 transition should lead to signatures of the P 2 GSB and of photoexcited polarons, having independent relaxation dynamics compared to the N 1 excited state. The observed GSB signatures of both the N 1 and P 2 transitions could in principle result from excited state charge transfer processes between the doped and neutral sites. 
However, we see no experimental evidence resolving such a behavior upon excitation of either transition, since both GSB features appear already within the ≈70 fs time resolution of the experiment for both excitation wavelengths. This points to strong coupling between the doped and adjacent neutral sites, so that ultrafast charge transfer likely occurs directly upon excitation (without locally excited intermediate). We note that this interpretation is equivalent to the assignment of the P 2 band as a transition from the HOMO level of a neutral site to the unoccupied intragap level of the polaronic site, as predicted by the TD-DFT calculations on doped poly-(para-phenylene) and shown on the left of Figure 1b. [17] This revised model is also in excellent agreement with the depletion of the P 2 transition when exciting the N 1 band at 600 nm. Since the N 1 transition (of neutral sites adjacent to the polaron) has the same ground level as the P 2 transition, simultaneous depletion and recovery when exciting in either absorption band is expected, implying that there is strong electronic coupling between doped and nearby undoped sites.
N 1 Excitation of Doped and Undoped CPE-K Films
In order to further substantiate this finding, we now turn to the details of the TA data. Figure 2 presents the TA results for undoped and doped CPE-K films via 600 nm excitation, where the N 1 transition is resonantly pumped. The TA spectra of undoped CPE-K ( Figure 2a) show a similar behavior as reported for the undoped PCPDTBT polymer. [25] A negative signal attributed to the N 1 GSB is observed at vis wavelengths and its normalized TA dynamics is included in Figure 2b. In addition, a broad positive PA band by the S 1 excited state is seen at NIR wavelengths. The TA spectra obtained in doped CPE-K at selected time delays are displayed in Figure 2c. Similar to undoped CPE-K, the N 1 GSB and PA bands are evident and thus are related to population of the S 1 state via the N 1 transition. As mentioned above, an additional dip is seen in the PA band at very early times (≈0.1 ps), caused by its overlap with the P 2 GSB that quickly disappears. We associate the fact that 600 nm excitation results in simultaneous depletion of the N 1 and P 2 transitions to delocalization of the involved electronic states over both doped and neutral sites due to strong electronic coupling, so that the N 1 and P 2 transitions share the same ground level (left of Figure 1b). Since the high energy part of the N 1 band is photoexcited at 600 nm, it is unlikely that our observations arise from direct excitation of the P 2 band tail and we provide further evidence below to exclude this.
Another consequence of doping is the significantly faster excited state decay in doped CPE-K compared to undoped CPE-K (Figure 2b). Most photoexcitations recombine within one picosecond in the doped film, as shown by the dynamics probed in both the N 1 GSB and PA bands, while only about half of the photoexcitations recombine in undoped CPE-K within the first 5 ps. The faster recombination in doped CPE-K is consistent with the decrease in photoluminescence yield and lifetime upon the addition of charges in organic systems. [26] To distinguish between spectral components that decay with different times constants, we have analyzed the entire spectral and temporal pump-probe datasets using multiexponential global analysis (GA). The amplitude spectra associated with each of the time constants for doped CPE-K are shown in Figure 2d. The dominant recombination component is ultrafast with a time constant of ≈150 fs and involves both the N 1 and P 2 GSB bands. It is absent in undoped CPE-K ( Figure S3, Supporting Information). The simultaneous ground state recovery in this component is further evidence that the N 1 and P 2 transitions are coupled and that our results do not arise from their uncorrelated excitation. As further elaborated later on, we assign the ultrafast (≈150 fs) recombination to polymer segments that contain a polaronic site (referred to as "doped chromophores"), where the neutral sites excited at 600 nm are coupled to the close-by polaron (transition N 1(doped) in Figure 1b). On the other hand, the weak amplitude spectra related to the slower time constants (≈2.5 ps and long (>>1 ns)) are similar to the TA spectra of undoped CPE-K ( Figure 2a) and do not involve the P 2 GSB. We attribute those to neutral polymer segments located further away or not conjugated to the polaronic site and thus not electronically coupling to it ("undoped chromophores," N 1(undoped) transition). 
We note the relatively small amplitude of the 2.5 ps component compared to the 150 fs component (Figure 2d), which indicates that most neutral sites in doped CPE-K couple to polarons. Indeed, based on the intensity of the N 1 GSB in the 150 fs amplitude spectrum compared to the longer recombination components, we estimate that 83% of the neutral monomers couple to a polaronic site (Table 1). This can be explained by the high doping level and the delocalization of photoexcited chromophores in doped CPE-K films. Finally, we also note a difference in the behavior of doped and undoped CPE-K during intensity-dependent TA measurements (Figure 2b). As expected, in undoped CPE-K film, exciton-exciton annihilation becomes significant as the absorbed photon density increases, leading to faster recombination dynamics. [27] In contrast, the dynamics in doped CPE-K is unchanged even at high absorbed photon densities (≈2.2 × 10 14 cm −2 ), showing that the presence of polarons suppresses this process. Due to the large spectral overlap between the exciton emission spectrum and the polaron absorption spectrum, exciton-polaron annihilation is intrinsically more efficient than exciton-exciton annihilation in photoexcited polymer:fullerene blends. [28] Therefore, it is expected to become important in doped systems at high polaron concentrations. (Adv. Funct. Mater. 2020, 30, 1906148. Table 1 caption: Position of the N 1 and P 2 steady-state absorption bands of the doped CPE-K and CPE-K/H 2 SO 4 films and solutions, together with the position and amplitude of the N 1 ground state bleaching (GSB) band in the amplitude spectra (corresponding to the τ 1 -τ 3 time constants) obtained from the global analysis (GA) of the transient absorption data. The ratio of the N 1 /P 2 amplitudes is then related to the concentration of doped monomers (expressed as a % and as a ratio of neutral to doped monomers). All data were collected at similar absorbed photon density.)
Thus, exciton-polaron annihilation for undoped chromophores and ultrafast recombination in doped chromophores reduce the excitation density before any higher-order recombination effects take place, suppressing exciton-exciton annihilation.
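The multiexponential global analysis (GA) used throughout this discussion can be sketched in code. With the time constants fixed (0.15 and 2.5 ps from the text, plus a long-lived offset), the amplitude spectra at each wavelength follow from linear least squares; the synthetic trace and its amplitudes below are illustrative assumptions, not measured data:

```python
import numpy as np

# Minimal global-analysis sketch: fixed time constants, amplitudes from linear LS.
t = np.linspace(0, 5, 200)              # delay axis (ps)
taus = np.array([0.15, 2.5])            # ps; a constant column models the long-lived offset
basis = np.column_stack([np.exp(-t / taus[0]),
                         np.exp(-t / taus[1]),
                         np.ones_like(t)])

# Synthetic single-wavelength trace: 70% fast, 25% slow, 5% long-lived (assumed mix)
true_amp = np.array([0.70, 0.25, 0.05])
trace = basis @ true_amp

amp, *_ = np.linalg.lstsq(basis, trace, rcond=None)  # recovered amplitude spectrum entry
```

Repeating this fit at every probe wavelength yields the amplitude spectra associated with each time constant, as in Figure 2d.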
P 2 Excitation of Doped CPE-K Films
To access the P 2 transition in doped CPE-K directly, we performed TA measurements using excitation of this band at 1100 nm. The TA spectra and GA results are displayed in Figure 3a,b, respectively. In the early TA spectra, the N 1 GSB band is present, while the NIR PA band overlaps with a more intense P 2 GSB band compared to that observed with 600 nm excitation. The simultaneous appearance of both the N 1 and P 2 GSB bands with P 2 excitation points again to coupling of the N 1 and P 2 transitions that share the same ground state, which strongly suggests electronic coupling between doped and nearby undoped sites, similarly to the results with 600 nm excitation. Nevertheless, to exclude that the N 1 GSB is related to direct excitation of undoped chromophores, we performed intensity-dependent TA measurements with 1100 nm excitation in doped and undoped CPE-K. As the pump photon energy is significantly lower than the energy gap of undoped CPE-K, no resonant excitation should take place. Unexpectedly, we still observe similar TA spectral features and recombination dynamics for 1100 and 600 nm excitation in undoped CPE-K (Figure S4, Supporting Information). This behavior at 1100 nm can be explained by two-photon absorption (dark red arrows in Figure 1b), since the intensity of the early N 1 GSB (at 0.1 ps) in undoped CPE-K has a quadratic dependence on incident photon density (Figure 3c), similar to PCPDTBT (Figure S5, Supporting Information). In contrast, the early N 1 GSB in doped CPE-K scales linearly with absorbed photon density (Figure 3c), confirming that this transition is not directly excited by two-photon absorption, but indirectly depleted via one-photon excitation of the P 2 band. Moreover, the difference in photoexcitation density between the two systems becomes evident when comparing the TA amplitudes for 1100 nm excitation.
In undoped CPE-K, a pump fluence at least one order of magnitude higher is needed to obtain an N 1 GSB amplitude comparable to that in doped CPE-K.
In agreement with the strong coupling of the N 1 and P 2 transitions in doped CPE-K, similar fast recombination dynamics are obtained for both excitation of the N 1 transition at 600 nm and of the P 2 transition at 1100 nm (Figure 1d).

Figure 3. Transient absorption (TA) results for near-infrared excitation (at 1100 nm) of the P 2 transition of doped CPE-K film. a) TA spectra at selected time delays at an absorbed photon density of ≈1.5 × 10¹³ cm⁻². b) Amplitude spectra associated with the different recombination time constants (0.15 ps, 2.5 ps, long-lived offset) as derived from multiexponential global analysis of the TA data at an absorbed photon density of ≈1.5 × 10¹³ cm⁻². c) Dependence of the initial TA signal intensity at 710 nm (N 1 GSB maximum, 0.1 ps after photoexcitation) as a function of photon density in doped CPE-K (left-bottom axes) and undoped CPE-K (top and right axes) excited at 1100 nm. d) Amplitude spectra in the visible region associated with the 0.15 ps time constant (red) and the 2.5 ps time constant (green) for CPE-K film excited at 600 nm (solid lines). Dotted lines show the respective amplitude spectra obtained with 1100 nm excitation, scaled to the same intensity. The absorption spectrum of the doped CPE-K film is shown for comparison. Also included is a schematic representation of a polaron on a conjugated chain, which causes a Stark shift of the absorption of nearby neutral sites located within its Coulomb potential well. (Adv. Funct. Mater. 2020, 30, 1906148)

The GA results
for 1100 nm excitation (Figure 3b) confirm an ultrafast (≈150 fs) recombination component that includes both the N 1 and P 2 GSB, as was observed with 600 nm excitation. We again assign this to ultrafast recombination in doped chromophores of CPE-K, which are directly excited at the polaronic site to the P 2 level. Such fast recombination of photoexcited polarons agrees with pump-push-probe experiments reported for photoexcited polymer:fullerene blends. [29] Given that the P 2 transition only excites doped chromophores, it is somewhat surprising that the slower decay components (≈2.5 ps and long) due to recombination of undoped chromophores are also present with 1100 nm excitation (although with five times weaker absolute amplitude compared to 600 nm excitation, Table 1). They are likely populated via residual two-photon absorption or by ultrafast localization of the initial photoexcitation onto neutral sites. Finally, we observe a less pronounced N 1 GSB with 1100 nm excitation compared to 600 nm excitation in the early TA spectra (Figure 1c, recorded at similar excitation density). This can be explained by two effects: the weaker population of undoped chromophores at 1100 nm and a reduced delocalization in the doped chromophores, containing fewer neutral sites. We have related the ratio of the N 1 to P 2 steady-state absorption bands to the concentration of doped monomers in Figure S6a in the Supporting Information. Comparing this to the ratio of the two corresponding GSB bands in the GA amplitude spectrum of the ≈150 fs component, which is representative of the TA spectrum of the doped chromophores, we find that a chromophore contains about 3.3 neutral monomers for 1 charged monomer with 600 nm excitation ( Figure S6b, Supporting Information; Table 1). This drops to 1.9 neutral monomers per polaron with 1100 nm excitation, evidencing the reduced delocalization of the doped chromophores excited in the P 2 band.
In addition to the electronic coupling of the polaron to adjacent neutral sites, intersite Coulomb interactions can take place between the charged site and its surroundings in doped CPE-K. Such Coulomb shifts of the energy levels in doped organic systems have been theoretically predicted and experimentally observed by photoelectron spectroscopy in previous work. [18] Additionally, if the ground and excited states are affected by the field to different extents, this can cause a Stark shift of the optical transitions. [30] For CPE-K, the backbone of the polymer consists of alternating donor-acceptor units, so that a change in polarity and/or polarizability between the ground and excited state can be expected. Indeed, there is a 12 nm red-shift (from 691 to 703 nm) of the N 1 transition in the steady-state absorption spectra when going from undoped to doped CPE-K film, pointing to such an effect (Figure 1a). The electric field generated by the polarons thus shifts the N 1 transition of nearby undoped sites to the red (as shown in the schematics on the right of Figure 3d). In the TA spectra, the precise position of the N 1 GSB is difficult to quantify due to overlap with positive bands and more limited spectral resolution. Still, there is a clear red-shift of the N 1 GSB band in the 150 fs amplitude spectrum (of only doped chromophores), with 1100 nm (P 2 ) compared to 600 nm (N 1 ) excitation (left of Figure 3d, Table 1). Due to the reduced delocalization of the chromophores with P 2 excitation (see above), the neutral sites probed at 1100 nm are closer to the polaron and therefore feel a stronger electric field, explaining the increased red-shift. However, the amplitude spectra of the 2.5 ps decay component, which represent the undoped chromophores not coupling to a polaron, are similar at both excitation wavelengths. 
Here, the N 1 GSB is only slightly blue-shifted compared to the 150 fs amplitude spectrum at 600 nm excitation (Table 1), implying that the undoped chromophores are still affected by the charges (as confirmed below by solution measurements).
Excited State Dynamics in Doped CPE-K/H 2 SO 4 Films
To investigate the effect of doping concentration, we performed TA measurements on heavily doped CPE-K/H 2 SO 4 films. In Figure 1a, the higher doping concentration after addition of H 2 SO 4 to CPE-K is obvious when comparing the ratio between the absorption bands for the N 1 and P 2 transitions, which clearly increases in CPE-K/H 2 SO 4 . We have shown above that the concentration of doped monomers increases from 17% to 27% with the addition of acid, as also confirmed via EPR and conductivity measurements. [3,11] The doping level is relatively high for both CPE-K and CPE-K/H 2 SO 4 films, with practically all chains containing doped sites and being close to polarons of other chains due to chain packing in the films (insets of Figure 4a). The P 2 polaron absorption band is blue-shifted in CPE-K/H 2 SO 4 (Table 1), which is attributed to Coulomb repulsion between adjacent positive polarons resulting in their destabilization (and suggesting a smaller distance between polarons in the heavily doped system). [31] The N 1 absorption band is however at a similar spectral position in doped and heavily doped CPE-K, showing that the Stark shift of the N 1 transition due to the electric field of the polarons is comparable at both doping levels. In the TA data for CPE-K/H 2 SO 4 film, for both 600 and 1100 nm excitation at early times (0.1 ps), we observe the N 1 GSB and the NIR PA bands as in CPE-K (Figure 4a), as well as the P 2 GSB. The coupling between the N 1 and P 2 transitions thus persists in the highly doped system. For CPE-K/H 2 SO 4 , the N 1 GSB seems blue-shifted compared to its steady-state absorption, but this is mostly due to overlap with positive PA signatures on the red side of the band. Nevertheless, the red-shift of this band with P 2 excitation of less delocalized chromophores is still present (as for doped CPE-K, Table 1). Doped CPE-K and heavily doped CPE-K/H 2 SO 4 films decay with similar time constants (inset of Figure 4b).
This is confirmed by the GA results for CPE-K/H 2 SO 4 for excitation of the N 1 transition at 600 nm ( Figure 4b) and at 1100 nm ( Figure S7, Supporting Information) that show recombination of doped chromophores with the 150 fs time constant and of undoped chromophores with the 2.5 ps time constant (and weak offset).
Comparing the amplitudes of the decay components (Table 1), we find that 81% of neutral monomers couple to polarons in CPE-K/H 2 SO 4 with 600 nm excitation, which is similar to doped CPE-K (83%). The weight of the undoped chromophores thus remains the same at both doping levels, implying that their coupling to polarons is likely disrupted by chemical or conformational defects, independently of the doping concentration. In the early TA spectra (Figure 4a) and 150 fs amplitude spectra (Figure S6b, Supporting Information), it can be seen that with 600 nm excitation, the P 2 GSB is more pronounced in the heavily doped system than in less doped CPE-K, while the N 1 GSB has similar intensity (as expected for the same excitation density). From the ratio of the two bands in the 150 fs amplitude spectrum, we find that the doped chromophores are more localized in CPE-K/H 2 SO 4 , containing 2.3 (instead of 3.3) neutral monomers per doped one (Table 1). We attribute this to the smaller number of neutral monomers available between doped sites in the heavily doped chains, due to the higher doping concentration and smaller distance between polarons. The 2.3:1 neutral to polaron ratio in the doped chromophores is consistent with the overall 27% doping level (from the steady-state absorption, Table 1), which means that per doped monomer there are in total 2.7 neutral ones, of which about 81% couple to a polaron (undergoing recombination in 150 fs). Another possible explanation for the more pronounced P 2 GSB would be excitation of the tail of the strongly blue-shifted P 2 band of CPE-K/H 2 SO 4 film at 600 nm (Figure 1a). Given the excellent match of the doped chromophore delocalization with the available number of monomers, this is however not expected to play a significant role.

(© 2019 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
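The consistency argument above is simple monomer bookkeeping and can be checked directly with the quoted numbers (27% doped monomers, ≈81% of neutral monomers coupled to a polaron, 2.3:1 neutral-to-polaron ratio):

```python
# Bookkeeping cross-check for heavily doped CPE-K/H2SO4 (values from the text)
doped_fraction = 0.27                       # fraction of charged monomers
neutral_per_doped = (1 - doped_fraction) / doped_fraction
print(round(neutral_per_doped, 1))          # 2.7 neutral monomers per charge

coupled_fraction = 0.81                     # share of neutrals coupled to a polaron
print(round(coupled_fraction * neutral_per_doped, 1))  # ≈2.2, near the 2.3:1 ratio
```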
Finally, with P 2 excitation at 1100 nm, the ratio between the N 1 GSB and P 2 GSB is similar for doped CPE-K and CPE-K/H 2 SO 4 (1.8-1.9 neutral monomers per charge, Table 1), showing that the delocalization of the chromophores is in this case purely electronically defined. Overall, we show that most neutral sites are affected similarly by the polarons in both doped CPE-K and heavily doped CPE-K/H 2 SO 4 films and experience a similar electronic environment independently of the doping concentration.
Doped CPE-K and CPE-K/H 2 SO 4 Solutions
To remove the effects of intermolecular interactions and to access lower doping levels, we have dissolved doped CPE-K and heavily doped CPE-K/H 2 SO 4 in solution. Some de-doping occurs during dissolution, so that the polaron concentration is lower in the solutions than in the corresponding films. From the absorption spectra (Figure 1a; Figure S2, Supporting Information), we have found that 8% and 14% of monomers are charged in doped CPE-K and CPE-K/H 2 SO 4 solution, respectively, compared to 17% and 27% in the corresponding films. At the lower doping concentrations of the solutions, we expect that some CPE-K chains are entirely undoped and also isolated from other doped chains when dissolved (inset of Figure 4c). Due to various environmental effects (absence of packing, lower polaron concentration, solvation), the N 1 band in the solutions is blue-shifted to 650 nm (doped CPE-K) and to 667 nm (doped CPE-K/H 2 SO 4 ) compared to the films (703 nm for both, Table 1). We will show that the more red-shifted N 1 transition in CPE-K/H 2 SO 4 is caused by a higher number of doped chains feeling the electric field of the polarons. The P 2 band, on the other hand, is at a slightly longer wavelength than in the films, at 1150 nm (vs 1120 nm) in doped CPE-K and at 1080 nm (vs 950 nm) in CPE-K/H 2 SO 4 . The blue-shift in the more heavily doped system, due to closer proximity between polarons, thus persists to a lesser extent in solution.
Because of the presence of entirely undoped as well as doped CPE-K chains in the solutions, the TA spectra with 600 nm excitation (Figure S8, Supporting Information) appear quite different from the corresponding ones recorded in films, where we have estimated that all polymer chains contain at least one polaron. In the GA results for dissolved doped CPE-K (Figure 4c) and CPE-K/H 2 SO 4 (Figure 4d) with 600 nm excitation, we observe that the N 1 GSB of the solutions is an overlap of blue- and red-shifted bands that recombine with different rates. The ultrafast ≈150 fs component due to recombination in doped chromophores is still present, evidencing that the coupling of the N 1 and P 2 transitions is maintained for the doped chains in solution. The position of the N 1 GSB in the 150 fs amplitude spectra is similar to that in the films (at ≈700 nm for doped CPE-K, Table 1), which is strongly red-shifted compared to the steady-state solution spectra and shows a strong Stark shift in the doped chains (Table 1). A recombination component of 2 ps is seen in the doped CPE-K and CPE-K/H 2 SO 4 solutions, which is also comparable to what is observed in the corresponding films in terms of the time constant and red-shifted position of the N 1 GSB (Figure 4c,d). We assign this to undoped chromophores located on chains containing a polaron, thus feeling its electric field but not coupling to it. Comparing the weight of the N 1 GSB of only the 150 fs and 2 ps components in the solutions (which discards the effect of the entirely undoped chains), we find that 71% (CPE-K) and 80% (CPE-K/H 2 SO 4 ) of the monomers on the doped chains couple to polarons, which is only slightly lower than in the films (Table 1). In contrast, the blue-shifted N 1 GSB in the doped CPE-K and CPE-K/H 2 SO 4 solutions is located at 600 nm (over 100 nm shift compared to the doped chromophores, Table 1). It recombines more slowly (≈40 ps) and is related to a NIR PA band peaking at ≈1150 nm (Figure 4c,d).
It is attributed to isolated undoped CPE-K chains that are not affected in any way by the polarons and do not show any Stark effect. As expected, this pronounced slow decay component is absent in films and is weaker in the more doped solution (where more chains contain a polaron). From the amplitude of the N 1 GSB of the 40 ps component (undoped chains) compared to those of the two faster time constants (doped chains), we estimate that about 28% of the chains are entirely undoped in the CPE-K solution, which drops to 14% in the CPE-K/H 2 SO 4 solution (Table 1). Since the steady-state absorption spectra are the sum of doped and undoped chains, the N 1 band is found at a spectral position that corresponds to their weighted average, which is thus more red-shifted in CPE-K/H 2 SO 4 .
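The undoped-chain estimate above reduces to an amplitude ratio, assuming the N 1 GSB amplitude of each decay component is proportional to the number of chains of that type. The amplitudes below are illustrative placeholders, not the measured values:

```python
# Illustrative, normalized N1 GSB amplitudes for the three decay components
a_150fs, a_2ps = 0.55, 0.17   # doped chains (coupled / uncoupled segments)
a_40ps = 0.28                 # entirely undoped, isolated chains
undoped_fraction = a_40ps / (a_150fs + a_2ps + a_40ps)
print(round(undoped_fraction, 2))  # 0.28
```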
From the ratio of the N 1 GSB to P 2 GSB in the 150 fs amplitude spectrum of doped CPE-K solution with 600 nm excitation (Figure S6b, Supporting Information; Table 1), we find that for every charged monomer in a doped chromophore, there are 3.2 neutral ones in doped CPE-K solution, which is very similar to the value of 3.3 found in the corresponding film. We therefore evidence a similar delocalization of the doped chromophores in film and solution, showing that the coupling of the N 1 and P 2 transitions is essentially intrachain and does not depend on chain packing. This is likely the intrinsic electronic delocalization of doped chromophores excited at 600 nm, which is not limited by the available number of neutral monomers. On the other hand, in CPE-K/H 2 SO 4 solution, the ratio of neutral to charged monomers in the doped chromophores drops to 2.6:1 (Table 1), consistent with the reduced distance between polarons causing the blue-shift in the absorption. As expected at the lower doping level in solution, the effect is less pronounced than in the corresponding film (2.3 neutral monomers per polaron). Finally, with 1100 nm excitation of the P 2 transition of the doped CPE-K and CPE-K/H 2 SO 4 solutions, the doped chromophores are selectively excited, so that only the red-shifted N 1 GSB is observed (inset of Figure 4c; Figure S9, Supporting Information). There are no longer any effects due to the undoped chains; therefore, the TA spectra and dynamics are essentially the same as in the corresponding films (Figures S9 and S10, Supporting Information). The GA amplitude spectra are dominated by the 150 fs decay of the doped chromophores (which contain 1.7-1.8 neutral monomers per charge, showing similar delocalization between the two doping levels and with the films, Table 1). The very weak slower components (1.8 ps and offset) are again due to residual population of undoped chromophores on the doped chains.
We thus evidence a dual behavior of independent doped and undoped CPE-K chains in solution, allowing an even more direct comparison of their properties. The doped polymer chains in solution show the same characteristic coupling of the N 1 and P 2 transitions, ultrashort excited state lifetime and Stark-shifted N 1 GSB as was discussed in films, while the undoped chains show no effect of the polarons.
Jablonski Diagram of Doped CPE-K
Based on our results, we propose a Jablonski state diagram that describes the doped CPE-K system, as depicted in Figure 5, including electronic transitions that take place via photoexcitation (solid arrows) and recombination processes (dashed arrows). We stipulate that a CPE-K polymer chain does not act as a single chromophore but breaks into doped and undoped segments. In the doped chromophores, the neutral sites are close to a polaron and strongly electronically couple to it, implying delocalization of the electronic states over the entire segment including its polaronic and neutral monomers. In the undoped chromophores, the neutral sites are located further away from the polarons (not coupling to them) or their conjugation to the polarons is disrupted by conformational or chemical defects, but they can still feel the electric field of the polarons if they are located on the same chain.
In general, due to disorder and coupling to nuclear modes, optical absorption in conjugated polymers occurs within a chromophoric segment and not over the entire conjugated backbone. [8,32,33] Electronic relaxation and self-trapping processes then further localize the initially excited state on the subpicosecond time scale. By considering exciton-exciton annihilation occurring at high excitation densities faster than our ≈70 fs experimental time resolution, we estimate that initially photoexcited segments in undoped CPE-K film span ≈4 nm (with 600 nm excitation), which corresponds to ≈3-4 monomers (using a monomer size of ≈1.2 nm, [34] Figure S11, Supporting Information). This agrees with the wavefunction delocalization (≈4-5 monomers) obtained for PCPDTBT via theoretical DFT calculations. [25] Therefore, it can indeed be expected that the backbone of doped CPE-K breaks into several chromophoric segments, even for relatively short CPE-K chains of 7-14 monomers as the ones used here. The existence of doped and undoped chromophores is thus justified (green and orange ovals in Figure 5), as well as the size of ≈3 monomers that we find when analyzing the delocalization of the doped chromophores based on the ratio of the N 1 and P 2 GSB bands in the TA data (Table 1).
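The segment-size estimate above is direct arithmetic on the quoted values (≈4 nm photoexcited segment, ≈1.2 nm per monomer):

```python
# Chromophoric segment size from the values quoted in the text
segment_length_nm = 4.0   # initially photoexcited segment (annihilation analysis)
monomer_length_nm = 1.2   # approximate CPE-K monomer size [34]
print(round(segment_length_nm / monomer_length_nm, 1))  # 3.3 monomers per segment
```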
In both doped CPE-K and heavily doped CPE-K/H 2 SO 4 films, we find the excited state dynamics to be strongly dominated by the doped segments, given the relatively high doping levels where each polymer chain contains at least 1-2 polaronic sites. When exciting the N 1 transition at 600 nm, both the doped and undoped chromophores are photoexcited (with about 80% and 20% weight, respectively), and absorb via the N 1(doped) and N 1(undoped) transitions, as shown in Figure 5. Our TA results unambiguously establish that the neutral sites in the doped chromophores strongly couple to the polaronic sites in spite of the structural distortion of the polaron. The N 1(doped) transition thus leads to the simultaneous appearance of the N 1 and P 2 GSB bands in our TA data, and to a predominant ultrafast (≈150 fs) recombination component. As shown on the left of the Jablonski diagram in Figure 5, the coupling of the neutral and polaronic sites on the same doped chromophore implies that the lower excited states of the segment are the ones populated by the P 2 and P 1 transitions. Therefore, the S 1 state can decay nonradiatively via internal conversion to the low-lying polaronic states, which are known to have very short lifetimes. [29] We note that the ratio of the N 1 GSB to P 2 GSB in the 150 fs amplitude spectrum allows us to estimate the delocalization of the doped chromophores to be 3.3 neutral monomers (in doped CPE-K) and 2.3 neutral monomers (in heavily doped CPE-K/H 2 SO 4 ) per charged site, the smaller chromophore size in the latter likely being due to fewer available neutral sites between the polarons. In solution, the doping level drops and entirely undoped isolated polymer chains are present (28% and 14% in CPE-K and CPE-K/H 2 SO 4 , respectively). The properties of the remaining doped chains are nevertheless quite similar to those in the films, in terms of doped chromophore delocalization and position of the N 1 absorption band.
This N 1 transition (for both doped and undoped chromophores) is red-shifted whenever a polaron is present on the chain, due to the Stark effect caused by the electric field around the charge.
The doped chromophores of CPE-K and CPE-K/H 2 SO 4 in films and solutions are more selectively excited via the P 2 transition at 1100 nm (red arrow in Figure 5), leading to a similar excited state behavior as with N 1(doped) excitation at 600 nm, characterized by the simultaneous N 1 and P 2 GSB signatures and an ultrafast 150 fs decay. The P 2 GSB is in this case more pronounced compared to the N 1 GSB, due to a smaller delocalization of the doped chromophores excited at 1100 nm (yellow oval in Figure 5). We find 1.7-1.9 neutral monomers per charged monomer at both doping levels and in both films and solutions. Indeed, it has been reported from theoretical calculations and polarization-sensitive measurements on conjugated polymers that a lower excitation energy leads to less delocalized initially excited electronic states. [8,33,35] Targeted excitation of the doped chromophores at 1100 nm also reveals a slightly red-shifted N 1 GSB, since the probed neutral sites are closer to the polaron due to the reduced delocalization, and thus feel a stronger Coulomb potential.
Finally, for all investigated doped samples in films and solutions, we observe a slower excited state decay component (1.8-2.5 ps), mainly with 600 nm excitation and to a lesser extent with 1100 nm excitation. We assign this to recombination in undoped segments (right side of the Jablonski diagram in Figure 5). The N 1 GSB of the corresponding amplitude spectrum is at a similar position as seen in the doped chromophores, showing that the undoped segments are still located on doped polymer chains and feel the electric field of the charge. They are either directly excited via the N 1(undoped) transition at 600 nm (dark orange arrow), populated when some initially excited doped chromophores localize on neutral sites during ultrafast excited state relaxation, or excited by residual two-photon absorption (double dark red arrows). Since the low-lying polaronic states are absent on the undoped segments, their recombination is slower (green dashed arrows). The slower decay components are assigned to nonradiative recombination of undoped chromophores, similar to that seen in undoped CPE-K film (Figure S3, Supporting Information), and/or to exciton-polaron annihilation, which is an incoherent recombination mechanism that involves Förster energy transfer from the undoped segment to the polaronic site. [26,28] This can occur due to the proximity of the undoped and doped chromophores on the same doped polymer chain, in both films and solutions. However, entirely undoped and isolated chains exist only in solution, and they are characterized by even slower decay dynamics (40 ps component) and a strongly blue-shifted N 1 GSB (by about 100 nm).
Conclusion
In summary, we have presented here the first in-depth study of the ultrafast photophysics of a self-doped conjugated polyelectrolyte, highlighting the optoelectronic properties of doping-induced polarons and their interactions with their surroundings. We chose a cationic self-doped system, which offers particular advantages in terms of stability (no dopant diffusion), similar morphology compared to the undoped film, [12] and absence of intermolecular orbital hybridization with the dopant. Our most important finding is that there is strong electronic coupling of the polaronic site to nearby neutral sites, which share the same ground state for their optical transitions. This is unambiguously demonstrated by the simultaneous depletion of both the neutral and polaronic transitions, as well as by correlated excited state dynamics, when either transition is photoexcited during femtosecond transient absorption experiments. This result contrasts with the conventional picture of localized intragap polaron states and agrees with a revised model for the electronic structure and optical transitions in doped organic systems. [5,[17][18][19] Second, our study shows that intersite Coulomb interactions are present, so that the positive polarons cause a Stark shift in the transitions of nearby neutral sites. The electronic coupling and electrostatic effects of the polarons occur independently of the doping concentration, both in thin films and in solution.
Achieving high conductivity in doped organic semiconductors requires high mobility and a high yield of free charges that do not remain bound (electrostatically or by orbital hybridization) to the ionized dopant. [3,5,6c] Here, via spectroscopic elucidation of the optoelectronic properties of a self-doped system, we find characteristics of the polaron that might help its dissociation from interfacial bound states. In particular, we show that the polaron delocalizes across neighboring neutral sites along the conjugated backbone, and also couples into their frontier energy levels, which are the ones responsible for charge transport. [36] We thus demonstrate an electronic connectivity of the polarons to their local surroundings, which likely helps the polarons to become mobile and to separate from their charged counterion. In analogy, nanoscale charge transport has also been identified as a key factor in the charge separation of photogenerated polarons in organic solar cell blends. [37] The importance of polaron delocalization in doped conjugated polymers for high hole mobilities has also been demonstrated by early work on conjugated polymers doped with small molecules such as iodide, where the optical properties of polarons in doped polymers with different interchain interactions were studied. [24] We conclude that it is essential to develop useful predictive models of not only the electrical and transport characteristics, but also of the optoelectronic properties of the polarons on the conjugated backbone. Our findings bring significant new insights to doped organic systems, since we provide evidence of polaron delocalization to neighboring neutral sites that can ultimately increase the conductivity and macroscopic mobility upon doping. [38]
Experimental Section
Sample Preparation: For this work, CPE-K was studied in solution and as thin films. Details for the synthesis are reported elsewhere. [10,23] For making films, CPE-K solutions were prepared by first dissolving CPE-K into water with a resistivity of 18.2 MΩ cm, sonicating until all the material was dissolved, then adding MeOH to give a 20 mg mL −1 solution of CPE-K in H 2 O/MeOH (3:2). The solution was filtered through a 0.45 µm PTFE syringe filter prior to use. To prepare CPE-K/H 2 SO 4 and undoped CPE-K solutions, one molar equivalent of H 2 SO 4 or KOH, respectively, per monomer unit of CPE-K was added to the CPE-K solution. The quartz substrates were cleaned in a sonicator with soapy water, water, acetone, and isopropanol for 15 min each, blow dried using N 2 , and further treated with UV-ozone for 1 h. Thin films of CPE-K were prepared by spin casting 100 µL of the different CPE-K solutions onto the aforementioned quartz substrates. The spin coating parameters were 1000 rpm for 30 s followed by 5000 rpm for 30 s. Film thickness measurements were performed using an Ambios XP-100 stylus profilometer. A line was scored on each sample in order to measure the thickness. In order to prepare CPE-K and CPE-K/H 2 SO 4 in solution for absorption and TA experiments, the polymers were re-dissolved in H 2 O/MeOH (1:1) under nitrogen atmosphere and the resulting solutions were diluted to have an absorbance of ≈0.18-0.21 at 600 nm (≈42-49 µg mL −1 ). The absorption spectra of the films and solutions were measured with a Lambda 950 UV-vis-IR spectrophotometer (Perkin Elmer).
Transient Absorption Spectroscopy: To study the excited state dynamics, spectrally resolved femtosecond TA measurements were performed using a Ti:Sapphire amplifier system (Astrella, Coherent). The output pulses had a time duration of ≈35 fs, 800 nm center wavelength, repetition rate of 1 kHz and energy of ≈6 mJ per pulse. Part of the amplifier output was used to pump the optical parametric amplifier (OPA) (Opera, Coherent) that converted the photon energy of the incident beam to the wavelength used to photoexcite the samples. Pump wavelengths at 600 and 1100 nm were used. Probe wavelengths covering the visible and near-infrared region ranging from 480 to 1300 nm were spectrally resolved. This was achieved via continuum white light pulses generated by strongly focusing a small part of the fundamental beam onto a 5 mm sapphire plate. Part of the probe pulses was then temporally and spatially overlapped on the sample with the pump pulses, while the other part was used as a reference. The transmitted probe beam through the sample and the reference beam were spectrally dispersed in two home-built prism spectrometers (Entwicklungsbüro Stresing, Berlin) and detected separately with either back-thinned Silicon CCDs (Hamamatsu S07030-0906) or InGaAs arrays (Hamamatsu) for, respectively, visible and near-infrared detection. The transmission change of the probe pulses following photoexcitation was recorded for different pump-probe time delays up to nanoseconds, while the pump pulses were chopped at 500 Hz for the signal to be measured shot by shot. The TA changes induced by the pump were monitored with 70 fs time resolution. The beam sizes of the excitation and probe pulses were ≈1 mm and 250 µm, respectively, to ensure uniform distribution of detected photoexcited species. To avoid anisotropy effects, the relative polarization of the probe and pump pulses was set at the magic angle.
The TA spectra for the entire time window were scanned multiple times for both films and solutions without any significant signs of degradation. For solid state measurements the films were sealed in a chamber filled with nitrogen, while the solutions were placed in a quartz cuvette with an optical path length of 2 mm (Starna Cells Inc.). All TA data was corrected for the chirp of the white light.
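The shot-by-shot detection scheme described above (1 kHz probe train, pump chopped at 500 Hz so pump-on and pump-off shots alternate) amounts to computing the differential transmission ΔT/T from interleaved shots. A minimal sketch with synthetic detector counts (all numbers illustrative):

```python
import numpy as np

def delta_t_over_t(counts, pump_on):
    # Differential transmission from alternating pump-on/pump-off probe shots
    t_on = counts[pump_on].mean(axis=0)
    t_off = counts[~pump_on].mean(axis=0)
    return (t_on - t_off) / t_off

rng = np.random.default_rng(0)
n_shots, n_pixels = 1000, 4
baseline = 1e4 * (1 + 1e-3 * rng.standard_normal((n_shots, n_pixels)))
pump_on = np.arange(n_shots) % 2 == 0              # 500 Hz chop of a 1 kHz train
counts = baseline * np.where(pump_on[:, None], 0.99, 1.0)  # 1% pump-induced change
print(np.round(delta_t_over_t(counts, pump_on), 3))  # ≈ [-0.01 -0.01 -0.01 -0.01]
```

Averaging over many shot pairs suppresses laser noise, which is why the synthetic 1% transmission change is recovered cleanly despite the per-shot fluctuations.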
Multi-Exponential Global Analysis: To distinguish between overlapping spectral features corresponding to different photoexcited species and processes, multiexponential GA was used. In this procedure, the entire spectral and temporal pump-probe data sets were analyzed simultaneously using the sum of exponential functions. This involved analyzing the time profiles at all probe wavelengths using the same time constants but allowing the pre-exponential amplitudes to vary freely. For all systems studied here, a bi-exponential function and an offset, which represents species that live longer than the 1 ns time window of the measurement, were used. An additional term was added that convolutes with the bi-exponential function to include the instrument-response-limited rise of the TA signal at early times. This is described by a Gaussian function where the full-width-at-half-maximum is related to the temporal resolution of the experiment (70-80 fs). The GA approach yields the amplitude spectra with their associated time constants for different relaxation and recombination processes that influence the overall TA signal at different timescales, facilitating the interpretation of the dynamics.
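The global analysis procedure can be sketched as follows: a basis of IRF-convolved exponentials with shared time constants is fitted to all probe wavelengths at once, with the amplitudes (the "amplitude spectra") solved linearly per wavelength. This is a simplified sketch on synthetic data (grid search over two time constants), not the authors' actual fitting code:

```python
import math
import numpy as np

FWHM_PS = 0.075  # assumed Gaussian IRF width, within the quoted 70-80 fs

def irf_exp(t, tau):
    # Exponential decay (time constant tau, ps) convolved with a Gaussian IRF
    s = FWHM_PS / 2.3548
    pre = 0.5 * np.exp(s**2 / (2 * tau**2) - t / tau)
    return pre * np.array([math.erfc(x) for x in (s / tau - t / s) / math.sqrt(2)])

def global_fit(t, data, tau1_grid, tau2_grid):
    # Shared time constants across all wavelengths; amplitudes obtained by
    # linear least squares for each candidate (tau1, tau2) pair
    best = None
    for tau1 in tau1_grid:
        for tau2 in tau2_grid:
            basis = np.stack([irf_exp(t, tau1), irf_exp(t, tau2)], axis=1)
            amps, *_ = np.linalg.lstsq(basis, data, rcond=None)
            err = np.sum((data - basis @ amps) ** 2)
            if best is None or err < best[0]:
                best = (err, (tau1, tau2), amps)
    _, taus, amps = best
    return taus, amps  # amps[i] is the amplitude spectrum of taus[i]

# Synthetic data: 0.15 and 2.5 ps components at three "wavelengths"
t = np.linspace(-0.2, 10.0, 400)
true_amps = np.array([[1.0, 0.5, 0.2],    # amplitude spectrum, fast component
                      [0.3, 0.8, 0.1]])   # amplitude spectrum, slow component
data = np.stack([irf_exp(t, 0.15), irf_exp(t, 2.5)], axis=1) @ true_amps
taus, amps = global_fit(t, data, [0.1, 0.15, 0.2], [2.0, 2.5, 3.0])
print(taus)  # (0.15, 2.5)
```

Production codes replace the grid search with a nonlinear optimizer over the time constants (and add the long-lived offset as a third basis function), but the structure, shared time constants with freely varying per-wavelength amplitudes, is the same.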
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Lhx5 controls mamillary differentiation in the developing hypothalamus of the mouse
Acquisition of specific neuronal identity by individual brain nuclei is a key step in brain development. However, how the mechanisms that confer neuronal identity are integrated with upstream regional specification networks remains poorly understood. Expression of Sonic hedgehog (Shh) is required for hypothalamic specification and is later downregulated by Tbx3 to allow for the differentiation of the tubero-mamillary region. In this region, the mamillary body (MBO) is a large neuronal aggregate essential for memory formation. To clarify how MBO identity is acquired after regional specification, we investigated Lhx5, a transcription factor with restricted MBO expression. We first generated a hypomorph allele of Lhx5; in homozygotes, the MBO disappears after initial specification. Intriguingly, in these mutants, Tbx3 was downregulated and the Shh expression domain abnormally extended. Microarray analysis and chromatin immunoprecipitation indicated that Lhx5 appears to be involved in Shh downregulation through Tbx3 and activates several MBO-specific regulator and effector genes. Finally, by tracing the caudal hypothalamic cell lineage, we show that, in the Lhx5 mutant, at least some MBO cells are present but lack characteristic marker expression. Our work shows how the Lhx5 locus contributes to integrating regional specification pathways with the downstream acquisition of neuronal identity in the MBO.
Introduction
The hypothalamus is a brain region with essential roles in homeostasis and behavior (see for instance, Saper and Lowell, 2014). Alterations of its complex development can lead to pathological conditions in adults. The hypothalamus is subdivided into regions formed by functionally and morphologically highly differentiated neuronal aggregates, the hypothalamic nuclei. The induction and specification of the hypothalamus in general has been the subject of numerous studies in a variety of animal models (reviewed in Machluf et al., 2011; Pearson and Placzek, 2013). These have identified important roles for Shh, BMP, Wnt, FGF, and Nodal signaling. Although these well-known signaling pathways have been shown to specify the hypothalamus as a region, as well as to determine its dorso-ventral, antero-posterior, and latero-medial axes, how the specification of individual nuclei is regulated remains elusive.
One particularly important region is the mamillary region, including its main nucleus, the mamillary body (MBO). The MBO is a large and compact neuronal aggregate acting as a hub between hindbrain, thalamus, and hippocampus through major afferent and efferent axonal bundles. The MBO has key functions in foraging behavior as well as in memory formation (Vann and Aggleton, 2004; Vann, 2013). Loss of the MBO in Foxb1 mutant mice leads to deficits in working memory (Radyushkin et al., 2005). In humans, MBO degeneration is involved in the anterograde amnesia characteristic of the Wernicke-Korsakoff syndrome (Kahn and Crosby, 1972), a serious neurological condition connected to alcohol abuse (Kopelman et al., 2009) and bariatric surgery (Koffman et al., 2006). Although analysis of mouse and zebrafish mutants has shown that the transcription factors Sim1 and 2, Foxb1, and Fezf2 are required for MBO development and survival (Alvarez-Bolado et al., 2000a; Marion et al., 2005; Wolf and Ryu, 2013), little is known about the genetic regulation of MBO development.
Forebrain expression of Sonic hedgehog (Shh), which encodes a secreted protein with morphogen properties, is essential for appropriate hypothalamic regional specification (Szabó et al., 2009). The hypothalamic Shh expression domain, however, has to be downregulated in the mamillary region in order for it to differentiate (Manning et al., 2006). The T-box (Tbx) family of transcription factor genes has essential roles in development (Naiche et al., 2005;Greulich et al., 2011;Wansleben et al., 2014). Work on zebrafish has shown that Wnt inhibition is necessary for hypothalamic differentiation (Kapsimali et al., 2004), and in chick BMP signaling leads to Wnt inhibition and subsequent upregulation of T-box gene Tbx2, which specifically represses Shh in the tuberal and mamillary regions allowing them to differentiate (Manning et al., 2006). This role of Tbx2 is performed by Tbx3 in the mouse (Trowe et al., 2013). What is not clear is how the downregulation of Shh translates into nuclear formation and how nucleogenesis is integrated in the regulatory networks of regional specification mechanisms.
LHX5 is a member of the LHX family of transcription factors acting as important differentiation determinants (Hobert and Westphal, 2000; Kadrmas and Beckerle, 2004), and it is strongly expressed in the caudal hypothalamus from very early stages (E9.5) through the time of formation of recognizable neuronal aggregates (Figures 1A-D) (Sheng et al., 1997; Allen-Institute-for-Brain-Science, 2009; Shimogori et al., 2010). Lhx5 has specific roles in forebrain development; e.g., it is essential for hippocampal development (Zhao et al., 1999) and regulates the distribution of Cajal-Retzius neurons (Miquelajáuregui et al., 2010). Here we created a novel mutant allele of Lhx5 and analyzed it using expression profiling with microarrays, ChIP-Seq and luciferase experiments, as well as examination of the hypothalamus of the Tbx3 −/− mutant. Our results indicate that Lhx5 has an essential role in several different developmental pathways regulating MBO specification and differentiation.
FIGURE 1 | Expression of Lhx5 in the mamillary region. In situ hybridization for Lhx5 on sagittal sections (rostral to the left) of E11.5 (A,B) and E18.5 (C,D) brains. Lhx5 is expressed in the ventricular zone (neuroepithelium; arrow and inset in A) as well as in the incipient mamillary mantle layer (arrow in B). At E18.5, the MBO is prominently and specifically labeled in the mamillary region (framed in C, magnified in D). Abbreviations: 4V, fourth ventricle; ac, anterior commissure; cf, cephalic flexure; MB, midbrain; MBO, mamillary body; MO, medulla oblongata; P, pons; Th, thalamus. In (C,D) a dashed line brings out the contour of the brain. Scale bars: 500 µm.
Mouse Lines
Animals were housed and handled in ways that minimize pain and discomfort, in accordance with German animal welfare regulations (Tierschutzgesetz) and in agreement with the European Communities Council Directive (2010/63/EU). The authorization for the experiments was granted by the Regierungspräsidium Karlsruhe (state authorities) and the experiments were performed under surveillance of the Animal Welfare Officer of the University of Heidelberg responsible for the Institute of Anatomy and Cell Biology. To obtain embryos, timed pregnant females of the appropriate crossings were sacrificed by cervical dislocation.
We generated a novel conditional allele of Lhx5 by homologous recombination. We cloned the conditional Lhx5 targeting construct by inserting loxP sites into the Lhx5 locus spanning a region from intron 1 to intron 4 including exons 2-4 (Figures 2A-C).
In this line, the Foxb1 coding sequence has been replaced by a Cre-IRES-eGFP cassette by homologous recombination, and this allele expresses Cre and eGFP under the control of the regulatory sequences of Foxb1. These mice show Cre expression in the thalamic and hypothalamic neuroepithelium (Zhao et al., 2008). We used only heterozygous Foxb1 Cre−eGFP/+ mice, which show a normal phenotype (Zhao et al., 2008); Foxb1 Cre−eGFP/Cre−eGFP homozygotes were not used in this study.
Measurements of MBO Size
For the measurements of MBO size (Figure 4I) we first visualized the MBO on sections by labeling it with an anti-GFP antibody, then measured the labeled area (in pixels) with the public-domain software ImageJ. We did this for every section containing MBO and added up all the MBO section areas measured in each individual embryo. The result was the sum of the MBO section areas for mutants and controls.
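The per-embryo summation can be sketched as follows (a hypothetical NumPy re-implementation for illustration, not the ImageJ workflow actually used; the intensity `threshold` is an assumed parameter):

```python
import numpy as np

def mbo_total_area(sections, threshold):
    # Per-embryo MBO size: count GFP-labeled pixels above an intensity
    # threshold in each section image, then sum over all sections,
    # mirroring the per-section area measurement described in the text.
    return sum(int((sec > threshold).sum()) for sec in sections)
```

Each element of `sections` would be a 2D grayscale image array of one section through the MBO.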
In Situ Hybridization
Templates were amplified by PCR from cDNA and probes were synthesized using the Roche DIG RNA labeling Mix. In situ hybridization was performed on paraffin sections according to previously described protocols (Blaess et al., 2011). The sections were counterstained with 0.1% Eosin.
Microarray
The mamillary neuroepithelium of E10.5 wild type and mutant embryos was dissected and directly frozen at −80 °C. The RNA was preserved with RNAlater ICE (Life Technologies) and sent to MFT Services Tübingen, Germany, for microarray experiments and basic bioinformatic analysis. These experiments were done using the Affymetrix GeneChip Mouse Gene 1.1 ST Array Plates. The heat map was generated using the TM4 software. All microarray samples are available in the GEO database.
Quantitative PCR
The quantitative PCR was performed according to MIQE guidelines (Bustin et al., 2009). RNA was isolated using the RNeasy Plus Micro Kit (Qiagen) and RNA integrity was checked on an agarose gel. The RNA was reverse transcribed using the M-MLV Reverse Transcriptase (Promega). Quantitative PCR was performed using Power SYBR green PCR Master Mix (Applied Biosystems) and a Step One Plus Real-Time PCR System (Applied Biosystems). Quantification was performed with the delta-delta Ct method and Ef1 was used as endogenous control (reference gene).
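The delta-delta Ct quantification described above amounts to simple arithmetic on the Ct values; a generic sketch (the function name and example values are illustrative, with the reference gene playing the role of the endogenous control such as Ef1 in the text):

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    # Delta-delta Ct method: normalize each sample's target Ct to the
    # reference gene (delta Ct), subtract the control's delta Ct, and
    # express relative abundance as 2^(-ddCt).
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)
```

For example, a target that comes up one cycle later (relative to the reference gene) in the sample than in the control corresponds to roughly half the expression.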
Cell Culture
To generate a stably transfected Lhx5-expressing Neuro2a cell line, a construct was generated that adds a FLAG tag to the C-terminus of LHX5 and controls the expression of Lhx5 under the Ptight promoter of the Tet-On Advanced Inducible Gene Expression System (Clontech). The cells were cotransfected with this construct and the pTet-On Advanced vector (3:1) using FuGENE HD Transfection Reagent (Promega). The cells were selected in medium containing 1 mg/ml G418, and clones that showed a doxycycline-dependent, homogeneous expression of Lhx5 were frozen for further experiments.
ChIP-Seq
The ChIP-Seq experiments were performed according to published protocols (Robertson et al., 2007;Schmidt et al., 2009).
We treated our Lhx5-expressing Neuro2a cells (see above) with doxycycline for 24 h to induce Lhx5 expression, fixed the cells for 10 min in 1% formaldehyde and sonicated them to generate fragments of approximately 200 bp. We performed chromatin immunoprecipitation overnight using an anti-FLAG antibody (Sigma) at 4 °C, then removed protein and RNA by enzymatic digestion and sent the purified DNA to the Deep Sequencing Facility (Heidelberg University, Germany). High-throughput sequencing was performed using the NEB ChIP-seq Master Mix Prep kit for Illumina and an Illumina HiSeq2000 instrument. All ChIP-Seq samples are available in the GEO database. The data obtained were analyzed using the tools of the Galaxy project (Giardine et al., 2005). Sequence reads were mapped to the mm9 genome assembly with Bowtie (Trapnell and Salzberg, 2009) and unmapped reads were removed. Peak calling was performed with MACS using an M-FOLD value of 10 and a p-value cutoff of 1e-05. The input sequences were used as a control. The peaks were annotated with PeakAnalyzer 1.4 (Salmon-Divon et al., 2010) using the "nearest downstream gene" method. For the identification of enriched motifs we used DREME (Bailey, 2011). As a control, we performed the same analysis with cells that only expressed the FLAG-tag. In this control only 56 peaks were found and no meaningful binding motif was identified in the DREME analysis.
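The "nearest downstream gene" annotation step can be illustrated in simplified form (a sketch, not PeakAnalyzer's implementation; it ignores strand and chromosome, and the gene names and coordinates are invented for illustration):

```python
def nearest_downstream_gene(peak_center, tss_by_gene):
    # Among genes whose transcription start site (TSS) lies at or past
    # the peak center, pick the closest one; return None if no gene
    # lies downstream. Single strand, single chromosome for simplicity.
    candidates = [(tss - peak_center, name)
                  for name, tss in tss_by_gene.items() if tss >= peak_center]
    return min(candidates)[1] if candidates else None
```

A full annotation tool additionally handles strand orientation, chromosome boundaries, and distance cutoffs.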
Luciferase Assay
Luciferase assays were performed in the stably transfected Neuro2a cell line using the Dual-Luciferase Reporter Assay System (Promega). The identified LHX5 binding sites were cloned into the luciferase reporter pGL4.26 (Promega), and an Lmo1 expression vector (Origene MC203585) was used for competition experiments. TurboFect Transfection Reagent (Thermo Scientific) was used for the transfection of the Lhx5-expressing Neuro2a cell line. Transfected cells were treated with doxycycline to induce Lhx5 expression. Two independent experiments with triplicates were performed.
Cell Cycle Analysis
The cell cycle analysis was performed according to published methods (Martynoga et al., 2005). In this protocol, timed-pregnant mice were first injected intra-peritoneally with 0.05 mg iododeoxyuridine (IddU, Sigma) in 0.9% NaCl per gram of body weight and then, 1.5 h later, with the same dose of bromodeoxyuridine (BrdU) (Sigma). After an additional 30 min, the mice were sacrificed and the embryos collected. Paraffin sections (8 µm) were obtained and IddU- and BrdU-labeled cells were detected using standard immunohistochemistry. IddU-positive and IddU/BrdU double-positive cells were quantified and the lengths of the S-phase and of the whole cell cycle were calculated according to published formulas (Martynoga et al., 2005).
ß-Galactosidase Activity Detection
ß-Galactosidase activity was detected as described (Koenen et al., 1982). Embryos from timed pregnancies were collected and directly frozen in OCT at −80 °C. The embryos were cut (20 µm) and the sections were fixed for 5 min in 1% paraformaldehyde, 0.2% glutaraldehyde and 0.2% NP-40 in PBS. The sections were then rinsed and incubated in staining solution (1 mg/ml X-gal, 5 mM K3Fe(CN)6, 5 mM K4Fe(CN)6 and 2 mM MgCl2 in PBS) overnight in the dark at RT. The sections were counterstained with Nuclear Fast Red.
Lhx5 Expression in the Presumptive Mamillary Region
In order to investigate how the formation of the MBO is regulated, we set out to analyze the function of the candidate gene Lhx5. Lhx5 is expressed in the caudal hypothalamus of the mouse as early as E9.5, approximately the time when Shh is specifically downregulated in this area (Sheng et al., 1997). The MBO is derived from the mamillary recess (Altman and Bayer, 1986), present in the mouse from E11.5. We detected Lhx5 transcripts by ISH at E11.5 in the presumptive mamillary neuroepithelium (ventricular zone) (medial sections) (Figure 1A) as well as in the earliest post-mitotic layer (mantle layer) of the mamillary region (Figure 1B). Later, at E18.5, when most nuclei and axonal tracts are already clearly recognizable, Lhx5 expression persisted in the MBO, labeling it strongly and specifically (Figures 1C,D). Detailed expression patterns can be found in databases (Allen-Institute-for-Brain-Science, 2009).
The presence of Lhx5 expression from the stage when Shh is downregulated in the ventricular zone until the MBO is fully formed suggested a role spanning the entire specification and differentiation of this nucleus and a possible link between the specification of the mamillary region as a separate hypothalamic field, and the specification of the MBO as a unique neuronal aggregate inside this field.
Biallelic Disruption of Intronic Sequences of Lhx5 Causes LHX5 Protein Loss and Abnormal Phenotype
In order to explore the role of Lhx5 in MBO development we generated a novel conditional Lhx5 mouse line (Figures 2A-C). The PGK-neo cassette was flanked by FRT sites and was removed by crossing with the FLPeR deleter mouse (Farley et al., 2000). Unexpectedly, we found high embryonic lethality for the non-Cre-recombined conditional mouse line: out of 747 mice from this line surviving after weaning, only three (0.4%) were homozygous (Lhx5 fl/fl ). We hypothesized that the loxP insertions in intronic regions could have disrupted a hitherto unknown regulatory element necessary for appropriate RNA processing, as has been reported in a number of other mutants (see for instance, Meyers et al., 1998; Nagy et al., 1998; Kist et al., 2005). This could result in a reduced production of LHX5 protein in Lhx5 fl/fl mice and a hypomorph phenotype. To explore this possibility, we first detected Lhx5 mRNA on histological sections of Lhx5 fl/fl embryo brains plus two other related genotypes as controls (Figure 2; Supplementary Figures 1, 2). As positive control we used Foxb1 Cre−eGFP/+ mouse embryos, which show a normal phenotype (Zhao et al., 2008). Mouse embryos with the Lhx5 fl/fl genotype (i.e., non-Cre-recombined) showed an apparent decrease of mRNA expression as compared to Foxb1 Cre−eGFP/+ embryos (compare Figure 2D to Figure 2G and Figure 2E to Figure 2H). As negative controls we used mouse embryos with the Foxb1 Cre−eGFP/+ ; Lhx5 fl/fl genotype (i.e., conditional mutants for Lhx5); these showed, as expected, loss of Lhx5 expression in the mamillary region, where Foxb1 and Lhx5 normally coexpress (Figures 2J,K). Quantitation of mRNA by qPCR in Lhx5 fl/fl and Lhx5 +/+ embryos showed a non-statistically-significant tendency to smaller values in the mutant (not shown).
Then we used an antibody specific for both the LHX1 and the LHX5 proteins on parallel sections (Figures 2F,I,L) to those hybridized for mRNA. On the sections from Lhx5 fl/fl embryos we could not detect any LHX1/5; the Foxb1 Cre−eGFP/+ embryos, on the contrary, showed strong protein expression in the mamillary body as expected (compare Figure 2F to Figure 2I). Finally, the mamillary region of the Foxb1 Cre−eGFP/+ ; Lhx5 fl/fl embryos did not show LHX1/5 protein either (compare Figure 2I to Figure 2L).
Moreover, phenotypical analysis of Lhx5 fl/fl embryos prior to Cre-recombination revealed a mutant phenotype resembling the published phenotypes of the Lhx5 full mutant: a defective hippocampus (Figures 3A,B) (Zhao et al., 1999) and ectopic Cajal-Retzius cells forming a cluster in the caudal telencephalon (Figures 3C,D) (Miquelajáuregui et al., 2010).
We concluded that Lhx5 fl/fl embryos show a hypomorph phenotype probably caused by inefficient protein synthesis after insertion of a loxP site into an intron sequence with regulatory functions.
Lhx5 is Essential for the Development of the Mamillary Region
To analyze the role of Lhx5 in the development of the MBO, we crossed the Lhx5 fl/fl conditional line with the Foxb1 Cre−eGFP/+ line. Since Foxb1 is a specific marker of the developing MBO (Alvarez-Bolado et al., 2000b), this crossing leads to a conditional inactivation of Lhx5 in the MBO. In Nissl-stained sagittal sections at E18.5, the MBO of Foxb1 Cre−eGFP/+ mice was visible as a compact mass of neurons giving rise to a characteristic axonal bundle (the principal mamillary tract) (Figure 4A). These structures were absent in Foxb1 Cre−eGFP/+ ;Lhx5 fl/fl brains (Figure 4B) as well as in Lhx5 fl/fl brains (Figure 4C). The absence of the MBO was confirmed by loss of anti-GFP antibody labeling (enhanced GFP reporter of the Cre-IRES-eGFP cassette) (Figures 4D,E; the non-recombined Lhx5 fl/fl brains of course lack the eGFP reporter). Finally, antibody detection of LHX5 protein (Figures 4F-H) showed that it is indeed lost in the non-recombined Lhx5 fl/fl brains as well as in those from Foxb1-Cre-eGFP +/− ;Lhx5 fl/fl crosses. The loss of the MBO in the non-recombined Lhx5 fl/fl embryos was indistinguishable from that in the Foxb1-Cre-eGFP +/− ;Lhx5 fl/fl embryos. To quantify the loss of the MBO, we labeled sagittal sections of Foxb1-Cre-eGFP +/− ;Lhx5 fl/+ and Foxb1-Cre-eGFP +/− ;Lhx5 fl/fl embryos with anti-GFP antibodies and measured the MBO area (using ImageJ software) (Figure 4I), uncovering a failure of the mutant MBO to grow to a normal size from E13.5 on.
Proliferation and Apoptosis are Not Altered in the Lhx5 fl/fl MBO

Next we asked if either a defect in proliferation or increased apoptosis was responsible for the reduction in MBO size in the Lhx5-deficient hypothalamus. An initial analysis of mitotic and post-mitotic compartments in the early mamillary region using antibodies against the proliferation marker Ki67 (Starborg et al., 1996) and the neuronal marker beta-III-tubulin revealed no difference between Lhx5 fl/+ and Lhx5 fl/fl (Figures 5A-D). We then applied the IddU/BrdU method (Nowakowski et al., 1989; Martynoga et al., 2005) to analyze cell cycle length in the Lhx5 fl/+ and Lhx5 fl/fl mamillary neuroepithelium of E10.5-E12.5 embryos (Figures 5E,F). This method allows for calculation of the length of the S-phase of the cell cycle. One proliferation marker, IddU, is injected in pregnant mice at one time-point. After a known interval (90 min), a second proliferation marker, BrdU (which can be independently detected with specific antibodies), is injected. Both label the DNA synthesized during the S-phase. After 30 min, the embryonic brains are collected and the cells labeled either only by IddU or by both IddU and BrdU are counted. From the ratio between these two cell numbers, and since we know the interval during which cells can incorporate IddU but not BrdU (90 min), the length of the S-phase can be easily calculated (see Martynoga et al., 2005 for details).
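The published formulas referenced above (Martynoga et al., 2005), as summarized in the text, reduce to simple ratios; a hedged sketch (function names and example counts are ours, not from the study):

```python
def s_phase_length(n_iddu_only, n_double, interval_h=1.5):
    # Cells labeled by IddU but not BrdU are those that left S-phase
    # during the inter-injection interval, so the rate of exit from S
    # gives: Ts = interval * (double-labeled / IddU-only).
    return interval_h * n_double / n_iddu_only

def total_cycle_length(t_s, n_in_s, n_proliferating):
    # Assuming asynchronous cycling, the fraction of proliferating
    # cells found in S-phase equals Ts / Tc, hence Tc = Ts * total / in_S.
    return t_s * n_proliferating / n_in_s
```

For instance, 40 double-labeled cells against 10 IddU-only cells with a 1.5 h interval gives an S-phase of 6 h.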
The Lhx5 fl/fl showed no change in S-phase duration (Figure 5I) or in the duration of the cell cycle (Figure 5J), indicating that Lhx5 is not essential for MBO proliferation. To investigate a possible increase in apoptosis in the Lhx5-deficient MBO, we labeled sections of Lhx5 fl/fl and Lhx5 fl/+ caudal hypothalamus with an antibody against active Caspase3 (Figures 5G,H) and quantified the number of apoptotic cells at different developmental stages. Our results showed that the number of apoptotic cells was not increased in the Lhx5-deficient hypothalamus between E11.5 and E14.5 (Figure 5K). In fact, the number of apoptotic cells per section at E11.5, although small in any case, was higher in control animals than in mutants (Figure 5K). Although one could speculate about the biological significance of this finding, it could perhaps reflect altered properties of the mutant MBO cells already at this age.
In any case, a decrease in proliferation or an increase in cell death is an unlikely cause of the reduction of the MBO in the absence of Lhx5.
Lhx5 Controls MBO Expression of Lmo1 and of the Cell Fate Determinants Tbx3, Olig2, and Otp

To understand the molecular basis of the MBO hypoplasia observed in Lhx5 fl/fl embryos we performed comparative expression profiling using microarrays. We extracted total RNA from the caudal hypothalamus of wild type and Lhx5 fl/fl mouse brains. Since tissue loss in the mutant could bias the results, we collected the tissue at E10.5, before any reduction in MBO size is apparent in the mutant (Figure 4I). Unsupervised hierarchical clustering of genes that were at least 1.5-fold up- or downregulated with P < 0.05 yielded a heat map (Figure 6A) indicating that the global gene expression patterns in embryos of the same genotype clustered together and that wild type and mutant samples were clearly distinct. Likewise, principal component analysis showed that mutant and wild type samples separate well from each other (Figure 6B). In this way we detected 56 downregulated and 41 upregulated named genes (as opposed to not-yet-identified transcripts such as RIKEN clones) (Supplementary Table 1). After qRT-PCR validation we selected 15 candidates for further analysis (Figures 6C-E), including the cell fate determinants Tbx3, Olig2, and Otp as well as Lmo1, an interaction partner of LDB, the obligate cofactor of LHX proteins (Bach, 2000). In order to visualize the changes in spatial expression patterns, in situ hybridization analysis on tissue sections of E12.5 Lhx5 fl/+ and Lhx5 fl/fl embryos was performed (Supplementary Figure 3). All candidate genes downregulated in the Lhx5 fl/fl mutant (Figures 6C-E) were expressed in the Lhx5 fl/+ mamillary region (either in the neuroepithelium or in the mantle layer) and appeared reduced or absent in the mutant (Supplementary Figure 3). Some downregulated candidates showed complete loss of expression: Foxb2 (Supplementary Figures 3E,F), Ntm (Supplementary Figures 3Q,R), Lypd1 (Supplementary Figures 3S,T) and Cx36 (Supplementary Figures 3U,V).
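The candidate selection criterion described above (at least 1.5-fold change with P < 0.05) can be expressed as a simple filter (an illustrative sketch, not the actual bioinformatics pipeline; gene names and values in the usage below are invented):

```python
def select_candidates(rows, min_fold=1.5, alpha=0.05):
    # Keep genes at least `min_fold` up- or downregulated (expressed as
    # mutant/wild-type fold change) with p < alpha; a fold change below
    # 1/min_fold counts as downregulation.
    up, down = [], []
    for name, fold, p in rows:
        if p >= alpha:
            continue
        if fold >= min_fold:
            up.append(name)
        elif fold <= 1.0 / min_fold:
            down.append(name)
    return up, down
```

Genes with sub-threshold fold changes or non-significant p-values are dropped, mirroring the cut used to build the heat map.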
Others showed a strong reduction in labeling: Otp (Supplementary Figures 3G,H), Barhl1 (Supplementary Figures 3K,L), Nkx2.4 (Supplementary Figures 3C,D), Olig2 (Supplementary Figures 3I,J). Finally, other downregulated candidates showed pattern changes, like Tbx3 (Supplementary Figures 3A,B). Among the upregulated candidates was Wnt5a. The microarray data have been deposited in NCBI's Gene Expression Omnibus (Edgar et al., 2002) and are accessible through the GEO Series accession number GSE61614 and the link http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=yxgbokgqrvkdjif&acc=GSE61614.
LMO1 is a Possible Functional Antagonist of LHX5
In order to elucidate whether the regulatory interactions observed above are direct, we used chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) (Robertson et al., 2007). Performing this analysis on primary tissue would have been the best choice. This was, however, not possible, since there is no ChIP-grade antibody available against LHX5 (and the only antibody against LHX5 known to us also identifies LHX1). For this reason we chose to transfect a construct expressing a fusion protein of LHX5 plus a FLAG tag (which can be reliably identified with antibodies) into a stable cell line known to express Lhx5. Then we used the Tet-On Advanced Inducible Gene Expression System to regulate the Lhx5 expression level so that it mimics the natural expression level (see Materials and Methods for details). We identified 546 possible LHX5 binding sites, which we assigned to corresponding genes using the nearest-downstream method (peak annotation with PeakAnalyzer). We then analyzed these binding sites for enriched motifs using the DREME software (Bailey, 2011) and found a motif (Figure 7A) that is enriched in 32.18% of the binding sites and corresponds to a predicted LHX5 binding motif (Berger et al., 2008). Of the loci corresponding to qPCR-validated microarray candidates, three showed this motif: Lmo1, Tbx3, and Foxb2. We then performed luciferase assays to test whether these binding sites can regulate expression of their downstream genes; the results indicated that Lmo1, Foxb2, and Tbx3 are possible direct targets of LHX5 (Figure 7B). LIM-domain-only (LMO) proteins (like LMO1) can negatively regulate the function of LIM-HD transcription factors by competing with them for binding to the dimer of their obligate co-factor LIM domain-binding protein (LDB) (Bach, 2000; Chen et al., 2010).
When the two binding domains of the LDB dimer are occupied by two copies of a LIM-homeodomain protein (like LHX5), this protein is active as a transcriptional regulator. On the contrary, if one of the LIM copies is substituted by an LMO protein, the LIM-homeodomain transcription factor is not active anymore. The downregulation of Lmo1 that we observed in the Lhx5 mutant suggests that transcription of this negative LHX regulator, in turn, is activated by LHX5, thereby providing a negative feedback loop for Lhx5 ( Figure 7C). We used luciferase assays to test this hypothesis and found that LMO1 exerts dose-dependent inhibition of transcriptional activation from the LHX5 binding site (Figure 7D). In summary, we showed that Tbx3, Foxb2, and Lmo1 are possible direct targets of LHX5 and, additionally, that LMO1 negatively regulates LHX5 via a negative feedback loop.
Tbx3 has a Role in MBO Development
One of these genes, Tbx3, has an important role in the development of the hypothalamus (Manning et al., 2006;Trowe et al., 2013). We examined the expression of Tbx3 at three different medio-lateral levels in our mutants at E12.5 (Figures 8A-F) and found a strong reduction in the expression domain in the mutant. This reduction affected not only the rostro-caudal extension of the midline (Figures 8A,C vs. Figures 8B,D) but was also evident at more lateral levels (Figures 8E,F). We then hypothesized that Tbx3 is involved in MBO development, and on this basis predicted MBO defects in Tbx3-deficient brains. Examination of the hypothalamus of Tbx3 mutant mice (Hoogaars et al., 2007) at E14.5 showed a reduced MBO with abnormal morphology (Figures 8G-L) as well as a strong reduction in axonal projections (Figures 8M,N). Since the Tbx3 mutant embryos die before birth, usually around E14.5, we could not ascertain the possible total loss of the MBO at later stages.
The Shh Domain is Enlarged in Lhx5 Mutant Hypothalamic Midline
The reduction in Tbx3 is very intriguing, since this gene (Tbx2 in chicken) is needed to inhibit Shh expression in the tuberomamillary region (Manning et al., 2006;Trowe et al., 2013), an event indispensable for this region to differentiate (Manning et al., 2006). We hypothesized that the downregulation of Tbx3 in Lhx5 mutants would result in an abnormal expansion of the territory of Shh in the caudal hypothalamus. In situ detection of Shh expression confirmed that, in the Lhx5 mutants, the domain where Shh is normally downregulated becomes very small (Figures 9A,B). We obtained similar results at E12.5, when Shh expression is at its peak in this region, after which it starts to disappear (Figures 9C,D). We concluded that failure to completely inhibit Shh expression in the tubero-mamillary region is a possible mechanism explaining the MBO phenotype that we observe in the Lhx5 mutants.
The Foxb1 Lineage is Deficiently Specified in the Lhx5-deficient Caudal Hypothalamus

Other transcription factor genes downregulated in our Lhx5 mutant are Olig2 and Otp, known to be involved in MBO development (see Discussion), as well as Foxb2, Barhl1, Nkx2-4, and Arx. Since we did not detect changes in apoptosis or proliferation defects, it seems likely that the cells constituting the MBO primordium are present in the mutants but have lost their specific MBO identity. The subsequent loss of cell adhesion protein expression (Cx36-Gdj and Ntm) could underlie the loss of the morphological appearance of the MBO and make it undetectable. Since Foxb1 is an early marker of the mamillary neuroepithelium and the developing MBO (Kaestner et al., 1996; Alvarez-Bolado et al., 2000b), we used β-galactosidase detection in Foxb1-Cre;ROSA26R mice to reveal the MBO lineage. In Foxb1-Cre +/− ;Lhx5 fl/+ ;Rosa26R + embryos, a large caudal hypothalamic domain including the mamillary area was labeled at E13.5 (Figure 10A). In Foxb1-Cre +/− ;Lhx5 fl/fl ;Rosa26R + embryos, however, there is only a restricted, round domain formed by cells of the Foxb1 lineage at E13.5 (Figure 10B), corresponding in appearance and position to the MBO. We assume that these are abnormally undifferentiated cells originally fated for the MBO. Expression analysis of the specific MBO markers Foxb1, Lhx1, Sim1 and Sim2 (Figures 10C,E,G,I) showed strong downregulation in the mutant (Figures 10D,F,H,J). Additionally, the preserved expression of Lhx1 in regions other than the mamillary (Figures 10I,J) indicates that the result is specific. This result confirmed the presence of MBO cells with abnormal loss of identity in our mutant.
Overall Hypothalamic Regional Specification Appears Correct in the Lhx5 Mutant
To learn more about the extent of the changes observed in the Lhx5 mutant, we performed a general analysis with markers for adjacent regions as well as markers of the tuberal region. Hypothetically, Lhx5 might act in posterior hypothalamic progenitors to repress midbrain identity and at the same time promote expression of MBO markers. In that case, in the mutants, the rostral end of the ventral midbrain would abnormally extend into the mamillary region of the hypothalamus. We tested this hypothesis by detecting three genetic markers of the ventral midbrain whose expression patterns report the rostral extension of the ventral midbrain: Pitx2, tyrosine hydroxylase (Th), and Pitx3 (Figures 11A-F). The domains of these three markers were essentially unaltered in the mutants (see arrows in Figures 11A-F), arguing against any expansion of the midbrain domain into the Lhx5-deficient hypothalamus. Arx and Olig2 are markers of the prethalamus, an Lhx5-expressing region adjacent to the hypothalamus. Expression of both genes was maintained in the mutants (Figures 11G-J). The expression of Tbr1, a marker of the thalamic eminence, was not affected in the mutants either (Figures 11K,L). Finally, we explored the expression of several markers of the tuberal region. The expression of genes specific for some important hypothalamic nuclei, like SF-1 (Nr5a1) (a marker of the ventromedial nucleus of the hypothalamus) and Pomc (a marker of the arcuate nucleus), was not changed in the Lhx5 mutants (Figures 11M-P). Lef1, a marker of the boundary between the mamillary and tuberal regions, was also essentially unchanged in the mutants (Figures 11Q,R). Additionally, we performed apoptosis (Caspase3) and proliferation (Ki67) analyses on the hypothalamic tuberal region as well as the prethalamus, which showed no change in the mutant (not shown).
We concluded that the hypomorph allele of Lhx5 that we have generated does not cause a general defect in hypothalamus development; rather, its effects are mostly felt on the MBO. This said, we have also observed gross alteration of pituitary development in our mutants (not shown), probably due to the reduction in Tbx3 expression (Tbx3 is essential for pituitary development; Trowe et al., 2013).
Discussion
We attempted to elucidate the mechanisms downstream of Shh signaling by which the regions of the caudal hypothalamus acquire their identity. The expression domain of Lhx5 is appropriate for this gene to play a role in determining important properties of the caudal hypothalamus. Therefore, we generated a novel mutant allele giving rise to a hypomorph. Subsequent expression analysis with microarrays and other experiments provided us with a series of candidate genes involved in appropriate differentiation of the MBO. LHX5 regulates directly or indirectly the onset or the maintenance of expression of these genes and is therefore key to MBO development (Figure 12). Two major pathways known to be involved in the development of the tubero-mamillary region could be affected in the Lhx5 mutant. One of them includes the transcription factors Olig2 and Otp acting upstream of Sim1 and Sim2, and finally Foxb1 for differentiation and survival of the MBO. The second involves the restricted inhibition of Shh by Tbx3 to allow for tubero-mamillary differentiation.

FIGURE 12 | A possible network of Lhx5-regulated genes and interactions related to MBO development and differentiation. Genes downregulated (red arrows) or upregulated (blue arrows) according to microarray data validated by qPCR have been placed into 5 "bins". Bins 1-3 contain genes and interactions known to be essential for MBO development (bin 1) or for the development of the hypothalamus (bins 2 and 3). Bin 4 shows transcription factors specifically expressed in the MBO which have not been proven essential for its development. Bin 5 contains effector genes (adhesion, axonal guidance) expressed in the MBO and presumably involved in its differentiation. Lmo1 has a reciprocal regulatory relation with Lhx5 not shared with any of the other candidates. In gray, data from the literature.
Olig2 and Otp Act Upstream of Sim1 and Sim2 in MBO Development
Of the genes found in our expression analysis, some can be readily related to the MBO phenotype. We have divided them into five categories (Figure 12, bins 1-5). The first one contains the transcription factors Olig2 and Otp. Olig2 activates Sim1 in the developing zebrafish diencephalon (Borodovsky et al., 2009), and the expression domain of Sim1 was strongly reduced in our mutant (Figures 10E,F). Otp is required for Sim2 expression in the mouse hypothalamus (Wang and Lufkin, 2000). Both Sim1 and Sim2 are essential for MBO development (Marion et al., 2005), and additionally they are required to maintain MBO expression of the transcription factor Foxb1, which is itself required for MBO development and survival (Alvarez-Bolado et al., 2000a). This suggests that Lhx5 contributes to initiating or maintaining a transcriptional network that is required for the specification of the MBO.
Tbx3 is Required for MBO Development Downstream of Lhx5
In the chicken hypothalamus, Tbx2 is required to antagonize Shh in the tubero-mamillary region, thereby allowing it to acquire hypothalamic fate (Manning et al., 2006). The same function is performed by Tbx3 in the mouse (Trowe et al., 2013). We show here that Tbx3 is a possible direct target of Lhx5 (Figure 6B), that the Shh expression domain is inappropriately large in our mutant (Figures 9C,D), and that the Tbx3-deficient brain has an abnormal MBO (Figures 8G-L). Thus, Lhx5 is upstream of three pathways, all of which can independently cause the MBO defects observed in our mutants (summarized in Figure 12, bins 1-3).
Is Wnt Inhibition through Lhx5 Required for MBO Development?
In zebrafish (Kapsimali et al., 2004) and chicken (Manning et al., 2006), hypothalamic fate acquisition requires Wnt pathway inhibition. Lhx5 inhibits Wnt signaling by acting upstream of the Wnt antagonists Sfrp1a and Sfrp5 in zebrafish (Peng and Westerfield, 2006). Although to the best of our knowledge these interactions have not yet been confirmed in the mouse, our observation that Wnt5a (Figure 12, bin 2) is ectopically expressed in the MBO (Supplementary Figures 3C′,D′) seems to agree with them. This would suggest that a lack of antagonism of Wnt signaling in this region may lead to a failure to acquire the appropriate fate. On the other hand, the increase in Wnt5a that we observe is very small, and there are no signs of other genes of the Wnt pathway being altered in our mutant. For this reason we consider that this result suggests some Wnt involvement in MBO development, in general agreement with published data on other models, but this line of inquiry would need to be confirmed by further experiments in the mouse.
Deficient MBO Specification in the Lhx5 Mutants
The final result of these alterations could be a reduction in the region specified to produce MBO neurons as well as an imperfect differentiation of the MBO-fated neurons that could still be generated. Lhx5 mutants would have fewer MBO neurons and they would be incorrectly specified. This deficient specification would in turn cause loss of expression of specific markers (Figure 12, bins 4, 5). Furthermore, the genes in the fifth bin are involved in adhesion (Ntm, Cx36-Gdj) and axonal outgrowth (Shootin1), which suggests that the loss of MBO identity may additionally translate into a loss of specific aggregation of MBO neurons and loss of the characteristic mamillary axonal tree. The lack of changes in proliferation or apoptosis ( Figure 5) together with the persistence of a restricted group of Foxb1-lineage cells in the mamillary region (Figures 10A,B) are consistent with this hypothesis.
LMO1 is a Possible Direct Target and Antagonist of Lhx5
LMO proteins antagonize the function of LHX transcription factors by competing with them for binding to the LHX obligate partner LDB (Bach, 2000). Since we show that Lmo1 is a possible direct target of LHX5, we predict that Lmo1 and Lhx5 are arranged in a negative feedback loop and our luciferase assays confirm this prediction (Figure 7D). A similar mechanism is in operation in the developing thalamus between Lhx2 and Lmo3 (Chatterjee et al., 2012).
Zebrafish Vs. Mouse
The function of hypothalamic regulators seems to have been highly conserved during evolution, and zebrafish orthologs have similar roles to their mouse counterparts (Machluf et al., 2011). Therefore, it would be interesting to know if the proposed gene regulatory network is evolutionarily conserved and valid in other organisms. In zebrafish, the mamillary region is specified by the combined activity of the transcription factors Fezf2, Otp, Sim1a, and Foxb1.2 (Wolf and Ryu, 2013). At the stage when neuronal specification takes place, the expression domains of Fezf2, Otp, Foxb1.2, and Sim1a form distinct subdomains within the zebrafish mamillary region, giving rise to distinct mamillary neuronal subpopulations (Wolf and Ryu, 2013). Here we show that in the mouse Otp, Sim1, and Foxb1 are direct or indirect targets of LHX5, which is essential for MBO development. Fezf2 is expressed early in the developing forebrain and controls regionalization of the diencephalon in both zebrafish and mouse (Hirata et al., 2006; Jeong et al., 2007; Shimizu and Hibi, 2009; Scholpp and Lumsden, 2010). Furthermore, it is expressed in the mouse mamillary neuroepithelium, but not throughout the entire MBO (Allen-Institute-for-Brain-Science, 2009). Fezf1 and Fezf2 are responsible for the expression of Lhx5 in the subthalamus, and the double mutant mouse exhibits a hippocampal phenotype very similar to that of the Lhx5 mutant (Zhao et al., 1999; Hirata et al., 2006). However, the MBO was intact in this double mutant (Hirata et al., 2006). These results suggest that the pathways underlying hypothalamic regional development are conserved to a high degree.
Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (AL603/2-1). The Tbx3−/− material was kindly provided by Oliver Trowe and Andreas Kispert (University of Hannover, Germany) and used under permission from Vincent M. Christoffels (University of Amsterdam, Netherlands). Dr. TZ was supported by Chongqing Science and Technology Commission Grant cstc2014jcyjA10045.
Supplementary Material
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fnana.2015.00113
Supplementary Table 1 | List of results of microarray expression profiling.
Supplementary Figure 1 | Medio-lateral series of sagittal sections through the brain of mouse embryos (E12.5), genotypes as indicated at the top. For each genotype, the left column shows in situ hybridization for Lhx5 and the right column shows labeling of an adjacent section with anti-LHX1/5 antibody (at higher magnification). The arrows indicate the mamillary body primordium. Scale bars: left column (ISH), 500 µm; right column (antibodies), 100 µm.
Drivers for precision livestock technology adoption: A study of factors associated with adoption of electronic identification technology by commercial sheep farmers in England and Wales
The UK is the largest lamb meat producer in Europe. However, the low profitability of the sheep farming sector suggests production efficiency could be improved. Although the use of technologies such as Electronic Identification (EID) tools could allow a better use of flock resources, anecdotal evidence suggests they are not widely used. The aim of this study was to assess uptake of EID technology, and to explore drivers and barriers of adoption of related tools among English and Welsh farmers. Farm beliefs and management practices associated with adoption of this technology were investigated. A total of 2000 questionnaires were sent, with a response rate of 22%. Among the respondents, 87 had adopted EID tools for recording flock information, 97 intended to adopt them in the future, and 222 had neither adopted them nor intended to adopt them. Exploratory factor analysis (EFA) and multivariable logistic regression modelling were used to identify farmer beliefs and management practices significantly associated with adoption of EID technology. EFA identified three factors expressing farmers' beliefs: external pressure and negative feelings, usefulness, and practicality. Our results suggest farmers' beliefs play a significant role in technology uptake. Non-adopters were more likely than adopters to believe that 'government pressurises farmers to adopt technology'. In contrast, adopters were significantly more likely than non-adopters to see EID as practical and useful (p≤0.05). Farmers with higher information technology literacy and intending to intensify production in the future were significantly more likely to adopt EID technology (p≤0.05). Importantly, flocks managed with EID tools had significantly lower farmer-reported flock lameness levels (p≤0.05). These findings bring insights into the dynamics of adoption of EID tools.
Communicating evidence of the positive effects of EID tools on flock performance and strengthening farmers' capability in the use of technology are likely to enhance the uptake of this technology on sheep farms.
Introduction
The United Kingdom (UK) is the largest lamb meat producer in Europe and the fourth largest worldwide. Despite the great size of the British sheep breeding flock, sheep farming is traditionally a sector with lower profit margins than other livestock sectors such as dairy or pig farming [1][2][3][4]. Low margins coupled with heavy reliance on support payments [5] suggest there is room for increased production efficiency in the sheep farming sector. The low level of record keeping traditionally seen on sheep farms is likely a missed opportunity for the identification of less efficiently used farm resources [5,6]. Although the use of technologies such as Electronic Identification (EID) tools simplifies recording and retrieval of flock information and allows data-driven management decisions, anecdotal evidence suggests that their adoption has not been extensive, despite levy boards' promotional actions in that direction. However, uptake rates have not been formally investigated in the UK.
Historically, identification of sheep in the UK was done by tattooing, piercing the ear with plastic tags or cutting notches in the external pinna. However, the introduction of an EU regulation in 2010 made Electronic Identification (EID) of all sheep mandatory, and from 2014 onwards all sheep movements had to be reported to the Animal Reporting and Movement Service (ARAMS), an animal movement database launched by DEFRA (Department for Environment, Food & Rural Affairs). Electronic identification of individuals allows effective animal movement tracking in the event of a disease outbreak, and supports individual flock management with potential benefits with regard to labour efficiency [7]. EID identifiers (ear tags, boluses or pastern bands) contain a low radio frequency microchip with a unique identification number, which can be retrieved with an EID reader from up to 20 cm away. More advanced EID reader devices allow quick access to previous records and insertion of new data in the field. Electronic identification tag readers are an example of a "Precision Livestock Farming" (PLF) technology, a farm management concept developed in the mid-1980s that comprises the set of tools and methods available for an efficient use of livestock resources [8][9][10][11]. EID-recorded information can be used for informed decision making on several aspects of flock management, such as breeding (i.e. selecting individuals with desirable genetic traits), health (i.e. lameness, particularly with respect to culling repeatedly lame sheep), nutrition (i.e. facilitating the grouping of animals with similar body condition scores and tailoring their diet), and performance and welfare (i.e. monitoring weight gains and individual welfare outcomes) [5,12].
Despite these benefits, little is known about the use of EID technology as a management tool on sheep farms in the UK and, to the authors' knowledge, there is no peer-reviewed publication on farmers' views and opinions on this technology.
Technology acceptance and uptake is complex and influenced by a variety of factors such as socio-demographics (age, education), financial resources and farm size, with these variables having different effects on adoption. Several theories have aimed to explain adoption of technology over the past few decades: the Theory of Reasoned Action (TRA) [13], the Technology Acceptance Model (TAM) [14], the Theory of Planned Behaviour [15][16][17], the Diffusion of Innovation (DOI) Theory [18], and the Technology Readiness Index [19]. These models mainly focus on a technology's 'internal' factors and individual perceptions related to those internal factors, while ignoring any external influences (e.g. contextual, government, market). Whilst these generic models have been extensively used to explore technology adoption in sectors such as health and information systems, their usability in explaining technology adoption has not been explored widely for precision livestock farming, and specifically the effect of both internal and external influences on adoption has not been investigated. Moreover, there are no studies on UK sheep farmers' beliefs about adoption of technology.
The aims of this research were to i) explore uptake of, and sheep farmers' beliefs about, EID technology for flock management in the UK, ii) explore the association between adoption of EID technology and farmers' beliefs and other farmer and farm characteristics, and iii) investigate the association between use of EID technology and levels of lameness on farms, as a health outcome measure.
Study sample
A total of 2000 sheep farmers from England and Wales were sent a postal questionnaire in September 2015 enclosed with a cover letter explaining the aim of the study and data confidentiality. Commercial sheep farms supplying lamb deadweight to a major abattoir were contacted via postal mail. Farmers were invited to answer the questionnaire using the prepaid envelope enclosed with the questionnaire, and participate in a free draw with the winner receiving an iPad. To increase response rates, one reminder was sent to those farmers who had not yet answered the questionnaire.
Questionnaire design
The questionnaire was eight pages long and had five sections (text in S1 Questionnaire). Section 1 was designed to collect data on farmer and farm characteristics. It included information on years farming sheep, the farmer's age, other enterprises on farm (i.e. beef, dairy, arable, other), self-reported information technology (IT) knowledge, technology used at home and on farm, internet use, percentage of time spent managing sheep, number of part- and full-time workers on farm, and land altitude. Section 2 aimed to gather data on flock production from September 2014 to August 2015. It included questions about flock size, production information such as pregnancy scanning percentage, number of lambs sold, number of lambs retained as replacements, number of lambs retained as stores, number of ewes culled, reasons for culling sheep, and questions on whether business changes had been made in the past year and whether changes were intended over the next two years. Section 3 asked farmers to estimate flock lameness in terms of prevalence during four periods of the past year (as previous research indicated farmers can estimate prevalence levels similarly to a lameness researcher [20,21]), and frequency of use of individual treatments, including treatment with antibiotic injection, considered best practice when treating lame sheep [22,23]. Section 4 included questions on how farmers recorded information on farm, EID use and the type of EID technology used by the farmer. Section 5 included 21 belief statements related to farmers' opinions and beliefs about the use of EID for flock management. Twenty-one statements were developed from Technology Acceptance Model and Technology Readiness Index constructs [24] and previous work by the researchers (Kaler and Green, 2013). Farmers were asked to answer the statements using a 5-point Likert-type scale (1 = 'disagree strongly', 2 = 'disagree', 3 = 'neither agree nor disagree', 4 = 'agree' and 5 = 'agree strongly').
The questionnaire was pilot-tested on five farmers, and improvements were made accordingly before it was sent out to the study sample.
The study was approved by the School of Veterinary Medicine and Science Ethics Committee (no. 1167 140528).
Data analysis
The data were analysed anonymously. The responses from the questionnaire were entered into the database software Microsoft Access and checked for errors. Data analysis, including descriptive analysis, exploratory factor analysis and multivariable logistic regression modelling, was completed in Stata 14 (StataCorp, USA). Sections 1-5 were analysed descriptively using means, medians and frequencies depending on the nature of the variable. A Kruskal-Wallis test was used to investigate whether there was a significant association between flock lameness levels and farmers' use of EID technology. All usable data were used in the analysis.
2.3.1. Exploratory factor analysis of farmers' beliefs. An exploratory factor analysis (EFA) was performed on the farmers' belief statements. EFA is used to identify latent constructs underlying a set of related items [25]. Some checks were performed prior to the analysis. The Kaiser-Meyer-Olkin (KMO) test was applied to each individual item to assess sampling adequacy (>0.5). The Bartlett test of sphericity (χ², p<0.05) was performed to test for the existence of relationships among variables, and the appropriateness of the correlation matrix was checked by observing a systematic covariation among the items [26]. After these checks, factor analysis followed by oblique rotation (promax) of the factors was performed to permit a degree of correlation between factors [25,26]. A scree test, based on eigenvalues of the reduced correlation matrix, was performed to aid in deciding the number of factors to be retained [25,26]. Variables with low reliability (i.e. uniqueness>0.7) and with high cross-loadings were discarded [26]. The exploratory factor analysis and rotation were rerun with the selected variables, and the final solution achieved. For each set of items per factor, Cronbach's alpha and the inter-item covariance were checked to test for internal consistency [27,28].
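As a concrete illustration of the internal-consistency check described above, the sketch below computes Cronbach's alpha from scratch in Python (not the authors' Stata code); the Likert responses for a three-item factor are entirely invented for the example.

```python
# Illustrative sketch (hypothetical data, not the study dataset):
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's responses."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]          # per-respondent total
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented 5-point Likert answers of six farmers to three belief items
practicality = [
    [4, 5, 2, 4, 3, 5],
    [4, 4, 2, 5, 3, 5],
    [5, 5, 1, 4, 2, 4],
]
print(round(cronbach_alpha(practicality), 3))
```

Values close to 1 indicate that the items measure the same underlying construct; the α values reported in the Results (e.g. 0.921 for 'practicality') were computed on the real questionnaire data.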
Logistic regression modelling.
Two multivariable models were built to explore associations between farmer beliefs (Model 1), farm/farmer characteristics (Model 2) and adoption of EID technology by farmers (outcome variable). Depending on a farmer's reported intention to continue using, or intention to adopt, EID technology for farm management in the following year, they were allocated to one of three groups. The first group was composed of farmers who intended to continue using the technology ('adopters'), the second of farmers intending to adopt it ('intenders'), and the third of farmers neither using it nor intending to adopt it in the future ('non-adopters'). For modelling purposes the first two groups were merged, after confirming that there were no significant differences between these groups with regard to beliefs. Model 1. Multivariable logistic regression was performed to model adoption/intention to adopt EID-recorded information for flock management, using the factors resulting from the EFA as explanatory variables. For the predictor variables, each factor had scores which were computed using a non-refined method of weighted sum scores, taking into consideration the strength or lack of strength of each factor's items [29].
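The non-refined weighted sum scoring can be sketched as follows; the loadings and the farmer's answers are hypothetical illustrations, not values from the study's Table 2.

```python
# Illustrative sketch (invented numbers): a non-refined "weighted sum" factor
# score multiplies each respondent's item response by that item's factor
# loading and sums over the items belonging to the factor.

def weighted_sum_score(responses, loadings):
    return sum(r * l for r, l in zip(responses, loadings))

# Hypothetical loadings for three items of one factor, and one farmer's
# 5-point Likert answers to those items.
loadings = [0.78, 0.94, 0.85]
answers = [4, 5, 3]
score = weighted_sum_score(answers, loadings)
print(round(score, 2))  # 4*0.78 + 5*0.94 + 3*0.85
```

Each farmer thus receives one continuous score per factor, and these scores enter the logistic regression as predictor variables.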
A manual forward stepwise selection was performed [30]. Variables with p-values ≤0.05 were retained in the model and considered significant.
The model took the form:

logit(P(adoption/intention to adopt EID-recorded information for flock management)) = α + βXj + ej

where α is the intercept, logit is the link function, βXj is a series of psychosocial factors/beliefs, and ej is the residual random error, which follows a binomial distribution. Model 2. Multivariable logistic regression was performed to model adoption/intention to adopt EID-recorded information for flock management, using farm and farmer characteristics as explanatory variables. Variables with p-values ≤0.05 were retained in the model and considered significant. A stepwise model building approach was used; variables with p-values ≤0.05, or considered confounders or important from previously published work, were retained in the model [30]. The model took the form:

logit(P(adoption/intention to adopt EID-recorded information for flock management)) = α + βXj + ej

where α is the intercept, logit is the link function, βXj is a series of explanatory variables based on farm and farmer characteristics, and ej is the residual random error, which follows a binomial distribution.
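A minimal sketch of how a fitted model of this form converts the linear predictor into an adoption probability. The intercept and factor scores below are invented; the coefficients are simply the natural logs of the odds ratios reported for Model 1 in the Results (1.18, 1.22, 0.73), used here purely for illustration.

```python
# Illustrative sketch (made-up intercept and scores, not the fitted model):
# log-odds of adoption = alpha + sum(beta_j * x_j); the predicted probability
# is the inverse logit of this linear predictor.
import math

def predict_adoption(alpha, betas, xs):
    eta = alpha + sum(b * x for b, x in zip(betas, xs))  # linear predictor
    return 1.0 / (1.0 + math.exp(-eta))                  # inverse logit

alpha = -1.0                                             # hypothetical intercept
betas = [math.log(1.18), math.log(1.22), math.log(0.73)] # log of reported ORs
xs = [10.4, 18.0, 6.0]                                   # hypothetical factor scores
p = predict_adoption(alpha, betas, xs)
```

With no predictors the model returns the baseline probability implied by the intercept alone (0.5 when α = 0).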
For Model 1 and Model 2, the Pearson chi-square test was used to investigate associations between categorical variables, and non-parametric tests were used to investigate associations between continuous and categorical variables [30].
Results
A total of 439 out of 2000 questionnaires were received, generating a usable response rate of 22% (data in S1 Dataset).
Farmer and farm information
The majority of farmers were between 46 and 55 years old (57%, 246/435) (Fig 1) and half of the farmers (213/429) classified their IT knowledge as "medium" (Fig 2).
Seventy-seven per cent of farmers (327/423) used the internet for web browsing, email, or social networks (Twitter/Facebook), 10% of farmers reported other uses of the internet, and about 13% did not use the internet at all. Out of 435 farmers, approximately 46% used a smartphone (Android or iPhone) at home, but only 31% used one on farm. Forty-eight per cent (193/403) of the farms were located in the uplands, 37% in the lowlands and 15% in the hills. Seventy per cent (295/422) of the farms were located in Wales, while the remaining 30% were in England. Median flock size reported was 500 breeding ewes (IQR 250-850), and median scanning percentage was 160% (IQR 140-180) (363 observations). Most farmers had a beef enterprise on farm besides sheep (Fig 3).
Twenty-eight per cent (111/398) of farms hired one full-time worker, and 14% and 4% of farms hired two and three full-time workers, respectively, during the same period. Eighty-one per cent of farmers (348/429) housed sheep at least once from September 2014 to August 2015. The median number of lambs sold was 550 (IQR 278-1000) (401 responses). Regarding flock health management, ewe tooth loss was indicated by 81% (352/437) of farmers as a reason for selecting ewes for culling, followed by mastitis (70%), infertility (47%), lameness (32%), poor condition (30%) and low productivity (17%). One tenth of farmers indicated other reasons for selecting ewes for culling (i.e. prolapse, abortion (EA and Toxoplasmosis), high cull price and poor lamb prices). Twenty-six per cent (113/439) of farmers reported an intention to increase breeding flock size in the following two years, while 10% of farmers intended to decrease breeding flock numbers.
Recording information on farm and use of EID technology
Seventy-three per cent (322/439) of farmers used a notebook/diary to record information on farm, 34% (148/439) used a computer, 10% (45/439) used a smartphone, 16% (70/439) used a piece of paper, and 5% (24/439) used a tablet or personal digital assistant to record flock data. Almost all flocks (99%, 417/420) used EID ear tags, with only one flock identified with a bolus, and another with both a bolus and an ear tag. Fifty-two per cent (221/423) of respondents had an EID reader on farm. Of those, the handheld EID reader was the most common type, present on 99% of farms (219/221). Four farmers had both types (static and handheld), and only two farmers had a static reader only. Forty-eight per cent (61/126) used it for managing both ewes and lambs, 40% (50/126) used it for ewes only, and 12% (15/126) used it exclusively for lamb management purposes.
A total of 87 farmers (21%) reported using EID technology for management purposes and intended to continue using the technology ('adopters'); 97 farmers (24%) reported an intention to adopt the technology ('intenders') and 222 farmers (55%) reported neither using nor intending to adopt EID technology for management purposes in the future ('non-adopters'). There was no significant difference (p>0.05) between the 'adopters' and 'intenders' groups with regard to their belief statements, and therefore these groups were merged. Thus the resulting groups were: farmers who had adopted/intended to adopt EID for flock management (n = 184), and farmers with no intention of adopting EID for flock management in the future (n = 222).
Farmers beliefs on data recording and results of exploratory factor analysis
The number of respondents per belief statement, and the proportion of farmers strongly agreeing, agreeing, neither agreeing nor disagreeing, disagreeing and strongly disagreeing with statements on use of EID technology, is presented in Table 1.
The EFA run on the 21 belief statements resulted in three factors. The belief statements composing each factor and the corresponding loading values can be seen in Table 2. Three belief statements loaded on the first factor, hereafter called 'practicality' (α = 0.921), as it included beliefs related to practical elements of the technology; three statements loaded on the second factor, 'external pressure and negative feelings' (α = 0.877), which combined external pressure with negative feelings towards technology, such as feelings of added complexity or distrust; and seven statements loaded on the third factor, 'usefulness' (α = 0.653), which included beliefs about the benefits of the technology.
Multivariable analysis of factors associated with adoption/intention to adopt EID technology for flock management
Model 1. All three factors ('practicality', 'external pressure and negative feelings', and 'usefulness') were significantly associated with adoption/intention to adopt EID technology for flock management (Table 3).
Logistic regression results are interpreted in terms of odds ratios (OR). The OR represents the odds that an outcome (in this case adoption of EID technology) will occur given a particular variable/factor (in this case farmers' attitudes), compared to the odds of the outcome occurring in the absence of that variable/factor [31]. In summary, the odds ratio can be seen as a measure of effect [30]. Farmers who placed more value on the convenience, time and ease of use of EID technology (i.e. with higher scores on the 'practicality' factor) were 1.18 times (95% CI 1.02-1.36) more likely to adopt EID technology for management relative to farmers with lower scores on that factor. The same effect was seen with regard to the 'usefulness' factor, so that the more strongly farmers believed in the usefulness of EID technology in terms of benefits related to health, productivity, veterinary consultation, abattoir feedback, traceability and breeding value, the more likely they were to adopt it (OR: 1.22, 95% CI 1.10-1.35). In contrast, the more external pressure and negative feelings (e.g. feeling overwhelmed by complexity or scepticism about the future ability of technology) farmers felt towards the technology, the less likely they were to adopt EID technology (OR: 0.73, 95% CI 0.61-0.87) (Table 3). All three factors were significantly correlated with each other, with factors 1 and 3 positively associated with each other and both negatively associated with factor 2. Model 2. Farm or farmer characteristics significantly associated with adoption or intention to adopt EID-recorded information for flock management were: IT knowledge, use of a smartphone to record information on farm, intention to intensify production in the next two years, time spent with the flock from September 2014 to August 2015, and always using an antibiotic injection to treat lame ewes from September 2014 to August 2015 (Table 4).
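As a toy illustration of the odds-ratio interpretation used above (all counts are invented, not study data), the ratio can be computed directly from a 2x2 table of outcome versus a binary exposure:

```python
# Illustrative sketch (hypothetical counts): OR = (a/b) / (c/d) for a 2x2
# table of adoption vs. a binary exposure (e.g. high vs. low factor score).

def odds_ratio(a, b, c, d):
    """a: exposed adopters, b: exposed non-adopters,
    c: unexposed adopters, d: unexposed non-adopters."""
    return (a / b) / (c / d)

# Invented counts for demonstration only
print(round(odds_ratio(60, 40, 30, 60), 2))  # (60/40) / (30/60) = 3.0
```

An OR above 1 means the exposure is associated with higher odds of adoption; the model-based ORs reported above are the exponentiated regression coefficients, adjusted for the other variables in the model.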
IT knowledge, use of smartphone to record information and intention to intensify production were positively associated with 'practicality' and 'usefulness' factors and negatively associated with 'external pressure and negative feelings' factor.
Association between use of EID technology and lameness levels
Farmers using EID technology ('adopters') for management from September 2014 to August 2015 had significantly lower flock lameness levels (median 5, IQR 2-6) compared to farmers who did not intend to adopt the technology ('non-adopters') (median 5, IQR 3-10) and farmers intending to adopt it in the future ('intenders') (median 5, IQR 4-10) (χ² = 10.91, p = 0.005). Fig 4 presents the framework obtained from our results. Farmers with high IT knowledge, using a smartphone to record information on farm, and with an intention to intensify production were more likely to have adopted/intend to adopt EID tools to record flock information than farmers without these characteristics. Farmers who had adopted/intended to adopt EID technologies were more likely to perceive them as practical and useful than non-adopters. On the contrary, the external pressure and negative feelings factor was negatively associated with uptake of EID technologies.
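The Kruskal-Wallis comparison of lameness levels across the three farmer groups can be sketched as below. The group data are invented, and the tie-correction divisor is omitted for brevity (tied values still receive average ranks); the study itself used Stata's implementation.

```python
# Minimal sketch of the Kruskal-Wallis H statistic on invented
# lameness-prevalence data for three groups:
# H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),  R_i = rank sum of group i.

def kruskal_h(groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {}
    i = 0
    while i < len(pooled):                       # assign average rank to ties
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2.0      # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    s = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

# Hypothetical per-farm lameness prevalence values (%)
adopters = [2, 3, 5, 5]
intenders = [4, 5, 9, 10]
non_adopters = [3, 6, 10, 12]
print(round(kruskal_h([adopters, intenders, non_adopters]), 2))
```

The resulting H is compared against a chi-square distribution with (number of groups − 1) degrees of freedom to obtain the p-value.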
Discussion
To the authors' knowledge, this is the first study exploring farmers' beliefs towards EID-related technology. One of the key and novel findings in this study is that the 'external pressure and negative feelings' factor appears to be significant in the adoption of technology, in addition to the practicality and usefulness aspects of technology, the two constructs most frequently studied in technology adoption [32][33][34]. This factor included beliefs that negatively impacted adoption; that is, farmers who felt under pressure to adopt technologies were less likely to adopt EID-recorded information for flock management. These farmers were more likely to see EID technology as an extra burden for farmers and as complex, and had higher levels of distrust and scepticism towards current technology. This is consistent with the 'Technology Readiness Index' (TRI) paradigm, which argues that discomfort and insecurity towards a technology act as inhibitors of acceptance and have a negative relationship with technology adoption [19,24]. There is anecdotal evidence that legislation related to the implementation of sheep EID in the UK was not well accepted among some farmers, who saw it as an extra bureaucratic burden with no clear benefits. This is also indicated by the results of the current study: even though all farmers were complying with legislation by having EID tags for their flock, only 53% of farmers were further utilising the opportunity to use EID technology for management by purchasing or owning EID readers. Furthermore, only 21% were actually using the EID technology for management purposes. This indicates that, despite investment, a low proportion of farmers are using this technology for management purposes. There could be several factors explaining this.
First, as mentioned above, legislation involving a mandatory aspect of EID tagging lacked the overall approval of the sheep industry, which may have generated negative perceptions, exacerbated feelings of pressure among farmers, and contributed to reluctance in adopting any EID equipment for management. Science and technological innovations are shaped by the social and political context they are developed within [35]. People's views on this social and political context influence their views of the technology [36,37]. Farmers feel that they are overburdened with regulations and audits from industry and government, and that mechanisms for auditing farmers are also often ineffective [38]. The correlation between farmers' views that there is too much pressure on them to adopt new technologies and that EID adds to the complexity of their information gathering demands (factors that relate to the compulsory use of EID for traceability), and how likely they are to adopt the technology for their own management purposes, suggests that some farmers are being influenced by what they perceive as the negative political connotations of EID. Incorporating farmers' own forms of expertise and experience into the design of technologies [38] and into measures to improve disease management [39] can foster trust and give farmers more ownership over disease management, rather than top-down measures which farmers might find problematic. Similar approaches utilising principles of co-production have been used in health care for technology adoption [40].
Secondly, it is possible that feelings of external pressure, further compounded by a lack of published evidence and validated case studies on the beneficial effects of EID technology, are responsible for generating negative feelings among farmers with regard to added complexity and distrust in technology. However, farmers who had better IT knowledge and were already using a smartphone to record information were less likely to have these negative feelings and more likely to adopt technologies. This suggests that one way to negate these negative feelings might be to enhance the IT capability of farmers.
In the current study, two other factors, 'practicality' and 'usefulness', were significantly associated with adoption of EID technology; i.e. farmers who perceived EID related technology as useful and practical were significantly more likely to adopt or intend to adopt it. These results are consistent with the "Technology Acceptance Model", which argues that "perceived ease of use" and "perceived usefulness" are key predictors of technology adoption [32,41]. Previous research on the adoption of technologies in the agricultural field has reported similar results [34,42]. The importance of designing technologies that are easy to use and useful for farmers has been previously highlighted [38]. Messages focusing on the beneficial effects and the ease of use of EID technology may strengthen technology uptake.
Cost of the technology was an important factor across all the groups (adopters, intenders and non-adopters), as only 9% of farmers disagreed or strongly disagreed with cost as important. The cost of an EID reader will depend on the complexity and features of the model, and current prices vary between approximately £300 and £1000. Lack of resources (financial or other) is a well-known barrier to the adoption of PLF tools [42][43][44][45]. However, the technology adoption decision is frequently reported to be influenced by an assessment of the 'cost effectiveness' of the tool [46,47], and for this reason it would be expected that all farmers would rate the importance of financial cost for the adoption decision highly, along with productivity and time saving gains. These results suggest that both adopters and non-adopters consider the 'absolute' cost of the tools an important factor in the adoption decision, possibly due to the low profit margins in sheep farming seen in recent decades.
One interesting finding of this study was that farmers from both groups (non-adopters and adopters/intenders) tended to disagree with the statement "Adoption of EID by other farmers is important to my decision to use EID recording for farm management" (only 20% of farmers agreed or strongly agreed, with no significant difference between groups), suggesting 'social pressure' is not influential in adoption. This contradicts the findings of Kutter et al. (2011), who collected farmers' opinions about the use of PLF tools and concluded that other farmers are regarded as very important for promoting interest in the topic. Other studies, however, pointed out that technology adoption is a highly individualistic process, conducted according to a farmer's personality and experience, among other factors [48], and this may explain the results in the current study.
The most important farmer characteristics predicting adoption of EID recorded information for flock management were the farmer's IT literacy and use of smartphone technology. This is not surprising, since PLF technologies are 'data intensive', and farmers with lower levels of IT literacy may struggle to manage and efficiently use large amounts of collected data [49,50]. Moreover, farmers already using technology (i.e., smartphone or computer) may find the introduction of new technology on farm compatible with existing practices. Compatibility with farming operations, equipment, and routines has been shown to have a significant effect on farmer perception of ease of use of technology, and indirectly on technology adoption [42].
Intention to increase production in the future was also significantly associated with adoption of EID related technology. Similarly, intensity of production was observed to be associated with adoption of precision farming technologies among Irish dairy farmers in a recent study [51]. This is in line with previous research indicating a relationship between adoption of new technologies by farmers and attitude towards investment and risk [45]. The proportion of labour time spent by a farmer in managing the flock in the previous year was positively and significantly associated with adoption of EID technology. This could be due to the fact that time spent could facilitate familiarity with technology which could then enhance confidence and influence perception of ease of use and perceived benefits. Off-farm employment has been negatively associated with the adoption of precision farming tools among US farmers due to lack of time to gain familiarity [52,53].
Our results show that other sociodemographic factors known to influence technology uptake, such as age or enterprise size, did not significantly influence adoption of EID technology. The effect of age on adoption of technology has been variable, with some studies identifying it as a significant factor, reporting poorer adoption with increasing age [54,55], whilst others suggest age is not a barrier to adoption [42,53].
Previous research has also reported contradictory results with regards to enterprise size: while Aubert et al. (2012) reported no association between technology adoption and enterprise size, several other studies have reported a positive relationship [33,52,53,56,57]. It is important to emphasize that flocks in the current study were commercial breeding flocks with a median flock size of 500 which is larger than average flock size in the UK [58].
The use of EID technology for flock management was significantly associated with lower lameness levels. Lameness levels used in this study were estimated and reported by the farmers and fit closely to recent estimates of lameness prevalence [20]. Lower lameness levels could be due to the fact that EID recorded information can be utilised to record individual sheep treatments and identify lame animals for isolation and culling, which is recommended best practice to reduce flock lameness levels [12,23]. Farmers using EID technology may also be more aware of the lameness levels of their flock, in contrast to farmers not using it. Farmers who rely on memory to identify sheep for culling have been previously reported to have a higher relative risk of lameness [20]. All this suggests that EID technology could act as an important tool for the management and control of lameness. Farmers always treating lame sheep with antibiotics (i.e. following one of the recommended practices to reduce lameness) [22] were also significantly more likely to be adopters of EID technology. This suggests that this group of farmers is perhaps more open to new innovations and has positive perceptions towards technology due to the associated health and welfare benefits.
The selected sample for the survey was not random per se, but the sample list had commercial farmers distributed across England and Wales and there was no difference between respondents and non-respondents with regard to location. There is still a possibility that the results, especially regarding the absolute distribution of adopters and non-adopters, are not representative of the whole of the UK or the entirety of England and Wales. However, this is less likely to affect the associations between the factors and adoption of EID technology. Importantly, the framework of factors associated with adoption of EID technology as presented in this study does not imply causation. The likely impact of these factors on adoption needs to be tested further in intervention studies and in confirmatory factor analysis.
One of the disadvantages of collecting data on beliefs by questionnaire is that there may be a self-report bias. However, as recommended in the literature, actions were taken to reduce this bias and increase the validity of the questions (i.e. phrasing belief statements in a non-judgmental way, and assurance that responses would remain confidential and anonymous) [59].
The results of the current study give us insight into what factors influence adoption of EID technology on farms and can be used to target actions to positively influence uptake by farmers. We believe our results also have a wider application to adoption of technology in general, and raise interesting questions on the inclusion of external pressures and negative feelings felt by farmers in adoption models. We need further work to explore how beliefs related to feelings of discomfort, distrust and external pressure are being formed in the farming community and investigate which specific functionalities of EID technology act as barrier for farmers (such as reading of the tags, use of software, or others) to further enhance adoption.
Conclusion
In this study English and Welsh sheep farmers' perceptions and their underlying beliefs towards EID technology were captured for the first time, giving new insights into barriers and drivers of adoption of this kind of technology. We conclude that the adoption of EID technology is influenced by three correlated factors: 'practicality', 'usefulness' and 'external pressure and negative feelings'. Well-communicated evidence of the positive effects of EID technology on farm performance and the health and welfare of the flock, co-production of EID technology services involving farmers, and enhancing farmers' capability in the use of technology are likely to enhance both farmers' trust in the technology and its subsequent adoption. However, EID technology must be practical and cost effective. Factors such as age, farm type (upland or lowland) or size of farm seem to be less important for the adoption of EID technology.
On the performance of fusion based planet-scope and Sentinel-2 data for crop classification using inception inspired deep convolutional neural network
This research work aims to develop a deep learning-based crop classification framework for remotely sensed time series data. Tobacco is a major revenue generating crop of the Khyber Pakhtunkhwa (KP) province of Pakistan, with over 90% of the country’s Tobacco production. In order to analyze the performance of the developed classification framework, a pilot sub-region named Yar Hussain is selected for experimentation work. Yar Hussain is a tehsil of district Swabi, within the KP province of Pakistan, having the highest contribution to the gross production of the KP Tobacco crop. KP generally consists of diverse crop land with different varieties of vegetation having similar phenology, which makes crop classification a challenging task. In this study, a temporal convolutional neural network (TempCNN) model is implemented for crop classification, while considering remotely sensed imagery of the selected pilot region with a specific focus on the Tobacco crop. In order to improve the performance of the proposed classification framework, instead of using the prevailing concept of utilizing a single satellite imagery, both Sentinel-2 and Planet-Scope imageries are stacked together to provide more diverse features to the proposed classification framework. Furthermore, instead of using single date satellite imagery, multiple satellite imageries with respect to the phenological cycle of the Tobacco crop are temporally stacked together, which resulted in a higher temporal resolution of the employed satellite imagery. The developed framework is trained using the ground truth data. The final output is obtained as an outcome of the SoftMax function of the developed model in the form of probabilistic values for the classification of the selected classes. The proposed deep learning-based crop classification framework, while utilizing multi-satellite temporally stacked imagery, resulted in an overall classification accuracy of 98.15%.
Furthermore, as the developed classification framework evolved with a specific focus on the Tobacco crop, it achieved a best Tobacco crop classification accuracy of 99%.
Introduction
Seasonality is one of the most important characteristics of vegetation. Multi-temporal remote sensing is an effective source to monitor and observe growth dynamics for the classification of vegetation and to analyze spatio-temporal phenomena (trends and changes) over time, using time series data [1]. As remote sensing time series data is being generated at a high scale and enormous rate, it is necessary to fully utilize its characteristics for effective land-cover classification. Such remote sensing data has rich content of seasonal patterns and their relative sequential association, which can be very beneficial when performing classification [2]. The additional information provided by remotely sensed time series data has resulted in increasing interest from researchers and academia in processing time series data to extract features for retrieving useful information about the conditions and growth patterns of vegetation [3]. While existing approaches to temporal feature extraction offer various ways to represent vegetation dynamics, in practice finding an effective and appropriate approach is not an easy task [4]. Artificial neural networks, inspired by human biological learning systems, are widely used machine learning algorithms. They have shown promising performance in diverse fields such as automatic decision support in medical diagnosis and classification of archaeological artifacts [5,6]. In [7], the use of temporal features for Tobacco crop estimation and detection using a feed-forward neural network resulted in more than 95% accuracy.
Deep learning models, or deep artificial neural networks (ANNs) with more than two hidden layers, have ample model sophistication to learn end-to-end data representations rather than relying on manual feature engineering based on human experience and knowledge [8,9]. Deep learning has been seen in recent years as a breakthrough technology in machine learning, data mining and remote sensing science [10]. Due to the versatility of deep learning models, expert-free automated learning, computational efficiency, and feature representation, studies related to image classification take great advantage of deep learning [11]. Crop classification by deep CNNs shows improved performance in comparison to traditional machine learning methods [12]. A comprehensive study has been conducted to evaluate the performance of a temporal deep convolutional model (TempCNN) using satellite time series data [13]. They compared the performance of TempCNN to a traditional machine learning algorithm, Random Forest, and a deep learning approach, the Recurrent Neural Network (RNN), which is suitable for temporal data. Their results show that TempCNN is more accurate in classifying satellite time series data than the other state-of-the-art approaches. For deep learning-based studies of different vegetation, a novel domain-specific dataset, CropDeep, is introduced in [13]. The images in the dataset are collected by different cameras, IoT devices and other equipment. Further, different deep learning models are applied and their performance compared. They concluded that although deep learning algorithms perform well when classifying different crops, there is still room for improvement of the algorithms. A transformer architecture for embedding time-sequences is adopted in [14,15] in order to exploit the temporal dimension of time series data.
They proposed the replacement of the convolutional layer with encoders that operate on unordered sets of pixels to exploit the coarser resolution of publicly available satellite images. Their method achieves decreased processing time and memory requirements, along with improved precision. Another deep learning approach, the long short-term memory (LSTM) network, has been employed to utilize the temporal characteristics of time series data for the land cover classification task [16]. The LSTM network, which originated in text and speech generation, was applied to earth observation. They compared its performance with Support Vector Machines (SVM), RNNs and a classical non-temporal CNN and achieved state-of-the-art performance. The advantage of CNNs compared to traditional machine learning techniques is evident from [17,18], where independent training of Random Forest, a traditional machine learning algorithm, and a CNN, a deep learning algorithm, is studied, and it is concluded that the performance of the CNN is better in terms of speed as well as accuracy. Furthermore, in [19], the effectiveness of the Conv-1D model in classifying crop time series temporal data representations is studied and found efficient.
CNNs have become the de facto standard for various machine learning tasks, especially studies related to image classification, during the past decade. Deep CNNs with multiple hidden layers and millions of parameters have the ability to learn complex patterns and objects, provided they are trained appropriately on a massive dataset with ground truth labels. With proper training, this ability to learn complex patterns makes them a feasible tool in different machine learning applications for 2D signals such as images and video frames.
Related work
Our proposed study is a pixel-based classification model that takes spectral and temporal features into consideration. Several studies have been conducted in the domains mentioned in the previous section that used TempCNNs and RNNs. The most relevant and comparable to our study was conducted by C. Pelletier et al. [13]. They carried out an exhaustive study to establish TempCNNs as one of the best candidates for classification of crop types using multi-temporal satellite imagery. The Red, Green and NIR spectral bands, along with some spectral indices (IB, NDVI and NDWI), were used as input to the CNN, whereas 13 different classes were predicted as output by the network. Their model achieved an overall accuracy of 93.5% by experimenting with several different hyper-parameters such as model depth, batch size and number of bands. In comparison, our model utilizes temporal inception blocks as well as a fusion of multi-satellite imagery that helps learn richer features and hence results in a much higher overall accuracy of 98.1%.
A. Study area
Our study area is located in the Khyber Pakhtunkhwa (KP) province of Pakistan. More specifically, for our experimentation work in KP province we selected the Yar Hussain tehsil of District Swabi, as presented in Fig 1. This area has wide arable land and a diverse vegetational environment. The region is known for maximal growth of quality Tobacco crop, and has great revenue generation potential for KP province in terms of taxable income.
Remote sensing multi-spectral data
In our experimental setup, we utilized two types of remotely sensed data: Sentinel-2 and Planet-Scope imagery.
Sentinel-2. Sentinel-2 imagery is open data, acquired from the Copernicus Open Access Hub [20]. Sentinel-2 is a Copernicus Program earth observation mission which systematically acquires optical imagery over land and coastal waters at a high spatial resolution (10 m to 60 m). In our experimental study, with a focus on Tobacco crop classification, we considered remotely sensed imagery of our pilot region acquired on the 5th, 11th, 26th and 31st of May 2019, keeping in view the phenological cycle of the Tobacco crop.
Planet-Scope. The Planet-Scope constellation, with 120 satellites currently in orbit, makes up the largest commercial satellite fleet in history, collecting daily images of the entire landmass of Earth [21]. Its sensors are capable of capturing four different multispectral bands, including red, green, blue, and near-infrared, with a resolution of 3-5 meters, which is reasonable for analyzing and tracking changes in vegetation and forest cover. Planet-Scope is a commercial satellite whose data can be purchased from Planet Inc. [22]. In our experimental setup, Planet-Scope [22] imagery of the pilot region, acquired on the 27th of May 2019, is considered. Fig 2 shows the timeline of the acquired images of the regions of interest.
Ground survey for data collection
The ground data collection surveys of the pilot region were conducted using an indigenously developed Geo Survey application [23]. A brief overview of the Geo Survey application is pictorially presented in Fig 3 [7]. Fig 3(a) presents the main view of the application, with multiple choices for the method of survey. Fig 3(b) shows a polygon drawn around the survey area by choosing the tapping option from the main menu, while Fig 3(c) represents viewing of the conducted survey. Fig 3(d) presents the data viewing capability of the Geo Survey application. The Geo Survey application is native and developed in the Java programming language. The survey data is saved in the Google Firebase Realtime Database, downloaded in JavaScript Object Notation (JSON) format, and converted into KML using indigenous Python scripting. The database used for the storage of data is MySQL. Finally, the KML is converted into shapefiles using ArcGIS for training and testing the performance of the proposed model. With the choice of retrieving a polygon by encircling an area or by selecting different points interactively, our survey application proved to be cost effective and time efficient compared with traditional methods. In our experimental work, the underlying land cover was divided into five classes: Urban, Wheat, Tobacco, Water and Other Vegetables.
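The JSON-to-KML conversion step described above can be sketched as follows. This is a minimal illustration only: the Firebase export schema, the field names (`class`, `points`) and the helper `polygon_to_kml` are hypothetical, as the actual script is not given in the text.

```python
import json

def polygon_to_kml(name, coords):
    """Convert one surveyed polygon (list of (lon, lat) pairs) from the
    downloaded JSON into a minimal KML Placemark string. Field names and
    structure here are assumptions for illustration."""
    ring = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        "<Polygon><outerBoundaryIs><LinearRing><coordinates>"
        f"{ring}"
        "</coordinates></LinearRing></outerBoundaryIs></Polygon>"
        "</Placemark>"
    )

# Toy record standing in for one downloaded Firebase survey entry.
survey = json.loads('{"class": "Tobacco", '
                    '"points": [[72.4, 34.1], [72.5, 34.1], [72.5, 34.2]]}')
kml = polygon_to_kml(survey["class"], survey["points"])
print("Tobacco" in kml)  # True
```

The resulting KML fragments would then be wrapped in a KML document and converted to shapefiles in ArcGIS, as the text describes.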
Data preparation
The acquired remotely sensed imagery (Fig 2) was further preprocessed using the following steps.
Spatial resampling. Due to the different band resolutions of Sentinel-2 and Planet-Scope, spatial re-sampling has been carried out using bi-linear interpolation. Considering the 2x2 neighbourhood of known pixels, bi-linear interpolation takes a weighted average of these 4 pixels, calculates the final interpolated value and assigns it to the unknown pixel. Tables 1 and 2 show the band designations, spectral and spatial resolutions of Sentinel-2 and Planet-Scope. All Sentinel-2 bands have been resampled to the Planet-Scope resolution of 3 meters.
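The weighted-average scheme above can be sketched in a few lines of NumPy. This is an illustrative single-band upsampler on a square grid with an integer scale factor, not the actual resampling code used in the study (which involves a non-integer 10 m to 3 m ratio and would typically use a GIS or raster library):

```python
import numpy as np

def bilinear_resample(band, factor):
    """Upsample one band by an integer `factor` using bilinear
    interpolation: each output pixel is a weighted average of its
    2x2 neighbourhood in the source band."""
    h, w = band.shape
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5  # source row coords
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5  # source col coords
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]                            # row weights
    wx = (xs - x0)[None, :]                            # column weights
    a = band[y0][:, x0]        # top-left neighbours
    b = band[y0][:, x0 + 1]    # top-right
    c = band[y0 + 1][:, x0]    # bottom-left
    d = band[y0 + 1][:, x0 + 1]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

# e.g. a coarse band upsampled by 2x (toy 4x4 input)
band = np.arange(16, dtype=float).reshape(4, 4)
up = bilinear_resample(band, 2)
print(up.shape)  # (8, 8)
```

Because the toy input is a linear ramp, bilinear interpolation reproduces it exactly in the interior, which is a handy sanity check for any resampler.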
Layer stacking. The Normalized Difference Vegetation Index (NDVI), a spectral index, was calculated separately for both the Sentinel-2 and Planet-Scope satellite imageries. In the literature, NDVI is commonly used as an input in addition to the acquired spectral bands of the data, which usually helps in handling the non-linearity among the spectral bands of the input data. NDVI is a manually calculated spectral feature which is quite useful for detecting healthy vegetation and is calculated as follows in Eq (1):

NDVI = (NIR - Red) / (NIR + Red)    (1)

In our experimental setup, NDVI is calculated and layer stacked as an additional layer with the acquired multispectral imagery.
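The NDVI computation and layer stacking described above can be sketched as follows. The band ordering (R, G, NIR) and array shapes are assumptions for illustration; the standard NDVI definition, Eq (1), is used:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, Eq (1):
    NDVI = (NIR - Red) / (NIR + Red). eps avoids division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy image with 3 spectral bands in (band, row, col) layout;
# the R, G, NIR band order is an assumption.
bands = np.random.default_rng(0).random((3, 64, 64))
layer_stacked = np.concatenate(
    [bands, ndvi(bands[2], bands[0])[None]], axis=0)  # append NDVI layer
print(layer_stacked.shape)  # (4, 64, 64)
```

The NDVI layer is simply appended as an extra band, so downstream code can treat it like any other spectral input.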
Temporal stacking. The final imagery used by our proposed model is a temporal stack of both Sentinel-2 and Planet-Scope images of the pilot region, acquired at various time-stamps selected with reference to the phenological cycle of the Tobacco crop, as presented in (Fig 3). More specifically, the resulting temporally stacked imagery consists of 20 bands overall, of which 4 bands are from Planet-Scope and the remaining 16 are from Sentinel-2 satellite imagery.
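The 20-band stack can be illustrated as below. The grouping (4 Sentinel-2 dates x 4 bands each, plus one Planet-Scope date with 4 bands) follows the counts given in the text; the per-date band layout and image size are assumptions:

```python
import numpy as np

h, w = 64, 64
rng = np.random.default_rng(0)

# Four Sentinel-2 acquisition dates, 4 bands per date -> 16 bands
sentinel_dates = [rng.random((4, h, w)) for _ in range(4)]
# One Planet-Scope acquisition, 4 bands
planetscope = rng.random((4, h, w))

# Stack everything along the band axis to form the model input imagery
stack = np.concatenate(sentinel_dates + [planetscope], axis=0)
print(stack.shape)  # (20, 64, 64)
```

Each pixel of this stack is later rearranged into the per-pixel bands-by-timesteps matrix that the network consumes.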
Dataset
The sample set collected during the survey of the pilot region (Table 3) was divided into two subsets, namely training and testing sets, representing 80% and 20% of the sample dataset, respectively. Additionally, 15% of the training set is separated as a validation set. Furthermore, a stratified k-fold technique with 8 folds is used for statistical validation and performance analysis of our proposed model. The dataset consists of multi-spectral imagery from two different satellites, Sentinel-2 and Planet-Scope. Sentinel-2 provides 13 spectral bands with a temporal resolution of 5 days, whereas Planet-Scope provides 4 spectral bands with a 1-day temporal resolution. Spectral bands were selected based on their capability and the extent of relative information content they provide with reference to our target research work. Three bands (Red, Green and Near-InfraRed) with 3 m and 10 m spatial resolution were selected for Planet-Scope and Sentinel-2, respectively. The blue band is discarded because it is not useful in crop classification and is quite sensitive to atmospheric particles such as dust and clouds [24]. The number of polygons and pixels collected through the ground survey for each ground class is presented in Table 1.
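The splitting scheme above (80/20 train/test, a 15% validation hold-out from the training set, and 8-fold stratified validation) can be sketched with scikit-learn. The sample counts and feature layout here are toy values, not the study's actual dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((1000, 20))           # 20 stacked band values per pixel
y = rng.integers(0, 5, size=1000)    # 5 land-cover class labels

# 80% training / 20% testing, stratified by class
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

# 15% of the training set held out for validation
X_fit, X_val, y_fit, y_val = train_test_split(
    X_train, y_train, test_size=0.15, stratify=y_train, random_state=0)

# 8-fold stratified scheme for statistical validation
skf = StratifiedKFold(n_splits=8, shuffle=True, random_state=0)
n_folds = sum(1 for _ in skf.split(X_train, y_train))

print(len(X_fit), len(X_val), len(X_test), n_folds)  # 680 120 200 8
```

Stratification keeps the class proportions roughly equal across all subsets, which matters when classes such as Water are under-represented.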
Convolutional neural network
The proposed model, shown in Fig 4, is based on the work of [25] and [26]. As discussed in the previous section, our experimental setup uses satellite time series imagery to learn spectro-temporal features. Our model consists of three temporal convolutional inception blocks, followed by a Dense layer and a Softmax layer for classification. Each temporal convolutional inception block utilizes filters of size 1, 3 and 5 in parallel to provide feature maps that are concatenated and passed to the subsequent layer. Before filtering, the input is zero-padded so that the activation maps are of the same size and can be stacked appropriately. These blocks are responsible for learning spectro-temporal features, as 1D convolutional filters are used, which have already proven effective for learning temporal features [27]. Pooling layers are not used in our architecture, as pooling is mostly used in computer vision tasks where a whole image, or parts of an image, is provided to the neural network as a 2D signal with channels on the third axis, mainly for robustness against noise and to highlight the most dominant spatial features. The proposed model does not use spatial data, due to the lack of delineation labels; only spectral and temporal data is supplied as input to the network in the form of a 2D matrix, where the x-axis represents the temporal dimension and the y-axis represents the spectral dimension. Pooling would therefore only result in dimensionality reduction and, in turn, a decrease in accuracy, as discussed in [28]. The implemented model is a pixel-based classification algorithm: it takes a pixel as input and outputs the predicted class it belongs to. Each pixel represents the spectral reflectance of the on-ground land cover class. The spectral reflectance may vary with changes in time, lighting conditions and other atmospheric factors.
Similarly, vegetation has phenological cycles, and the state of a crop is subject to change during that cycle, which in turn contributes to changes in its spectral reflectance. The input to our neural network is a 4x5 2D matrix, as presented in (Fig 5), where rows represent spectral bands and columns represent timesteps (image acquisition dates), as presented in (Fig 2). Thus, the same pixel at different dates is stacked in the columns of the matrix. In (Fig 5) the structure of the input is shown, where Red, Green, Near Infra-Red and Normalized Difference Vegetation Index data are denoted by R, G, NIR and NDVI respectively as the row labels, whereas timesteps are denoted by t1, t2, t3 . . . t5.
Inception 1-D convolutional block
The inception block of our model architecture uses one-dimensional convolutions and is shown in (Fig 6). The convolution runs from left to right (on the temporal axis), as can be seen in Fig 5, hence the term temporal convolution. The intuition is that these convolutions learn the spectro-temporal features of the crops: the filter of size 3 learns changes across 3 timesteps, the filter of size 5 across 5 timesteps, and the filter of size 1 within a single timestep. These three parallel convolutions are the novelty we added to our model, inspired by the well-known Inception network, and they give significantly better results than the same network without an inception block, which achieves 92% accuracy. Feature maps obtained from the 1-D convolutions are stacked (concatenated) and passed on to the batch normalization layer, and finally an activation function is applied to introduce non-linearity into our model. As neural networks are prone to overfitting [29], several regularization techniques are employed to curb it. Dropout regularization is used after each block, with the percentage set to 20%, which helps reduce overfitting by disabling a few features from each layer during training. L2 regularization of scale 10^-6 is utilized, which is proven to be very effective as it settles for a less complicated model by introducing a penalty. The number of epochs is set to 15. Furthermore, early stopping on the validation loss is used with a patience of zero.
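A minimal NumPy sketch of the inception-style temporal block is given below, to make the data flow concrete. It only illustrates shapes and the three parallel kernel sizes with random weights; the filter count per branch is an assumption (it is not stated in the text), and batch normalization plus the 20% dropout used during training are omitted:

```python
import numpy as np

def conv1d_same(x, kernel_size, n_filters, rng):
    """'Same'-padded 1-D convolution over the temporal axis.
    x: (timesteps, channels) -> (timesteps, n_filters).
    Weights are random: this sketch only illustrates shapes and flow."""
    t, c = x.shape
    w = rng.standard_normal((kernel_size, c, n_filters)) * 0.1
    pad = kernel_size // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))          # zero-pad temporal axis
    out = np.empty((t, n_filters))
    for i in range(t):
        out[i] = np.einsum("kc,kcf->f", xp[i:i + kernel_size], w)
    return out

def inception_1d_block(x, n_filters=8, rng=None):
    """Three parallel temporal convolutions (kernel sizes 1, 3, 5),
    concatenated along the feature axis, then a ReLU non-linearity."""
    rng = rng if rng is not None else np.random.default_rng(0)
    branches = [conv1d_same(x, k, n_filters, rng) for k in (1, 3, 5)]
    h = np.concatenate(branches, axis=1)          # stack the feature maps
    return np.maximum(h, 0.0)                     # ReLU

# One pixel: 5 timesteps x 4 spectral features (R, G, NIR, NDVI).
pixel = np.random.default_rng(1).random((5, 4))
features = inception_1d_block(pixel)
print(features.shape)  # (5, 24): temporal length kept, 3 branches x 8 filters
```

Zero padding keeps all three branches at the same temporal length, which is what allows their feature maps to be concatenated.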
Model loss and accuracy
In this section, we discuss the model's training and validation accuracy and loss throughout the training phase. The accuracy curves for both sets are logarithmic; importantly, both curves follow the same pattern and keep increasing until our epoch limit of 15. Similarly, the model's loss for both sets is depicted in Fig 7; it follows a smooth positive decay curve without any significant fluctuations, and the loss converges towards the end of the curve. The test set accuracy is higher than the training set accuracy because dropout regularization is used during training, which disables a given percentage of the features in each layer to prevent overfitting. Additionally, the use of early stopping mitigates the overfitting problem of the model. In order to analyze the performance of the proposed model and investigate trends in its classification performance, classification results were generated with 50%, 80%, 90%, 95% and 99% confidence scores, as presented in Fig 8. Fig 8 presents the number of pixels of the classified classes of the pilot region with their confidence scores. It can be observed from Fig 8 that increasing the classifier's confidence threshold reduces the number of pixels classified to a specific land cover class and increases the number of unclassified pixels. Similar performance trends were observed for all the considered land cover classes, including Wheat, Tobacco, Water, Urban and Other Vegetables. This is because, when a high confidence threshold is applied, pixels with lower classification probabilities are flagged as unclassified in order to achieve highly precise results.
Through visual inspection in tools such as ENVI (Environment for Visualizing Images) [30], it was concluded that a threshold of 90% gives very accurate shape boundaries for different crop fields, water streams, etc. Furthermore, it was observed that increasing the classification confidence threshold also reduces excessive outliers.
Classification performance
This section presents the parameters on which classification performance has been evaluated [7] and the classification report of the proposed model over the pilot region.
Accuracy assessment parameters
• Precision-Precision is termed user accuracy in land use land cover (LULC) classification; it measures the proportion of correctly predicted training data pixels of a class among all pixels assigned to that class in the classified image.
Precision = (Correctly classified sites) / (Total number of classified sites) (2)
• Recall (Sensitivity)-Recall, or sensitivity, is defined as the number of correctly predicted training data pixels of a class compared to the total training data pixels provided to the system.
Recall = (Correctly classified ground truth sites) / (Total number of ground truth sites) (3)
• F1 score-The weighted average of precision and recall is called the F1-score: F1 = 2 × (Precision × Recall) / (Precision + Recall) (4)
• Overall Accuracy-Overall accuracy is the ratio of the sum of all correctly classified training data pixels to the total number of training data pixels: Overall accuracy = (Number of all correctly classified samples) / (Total number of samples) × 100 (5)
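The four measures above (Eqs. 2-5) can be computed directly from a confusion matrix; the sketch below uses a hypothetical 3-class matrix, not the paper's Table 4.

```python
import numpy as np

def classification_report(cm):
    """Per-class precision/recall/F1 (Eqs. 2-4) and overall accuracy (Eq. 5)
    from a square confusion matrix with rows = ground truth, cols = predicted."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)    # correctly classified / total classified
    recall = tp / cm.sum(axis=1)       # correctly classified / total ground truth
    f1 = 2 * precision * recall / (precision + recall)
    overall = 100.0 * tp.sum() / cm.sum()
    return precision, recall, f1, overall

# Hypothetical 3-class confusion matrix (illustrative only).
cm = [[90, 5, 5],
      [2, 95, 3],
      [4, 1, 95]]
p, r, f1, acc = classification_report(cm)
print(np.round(r, 3), round(acc, 2))   # per-class recall and overall accuracy
```

Note that precision sums over a column (everything the model called that class) while recall sums over a row (everything that truly was that class).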
Classification report
Our experimental work was concluded by employing the best model architecture. More specifically, in our classification model we utilized a batch size of 128 and ReLU as the activation function for all hidden layers. Furthermore, it is observed that an input consisting of only the temporally stacked multispectral imagery (Sentinel-2 and Planet-Scope) spectral reflectances, without vegetation indices (NDVI), results in the best classification performance. The proposed model was utilized for land cover classification of the pilot region and the results were obtained in the form of the confusion matrix shown in Table 4. It can be observed from the obtained results, presented in Table 4, that our proposed model achieved convincing classification performance, with an overall accuracy of 98.15%. Both precision and recall for the Other Vegetables class have been recorded as 97%, while for the Wheat class, a precision and recall of 98% and 97% have been observed, respectively. Moreover, our proposed model showed remarkable classification performance for the Tobacco class, with precision and recall of 99%. The Water and Urban classes have been perfectly classified, giving 100% precision and recall.
Land cover classification
Finally, the pilot region of interest was classified by applying our trained deep learning model to the temporal stack of raster multispectral remote sensing data, and a classified land cover map of the pilot region was generated, as shown in Fig 9.
Results and discussion
In this section we discuss the quantitative and qualitative outcomes of the proposed model as the output of the performed experiments. Artificial neural networks possess a variety of hyperparameters that can be tuned to get the best performance for the task at hand, but trying every possible value for them in combination with others is practically infeasible, as it requires a huge amount of computational power due to the innately compute-hungry nature of the algorithm. Therefore, several best practices have been established for certain hyperparameters and can be borrowed for similar tasks from other networks as well. To balance the bias-variance trade-off, our network borrows parameters such as the depth, or number of hidden layers, which is set to 3, as well as the number of filter units, which is set to 32, from the work of [13]. As CNNs are computationally intensive and searching for hyperparameters requires a lot of processing power, we utilized Microsoft Azure cloud services for the training and experimentation of our model. More details are listed in Table 5.
Our algorithm uses Adam as the optimizer with a learning rate of 10^-2, beta_1 of 0.9, beta_2 of 0.999 and epsilon of 1e-07; similarly, the L2 regularization rate is set to 1e-06.
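The update rule implied by these settings can be sketched as follows; treating the L2 term as an additive gradient penalty is an assumption here, since it can equivalently be added to the loss.

```python
import numpy as np

def adam_step(w, grad, state, lr=1e-2, b1=0.9, b2=0.999, eps=1e-7, l2=1e-6):
    """One Adam update with the stated hyperparameters; the L2 penalty is
    folded in as an extra gradient term l2 * w."""
    m, v, t = state
    g = grad + l2 * w
    t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)      # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# Minimize f(w) = (w - 3)^2 as a sanity check of the update rule.
w, state = np.array([0.0]), (np.zeros(1), np.zeros(1), 0)
for _ in range(2000):
    w, state = adam_step(w, 2 * (w - 3), state)
print(float(w[0]))   # converges near 3
```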
We searched through the other hyperparameters by evaluating the overall accuracy on the training and validation sets over 8 folds of data. Experiments were performed with a focus on the impact of the following factors on land cover classification performance.
Influence of spectral indices
Neural networks are very powerful algorithms in terms of learning a nonlinear relationship between an input and the desired output. The depth of a neural network corresponds to the complexity of the features that it learns from the input data: the top layers, nearer to the input, learn simpler features, whereas the deeper layers learn more complex features. Neural networks are therefore known to automate the process of feature engineering, shifting the burden towards architecture engineering. However, in remote sensing, several spectral indices are calculated from the available electromagnetic spectrum for certain tasks, such as NDVI, NDWI and the Brilliance Index for finding green vegetation, and are provided as input features to the classifier along with the multispectral imagery. Our proposed model has been tested with various combinations of spectral bands and NDVI to find their impact on the overall classification performance. It can be observed from the results presented in Table 6 that NDVI has a very minor impact on the classification performance of the proposed model. This reiterates the fact that ANNs are very powerful in terms of learning compound features from the raw intensities of the different spectral bands, and are intelligent enough to extract meaningful information in the deep layers on their own, without explicitly incorporating additional information such as spectral indices, which are themselves calculated from the same remotely sensed input data.
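For reference, NDVI is the normalized difference of the near-infrared and red reflectances; a minimal sketch follows, where the band values and the Sentinel-2 band names are illustrative, not taken from the paper's data.

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index from NIR and red reflectances
    (e.g., Sentinel-2 bands B8 and B4); eps guards against division by zero."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Dense green vegetation reflects strongly in NIR and weakly in red.
print(round(float(ndvi(0.45, 0.05)), 3))   # 0.8
```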
Influence of batch size
Due to the limitation of computational power, a study of the influence of batch size on the overall training and validation accuracy and on the training time was conducted on the 8 folds of data. Four different batch sizes were used: 16, 32, 64 and 128. As reflected by the results in Table 7, variations in the batch size have a very insignificant influence on the overall training and validation accuracy of the model; more specifically, the overall accuracy increases only slightly with the batch size. Therefore, a higher batch size can be used to speed up the training process.
Influence of activation function
Experiments were conducted while employing three of the most common activation functions in the proposed base model in order to find out their influence on the model accuracy. The three activation functions are briefly described below. The first activation function is the sigmoid, which is also commonly used as a logistic function; it outputs values in the range 0-1 and is mathematically represented by the following formula: sigmoid(x) = 1 / (1 + e^(-x)).
The second function is called ReLU; it takes logits as input and outputs zero for negative values and is linear for positive values: ReLU(x) = max(0, x).
The third function is the hyperbolic tangent, or tanh; it is similar to the sigmoid but gives output values between -1 and 1 [31]: tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)).
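The three functions described above can be written down directly; a small NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # outputs in (0, 1)

def relu(x):
    return np.maximum(0.0, x)         # zero for negatives, identity for positives

def tanh(x):
    return np.tanh(x)                 # outputs in (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
print(np.round(sigmoid(x), 3))   # 0.119, 0.5, 0.881
print(relu(x))                   # 0, 0, 2
print(np.round(tanh(x), 3))      # -0.964, 0, 0.964
```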
It is evident from the results obtained and presented in Table 8 that ReLU is performing better than tanh and sigmoid. Moreover, ReLU also has lower standard deviation in comparison to tanh and sigmoid.
Visual analysis
The best settings of the various factors were selected for the proposed model based on the results obtained and presented. Experiments were performed using the proposed model to classify the Yar Hussain region in the Swabi District of KP, Pakistan. The classified images of the pilot region were visually analyzed. Samples of the classified output are shown in Fig 9(b), 9(c) and 9(d), which are zoomed-in views of sub-regions of Fig 9(a). In Fig 9(b), it can be seen that our implemented model has classified the Tobacco class with very fine boundaries; the small number of unclassified pixels towards the edges gives confidence in the classified Tobacco class. Fig 9(c) shows a high number of unclassified pixels; the reason is the mixing of different classes into one another at class boundaries. Fig 8 clearly shows this trend: as the confidence score increases, pixels of the Wheat class, followed by Tobacco and Other Vegetables, move into the unclassified class, ensuring confidence in the remaining classified pixels. A Water stream can also be seen at the bottom right corner of Fig 9(a) as well as Fig 9(d), classified accurately and in detail by our algorithm. Despite being a pixel-based model, our model has shown promising results in predicting different land cover classes with precise shapes.
Conclusion
In this paper we presented a state-of-the-art mechanism of inception-inspired deep CNN for Tobacco crop classification using remotely sensed time series multispectral data. Instead of the conventional approach of utilizing single-satellite imagery, both Sentinel-2 and Planet-Scope imageries are stacked together in order to increase the spectral resolution and to avail the maximum reflectance information. The multi-satellite imagery stacking results in improved classification performance of the proposed model. Furthermore, instead of utilizing single-date imagery, the temporal resolution of the applied multispectral data is improved by stacking multi-date satellite imageries. The concept of multi-date satellite imagery stacking is applied with specific focus on the phenological cycle of the Tobacco crop. The indigenously developed "GEO-Survey" surveying application was utilized for the ground truth data survey. It is concluded from the results obtained that the proposed novel approach of TempCNN, consisting of three temporal convolutional inception blocks, a fully connected layer and a SoftMax layer, achieved an overall classification accuracy of 98.15% on satellite time series data. Furthermore, the developed classification framework, with its particular focus on the Tobacco crop, resulted in the highest Tobacco crop classification accuracy of 99%.
Characterization of the Optical Properties of Turbid Media by Supervised Learning of Scattering Patterns
Fabricated tissue phantoms are instrumental in optical in-vitro investigations concerning cancer diagnosis, therapeutic applications, and drug efficacy tests. We present a simple non-invasive computational technique that, when coupled with experiments, has the potential for characterization of a wide range of biological tissues. The fundamental idea of our approach is to find a supervised learner that links the scattering pattern of a turbid sample to its thickness and scattering parameters. Once found, this supervised learner is employed in an inverse optimization problem for estimating the scattering parameters of a sample given its thickness and scattering pattern. Multi-response Gaussian processes are used for the supervised learning task and a simple setup is introduced to obtain the scattering pattern of a tissue sample. To increase the predictive power of the supervised learner, the scattering patterns are filtered, enriched by a regressor, and finally characterized with two parameters, namely, transmitted power and scaled Gaussian width. We computationally illustrate that our approach achieves errors of roughly 5% in predicting the scattering properties of many biological tissues. Our method has the potential to facilitate the characterization of tissues and fabrication of phantoms used for diagnostic and therapeutic purposes over a wide range of optical spectrum.
In both the SFDI and FDPM methods, the diffusion equation is used to approximate the Boltzmann transport equation. This results in the overestimation (underestimation) of the diffuse reflectance at low (high) spatial frequencies. In addition to the fitting error, enforcing the boundary conditions in the diffusion equation 15 introduces some error in arriving at the analytical formulas for realistic semi-infinite media. Moreover, the experimental setups in both the SFDI and FDPM methods are complex and costly. In SFDI, in addition to a spatial light modulator, two polarizers at the source and detector are needed to reject the specular reflection collected normal to the surface. As for the FDPM technique, a network analyzer is required to modulate the current of the LED and to detect the diffuse reflectance of the temporally modulated beam. These instruments render the setup complex and costly. Furthermore, these methods are incapable of measuring the anisotropy coefficient of the sample, g, which is an important parameter for characterizing turbid media [16][17][18]. In biological tissues, the probability of scattering a beam of light at an angle θ (with respect to the incoming beam) can be described suitably by the Henyey-Greenstein phase function 19,20: p(θ) = (1/4π)(1 − g²)/(1 + g² − 2g cos θ)^(3/2), (1) where the optical properties of the turbid medium depend on both g (which characterizes the angular profile of scattering) and the scattering length, l_s, the average distance over which scattering occurs. Among these techniques, IAD is the most popular one due to its relatively higher accuracy and simpler experimental setup. Briefly, IAD is based on matching the measured and the calculated diffuse reflectance and transmittance by calibrating the scattering and absorption coefficients used in the simulations. When an accurate measurement of the un-scattered transmission can be made, it is possible to obtain g as well. In IAD, the errors are mostly attributed to the experimental data.
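A quick numerical check of Eq. (1), assuming the standard normalization of the phase function over the full solid angle, confirms that it integrates to one and that the mean cosine of the scattering angle equals g:

```python
import numpy as np

def hg_phase(theta, g):
    """Henyey-Greenstein phase function of Eq. (1), normalized over solid angle."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * np.cos(theta))**1.5)

g = 0.9
n = 20000
theta = (np.arange(n) + 0.5) * np.pi / n            # midpoint rule over [0, pi]
w = hg_phase(theta, g) * 2 * np.pi * np.sin(theta)  # solid-angle weight
norm = w.sum() * np.pi / n
mean_cos = (w * np.cos(theta)).sum() * np.pi / n
print(round(norm, 4), round(mean_cos, 4))           # 1.0 and 0.9
```

The second identity, <cos θ> = g, is exactly why g is called the anisotropy coefficient.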
For instance, when measuring the total transmission and reflectance, part of the light scattered from the edge of the sample can be lost, or when measuring the un-scattered transmission, the scattered rays may unavoidably influence the measurement 13 .
We propose an efficient method to address the above challenges and achieve a better compromise between accuracy and the cost of measuring the scattering parameters (i.e., g and l_s). Our method is based on a supervised learner that can predict the scattering pattern of a turbid medium given its thickness (t) and scattering parameters. Once this supervised learner is found, the scattering parameters of any turbid sample can be calculated given its thickness and the image of the scattered rays' pattern, either by inverting the supervised learner or by performing an optimization task.
Our process for obtaining the scattering pattern, as illustrated in Fig. 1, starts by producing a pencil beam from an LED placed behind an aperture. The pencil beam has a well-defined but arbitrary polarization and is incident on the turbid medium, which has a known thickness. The surface of this medium is then imaged onto a camera sensor through a lens, where the un-scattered beam with the well-defined polarization is rejected via a polarizer placed next to the turbid medium. We note that with such a non-coherent and phase-insensitive measurement, the size of the image as well as the components scale with the diameter of the beam. Because of this scaling rule, the length unit of the image shown in Fig. 1 equals the number of scaled pixels of the camera. We also note that for collimated illumination, the distance between the source and the sample is arbitrary. A similar argument holds for the distance between the polarizer and the sample because the un-scattered light is collimated.
We employed the same configuration as in Fig. 1 in our computational simulations. In particular, we placed the camera lens far from the sample (15 cm) such that the scattered light is almost parallel to the optical axis. We employed a lens with a focal length of 4 cm, a radius of 6 mm, and the corresponding maximum numerical aperture. Additionally, the optical resolution of the system according to Rayleigh's criterion was 4 μm at 1550 nm 21, which is equal to the pixel pitch of the detector in our simulations. Since the pixel pitch was larger than half of the optical resolving limit, and hence the Nyquist criterion was not satisfied, the scattering patterns are slightly blurred. As for the LED bandwidth, Δλ, we chose it wide enough to have a coherence length much smaller than the optical path length of the rays. With this choice, coherent effects do not distort the scattered images. In particular, we ensured that Δλ ≫ λ²/(n L_opt), where n is the refractive index and λ is the wavelength of interest. The minimum required bandwidth is 40 nm when L_opt = 200 μm, n = 1.33, and λ = 1550 nm. We note that in our method the simulations are performed on thin slabs of phantom or tissue with known thickness. Although performing the same type of experiment in reflection is in principle possible, we expect much weaker reflection than transmission for such thin slabs. Additionally, the reduction of the signal strength translates into lower SNR and higher measurement errors in the case of reflection. We have also found some experimental works based on the quantitative phase of transmission images of thin samples for which t << l_s 22,23. As opposed to this latter approach, our method is based on the intensity of the scattering patterns, which is simpler and applicable to thicker samples.
To fit our supervised learner, a high-fidelity training dataset of input-output pairs is required. Here, the inputs (collectively denoted by x) are the characteristics of the turbid samples (i.e., g, l_s, and t) while the outputs (collectively denoted by y) are some finite set of parameters that characterize the corresponding scattering patterns (i.e., images similar to the one in Fig. 1). We elaborate on the choice of the latter parameters in Sec. 0 but note that they must be sufficiently robust to noise so that, given t, the scattering parameters of any turbid sample can be predicted with relatively high accuracy using the supervised learner.
Results
To construct the computational training dataset, we used the Sobol sequence 24. It is noted that the lower limit on the sample thickness is set because of the considerable inaccuracies associated with the negligible probability of scattering in thin samples. In contrast, the upper limit on the sample thickness is bounded by the computational costs associated with tracing the large number of ray scatterings. As for the ranges of g and l_s, they cover the scattering properties of a wide range of biological tissues including, but not limited to, liver 26, white brain matter, grey brain matter, cerebellum, and brainstem tissues (pons, thalamus) 27.
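A design of experiments of this kind can be generated with a Sobol sampler; in the sketch below the g and l_s bounds are illustrative placeholders, while the thickness range of 200-600 μm follows the text.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling design over (g, l_s, t). The g and l_s bounds below are
# illustrative placeholders; t spans 200-600 um as stated in the text.
lower = [0.50, 0.02, 200.0]     # g (-), l_s (mm), t (um)
upper = [0.95, 0.20, 600.0]
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit = sampler.random_base2(m=7)      # 2^7 = 128 points in [0, 1)^3
doe = qmc.scale(unit, lower, upper)
print(doe.shape)                      # (128, 3)
```

Drawing a power-of-two number of points (via `random_base2`) preserves the balance properties of the Sobol sequence.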
Once the simulation settings were determined, following the schematic in Fig. 1, the scattering pattern corresponding to each of them was obtained by the commercial raytracing software Zemax OpticStudio. Although there are many software programs applicable for this task (such as Code v, Oslo, and FRED), Zemax is perhaps the most widely used software for ray tracing. Unlike mode solvers, ray tracing is computationally fast. The significant scattering effects as well as the employed broadband light source (i.e., the LED) further justify the use of a ray-tracing software. In our simulations with Zemax , rigorous Monte-Carlo simulations were conducted for higher accuracy (instead of solving the simplified diffusion equation) and the turbid media were simulated with the built-in Henyey-Greenstein model 28 . To push the upper limit on the sample thickness to 600 μm, we increased the number of Monte Carlo intersections and observed that the maximum capacity of Zemax (roughly two million segments per ray) must be employed for sufficient accuracy. Additionally, we found that a 100 × 100 rectangular detector and five million launched rays provide a reasonable compromise between the accuracy and the simulation costs (about 3 minutes for each input setting).
As mentioned in Sec. 1, the scattering patterns corresponding to the simulation settings (i.e., the DOE points) must be characterized with a finite set of parameters (denoted by y in Fig. 1) to reduce the problem dimensionality and enable the supervised learning process. To determine the sufficient number of parameters, we highlight that our end goal is to arrive at an inverse relation where the g and s l of a tissue sample with a specific thickness can be predicted. Therefore, if the parameters are chosen such that both g and s l are monotonic functions of them, two characterizing parameters are required for a one-to-one relation. It must be noted that, these parameters must be sufficiently robust to the inherent errors in the simulations mentioned above. We will elaborate on this latter point below and in Sec. 3.
We have conducted extensive studies and our results indicate that the transmitted power, p, and the scaled Gaussian width, σ, can sufficiently and robustly characterize the scattering patterns of a wide range of tissue samples. While p measures the amount of the LED beam power transmitted through the sample and collected at the image, σ measures the extent to which the sample scatters the LED beam. It is evident that these parameters are negatively correlated, i.e., increasing p would decrease σ and vice versa.
Measuring p for an image is straightforward as it only requires integrating the gray intensity over all the image pixels. Measuring σ, however, requires some pre-processing because the amount of scattering in an image is sensitive to noise and has a strong positive correlation with it (i.e., high scattering involves a high degree of noise in the image and vice versa). As illustrated in Fig. 2, we take the following steps to measure σ for an image:
1. Filter the image with a Gaussian kernel to eliminate local noise (see panel b in Fig. 2). In general, the width of the Gaussian kernel depends on the resolution of the original image as well as the amount of noise. In our case, the filtering was conducted (in the frequency space) with a kernel width of 7 pixels.
2. Obtain the radial distribution of the intensity by angularly averaging it over the image.
3. Mirror the radial distribution to obtain a symmetric curve and then scale it so that the area under the curve equals unity (see panel c in Fig. 2). At this point, the resulting symmetric curve approximates a zero-mean Gaussian probability distribution function (PDF).
4. Fit a regressor to further reduce the noise and enrich the scattered data, which resemble a Gaussian PDF (compare the solid and dashed lines in panel c).
5. Estimate the standard deviation of the Gaussian PDF via the enriched data. Divide this standard deviation by the power of the image (i.e., p) to obtain σ.
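The pipeline above can be approximated in a few lines; this sketch replaces the Gaussian-kernel filtering and the GP regressor fit with a noise-free synthetic image and a second-moment width estimate, so it only illustrates the p computation and steps 2, 3 and 5.

```python
import numpy as np

def power_and_scaled_width(img, dx=1.0):
    """Return (p, sigma): p is the integrated intensity, and sigma is the
    radial standard deviation of the pattern divided by p (a second-moment
    estimate standing in for the GP regressor fit of steps 4-5)."""
    p = img.sum() * dx * dx
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    r2 = ((xx - cx) ** 2 + (yy - cy) ** 2) * dx * dx
    # For a circular Gaussian pattern, E[r^2] = 2 * std^2 under the intensity weight.
    std = np.sqrt((img * r2).sum() / img.sum() / 2.0)
    return p, std / p

# Noise-free synthetic scattering pattern with a known width of 8 pixels.
true_std = 8.0
yy, xx = np.mgrid[0:101, 0:101]
img = np.exp(-((xx - 50.0) ** 2 + (yy - 50.0) ** 2) / (2 * true_std ** 2))
p, sigma = power_and_scaled_width(img)
print(round(sigma * p, 2))   # recovers the width, ~8.0
```

On real, noisy images the filtering (step 1) and the regressor (step 4) are what make this estimate stable.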
As for the regressor, we recommend employing a method that can address the potential high amount of noise in some of the images which, as mentioned earlier, happens when scattering is significant (e.g., when t is large while g and s l are small). We have used Gaussian processes (GP's), neural networks, and polynomials for this purpose but recommend the use of GP's mainly because they, following the procedure outlined in ref. 29 , can automatically address high or small amounts of noise. Additional attractive features of GP's are discussed in Sec. 2.1 and Sec. 5.
The reason behind scaling the standard deviation in step 5 by p is to leverage the negative correlation between the transmitted power and the noise to arrive at a better measure for estimating scattering. To demonstrate this, consider two images where one of them is noisier than the other. It is obvious that the noisier image must be more scattered and hence have a larger scattering measure. To increase the difference between the scattering measures (and, subsequently, increase the predictive power of the supervised learner), one can divide them with a variable that is larger (smaller) for the smaller (larger) scattering measure. This variable, in our case, is the transmitted power which is rather robust to the noise.
Finally, we note that the images were not directly used as outputs in the supervised learning stage because: (i) predicting the scattering pattern is not our only goal; rather, we would like to have a limited set of parameters (i.e., outputs) that can sensibly characterize the image and hence provide guidance as to how the inputs (i.e., [g, l_s, t]) affect the outputs (and correspondingly the scattering patterns). Using the images directly as outputs is a more straightforward approach but renders monitoring the trends difficult. (ii) With 100 × 100 outputs (the total number of pixels), fitting a multi-response supervised learner becomes computationally very expensive and, more importantly, may face severe numerical issues. One can also fit 100 × 100 single-response supervised learners, but this is rather cumbersome, expensive, and prone to errors due to high amounts of noise in some pixels. (iii) With 100 × 100 outputs, the inverse optimization process (for estimating g and l_s given t and an image) becomes expensive. Supervised learning methods are employed across many fields 65 , including economics 66 . These methods provide the means to predict the response of a system where no or limited data is available. Neural networks, support vector machines, decision trees, Gaussian processes (GP's), clustering, and random forests are amongst the most widely used methods. In the case of biological tissues, supervised learning via neural networks has previously been employed, e.g., for classification of tissues using SFDI-based training datasets [67][68][69][70][71][72].
We employ GP's to link the characterizing parameters of the scattering patterns (i.e., p and σ) with those of the tissue samples (i.e., t, g, and s l ). Briefly, the essential idea behind using GP's as supervised learners is to model the input-output relation as a realization of a Gaussian process. GP's are well established in the statistics 73 , computational materials science 33,40 , and computer science 74 communities as they, e.g., readily quantify the prediction uncertainty 75,76 and enable tractable and efficient Bayesian analyses 77,78 . In addition, GP's are particularly suited to emulate highly nonlinear functions especially when insufficient training samples are available.
In our case, the inputs and outputs correspond to x = [t, g, l_s] and y = [p, σ], respectively. As there are two outputs, we can either fit a multi-response GP (MRGP) model or two independent single-response GP (SRGP) models. With the former approach, one GP model is fitted to map the three-dimensional (3D) space of x to the two-dimensional (2D) space of y. With the latter approach, however, two GP models are fitted: one mapping x to p and another mapping x to σ. The primary advantage of an MRGP model lies in capturing the correlation between the responses (if there is any) and, subsequently, requiring less data for a desired level of accuracy. An MRGP model might not provide more predictive power if the responses are independent, have vastly different behavior, or contain different levels of noise.
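A single-response GP of the kind compared here can be written compactly; the sketch below fits an SRGP with an RBF kernel to a hypothetical smooth response (the paper's MRGP additionally models the correlation between p and σ, which is not shown).

```python
import numpy as np

def gp_fit_predict(X, y, Xs, length=0.3, amp=1.0, noise=1e-6):
    """Single-response GP regression with an RBF kernel: returns the
    posterior mean at the test points Xs given training data (X, y)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return amp * np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # small jitter for numerical stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k(Xs, X) @ alpha

rng = np.random.default_rng(0)
X = rng.random((50, 3))                       # normalized stand-ins for [t, g, l_s]
y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2]   # hypothetical smooth response
pred = gp_fit_predict(X, y, X)
print(float(np.max(np.abs(pred - y))))        # near zero: the GP interpolates
```

With a near-zero noise term the posterior mean interpolates the training data, which is the behavior expected from deterministic (simulation-based) training sets.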
We conducted convergence studies to decide between the two modeling options and, additionally, to determine the minimum DOE size required to fit a sufficiently accurate model. As mentioned earlier, the Sobol sequence was employed to build one DOE of size 400 over the hypercube spanned by the ranges of g and l_s and 200 μm ≤ t ≤ 600 μm. The Sobol sequence was chosen over other design methods (e.g., Latin hypercube) because consecutive subsets of a Sobol sequence all constitute space-filling 50 designs. Following this, we partitioned the first 300 points in the original DOE of size 400 into six subsets with an increment of 50, i.e., the i-th DOE (i = 1,…, 6) included points 1, …, i × 50 from the original DOE. The last 100 points in the original DOE (which are space-filling and different from all the training points) were reserved for estimating the predictive power of the models. Next, three GP models were fitted to each DOE: (i) an MRGP model to map x to y, and (ii) two SRGP models, one to map x to p and another to map x to σ. Finally, the reserved 100 DOE points were used to estimate the scaled root-mean-squared error (RMSE), e_q, computed from the prediction errors q_i − q̂_i, where N is the number of prediction points (N = 100 in our case), q is the quantity of interest (either p or σ), and q̂ is the quantity estimated by the fitted model. Figure 3 summarizes the results of our convergence studies (see Sec. 5.1 for fitting costs) and indicates that: 1. As the sample size increases, the errors generally decrease; the sudden increases in the errors are due either to overfitting or to the addition of some noisy data points. 2. e_σ of the MRGP model is almost always smaller than that of the SRGP (compare the red curves in Fig. 3a and b); the opposite holds for e_p, because p, as compared to σ, is much less noisy. Based on the convergence studies, we conclude that an MRGP model with at least 300 training data points can provide, on average, prediction errors smaller than 5%.
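Assuming the scaling is done by the range of the true values (one common convention; the paper's exact normalization is not reproduced here), the scaled RMSE can be computed as:

```python
import numpy as np

def scaled_rmse(q_true, q_pred):
    """RMSE scaled by the range of the true values so errors read as fractions
    (the normalization choice is an assumption, not taken from the paper)."""
    q_true, q_pred = np.asarray(q_true, float), np.asarray(q_pred, float)
    rmse = np.sqrt(np.mean((q_true - q_pred) ** 2))
    return rmse / (q_true.max() - q_true.min())

q = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
qhat = q + np.array([0.1, -0.1, 0.1, -0.1, 0.1])
print(100 * scaled_rmse(q, qhat))   # about 2.5 percent
```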
Following this, we fitted an MRGP model in 28.6 seconds to the entire dataset (i.e., the DOE of size 400) and employed it in the subsequent analyses in Sec. 2.2. Figure 4 illustrates how p and σ (and hence the scattering patterns) change as a function of the tissue sample characteristics based on this MRGP model. The plots in the top and bottom rows of Fig. 4 demonstrate the effect of the inputs on, respectively, the transmitted power and the scaled Gaussian width. In Fig. 4(a) and (b), l_s is fixed to either 0.1 mm or 0.04 mm and the outputs are plotted versus t and g. In Fig. 4(c) and (d), t is fixed to either 300 μm or 500 μm and the outputs are plotted versus g and l_s. In Fig. 4(e) and (f), p and σ are plotted versus l_s for three values of g while t is fixed to 400 μm. In summary, these plots demonstrate that decreasing a sample's g or l_s, or increasing its thickness, decreases the transmitted power while increasing the scattering (i.e., σ). Moreover, both p and σ change monotonically as functions of the inputs. This latter feature enables us to uniquely estimate g and l_s given t, p, and σ.
To quantify the relative importance of each input parameter on the two model outputs, we conducted a global sensitivity analysis (SA) by calculating the Sobol indices (SI's) 79,80. As opposed to local SA methods, which are based on the gradient, SI's are variance-based quantities and provide a global measure of variable importance by decomposing the output variance into a sum of contributions from each input parameter or combinations thereof. Generally, two indices are calculated for each input parameter of the model: the main SI and the total SI 81. While the main SI measures the first-order (i.e., additive) effect of an input on the output, the total SI measures both the first- and higher-order effects (i.e., including the interactions). SI's are normalized quantities and are known to be efficient indicators of variable importance because they do not presume any specific form (e.g., linear, monotonic, etc.) for the input-output relation.
Using the MRGP model, we conducted quasi-Monte Carlo simulations to calculate the main and total SI's of the three inputs for each of the outputs. The results are summarized in Fig. 5 and indicate that all the inputs affect both outputs. While p is noticeably sensitive to g (and equally sensitive to t and l_s), σ is almost equally sensitive to all the inputs. It is also evident (as captured by the difference between the heights of the two bars for each input) that there is more interaction between the inputs in the case of p than σ.
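Main (first-order) Sobol indices can be estimated with a pick-freeze Monte Carlo scheme; the sketch below verifies the estimator on an additive toy function with known indices, not on the MRGP model.

```python
import numpy as np

def main_sobol_indices(model, d=3, n=200000, seed=0):
    """First-order (main) Sobol indices via the pick-freeze estimator:
    S_i = Cov(Y_A, Y_Ci) / Var(Y_A), where C_i copies column i of A into B."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA = model(A)
    S = []
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]          # freeze input i, resample the others
        yC = model(C)
        S.append((np.mean(yA * yC) - yA.mean() * yC.mean()) / yA.var())
    return np.array(S)

# Additive toy model with known main indices 1/14, 4/14 and 9/14.
toy = lambda X: X[:, 0] + 2 * X[:, 1] + 3 * X[:, 2]
S = main_sobol_indices(toy)
print(np.round(S, 2))
```

For an additive model the main and total indices coincide; interactions, as reported for p in Fig. 5, show up as total indices exceeding the main ones.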
Inverse Optimization: Estimating the Scattering Properties of a Tissue. Noting that the sample thickness can be controlled in an experiment, tissue characterization is achieved by finding the scattering parameters of the sample given how it scatters a pencil beam in a setup similar to that in Fig. 1. More formally, in our case, tissue characterization requires estimating g and s_l given p, σ, and the sample thickness t. Although in principle we could invert the MRGP model at any fixed t to map [p, σ] to [g, s_l], this is rather cumbersome. Hence, we cast the problem as an optimization one by minimizing the cost function, F, defined in Eq. 3. We note that the model predictions are subject to t = t_e, where t_e is the sample thickness. To test the accuracy of the fitted MRGP model in estimating the scattering parameters, we generated a space-filling test dataset of size 100 while ensuring that none of the test points coincided with the 400 training points used in fitting the MRGP. For each test point, the outputs (i.e., p and σ) and the sample thickness (i.e., t) were then used to estimate the inputs (i.e., ĝ and ŝ_l) by minimizing Eq. 3. To solve Eq. 3, we used the fmincon command in the optimization toolbox of MATLAB®. Figure 6(a) illustrates the prediction errors in estimating g (on the left axis) and s_l (on the right axis) for the 100 test points. It is evident that the average errors are zero in estimating either g or s_l, indicating that the results are indeed unbiased. In Fig. 6(b) the errors are plotted with respect to the sample thickness to investigate whether they are correlated with t. As no obvious pattern can be observed, it can be concluded that our procedure for estimating g and s_l is quite robust over the range where t is sampled in the training stage.
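The inverse-optimization step can be sketched as follows. Since the fitted MRGP model is not available here, a made-up analytic surrogate stands in for it (it only mimics the reported trends and is NOT the paper's model), the cost is a relative squared error between observed and predicted [p, σ] at fixed thickness, and scipy's bounded quasi-Newton minimizer plays the role of MATLAB's fmincon.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the fitted MRGP model: maps (g, s_l, t) -> (p, sigma).
# Chosen so that less scattering power and more spread go together, while
# keeping (g, s_l) identifiable from (p, sigma) at fixed t.
def surrogate(g, s_l, t):
    p = g * np.exp(-t / (5000.0 * s_l))       # transmitted-power fraction
    sigma = t * (1.0 - g) / (50.0 * s_l)      # scaled Gaussian width
    return p, sigma

def estimate_scattering(p_obs, sigma_obs, t_meas):
    """Estimate (g, s_l) at fixed thickness by minimizing a relative
    squared-error cost, mirroring the inverse-optimization step (cf. Eq. 3)."""
    def cost(x):
        p, sigma = surrogate(x[0], x[1], t_meas)
        return ((p - p_obs) / p_obs) ** 2 + ((sigma - sigma_obs) / sigma_obs) ** 2
    res = minimize(cost, x0=[0.8, 0.07], method="L-BFGS-B",
                   bounds=[(0.5, 0.99), (0.01, 0.2)])
    return res.x

# Recover made-up "true" parameters from their own simulated pattern.
p_true, sigma_true = surrogate(0.9, 0.05, 400.0)
g_hat, sl_hat = estimate_scattering(p_true, sigma_true, 400.0)
```

As in the paper, the search runs over only two unknowns, so a gradient-based minimizer converges in a fraction of a second.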
For further investigation, we normalize the errors and provide the summary statistics in Table 1. As quantified by the scaled RMSE, the prediction errors are, on average, relatively small, especially for g. The maximum scaled errors are 9.1% (corresponding to simulation ID 56 in Fig. 6) and 18.1% (simulation ID 22) for g and s_l, respectively. Figure 7 demonstrates the contour plots of the cost function for these two simulations. As can be observed, in each case there are regions in the [g, s_l] search space where the cost function F is approximately constant, and the true optimum and the estimated solution (indicated, respectively, with white and red dots in Fig. 7) both fall within such a region. Finally, we note that the inverse optimization cost is negligible (less than 10 seconds) in our case because we have employed a gradient-based optimization technique which converges fast because (i) it uses the predictions from the MRGP model for both the response and its gradient (which are obtained almost instantaneously), and (ii) we have reduced the dimensionality of the problem from 100 × 100 (the number of pixels in each image) to two (i.e., [p, σ]).
Discussions
In our computational approach, the accuracy in predicting the scattering parameters of a turbid medium mainly depends on (i) the errors in Zemax simulations, (ii) the predictive power of p and σ in characterizing the scattering patterns, and (iii) the effectiveness of the supervised learner and the optimization procedure. The inherent numerical errors in Zemax inevitably introduce some error into the training dataset. In addition, the number of launched rays in our Monte Carlo simulations, though utilizing the maximum capacity of Zemax, might be insufficient and hence introduce some inaccuracies. This latter source of error particularly affects samples that scatter the incoming LED beam more (e.g., thick samples with small g and s_l) because once the number of segments per launched ray exceeds the software's limit, the ray is discarded.
To reduce the problem dimensionality and enable the supervised learning process, the images of the scattering patterns (see, e.g., Fig. 2a) were characterized with two negatively correlated parameters, namely, p and σ. Since the images are not entirely symmetric and may not completely resemble a Gaussian pattern, employing only σ to capture their patterns' spread will introduce some error. We have addressed this source of error, to some extent, by filtering the images with a Gaussian kernel (see, e.g., Fig. 2b) and enriching the radial distribution of the scattering patterns by a GP regressor (see Fig. 2c). Our choice of regressor, in particular, enabled automatic filtering of small to large amounts of noise through the so-called nugget parameter. Additionally, we leveraged the negative correlation between p and σ in the definition of σ to increase its sensitivity to the spreads. The supervised learning and inverse optimization procedures will, of course, benefit from reducing the simulation errors and finding parameters with more predictive power than σ.
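The image-to-[p, σ] reduction described above can be sketched on a synthetic Gaussian pattern on a 100 × 100 grid (the paper's pixel count). The incident power, transmitted fraction, and width below are made-up values, and σ is taken here as the per-axis width recovered from the radial second moment; the paper's actual pipeline additionally applies Gaussian filtering and a GP fit to the radial distribution.

```python
import numpy as np

# Illustrative sketch: reduce a scattering image to (p, sigma).
n, sigma_true, power_in = 100, 12.0, 1.0
y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0          # pixel coordinates, centered
img = np.exp(-(x**2 + y**2) / (2 * sigma_true**2)) # synthetic scattering pattern
img *= 0.4 * power_in / img.sum()                  # pretend 40% power transmitted

p = img.sum() / power_in                   # transmitted-power fraction
r = np.hypot(x, y).ravel()                 # radial distance of each pixel
w = img.ravel() / img.sum()                # intensity-weighted radial PDF
sigma_est = np.sqrt(np.sum(w * r**2) / 2)  # E[r^2] = 2*sigma^2 for a 2-D Gaussian
```

The factor of 2 comes from the radial second moment of an isotropic 2-D Gaussian, so the estimate returns the per-axis width the text calls σ.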
As for the supervised learner, we illustrated that a multi-response Gaussian process can provide sufficient accuracy with a relatively small training dataset (see Fig. 3a). Learning both responses (i.e., p and σ) simultaneously, in fact, helped to better address the noise due to the negative correlation between the responses. As demonstrated in Fig. 3a, the MRGP model with 300 training samples can achieve, on average, errors smaller than 5%. Increasing the size of the training dataset would decrease the error but, due to the simulation errors, an RMSE of zero cannot be achieved. Additionally, sensitivity analyses were conducted by calculating the Sobol indices of the inputs (i.e., t, g, and s_l) using the MRGP model. As illustrated in Fig. 5, all the inputs are effective and affect both outputs, with p being noticeably sensitive to g and embodying more interactions between the inputs.
We cast the problem of determining the scattering parameters as an inverse optimization one where g and s_l of a tissue sample were estimated given its thickness t and the corresponding scattering pattern (i.e., p and σ). In optimization parlance, the objective or cost function (defined in Eq. 3) achieves the target scattering pattern by searching for the two unknown inputs while constraining the sample thickness. As illustrated in Fig. 6 for 100 test cases, our optimization procedure provides an unbiased estimate for the scattering parameters with an error of roughly 5%. The inaccurate estimations in our optimization studies arise because g and s_l have similar effects on p and σ. This is demonstrated in Fig. 7, where the local optima of the objective function form a locus, and hence overestimating g or s_l results in underestimating the other and vice versa. To quantify this effect, we calculated Spearman's rank-order correlation between the errors g − ĝ and s_l − ŝ_l for the 100 data points reported in Fig. 6 and found it to be −0.90. Such a strong negative correlation (Spearman's rank-order correlation is, in the absence of repeated data values, between −1 and 1) indicates that when g is underestimated, s_l is overestimated and vice versa.

Table 1. Summary of prediction errors: The scaled RMSE (see Eq. 2) and scaled maximum error are calculated for the 100 data points in Fig. 6.

Figure 7. The true and estimated optima are indicated with, respectively, white and red dots in each case.
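The anti-correlated estimation errors quantified above can be illustrated with Spearman's rank-order correlation on synthetic errors; the values below are made up, and only the computation mirrors the analysis in the text.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins for the 100 estimation errors: when g is
# underestimated, s_l tends to be overestimated (strongly opposing errors).
rng = np.random.default_rng(1)
err_g = rng.normal(0.0, 1.0, 100)                # errors g - g_hat
err_sl = -err_g + rng.normal(0.0, 0.1, 100)      # errors s_l - sl_hat

# Rank-based correlation: robust to any monotone rescaling of the errors
rho, pval = spearmanr(err_g, err_sl)
```

Because Spearman's coefficient works on ranks, it captures the monotone trade-off between the two errors without assuming a linear relation.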
When coupling our method with experimental data, there will be some measurement errors primarily due to the noise of the camera sensor, insufficient rejection of the un-scattered waves, and inaccuracy in determining the sample thickness. To address the dark current noise of the camera, the camera integration time should be increased. It is also favorable to increase the power of the LED to reduce the influence of parasitic rays from the environment, but caution must be practiced to avoid saturating the camera or damaging the sample. To ensure proper rejection of the un-scattered rays, an image corresponding to the absolute pixel-by-pixel difference of the two images with and without the polarization filter should be obtained. Then, the maximum intensity on the difference image should be compared with that of the image obtained with the polarization filter. Lastly, to minimize the errors due to sample thickness, samples with uniform and carefully measured thickness must be prepared. Inaccuracies in measuring the sample thickness, or using considerably non-uniform samples, will adversely affect the prediction results. To quantify the sensitivity of the predictions to inaccuracies associated with thickness, we repeated the inverse optimization process in Sec. 2.2 considering potential measurement errors of 10%. In particular, we redid the inverse optimization for the test dataset while employing thickness values with a 10% difference from the true values (i.e., instead of t, 0.9t or 1.1t was used). The estimated values (i.e., ĝ and ŝ_l) were then compared to the true ones. As summarized in Table 2, the scaled RMSE's have increased, especially in the case of s_l. To see whether the sample thickness has a correlation with the errors, in Fig. 8 the errors are plotted versus t.
Although the errors are quite large in some cases (which is expected because the true t value is not used in the inverse optimization), the overall results are unbiased (i.e., the average errors are close to zero). These results indicate that, as long as the sample thickness is measured sufficiently accurately, the model can provide an unbiased estimate for g and s_l.
Table 2. Summary of prediction errors while enforcing 10% error in thickness: The scaled RMSE's (see Eq. 2) are calculated for the 100 test points while enforcing a 10% difference between the sample thickness used in the simulations and the thickness used in the inverse optimization.

Lastly, we compare the accuracy of our approach to other methods. The reported errors in Fig. 3, Table 1, and Fig. 6 do not consider the errors that will be introduced upon experimental data collection. As explained below, our error estimates are comparable to those of the FDPM, SFDI, and IAD methods from a computational standpoint. It is noted that in all these methods (including ours) the dominating error will be associated with experimental data (once it is used in conjunction with simulations).
In FDPM, the model bias 77 originates from the assumptions made for solving the diffusion equation (e.g., using a semi-infinite medium as opposed to a finite-size sample) 82 . Besides this, there are two other error sources in FDPM: (i) the preliminary error due to approximating light transport in tissues with the diffusion equation, estimated to be 5~10% 83 , and (ii) the error due to the quantum shot noise limit of the instrument, which depends on the configuration and components of the system. For a reasonable system comprising two detectors and one source at a modulation frequency of 500 MHz, the quantum shot noise limit results in about 2% error in estimating the scattering coefficient 82 . The scattering length, s_l, is the inverse of the scattering coefficient and will have roughly the same error. Similarly, the percentage error of s_l roughly equals that of the reduced scattering coefficient defined as (1 − g)/s_l. Considering only these two sources of noise, we can assume a total noise of around 10% for the FDPM technique. As for the SFDI technique, the diffusion approximation results in an overall reported error of around 3% for the reduced scattering coefficient 9 . In the IAD method, the prediction errors are sensitive to the input data. For instance, it is reported that with a 1% perturbation in the inputted transmission and reflection amounts, the relative errors in estimating the scattering coefficient and anisotropy factor increase 10 and 4 times, respectively 13 .
Conclusion
We have introduced a non-invasive method for computational characterization of the scattering parameters (i.e., the anisotropy factor and the scattering length) of a medium. The essence of our approach lies in finding a supervised learner that can predict the scattering pattern of a turbid medium given its thickness and scattering parameters. Once this supervised learner is found, we solve an inverse optimization problem to estimate the scattering parameters of any turbid sample given its thickness and the image of the scattered rays' pattern. Additionally, our approach is computationally inexpensive because the majority of the cost lies in building the training dataset which is done once.
To the best of our knowledge, this is one of the simplest and most inexpensive methods of tissue characterization because, in practice, only a few basic and low-cost instruments such as an LED, an aperture, a polarizer, and a camera are required. Additionally, our analyses and results are independent of the wavelength of the LED, and therefore the scattering parameters of many tissues can be estimated over a wide range of visible and infrared wavelengths. We note that our method assumes the absorption is much weaker than the scattering, so that its effect on the output images is negligible. This assumption holds for some tissues including white brain matter, grey brain matter, cerebellum, and brainstem tissues, where the scattering coefficient is more than 100 times larger than the absorption coefficient in most of the visible and in the reported near-infrared range 27 (see Table 3 in ref. 84 for more details). Measuring weak absorption of tissue with our method would require more intensive data analysis and processing. However, we believe that this limitation does not render our method impractical, as there are established methods 22,23 which likewise estimate only g and s_l.
We plan to experimentally validate our approach and quantify the effect of measurement errors (due to, e.g., the noise of the camera sensor and insufficient rejection of the un-scattered waves) on estimating the scattering parameters. We believe that this method has the potential to facilitate the fabrication of tissue phantoms used for diagnostic and therapeutic purposes over a wide range of optical spectrum.
Methods
Gaussian Process Modeling. GP modeling has become the de facto supervised learning technique for fitting a response surface to training datasets from either costly physical experiments or expensive computer simulations due to its simplicity, flexibility, and accuracy 74,[85][86][87][88] . The fundamental idea of GP modeling is to model the dependent variable, y, as a realization of a random process over the inputs x ∈ ℝ^n, where n is the number of inputs. The underlying regression model can be formally stated as y(x) = f^T(x)β + Z(x), where f(x) is a vector of basis functions, β is the vector of their coefficients, and Z(x) is a zero-mean Gaussian process with covariance cov(Z(x), Z(w)) = s R(x, w). GP modeling essentially consists of estimating the β coefficients, the process variance s, and the parameters of the correlation function R(⋅, ⋅). Often, the maximum likelihood estimation (MLE) method is used for this purpose 89,90 .
We implemented an in-house GP modeling code in MATLAB® following the procedure outlined in ref. 29 . The so-called Gaussian correlation function was employed with the addition of a nugget parameter, δ, to address possible noise: R(x, w) = exp(−Σ_{i=1}^{n} θ_i (x_i − w_i)²) + δ·1{x = w}, where θ are the roughness parameters estimated via MLE. For noiseless datasets, δ is generally set to either a very small number (e.g., 10^−8), to avoid numerical issues, or zero. In our work, we have used GPs for two purposes: (i) to smooth out the radial distribution of the scattered rays and enrich the associated PDF for a better estimation of its standard deviation (see Fig. 2), and (ii) to fit a response surface for mapping x = [t, g, s_l] to y = [p, σ]. We emphasize that the adaptive procedure of ref. 29 allows δ in Eq. 6 to be adjusted to address negligible to large amounts of noise.
Scientific Reports | 7: 15259 | DOI:10.1038/s41598-017-15601-4
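A minimal GP interpolator with the Gaussian correlation function and a nugget on the diagonal can be sketched as follows. This is not the in-house MATLAB implementation of ref. 29; the test function, roughness value, and constant-mean simplification are illustrative assumptions.

```python
import numpy as np

def gp_fit_predict(X, y, Xs, theta=25.0, nugget=1e-8):
    """Minimal GP predictor with a Gaussian correlation function plus a
    nugget delta on the diagonal (cf. Eq. 6); constant mean, unit variance."""
    def corr(A, B):
        # Gaussian correlation exp(-theta * ||a - b||^2) between all pairs
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    R = corr(X, X) + nugget * np.eye(len(X))   # nugget stabilizes / absorbs noise
    mu = y.mean()
    alpha = np.linalg.solve(R, y - mu)
    return mu + corr(Xs, X) @ alpha

# With a tiny nugget the GP (near-)interpolates its training data
X = np.linspace(0, 1, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
pred = gp_fit_predict(X, y, X)
```

Raising the nugget trades interpolation for smoothing, which is how the radial distributions in Fig. 2 can be denoised with the same machinery.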
Multiple studies have extended GP modeling to multi-output datasets. Of particular interest has been the work of Conti et al. 91 , where the essential idea is to concatenate the vector of responses (i.e., y = [y_1, …, y_u]^T for u outputs) and model the covariance function as c(x, w) = S ⊗ R(x, w), where S is the u × u covariance matrix of the responses and ⊗ is the Kronecker product. Finally, it is noted that since we did not know a priori how p and σ change as functions of x = [t, g, s_l], a constant basis function was used (i.e., f(x) = 1) in all our simulations.
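The separable multi-response covariance can be sketched directly: build R from the inputs, choose a response covariance S (here with a negative off-diagonal, echoing the negative correlation between p and σ), and form the joint covariance with a Kronecker product. All numbers below are illustrative.

```python
import numpy as np

# Sketch of the separable multi-response covariance c(x, w) = S ⊗ R(x, w)
# for u = 2 outputs at m = 5 input sites; S and the roughness are made up.
m, u = 5, 2
X = np.linspace(0, 1, m)[:, None]
R = np.exp(-4.0 * (X - X.T) ** 2)      # input correlation (Gaussian)
S = np.array([[1.0, -0.6],             # 2x2 response covariance with
              [-0.6, 1.0]])            # negatively correlated outputs
C = np.kron(S, R)                      # (u*m) x (u*m) joint covariance

# Draw one joint realization of both responses over the m sites
L = np.linalg.cholesky(C + 1e-10 * np.eye(u * m))   # jitter for stability
z = L @ np.random.default_rng(0).standard_normal(u * m)
y1, y2 = z[:m], z[m:]
```

Because the Kronecker structure couples the outputs through S, learning p and σ jointly lets information in one response help denoise the other.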
The computational cost of fitting each of the MRGP models used in the convergence study is summarized in Table 3. As it can be observed, the costs are all small and increase as the size of the training dataset increases.
Sensitivity Analysis with Sobol Indices. Sobol indices (SI's) are variance-based measures for quantifying
the global sensitivity of a model output to its inputs. For a model of the form Y = f(X_1, …, X_n), the main SI for the i-th input X_i is calculated as S_i = V_{X_i}(E_{X_∼i}(Y | X_i)) / V(Y), where V(Y) is the total variance of the output, X_∼i denotes all the inputs except X_i, V_{X_i} is the variance with respect to X_i, and E_{X_∼i}(Y | X_i) is the expectation of Y over all possible values of X_∼i while keeping X_i fixed. Using the law of total variance, one can show that the S_i's are normalized quantities and vary between zero and one. It is noted that, as above, random variables and their realizations are denoted with, respectively, upper and lower cases.
The total SI for the i-th input is calculated as S_i^T = E_{X_∼i}(V_{X_i}(Y | X_∼i)) / V(Y).

Table 3. Computational cost of fitting MRGP models in our convergence study: As the number of training samples increases, the fitting cost increases as well.
"Physics"
] |
Residual γH2AX foci as an indication of lethal DNA lesions
Background: Evidence suggests that tumor cells exposed to some DNA damaging agents are more likely to die if they retain microscopically visible γH2AX foci that are known to mark sites of double-strand breaks. This appears to be true even after exposure to the alkylating agent MNNG, which does not cause direct double-strand breaks but does produce γH2AX foci when damaged DNA undergoes replication.

Methods: To examine this predictive ability further, SiHa human cervical carcinoma cells were exposed to 8 DNA damaging drugs (camptothecin, cisplatin, doxorubicin, etoposide, hydrogen peroxide, MNNG, temozolomide, and tirapazamine) and the fraction of cells that retained γH2AX foci 24 hours after a 30 or 60 min treatment was compared with the fraction of cells that lost clonogenicity. To determine if cells with residual repair foci are the cells that die, SiHa cervical cancer cells were stably transfected with a RAD51-GFP construct and live cell analysis was used to follow the fate of irradiated cells with RAD51-GFP foci.

Results: For all drugs, regardless of their mechanism of interaction with DNA, close to a 1:1 correlation was observed between the clonogenic surviving fraction and the fraction of cells that retained γH2AX foci 24 hours after treatment. Initial studies established that the fraction of cells that retained RAD51 foci after irradiation was similar to the fraction of cells that retained γH2AX foci and subsequently lost clonogenicity. Tracking individual irradiated live cells confirmed that SiHa cells with RAD51-GFP foci 24 hours after irradiation were more likely to die.

Conclusion: Retention of DNA damage-induced γH2AX foci appears to be indicative of lethal DNA damage, so it may be possible to predict tumor cell killing by a wide variety of DNA damaging agents simply by scoring the fraction of cells that retain γH2AX foci.
Background
Several DNA repair pathways have evolved to maintain cell viability after exposure of mammalian cells to DNA damaging agents. Sufficiently high doses of drugs or radiation cause cell killing, and it seems reasonable to expect that those cells that can repair DNA damage will survive while those unable to repair their damage will die. Sensitive detection of residual DNA damage at the level of the individual cell could allow us to identify treatment resistant subpopulations within tumors. This possibility can now be examined by making use of the fact that complex DNA lesions such as DNA doublestrand breaks (DSBs) are marked by microscopically visible γH2AX foci [1].
DSBs rapidly activate kinases that phosphorylate histone H2AX. Resulting γH2AX foci can be used to identify the number and location of DSBs and to follow their fate during recovery [2,3]. The fraction of tumor cells that retain γH2AX foci 24 hours after irradiation has been correlated with the fraction of cells that fail to divide and form colonies [4,5]. Similar results have been reported for RAD51 recombinase, a key player in DSB repair by homologous recombination [6]. RAD51 molecules also accumulate slowly as microscopically visible foci that are often co-expressed in cells with γH2AX foci [7,8]. Recently, RAD51 foci have been found in association with persistent DSBs [9]. What is not known for certain is whether the cells that retain γH2AX or RAD51 foci 24 hours after irradiation are actually the cells that die.
γH2AX foci begin to form immediately after irradiation, reaching a maximum size about 30 or 60 min later and disappearing over the next several hours. However, residual foci may remain in some cells for days after exposure and may mark unrepaired or misrepaired sites [10,11]. Importantly, residual foci appear to be replicated and retained by daughter cells [4,5]. Since rapid loss of γH2AX is contingent upon functional DNA repair, it is not surprising that retention of γH2AX foci has been associated with loss of clonogenic potential. Several studies have reported that repair-deficient cell lines retain more foci and more cells with foci when analyzed 24 hours after irradiation [12,13]. The percentage of cells that retained γH2AX foci 24 hours after irradiation was correlated with the percentage of cells that lost clonogenicity, thus making it possible to use the fraction of cells with residual foci as a way to estimate sensitivity to killing by ionizing radiation [4,5].
DSBs may be produced either directly or indirectly [2]. Direct DSBs occur as a result of exposure to ionizing radiation as well as selected drugs including bleomycin and the topoisomerase II inhibitor, etoposide. Indirectly produced DSBs can arise when a single-strand break, crosslinked DNA, or damaged DNA base meets a replication fork [14]. Phosphorylation of H2AX may also occur indirectly during repair of base damage [15]. Arguably also an indirect mechanism, extensive H2AX phosphorylation occurs as a result of DNA fragmentation during the process of apoptosis [16]. Therefore directly or indirectly, the majority of DNA damaging agents are likely to cause H2AX phosphorylation, and cells that subsequently retain γH2AX foci may be more likely to die no matter how the DNA damage was initially produced. To explore this possibility, the fraction of cells that retained γH2AX foci was compared to the fraction of clonogenic surviving cells measured after a short exposure to 8 drugs known to damage DNA and cause H2AX phosphorylation [2].
A correlation between clonogenicity and the fraction of cells lacking foci does not constitute proof that cells that retain γH2AX foci are the cells that will die. Real-time imaging of γH2AX foci is complicated by the necessity of identifying the phosphorylated form of H2AX. However, RAD51 molecules also aggregate as clusters at sites of DNA damage in irradiated cells and are retained along with γH2AX [10]. When labeled with green fluorescent protein (GFP), RAD51-GFP can be used for live cell analysis to determine the fate of an individual cell that retains RAD51 foci [17,18]. The ability to follow live cells allowed a direct test of the hypothesis that cells that retain RAD51 foci 24 hours after irradiation are the cells that will eventually die.
Cell lines and drug treatment
Chinese hamster V79 and CHO cells were maintained by twice weekly sub-cultivation in minimal essential medium (MEM) containing 10% fetal bovine serum (FBS). SiHa human cervical carcinoma cells and HT144 human melanoma cells were obtained from American Type Culture Collection. SKOV3 human ovarian carcinoma cells were obtained from the DCTD tumor repository in Frederick MD. M059J and M059K human glioma cell lines were obtained from Dr. J. Allalunis-Turner, Cross Cancer Center. All tumor cell lines were sub-cultured twice weekly in MEM containing 10% FBS.
To obtain cells that expressed RAD51-GFP, SiHa cells were transfected with a plasmid kindly supplied by Dr. Roland Kanaar [18]. Transfection was accomplished with Lipofectamine Plus according to the protocol supplied by Invitrogen; stably transfected cells were selected by growth in 200 μg/ml G418 (Gibco), and a clone was chosen for further studies.
For drug treatment, 5 × 10^5 cells were plated per 60 mm dish and exposed as exponentially growing monolayers to the selected drugs, usually for 30 min or for 60 min (cisplatin, temozolomide), in medium containing 5% FBS. Tirapazamine treatment was conducted using cells in suspension culture incubated for 30 min in drug-containing medium pre-equilibrated for one hour with 95% oxygen and 5% CO2. Cisplatin was obtained from Mayne Pharma and diluted from a stock solution of 1 mg/ml. Temozolomide was prepared in DMSO using a 250 mg capsule from Schering Canada. Etoposide was purchased from Novopharm and diluted from a stock concentration of 20 mg/ml. Doxorubicin, MNNG and hydrogen peroxide (H2O2) were obtained from Sigma and diluted in medium. Camptothecin was purchased from GBiosciences and prepared from a stock solution of 2 mM in DMSO. Tirapazamine was supplied by Dr. J. Martin Brown, and diluted from a stock solution of 2.5 mM in phosphate buffered saline. For experiments using X-rays, cells were exposed using a 300 kV unit at a dose rate of 5.2 Gy/min. After drug incubation, drug was removed, dishes were rinsed several times, and cells were incubated for 24 hours in fresh complete medium. Trypsin treatment (0.1% for 5 min) was used to produce a single cell suspension. Samples of single cells were plated in duplicate to measure colony formation and resulting colonies were stained and counted two weeks later. The survival of the treated cells was normalized to the plating efficiency of the non-treated cells. Experiments were performed 2-4 times using a range of drug doses. The remaining cells were fixed in 70% ethanol for analysis by flow or image cytometry to measure γH2AX and RAD51 antibody binding.
Flow cytometry for γH2AX
Antibody staining was performed as previously described using mouse monoclonal anti-phosphoserine-139 H2AX antibody (Abcam #18311; 1:4000 dilution) [19]. After secondary antibody labelling with Alexa-488 conjugated goat anti-mouse IgG, cells were rinsed and resuspended in 1 μg/mL 4',6-diamidino-2-phenylindole dihydrochloride hydrate (DAPI; Sigma), a UV-excitable DNA stain. Samples were analyzed using a dual-laser Coulter Elite flow cytometer using UV and 488 nm laser excitation. The γH2AX signal was divided by DNA content per cell to account for differences in cell cycle distribution, and results were normalized relative to untreated controls within each experiment. Normalized γH2AX intensity is reported for all of the cells within the population.
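The per-cell normalization described above (γH2AX signal divided by DNA content to remove cell-cycle effects, then scaled to the untreated control) can be sketched on synthetic intensities; all values and the resulting fold change below are made up for illustration.

```python
import numpy as np

# Synthetic flow-cytometry intensities: DNA content varies ~2-fold across the
# cycle (G1 to G2), and raw gamma-H2AX signal scales with DNA content.
rng = np.random.default_rng(2)
dna = rng.uniform(1.0, 2.0, 500)                       # relative DNA content
h2ax_ctrl = 50.0 * dna * rng.lognormal(0, 0.1, 500)    # untreated control
h2ax_treat = 180.0 * dna * rng.lognormal(0, 0.1, 500)  # drug-treated sample

# Divide by DNA content per cell, then normalize to the untreated control
ratio_ctrl = h2ax_ctrl / dna
ratio_treat = h2ax_treat / dna
fold_change = ratio_treat.mean() / ratio_ctrl.mean()
```

Dividing by DNA content first prevents a shift in cell-cycle distribution (e.g., S-phase accumulation) from masquerading as a change in γH2AX expression.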
Live cell analysis of RAD51-GFP
On average, one SiHa-RAD51-GFP cell was seeded into each well of an 8 well chamber slide with coverslip bottoms (Nalge Nunc International) containing 100 μl fresh complete medium and 100 μl conditioned medium prepared by filtering the complete medium recovered after two days of growth with high density cell cultures. Cells were allowed 4 hours to attach before exposing the chamber slide to 0 or 3 Gy and returning the chambers to the incubator for 24 hours. After 24 hours, each well was examined using a Zeiss inverted microscope with a 63× objective. To ensure high plating efficiency, analysis was restricted to cell doublets since this indicated that cells had attached and divided in the 24 hours period after irradiation. Images of cell doublets were obtained under phase and 488 nm excitation, and the location of the doublet in the well was noted. Almost invariably, both cells of a doublet exhibited RAD51-GFP foci or foci were absent from both cells at 24 hours. After scoring for the presence of foci, chamber slides were returned to the incubator for 2 weeks to allow time for colonies with greater than 50 cells to form. Approximately 40 doublets with foci and 40 doublets without foci were scored for clonogenicity.
Immunohistochemistry for RAD51 and γH2AX
Antibody stained cells prepared for flow cytometry as described above were cytospun onto microscope slides. Alternatively, cells grown on coverslips were fixed for 20 min in 2% freshly prepared paraformaldehyde before incubating with mouse monoclonal antibodies against γH2AX (Upstate, 1:500 dilution) and/or rabbit polyclonal antibodies against RAD51 (Calbiochem, 1:500 dilution or Oncogen Sciences, 1:100 dilution). Cells were viewed using a Zeiss epifluorescence microscope using a 100× Neofluor objective and images were analyzed for foci/nucleus. Analysis of foci/nucleus was always concluded before analysis of clonogenicity so that objectivity of scoring using image analysis was maintained. Experiments were repeated 3 times and independent results for each sample (clonogenic fraction and # cells lacking foci) were plotted.
Comet assays
Alkaline and neutral versions of the comet assay were used to measure MNNG-induced DNA single-strand breaks and DSBs respectively. Exponentially growing V79 hamster cells were exposed to MNNG for 30 min and then embedded in low gelling temperature agarose on a microscope slide. For the alkaline comet assay, slides were placed in a high-salt lysis solution for 1 h at pH 12.3 as previously described [20]. The neutral comet assay was performed using a 4 h lysis at 50°C, pH 8.3, as previously described [21]. For each drug dose and time, 150 comet images were analysed for DNA content, tail moment and percentage of DNA in the comet tail. Mean values are shown.
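One common way to compute an Olive-style tail moment from a comet intensity profile is (% DNA in tail) × (distance between head and tail centroids). The profile and the head/tail split below are synthetic, and comet image-analysis software may define the tail moment somewhat differently.

```python
import numpy as np

# Synthetic 1-D comet intensity profile along the electrophoresis direction:
# a bright head peak followed by a decaying tail of migrated DNA.
profile = np.array([0., 5., 40., 80., 40., 10., 8., 6., 4., 2., 1.])
pos = np.arange(len(profile))
head, tail = profile[:5], profile[5:]          # head/tail split chosen by eye

frac_tail = tail.sum() / profile.sum()         # fraction of DNA in the tail
head_c = (pos[:5] * head).sum() / head.sum()   # head centroid position
tail_c = (pos[5:] * tail).sum() / tail.sum()   # tail centroid position
tail_moment = frac_tail * (tail_c - head_c)    # Olive-style tail moment
```

Weighting the migration distance by the migrated DNA fraction is what makes the tail moment sensitive to both the amount and the extent of DNA damage, as described in the text.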
Results
Comparison between γH2AX formation, DNA break induction and cell killing by MNNG

Flow cytometry analysis of H2AX phosphorylation is a practical method to assess the kinetics of phosphorylation of this DNA damage reporter as well as the importance of cell cycle position. Flow cytometry profiles of γH2AX expression versus DNA content indicated that V79 cells respond differently to low and high doses of MNNG when examined 1 h after a 30 min exposure to the drug. Initial accumulation of γH2AX was limited to cells with S phase DNA content after treatment with low drug doses, consistent with replication fork collapse as a cause of H2AX phosphorylation (Fig. 1a-c). However, after exposure to higher drug doses, γH2AX developed in cells in all phases of the cell cycle (Fig. 1d-f).
Since our goal was to determine whether retention of γH2AX might be useful as an indicator of cell viability after exposure to genotoxic drugs, the loss of clonogenic ability was measured following a 30 min exposure to MNNG. Exponential cell killing was observed for V79 cells treated for 30 min with 0 to 5 μg/ml MNNG (Fig. 2a). A dose dependent increase in the population average expression of γH2AX was also detected using flow cytometry, and phosphorylation of H2AX continued to rise for several hours after exposure (Fig. 2b, c). This is consistent with γH2AX formation occurring as damaged DNA undergoes replication. By 24 hours after a 30 min drug treatment, the expression of γH2AX remained high (Fig. 2c) suggesting that many sites marked by γH2AX foci remained unrepaired. Nonetheless, when analyzed using flow cytometry, 7-22% of the cells within these populations expressed control levels of γH2AX (data not shown), consistent with the measured clonogenicity of these populations.
MNNG is an alkylating agent known to produce both DNA single-strand breaks as well as damage to the DNA bases. Production of alkali-labile DNA lesions including single-strand breaks and base damage was measured using the alkaline comet assay, a gel electrophoresis method that measures migration of broken DNA strands from individual nuclei. Tail moment, a measure of the amount and distance of DNA migration, was dose dependent, and there was no indication that the number of alkali-labile sites increased with time after exposure (Fig. 2d). Since γH2AX is formed in response to direct or indirectly-produced double-strand breaks and not single-strand breaks, the presence of double-strand breaks was also measured using a neutral version of the comet assay. Results indicated a small increase in tail moment as a function of time after treatment with much higher doses (Fig. 2e), consistent with the observed increase in γH2AX (Fig. 2b). A comparison of the slopes in Fig. 2d and 2e indicated that approximately 40 times more alkali-labile lesions were produced than DNA double-strand breaks. Rejoining of alkali-labile lesions was negligible over the first few hours; however, half of the breaks were rejoined by 24 hours after treatment (Fig. 2f). The lack of a decrease in average γH2AX measured for all cells within the population after 24 hours could suggest that much of this repair, even after exposure to low doses, may not have been accurate, or that a subset of lesions (e.g., single-strand breaks) that were repaired did not give rise to γH2AX foci. As residual γH2AX is a measure of the loss of foci as well as possible formation of new foci, the lack of a decrease in γH2AX does not necessarily indicate lack of repair of DNA breaks. Nonetheless, no rejoining of DNA double-strand breaks was detected after exposure to 50 μg/ml MNNG (data not shown).
These results confirm that MNNG induces double-strand breaks that can be detected using the neutral comet assay but only after exposure to supra-lethal doses. The neutral comet assay lacks the sensitivity to predict response to MNNG in the low dose region.
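The slope comparison behind the "approximately 40 times" estimate can be sketched numerically. This is our own illustration with invented tail-moment values chosen only to reproduce a 40:1 slope ratio; the paper's raw data are not reproduced here.

```python
# Sketch of the slope comparison: fit linear dose responses for the alkaline
# (alkali-labile lesion) and neutral (double-strand break) comet assays and
# take the ratio of slopes. All data points below are hypothetical.

def ls_slope(xs, ys):
    """Least-squares slope through the data (with free intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

doses = [0, 1, 2, 3, 4, 5]                       # MNNG dose, ug/ml (invented)
alkaline_tm = [0.0, 4.0, 8.0, 12.0, 16.0, 20.0]  # hypothetical tail moments
neutral_tm = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]      # hypothetical tail moments

ratio = ls_slope(doses, alkaline_tm) / ls_slope(doses, neutral_tm)
print(round(ratio, 6))  # 40.0
```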
The relation between clonogenic surviving fraction (Fig. 3a) and fraction of cells with residual γH2AX foci (Fig. 3b) was then compared using SiHa human cervical carcinoma cells exposed for 30 min to MNNG. To improve resolution for detecting the relevant resistant cells within the population, the fraction of nuclei lacking foci was scored microscopically 24 hours after exposure to MNNG. The fraction of cells lacking foci at 24 hours was then compared with the fraction of cells from the same treated population that retained clonogenicity as measured two weeks later. A good correlation was observed between clonogenicity and fraction of cells lacking foci 24 hours after treatment (Fig. 3c), and there was a progressive increase in the number of foci per cell over this dose range (Fig. 3d-f). Representative images of MNNG-treated cells show the heterogeneity in foci number per nucleus (Fig. 3g-i). Therefore, unlike the neutral comet assay, retention of γH2AX foci appears to have the requisite sensitivity to predict cell survival over the first log or two of cell killing.

Figure 1: Development of γH2AX after MNNG treatment. Flow cytometric analysis was used to detect γH2AX formation in V79 cells exposed to MNNG for 30 min and then allowed 1 h to develop γH2AX foci. Single cells were fixed and analyzed for γH2AX antibody binding in relation to DNA content using flow cytometry. Cells that expressed control levels of γH2AX are contained within the boxes and are given as percentages.
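The slope-and-correlation comparison between clonogenic fraction and the fraction of cells lacking residual foci can be sketched in Python. The paired values below are invented for illustration; the check simply asks, as in the text, whether the 95% confidence interval of the regression slope includes one.

```python
# Hedged sketch of the slope check described in the text: regress surviving
# fraction on fraction of cells lacking residual foci, and test whether the
# 95% CI on the slope includes 1 (slope indistinguishable from unity).
# The paired values are hypothetical, not the paper's data.
import math

def slope_ci(xs, ys, t_crit=2.776):  # t for 95% CI with df = n - 2 = 4
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # intercept
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
    return b - t_crit * se, b + t_crit * se

frac_no_foci = [0.05, 0.15, 0.30, 0.50, 0.70, 0.95]  # hypothetical
surviving = [0.06, 0.13, 0.33, 0.47, 0.72, 0.93]     # hypothetical

lo, hi = slope_ci(frac_no_foci, surviving)
print(lo < 1.0 < hi)  # True: the 95% CI on the slope includes one
```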
Ability of residual γH2AX to predict clonogenic survival after exposure to other drugs
To determine whether the fraction of cells with residual γH2AX foci would predict surviving fraction for other drugs, the response of SiHa cells to a broader range of drugs was subsequently examined. Drug concentrations that typically produced no more than 1 log of cell kill were used to avoid rapid cell loss often associated with higher drug doses [22]. In all cases, SiHa cells were exposed for 30 or 60 min to each drug and incubated for 24 hours after drug treatment to allow time for DNA repair and concomitant loss of γH2AX foci. After 24 hours, cells were plated for measurement of clonogenic fraction or fixed for measurement of the fraction of cells lacking residual γH2AX foci. Clonogenic fraction and the relative fraction of cells lacking foci 24 h after exposure showed similar dose-response patterns for each drug (Fig. 4). The slopes comparing clonogenicity with residual foci, together with correlation coefficients, are provided in Table 1. The 95% confidence limits on the slopes included the value of one for 7/8 drugs, indicating that the fraction of cells that retained residual γH2AX foci was directly comparable to the fraction of cells that would die. The lowest surviving fraction produced by temozolomide was only 0.58, which may have been insufficient to accurately estimate the slope. These results indicate that residual γH2AX is useful for predicting clonogenic fraction following exposure to a broad range of genotoxic drugs. This suggests, but does not prove, that cells that retain γH2AX foci 24 hours after drug treatment are likely to die.

Residual RAD51 foci predict cell response to radiation
As it was not possible to confirm directly that cells that retained γH2AX foci 24 hours after treatment were the cells that died, we attempted to accomplish this in another way by using live-cell analysis of SiHa cells expressing RAD51-GFP.
First, it was important to confirm results of a previous study that showed that the fraction of cells that died after irradiation was correlated with the fraction of cells with RAD51 foci 24 hours after irradiation [6]. Exponentially growing SiHa cells were exposed to ionizing radiation and allowed to recover for 24 hours before plating for survival or fixing in 2% paraformaldehyde for analysis of RAD51 and γH2AX foci. The fraction of SiHa cells that expressed γH2AX and RAD51 foci 24 hours after irradiation increased with dose, and the majority of foci-positive cells exhibited both foci (Fig. 5a), although not necessarily co-localized. About 5-10% of the cells that lacked RAD51 foci exhibited γH2AX foci, but <5% of cells showed RAD51 foci in the absence of γH2AX foci. As mitotic cells did not exhibit RAD51 foci, this could account in part for the presence of residual γH2AX foci in the absence of RAD51 foci. Differences in rate of foci development and removal may also account for minor discrepancies. The correlation between the fraction of SiHa cells that lacked RAD51 foci and surviving fraction after irradiation was excellent (Fig. 5b). To determine whether this correlation would hold for other cell types, several cell lines were exposed to 2 Gy and examined for the presence of RAD51 foci 24 hours after irradiation. In agreement with results of Sak et al. [6], the fraction of cells that exhibited RAD51 foci 24 hours after exposure to 2 Gy was higher for the more radiosensitive cell lines (Fig. 5c). RAD51 foci appeared in many micronuclei 24 hours after exposure, either alone or with γH2AX foci, and daughter cell pairs showed similar RAD51 foci patterns as has been reported for γH2AX foci (Fig. 5d, e). Although RAD51 foci may only mark a subset of the double-strand breaks, the number of residual RAD51 foci per cell was not invariably lower than the number of residual γH2AX foci (Fig. 5e).
Therefore, residual RAD51 foci appear to behave much like residual γH2AX foci, at least at 24 hours post-treatment, and should therefore be useful for predicting response to treatment.
Use of RAD51-GFP transfected cells to determine cell fate after irradiation
Having established that the fraction of cells that retain RAD51 foci is similar to the fraction of cells that die after irradiation, SiHa cells were stably transfected with a plasmid containing a RAD51-GFP reporter construct. A clone with moderate RAD51-GFP expression (GFP expression was 3.9 times background) that showed similar growth kinetics and radiation response as the parental cell line was selected (data not shown). Detection of individual GFP-labelled foci was possible using live cells (Fig. 6a), and excellent co-localization was seen between RAD51-GFP and RAD51 antibody staining (Fig. 6b). Although RAD51-GFP filament formation was occasionally observed, the majority of cells with RAD51-GFP foci showed a punctate pattern, and cells with RAD51-GFP foci typically expressed γH2AX foci (Fig. 6c). The number of cells with RAD51-GFP foci reached a maximum 16-24 h after irradiation (Fig. 6d), exhibiting kinetics similar to those reported using immunoblotting [23] but slower than observed using immunofluorescence staining (dotted line in Fig. 6d). This difference is likely to be a resolution issue since sufficient RAD51-GFP molecules must aggregate to become microscopically visible. The fraction of cells with microscopically visible RAD51-GFP foci was similar to the fraction of cells with antibody-labeled RAD51 foci 24 hours after 4 or 8 Gy (Fig. 6d). In SiHa cells exposed to 4 or 8 Gy, the fraction of cells with RAD51-GFP foci was also similar to the fraction of cells with γH2AX foci when detected microscopically (Fig. 6e). In SiHa cells sorted on the basis of RAD51-GFP 24 h after irradiation and then stained and reanalyzed for γH2AX using flow cytometry, there was a good correlation between the expression of these two molecules (Fig. 6f). Therefore RAD51-GFP expression appears to be a useful surrogate for γH2AX expression when analyzed 24 hours after treatment.
To determine whether SiHa cells that exhibit RAD51-GFP foci 24 hours after irradiation are more likely to die, SiHa-RAD51-GFP cells were exposed to 0 or 3 Gy. Twenty-four hours later, daughter-cell doublets were scored for the presence or absence of GFP foci. Preliminary results established that 58% of unirradiated cells formed doublets by 24 hours and a similar percentage (50%) of those exposed to 3 Gy formed doublets. For control cells, 95% of the doublets lacked foci at 24 hours and for the irradiated cells, 60% lacked foci. After allowing 2 weeks for doublets to form colonies, wells were scored. Only 16% of the doublets with RAD51-GFP foci after 24 h formed colonies whether they were exposed to 0 or 3 Gy (Fig. 6g) whereas 75-82% of doublets without foci were able to form colonies. These results support the hypothesis that cells that retain foci 24 hours after irradiation are the cells that lose clonogenic potential.
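The doublet numbers above can be restated as predictive values. The framing as predictive values is ours, and the 78% figure is simply the midpoint of the quoted 75-82% range.

```python
# Back-of-envelope restatement of the doublet experiment: treating
# "has RAD51-GFP foci at 24 h" as a test for loss of clonogenicity,
# the quoted colony-formation rates give its predictive values.
foci_colony_rate = 0.16     # doublets WITH foci that still formed colonies
no_foci_colony_rate = 0.78  # midpoint of the 75-82% quoted for no foci

ppv_death = 1.0 - foci_colony_rate  # foci present -> cell death
npv_survival = no_foci_colony_rate  # foci absent  -> cell survival
print(round(ppv_death, 2), npv_survival)  # 0.84 0.78
```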
Discussion
We have previously reported that the fraction of SiHa cells that exhibit more than the background level of γH2AX 24 hours after treatment can be correlated with the fraction of cells that will ultimately die after exposure to X-rays and/or cisplatin [5,24]. We now show that retention of γH2AX foci can predict clonogenicity after exposure to a variety of DNA damaging drugs, several of which do not produce direct DSBs. DSBs can be formed indirectly by two closely opposing single-strand breaks. This is likely to be the case with hydrogen peroxide since sufficiently high drug doses will produce DSBs [25]. MNNG-induced double-strand breaks were also detected using the neutral comet assay after exposure to high doses (Fig. 2e), and closely opposed damaged sites, perhaps undergoing base excision repair, could be responsible [26]. However, as a high lysis temperature was used for the neutral comet experiments, opposing heat-labile base damage sites may also be involved in the formation of physical breaks detected using this method [27].

Figure 4: Comparison between clonogenic fraction and fraction of cells lacking residual γH2AX foci for SiHa cells exposed to 8 drugs. With the exception of tirapazamine treatment, which was conducted in suspension culture under anoxia, attached cells were exposed at 37°C in complete medium for either 30 min or 60 min (cisplatin, temozolomide). After rinsing, cells were allowed to recover for 24 hours before microscopic analysis of γH2AX foci and plating to measure clonogenic fraction. Combined results from 2-4 experiments are shown.
Several patterns of γH2AX formation and loss have been observed using the drugs listed in Table 1. Foci reach a maximum size within an hour after exposure to X-rays, tirapazamine, doxorubicin or etoposide. However, development of γH2AX is slower when DSB formation requires transit through S phase, for example after treatment with cisplatin or low doses of MNNG. Significant DSB rejoining was not detected during the 24 hours after treatment with MNNG, and γH2AX levels also remained high (Fig. 2c). The inability to repair DSBs caused by MNNG was also suggested by Stojic et al. [28] based on the persistence of γH2AX foci in MNNG-treated cells. Unfortunately, the presence of an unrejoined double-strand break at the site of a residual γH2AX focus cannot be confirmed using a physical method to detect DSBs since these methods lack sufficient sensitivity. A subsequent paper by Stojic et al. indicated that a different mechanism operated to induce γH2AX foci after exposure to high MNNG doses (30 μM) because foci formation became independent of mismatch repair, which was associated with replication [29]. Our flow cytometry results (Fig. 1) support this observation by indicating that cell cycle position is important for γH2AX focus formation after exposure to low but not high (>20 μg/ml) MNNG doses.

Figure 5: The fraction of cells with residual RAD51 foci is correlated with the fraction of cells that die. Panel a: SiHa cells were exposed to X-rays and allowed to recover for 24 hours. Cells were fixed and co-immunostained for γH2AX and RAD51, and cells with foci were scored. Results are the means and SD for 3 experiments. Panel b: The fraction of SiHa cells lacking RAD51 foci is compared with the clonogenic fraction measured 24 hours after exposure to X-rays. Panel c: Several cell lines were exposed to 2 Gy, allowed to recover for 24 hours, and then examined for the fraction of cells that lacked RAD51 foci. Panel d: RAD51 (green) and γH2AX (red) antibody staining of SiHa cells 24 hours after exposure to 2 Gy. Nuclei are stained blue with DAPI. Panel e: 24 hours after exposure to 8 Gy. Note the micronuclei stained with antibodies to RAD51 and/or γH2AX foci and the co-localization of some foci in some cells but not all cells.
Although our results suggest that residual, not initial, γH2AX is the critical factor determining cell fate, initial γH2AX can also be predictive of response if some cells within a population are resistant to the induction of DSBs. Etoposide causes DSBs only in the outer proliferating cells of multicellular spheroids; the non-proliferating inner fraction of cells does not develop γH2AX and therefore survives treatment [30]. In this case, determining the fraction of cells lacking γH2AX immediately after exposure was also predictive of cell survival [31]. In the same way, doxorubicin penetrates poorly through the outer cells of spheroids so that only the outer cells developed significant numbers of γH2AX foci. Again, the fraction of cells lacking foci immediately after exposure was correlated with the fraction of cells that survived [31]. However, by counting the fraction of cells lacking foci 24 hours after treatment, the importance of repair capacity as well as susceptibility to a direct-acting genotoxin can be included in the estimate of survival. Moreover, effects of drugs that produce γH2AX only when cells transit S phase can also be evaluated, provided of course that drug-treated cells are given an opportunity to progress through the cell cycle.

Figure 6 (continued). Panel e: The fraction of cells that exhibit RAD51-GFP foci or γH2AX foci 24 hours after exposure to radiation, measured microscopically. Panel f: SiHa-RAD51-GFP cells expressing high levels of RAD51-GFP 24 hours after irradiation were sorted on the basis of GFP, fixed and stained for γH2AX. The average intensity of the populations was measured using flow cytometry. The mean and standard error for 3 sorted populations are shown. Panel g: Clonogenicity of SiHa-RAD51-GFP cells after 0 or 3 Gy exposure. Cells in 8-well dishes were irradiated and 24 hours later, wells containing one or two doublets were scored for the presence or absence of γH2AX foci (daughter cell pairs show the same foci patterns). Dishes were returned to the incubator for 2 weeks to form colonies. The fraction of doublets with foci that survived treatment or the fraction lacking foci that survived treatment was calculated.

Banáth et al. BMC Cancer 2010, 10:4 http://www.biomedcentral.com/1471-2407/10/4
A RAD51-GFP construct provided a way to directly address the importance of residual DNA repair foci in determining cell fate. Cells deficient in H2AX also show a deficiency in homologous recombination and RAD51 focus formation [32,33], and it is possible that retention of γH2AX may be the signal for retention of repair molecules like RAD51 [10]. Most but not all cells with RAD51-GFP foci 24 hours after irradiation failed to form colonies. Similarly, most but not all cells lacking foci did form colonies (Fig. 6g). Unfortunately, resolution for detecting foci was reduced under the technical constraints imposed by live cell imaging in multiwells, and 24 hours may not have been an optimum time to score microscopically visible RAD51-GFP foci for all cells. Although this result was not unequivocal, it supports the idea that residual DNA repair foci mark cells that are likely to die. Since all of the DNA damaging agents we have examined produced residual γH2AX foci that were predictive of clonogenic survival, it should be possible to use residual foci as a biomarker of response to genotoxic agents.
There are limitations to the application of this approach in vivo. First, tumor heterogeneity is a major consideration, especially since both induction of DNA damage and its repair are influenced by the tumor microenvironment. Obtaining a representative biopsy and/or multiple biopsies will be essential [34]. Second, it is important to consider the endogenous expression of γH2AX since this can be quite variable and will affect the ability to detect residual γH2AX foci [35]. A pretreatment biopsy must also be obtained, and if endogenous γH2AX is excessive, prediction based on residual foci may not be possible. Third, loss of heavily damaged cells by apoptosis or other mechanisms within the first 24 hours after treatment, or sequestration of foci into micronuclei before scoring, will reduce the accuracy of prediction. Although early apoptotic cells exhibit γH2AX foci, necrosis secondary to apoptosis can result in loss of the signal [16,36]. Finally, for drugs that produce foci only when DNA replicates, it will be necessary to ensure that all treated cells have the opportunity to transit S phase. In spite of these limitations, the recent application of γH2AX to predict response to cisplatin combined with radiation in xenograft tumors [24] indicates that this approach has promise for early prediction of tumor response to treatment. Moreover, it should be possible to predict response not only to single drugs but to combinations of DNA damaging agents.
Conclusions
Our results support the hypothesis that tumor cells that retain γH2AX foci 24 hours after exposure to a DNA damaging agent are unlikely to survive treatment. The direct relationship between loss of clonogenic ability and retention of γH2AX foci holds for drugs that damage DNA by different mechanisms. Therefore, it should be possible to identify drug-resistant tumor cells simply by measuring the fraction of cells that lack residual γH2AX foci.
Information loss, made worse by quantum gravity?
Quantum gravity is often expected to solve both the singularity problem and the information-loss problem of black holes. This article presents an example from loop quantum gravity in which the singularity problem is solved in such a way that the information-loss problem is made worse. Quantum effects in this scenario, in contrast to previous non-singular models, do not eliminate the event horizon, and they introduce a new Cauchy horizon where determinism breaks down. Although infinities are avoided, for all practical purposes the core of the black hole plays the role of a naked singularity. Recent developments in loop quantum gravity indicate that this aggravated information-loss problem is likely to be the generic outcome, putting strong conceptual pressure on the theory.
Introduction
There is a widespread expectation that quantum gravity, once it is fully developed and understood, will resolve several important conceptual problems in our current grasp of the universe. Among the most popular of these problems are the singularity problem and the problem of information loss. Several proposals have been made to address these questions within the existing approaches to quantum gravity, but it is difficult to see a general scenario emerge. Given such a variety of possible but incomplete solution attempts, commonly celebrated as successes by the followers of the particular theory employed, it is difficult to use these models in order to discriminate between the approaches. In this situation it may be more fruitful to discuss properties of a given approach that stand in the way of resolving one or more of the big conceptual questions. Here, we provide an example regarding the information-loss problem as seen in loop quantum gravity.
Loop quantum gravity [1,2,3] is a proposal for a canonical quantization of space-time geometry. It remains incomplete because it is not clear that it can give rise to a consistent quantum space-time picture (owing to the so-called anomaly problem of canonical quantum gravity). Nevertheless, the framework is promising because it has several technical advantages compared to other canonical approaches, in particular in that it provides a well-defined and tractable mathematical formulation for quantum states of spatial geometry. The dynamics remains difficult to define and to deal with, but there are indications that a consistent version may be possible, one that does not violate (but perhaps deforms) the important classical symmetry of general covariance. These indications, found in a variety of models, lead to the most detailed scenarios by which one can explore large-curvature regimes in the setting of loop quantum gravity.
The word "loop" in this context refers to the importance attached to closed spatial curves in the construction of Hilbert spaces for geometry according to loop quantum gravity [4]. More precisely, one postulates as basic operators not the usual curvature components on which classical formulations of general relativity are based, but "holonomies" which describe how curvature distorts the notion of parallel transport in space-time. If we pick a vector at one point of a closed loop in curved space and move it along the loop so that each infinitesimal shift keeps it parallel to itself, it will end up rotated compared to the initial vector once we complete the loop. The initial and final vectors differ from each other by a rotation with an angle depending on the curvature in the region enclosed by the loop. Loop quantum gravity extends this construction to space-time and quantizes it: It turns the rotation matrices into operators on the Hilbert space it provides. An important consequence is the fact that (unbounded) curvature components are expressed by bounded matrix elements of rotations. Most of the postulated loop resolutions of the singularity problem [5,6,7,8,9,10,11] rely on this replacement.
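The parallel-transport picture in this paragraph can be made concrete with a self-contained numerical toy (ours, not part of any loop-quantum-gravity formalism): transporting a tangent vector around a circle of latitude on the unit sphere rotates it by an angle equal to the curvature (K = 1) integrated over the enclosed spherical cap.

```python
# Toy holonomy check on the unit sphere. For a latitude circle at colatitude
# theta, the transported vector's angle relative to the local frame obeys
# d(alpha)/d(phi) = -cos(theta); after a full loop the net rotation relative
# to the starting vector equals the enclosed cap area 2*pi*(1 - cos(theta)).
import math

def transport_rotation(theta, steps=100_000):
    """Integrate the parallel-transport equation around a full latitude
    circle and return the net holonomy angle in [0, 2*pi)."""
    alpha = 0.0
    dphi = 2 * math.pi / steps
    for _ in range(steps):
        alpha += -math.cos(theta) * dphi  # rotation relative to local frame
    # Net rotation of the vector after closing the loop:
    return (2 * math.pi + alpha) % (2 * math.pi)

theta = math.pi / 3  # colatitude 60 degrees
cap_area = 2 * math.pi * (1 - math.cos(theta))  # curvature K=1 times area
print(abs(transport_rotation(theta) - cap_area) < 1e-8)  # True
```

The rotation angle depends only on the enclosed region, which is the sense in which bounded holonomies encode (possibly unbounded) curvature.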
Classical gravity, in canonical terms, can be described by a Hamiltonian H that depends on the curvature. If H is to be turned into an operator for loop quantum gravity, one must replace the curvature components by matrix elements of holonomies along suitable loops, because only the latter have operator analogs in this framework. One therefore has to modify the classical Hamiltonian by a new form of quantum corrections. The classical limit can be preserved because, for small curvature, the rotations expressed by holonomies differ from the identity by a term linear in standard curvature components [12,13]. At low curvature, the classical Hamiltonian can therefore be recovered. At high curvature, however, strong quantum-geometry effects result which, by virtue of using bounded holonomies instead of unbounded curvature, can be beneficial for resolutions of the singularity problem.
Given the boundedness, it is in fact easy to produce singularity-free models. But one of the outstanding problems of this framework is to show that the strong modification of the classical Hamiltonian can be consistent with space-time covariance. This question is not just one of broken classical symmetries (which might be interesting quantum effects). Covariance is implemented by a set of gauge transformations which eliminate unphysical degrees of freedom given by the possibility of choosing arbitrary coordinates on space-time. When these transformations are broken by quantum effects, the resulting theory is meaningless because its predictions would depend on which coordinates one used to compute them. Showing that there are no broken gauge transformations (or gauge anomalies) is therefore a crucial task regarding the consistency of the theory. The problem remains unresolved in general, but several models exist in which one can see how it is possible to achieve anomaly-freedom, constructed using operator methods [14,15,16,17] or with effective methods [18,19,20,21,22].
A model of deformed canonical symmetries
As a simple, yet representative, example, we consider a model with one field-theoretic degree of freedom φ(x) and momentum p(x). There is no room for gauge degrees of freedom in this model, and therefore we use it only to consider the form of symmetries of gravity, not the way in which spurious degrees of freedom are removed.
Algebra of transformations
For the example, we postulate a class of Hamiltonians H[N] (1), with a function f to be specified and with the prime denoting a derivative by the one spatial coordinate x. As in general relativity, the Hamiltonian depends on a free function N(x) because there is no absolute time. The modifications motivated by loop quantum gravity [18,19,20,21,22] follow from the structure of derivatives in (1) in combination with a function f(p) which modifies the classical momentum dependence. The Hamiltonian, as a generator of local time translations, is accompanied by a second generator of local spatial translations, the form of which is more strictly determined: it is given by D[w] = ∫dx w φ p′ with another free function w(x). It generates canonical transformations (2) as they would result from an infinitesimal spatial shift by −w(x). (The transformation of φ is slightly different owing to a formal density weight.) Of special importance is the algebra of symmetries, which can be computed by Poisson brackets (as a classical version of commutators). We obtain

{H[N₁], H[N₂]} = D[(1/2)(d²f/dp²)(N₁N₂′ − N₂N₁′)],  (3)

that is, two local time translations have a commutator given by a spatial shift. (The numerical coefficients chosen in (1) ensure that the algebra of symmetry generators H and D is closed.) Although our model is simplified, the result (3) matches well with calculations in models of loop quantum gravity, constructed for spherical symmetry [19,22] and for cosmological perturbations [20,21]. The same type of algebra has also been obtained for H-operators in 2+1-dimensional models [14]. Since our choice (1) extracts the main dynamical features of loop models, it serves to underline the genericness of deformed symmetry algebras when f(p) is no longer quadratic.
Geometry
For the classical case in which f(p) = p² is a quadratic function of p, half the second derivative in (3) equals one, and one recovers the classical relation ∆x = v∆t (4) between the spatial shift in the commutator and the boost parameters, as expected. Holonomy effects of loop quantum gravity can be modeled by using a bounded function f(p) instead of a quadratic one. (A popular choice in the field is f(p) = p₀² sin²(p/p₀) with some constant p₀, such as Planck-sized curvature.) The number of classical symmetries remains intact because the relation (3) is still a closed commutator. But the structure of space-time changes: we can no longer think in terms of local Minkowski geometry because the spatial shift in (3) with (1/2)d²f/dp² ≠ 1 violates the relation ∆x = v∆t found classically in (4). The deviation from classical space-time is especially dramatic at high curvature, near any maximum of the holonomy function f(p): around a maximum, the second derivative is negative, d²f/dp² < 0. For the popular choice of f(p) = p₀² sin²(p/p₀), we have (1/2)d²f/dp² = cos(2p/p₀), which is equal to −1 at the maximum of f(p). The counterintuitive relation ∆x = −v∆t can be interpreted in more familiar terms: the change of sign means that the classical Lorentz boost is replaced by an ordinary rotation. (An infinitesimal rotation by an angle θ in the (x, y)-plane and a spatial shift by ∆y commute to ∆x = −θ∆y.) At high curvature, holonomy-modified models of general relativity replace space-time with pure and timeless higher-dimensional space, a phenomenon called signature change [23,24,25].
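The deformation function (1/2)d²f/dp² = cos(2p/p₀) quoted above can be checked numerically. This is an illustration we added, using a plain finite-difference second derivative rather than any symbolic-algebra package.

```python
# Numerical check: for f(p) = p0^2 * sin(p/p0)^2, half the second derivative
# equals cos(2p/p0). It is +1 at small curvature (classical limit) and -1 at
# the maximum of f (p = pi*p0/2), where the effective signature flips.
import math

P0 = 1.0  # the constant p0; Planck-sized in the physical setting

def f(p):
    return P0 ** 2 * math.sin(p / P0) ** 2

def half_f_second(p, h=1e-4):
    """Central finite difference for (1/2) d^2 f / dp^2."""
    return 0.5 * (f(p + h) - 2 * f(p) + f(p - h)) / h ** 2

# Compare against cos(2p/p0) at low curvature, mid-range, and the maximum:
for p in (0.0, math.pi / 4, math.pi / 2):
    print(abs(half_f_second(p) - math.cos(2 * p / P0)) < 1e-4)  # True
```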
Field equations
At the level of equations of motion, signature change means that hyperbolic wave equations become elliptic partial differential equations (in all four dimensions, or two in the model). Indeed, if one computes equations of motion from the Hamiltonian (1), one obtains a wave-type equation (5) in which the coefficient of the second spatial derivative carries the factor (1/2)d²f(p)/dp², where d²f(p)/dp² is a function of φ̇ via φ̇ = N df(p)/dp. This partial differential equation, which is hyperbolic for (1/2)d²f(p)/dp² > 0, becomes elliptic for (1/2)d²f(p)/dp² < 0. In the latter case, the equation requires boundary values for solutions to be specified; it is not consistent with the familiar evolution picture implemented by an initial-value problem. Instead of specifying our field and its first time derivative at one instant of time, once curvature (or φ̇ in the model) becomes large enough to trigger signature change we must specify the field on a boundary enclosing a 4-dimensional region of interest, including a "future" boundary in the former time direction. We can no longer determine the whole universe from initial data given at one time.
Although our specific model is simplified, the main conclusion about signature change agrees with the more detailed versions cited above, which come directly from reduced models of loop quantum gravity combined with canonical effective techniques. Our model presented here shows that the main reason for signature change is the modified dependence of gravitational Hamiltonians on curvature components when holonomies are used to express them, together with the general structure of curvature terms. (Especially the presence of spatial derivatives seems crucial for derivatives of the modification function to show up in the symmetry algebra after integrating by parts.) The rest of our discussion does not rely on the specific model but rather on the general consequence of signature change.
General aspects of signature change
As shown in [26], the structure of constraint algebras or gauge transformations, of which (3) provides a model, is much less sensitive to details of regularization effects or quantum corrections than the precise dynamics they imply. Even if there may be additional quantum corrections in (5) in a fully quantized model, structure functions of the algebra, such as (1/2) d^2f/dp^2 in (3), provide reliable effects of a general nature. For details, the reader is referred to the above citation, but the crucial ingredient in this observation is the definition of effective constraints C_I = ⟨Ĉ_I⟩ as expectation values of constraint operators, and their brackets as {C_I, C_J} = ⟨[Ĉ_I, Ĉ_J]⟩/(iħ). A regularization of a constraint operator Ĉ_I leads to corresponding modifications of the effective constraint ⟨Ĉ_I⟩. For any consistent operator algebra, the bracket of effective constraints mimics the commutator of constraint operators. Even if ⟨Ĉ_I⟩, computed to some order in quantum corrections, may give a poor approximation to the quantum dynamics, the possible consistent forms of effective constraint algebras restrict the possible versions of quantum commutators. If effective constraints of a certain form, such as those obtained with holonomy modifications, always lead to a change of sign of structure functions, the same must be true for operator algebras.
As noted also in [24,27], equations of the form (5) sometimes appear for matter systems with instabilities, in cosmology but also in other areas such as transonic flow. An instability would normally not be interpreted as signature change as long as a standard Lorentzian metric structure remains realized, as is the case in all the known matter examples. The present context, however, is different, because the instability affects the geometry of space-time itself, and not just matter propagating in space-time. (In models of loop quantum gravity, φ in (5) stands for metric inhomogeneities.) Such an instability is more severe, and at the same time more inclusive, because it affects all excitations, matter and geometry, in the same way. Indeed, the most fundamental structure where it appears is not the equation of motion (5) but the symmetry algebra (3). If matter is present, its Hamiltonian would be added to the gravitational one, the resulting sum satisfying a closed algebra of the form (3). (If adding matter terms broke the algebra, there would be anomalies making the theory inconsistent.) Matter and geometry are then subject to the same modified symmetries, and correspondingly to a modified evolution picture with a boundary-value rather than initial-value problem at high density.
Solutions of elliptic partial differential equations with initial-value data might exist, but they are unstable and depend sensitively on the initial values; initial-value problems for elliptic partial differential equations are therefore not well-posed. Sometimes, a physical model of this form may just signal a growing mode which increases rapidly in actual time. In quantum gravity and cosmology, however, instabilities from signature change in (3) or (5) are much more debilitating. In this context, one does not perform controlled laboratory experiments in which one can prepare or directly observe the initial values. When signature change is relevant, it happens in strong quantum-gravity regimes where the analogs of f(p) differ greatly from the classical behavior. Not only initial values but also the precise dynamical equations (subject to quantization ambiguities) are so uncertain that an initial-value formulation can give no predictivity. (In cosmological parlance, instabilities from signature change present severe versions of trans-Planckian and fine-tuning problems. For more information on the dynamics of affected modes see [28].) In contrast to some matter systems in which elliptic field equations may appear, quantum-gravity theories do not allow initial-value formulations in such regimes but rather require 4-dimensional boundary-value problems.
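The instability of elliptic initial-value problems can be made explicit with Hadamard's classic example for the Laplace equation (a standard textbook illustration, not taken from this article):

```latex
u_{tt} + u_{xx} = 0, \qquad u(x,0) = 0, \qquad u_t(x,0) = \frac{\varepsilon}{k}\,\sin(kx)
\quad\Longrightarrow\quad
u(x,t) = \frac{\varepsilon}{k^2}\,\sin(kx)\,\sinh(kt).
```

As k → ∞ the initial data become arbitrarily small, yet for any t > 0 the solution grows like e^{kt}: solutions do not depend continuously on the initial values, which is precisely the failure of well-posedness invoked above.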
Evolution in these models is no longer fully deterministic. In the remainder of this article, we apply this conclusion to black holes and show that even low-curvature regions, where observers have no reason to expect strong quantum-gravity effects, will be affected by indeterminism. In this context, consequences of signature change are therefore much more severe than their analogs in cosmological models.
Black holes
Black holes in general relativity have singularities where space-time curvature diverges. Loop quantum gravity has given rise to models in which curvature is bounded, apparently resolving the singularity problem [29]. As in some other approaches [30,31,32,33], there is then no event horizon but only an apparent horizon which encloses large curvature but eventually shrinks and disappears. If there is no singularity and information can travel freely through high-curvature regions, there is no information loss, so this important problem seems to be resolved too. However, previous black-hole models of this type in loop quantum gravity did not consider the anomaly problem. In an anomaly-free version, curvature may still be bounded, but when it is large (Planckian, or near the upper bound provided by the models), there can be signature change, preventing information from travelling freely through this regime. It is no longer obvious that the information loss problem can be resolved in singularity-free models of black holes.
If the singularity is resolved, there are two scenarios for Hawking-evaporating black holes: The black-hole region enclosed by an apparent horizon could reconnect with the former exterior at the future end of high curvature, or it could split off into a causally disconnected baby universe. The latter case does not solve the information loss problem because information that falls into the black hole is sealed off in the baby universe. The former case resolves the information loss problem only if information can travel through high curvature. If signature change happens, nothing travels through the high-curvature region and the fate of information must be reconsidered.
The elliptic nature of field equations in the high-curvature core of black holes requires one to specify fields at the future boundary, which would then evolve into the future space-time after black-hole evaporation. In Fig. 1, boundary values on the bottom line surrounding the hashed high-curvature region would be determined by evolving past initial values forward in time, but boundary data on the top line around the region would have to be specified, unrestricted by any field equations. Their values are not predicted by the theory, and yet they are essential for determining the future space-time. Once the high-curvature region is passed by an outside observer, space-time is no longer predictable. The black hole's event horizon H extends into a Cauchy horizon C: the region above C is affected by undetermined boundary data. Even if there are no infinities, the classical black-hole singularity is, for practical purposes, replaced by a naked singularity, a place out of which unpredictable fields can emerge.
In terms of information loss, whatever infalling matter hits the high-curvature core of the black hole determines part of the boundary conditions required for the elliptic region, and thereby influences part of the solution in the core. But it does not restrict our choice of the future boundary data, or anything that evolves out of it at lower curvature. Infalling information is therefore lost even if there is no black-hole singularity. Similar conclusions apply to the alternative of a baby universe: infalling information cannot be retrieved in the old exterior, and it cannot be passed on to the baby universe.

Figure 1: Acausality: Penrose diagram of a black hole with signature change at high curvature (hashed region). In contrast to traditional non-singular models, there is an event horizon (dashed line H, the boundary of the region that is determined by backward evolution from future infinity) and a Cauchy horizon (dash-dotted line C, the boundary of the region obtained by forward evolution of the high-curvature region). After an observer crosses the Cauchy horizon, space-time depends on the data chosen on the top boundary of the high-curvature region and is no longer determined completely by data at past infinity. Information that falls through H affects field values in the hashed region, but not the top boundary or its future; it is therefore lost for an outside observer. Unrestricted boundary values at the top part of the hashed region influence the future universe even at low curvature (zigzag arrow), a violation of deterministic behavior.
Conclusions: A no-heir theorem?
We have presented here a mechanism which appears to be generic in loop quantum gravity and helps to resolve curvature divergence, but makes the information loss problem of black holes worse. Black-hole singularities can turn into naked singularities in this framework, which implies an end to predictivity. In classical general relativity, there is strong evidence that cosmic censorship applies: given generic initial data, singularities may form but are enclosed by black-hole horizons; no naked singularities appear that would affect observations made from far away. In loop quantum gravity, a stronger version of cosmic censorship would be required if signature change is confirmed to be generic. Naked singularities (Cauchy horizons) could be avoided only if black-hole interiors split off into baby universes. But even then, information could not be passed on to the baby universe. From the point of view of observers in this new universe, the former black-hole singularity would appear as a true beginning, just as the big bang appears to us in our universe.
The information loss problem has turned into a more severe problem of indeterminism. Two options remain for loop quantum gravity to provide a consistent deterministic theory without Cauchy horizons. First, one might be able to show that signature change does not happen under general conditions in the full theory, a question which requires an understanding of the off-shell constraint algebra and the thorny anomaly problem. All current indications, however, point in the opposite direction and suggest that signature change is generic. With signature change, Cauchy horizons can be avoided only if the high-curvature regions of black holes always remain causally disconnected from the universe in which they formed, that is, if black holes open up into new baby universes. In this scenario, information that falls into a black hole is still lost even for the baby universe, but at least the more severe problem of a Cauchy horizon can be avoided. In either case, a detailed analysis of possible consistent versions of the constraint algebra of loop quantum gravity could lead to a "no-heir theorem" if deterministic evolution through the high-density regime of black holes turns out to be impossible under all circumstances. Black holes would have no heirs, since everything possessed by a collapsing star, including the information carried along, would be lost even if space-time did not end in a curvature singularity.
So far, loop quantum gravity is not understood sufficiently well for a clear model of black holes to emerge from it, but the mechanism analyzed here shows that, at the very least, scenarios obtained from generalizations of simple homogeneous models, such as the one postulated in [29], are likely to be misleading. Inhomogeneity can change the picture drastically, not just because there may be back-reaction on a homogeneous background but also, and often more surprisingly, because the non-trivial nature of symmetry algebras such as (3) is much more restrictive for inhomogeneous models. (The right-hand side would just be identically zero with homogeneity, hiding the crucial coefficient and its sign which indicates signature change.) Our considerations of black-hole models provide a concrete physical setting in which loop quantum gravity and its abstract anomaly problem can be put to a clear conceptual test.
Trends in Age Distributions, Complications, Hospital Stay, and Cost Incurred in a Chinese Diabetic Population from 2006 to 2013
Introduction
The prevalence of chronic diseases has been rising since the 1980s as living standards improve and the pace of modern life accelerates. Chronic diseases have become a major threat to people's health, accounting for 80% of deaths [1]. Diabetes is one of the most common metabolic diseases, and its incidence has been increasing over the past decades [2]. In 2013, the prevalence of diabetes reached 9.6% in China and 9% worldwide, and studies project that the prevalence in China will reach 13% by 2035 [3][4][5]. Diabetes has become an important threat to human health: the medical costs and labor losses it causes place an enormous economic burden on patients [6][7][8]. Spending on diabetes accounts for about 13% of health-service expenditure in China, compared with 5%-6% in developed countries. The direct cost of type 2 diabetes in China was about 26 billion US dollars in 2007 and is projected to reach 47.2 billion US dollars by 2030 [9]. Given the rising prevalence of diabetes, China should establish a follow-up survey mechanism for diabetic patients, including statistics on health expenditure and full implementation of health-economics surveys. This research collected the clinical data of hospitalized diabetic patients at the First Affiliated Hospital of Nanjing Medical University and analyzed the basic situation of diabetes. Exploring the changes and analyzing their causes provides a scientific basis for clinical treatment and hospital management, and offers a reference for disease prevention and health services in China.
Methods
The data were extracted directly from inpatient records; all patients in this study were hospitalized patients. Using the principal diagnoses of hospitalized patients, diabetes cases were selected to establish a database. By counting hospitalizations for diabetes, we obtained the proportion of diabetic admissions among all inpatients. Gender distribution was obtained by counting the gender of diabetic patients, and age distribution across groups was obtained in the same way. Hospital stays and hospital costs were calculated to derive the average daily cost, and, combined with the age distribution, the distribution of hospital stay across age groups. From the secondary-diagnosis data we determined diabetic complications and analyzed the mean number of complications per patient.
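The counting and averaging steps described above can be sketched in a few lines of Python; the record fields used here (age, days, cost) are illustrative names, not the hospital database schema:

```python
def summarize(records):
    """Group hospitalization records into 20-year age bands (1-20, ..., 81-100)
    and report patient counts, mean hospital stay, and mean daily cost."""
    bands = {}
    for r in records:
        band = min((r["age"] - 1) // 20, 4)  # five bands of 20 years each
        b = bands.setdefault(band, {"n": 0, "days": 0, "cost": 0.0})
        b["n"] += 1          # patient count
        b["days"] += r["days"]  # total hospital days in this band
        b["cost"] += r["cost"]  # total hospital cost in this band
    return {
        band: {
            "patients": b["n"],
            "mean_stay": b["days"] / b["n"],
            "daily_cost": b["cost"] / b["days"],
        }
        for band, b in bands.items()
    }
```

For example, `summarize([{"age": 45, "days": 10, "cost": 5000.0}])` reports one patient in the 41-60 band with a mean stay of 10 days and a daily cost of 500 RMB.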
General review
From 2006 to 2013 there were 531,718 inpatients, of whom 49.36% were male and 50.64% female. The number of diabetic patients was 12,214, accounting for 2.30% of all inpatients. Of these 12,214 diabetic patients, 57.30% were male and 42.70% female.
Age analysis
We divided the diabetic patients into five groups at 20-year intervals. The age distribution of diabetic patients is shown in Figure 1. All five groups show an increasing trend, and patients between 41 and 80 years old made up over 80% of all diabetic patients in each of the eight years. A chi-square test was applied to the numbers of diabetic patients aged 1-20 and over 20; the result shows a significant difference between the two groups (p<0.001). The 1-20 age group grew faster over the eight years, with an average growth of 25.34%, than the over-20 group, at 14.52%.
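The paper does not give the underlying contingency table, so the sketch below only illustrates how the chi-square statistic behind the reported p-value would be computed, using made-up counts:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    table = [[a, b], [c, d]] of observed counts."""
    rows = [sum(r) for r in table]
    cols = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(rows)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Illustrative counts only (e.g. young vs. older diabetics in two periods);
# with 1 degree of freedom, chi2 > 10.83 corresponds to p < 0.001.
```

This is standard Pearson chi-square arithmetic, not the authors' actual computation or data.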
Hospital stay analysis
Hospital stay (the number of days a patient stays in hospital) of diabetic patients in different age groups is shown in Figure 2a. The most common length of stay was 8 days, and most patients (81.83%) stayed in hospital for 4 to 16 days. Diabetic patients aged 41-60 had the longest hospital stays.
The average hospital stay in each age group, obtained by dividing the total days by the total number of diabetic patients in that group, is shown in Figure 2b. Mean hospital stay increases with age, especially for patients aged 81-100. We also noted that the mean stay of patients aged 1-10 (9.8 days) is more than one day longer than that of patients aged 11-20 (8.1 days).
Cost analysis
We analyzed the total cost of diabetes care and the total number of diabetic patients for each year from 2006 to 2013. Hospital costs of diabetic patients in different age groups are shown in Figure 3a. Patients aged 81-100 incurred the highest hospital costs, nearly half of all diabetic patients' costs each year. Hospital costs of patients under 80 show an increasing trend, despite fluctuations in 2011 and 2013.
By combining hospital days and costs, we obtained the average hospital cost per diabetic patient per day, shown in Figure 3b. The daily cost per patient was 508 RMB in 2006 and increased year by year to 930 RMB in 2013, an average annual growth of 9.02% over the eight years.
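The 9.02% figure is consistent with compound annual growth from 508 RMB (2006) to 930 RMB (2013); the quick check below is our own arithmetic, not part of the paper's methods:

```python
def annual_growth(start, end, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1.0 / years) - 1.0

# 2006 -> 2013 spans seven annual growth steps
growth = annual_growth(508, 930, 2013 - 2006)
# growth is approximately 0.0902, matching the reported 9.02% per year
```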
Complications
To see how complications influence costs, we analyzed the complications of diabetic patients; the secondary-diagnosis counts are shown in Table 1. Of all 12,214 hospitalized patients, 10,086 (82.58%) had complications; the largest subgroup (3,039 cases, 24.88%) had one complication. The total number of complications was 37,968, an average of 3.108 per patient. As Table 2 shows, patients with four complications spent up to 11,178.0 RMB on average, 1.9 times the cost for patients without complications. Clearly, hospital costs rise with the number of complications.
We also analyzed comorbidities of diabetic patients (conditions not caused by diabetes). Table 3 lists the nine most common comorbidities; common cardio-cerebrovascular diseases are the main ones.
Analysis by year
Patient information and hospital costs in different years are shown in Table 4.
Discussion
From the analysis above we can see that although older people in China tend to have higher rates of diabetes [10], the incidence among young people (1-20 years old) is increasing faster than among those over 20 (p<0.001) (Figure 1). The proportion of diabetic patients under 40 is rising, which is closely related to the unhealthy lifestyle of modern people: a fast pace of life, high pressure, and lack of sleep and exercise, together with unhealthy diets. Residents should therefore pay attention to their own health and blood glucose, strengthen physical exercise, and control their diet to stay away from diabetes and impaired fasting glucose; those who already have diabetes should monitor their condition closely. For diabetic patients, hospital stay and hospitalization costs are directly related to age: the older the patients, the more days and costs in hospital. With underlying diseases and declining immunity in elderly patients, the number of complications and the mean hospitalization cost increase. The average hospital cost for diabetic patients with four complications is 11,178.0 RMB, 1.9 times that of patients without complications. Elderly diabetic patients should therefore pay attention to physical changes, control blood glucose effectively, and stabilize their condition; in a sense, shortening time in hospital can alleviate the economic burden.
Meanwhile, hospital stay and cost are also closely related to the number of complications. Our results show that diabetic patients without complications spend less in hospital than those with complications on items such as medicine, assays, and examinations. The more complications patients have, and the later those complications are discovered, the longer they stay in hospital; with other costs such as beds and medicine also rising, total hospital cost increases unavoidably. Hospital costs, and hence the economic burden, can therefore be controlled effectively by controlling the number of complications.
In our data, common cardio-cerebrovascular diseases account for 3,928 cases of the first secondary diagnosis, including hypertension, hyperlipidemia, fatty liver, coronary heart disease, and cerebral thrombosis, confirming that they are the main comorbidities; hypertension alone accounts for 3,233 cases (87.43%). A previous study showed that hypertension has a serious impact on the onset and development of diabetic complications [11]. For diabetic patients with hypertension, diabetes cannot be well controlled unless hypertension is; if hypertension is well managed, hospitalization costs would be greatly lowered.
Table 4 shows that the average hospital stay decreased while average hospital costs and daily costs increased, indicating that average treatment time has been shortened by improvements in health care, medicines, and hospital management. Given the rising hospital costs, however, the government should take positive measures, including effectively controlling medical prices, to alleviate the economic burden on diabetic patients and improve people's health.
Conclusion
Diabetes has become one of the major diseases harming residents' health and causing economic burden [12]. In view of the serious damage it brings, diabetes should be given high priority by health policy makers, health workers, and patients in China. Since heredity, diet, obesity, lack of physical exercise, and anxiety are all risk factors, the International Diabetes Federation proposes solutions covering five aspects: diet, medicines, tests, physical exercise, and physiological treatment and care. Diabetic patients and high-risk groups should start from these small aspects, actively preventing and treating diabetes and its complications. Meanwhile, policy makers should issue effective management policies in time to control the incidence of diabetes, so as to improve residents' quality of life and lower the economic burden on people with diabetes [13].
Designing Self-Enforcing Programs Embedded in the Local Economy: The Key for Successful International Cooperation of Vocational Education
International education cooperation promotes the cross-border circulation of advanced educational experience and enhances global human capital. China has become a main destination for students from developing countries, and how to better carry out international cooperation programs in Chinese education has become an important issue. This paper focuses on international cooperation in vocational education and takes a vocational college in China as a case to explore the performance of different international cooperation modes between China and developing countries. The study used a within-case design, selecting three projects representing three modes of cooperation for comparison. We argue that the key to success lies in the design of the cooperation: projects that closely integrate local industries, colleges, and students thrive and prosper. The article answers the question of how international cooperation in vocational education should be carried out.
Introduction
In recent years, as China's international influence has increased, the pattern of international cooperation and exchange in higher education has also changed [1][2][3]. The mode of international cooperation in China has shifted from "inviting in" to "combining inviting in and going out". In the past, China focused on introducing foreign educational resources to speed up education reform and narrow the gap with developed countries. More recently, China has paid increasing attention to exporting its own educational experience [4]. Vocational colleges and universities in China have begun to export their experience in education, teaching, and academics, and to strengthen exchanges with foreign institutions.
Projects carrying Chinese experience abroad are growing rapidly, and many researchers have studied language and cultural programs such as Confucius Institutes and language learning centers, but studies on international students and their career-path building in vocational education remain rare. As an emerging country with promising job prospects, China is increasingly becoming a major study destination for international students; the total number of international students in China reached 500,000 in 2021 [5]. The questions worth exploring are how to design international exchange programs in Chinese vocational education, and what cooperation mode could benefit overseas students, colleges, and local industries by fully utilizing the advantages of China's culture, industry, and education.
This paper explores the international cooperation modes of Chinese higher vocational institutions through a case study approach. We selected three international cooperation projects carried out by a Chinese vocational institution over the years and compared their backgrounds, cooperation modes, and effectiveness. The comparative case analysis is based on interviews, collected news reports, and participant observation. We analyze the reasons for the success or failure of the three projects and explore the key to the effectiveness of cooperative projects.
Literature Review
The study identifies two major forms of international cooperation in vocational institutions: one is to bring in advanced international educational experience and technology, and the other is to send high-quality educational resources and technology abroad.
"Inviting in" Mode in Vocational Institutions
China's vocational education still has room for improvement and needs to introduce advanced foreign professional and industrial experience into domestic vocational colleges to reach international professional standards. Foreign experience can be brought in through two modes: introducing advanced education systems, or sending national talents abroad for professional training or learning.
Invite in Advanced Foreign Education Systems
Inviting in foreign advanced education systems can take the form of learning from those systems [1]. Specifically, this includes adopting advanced foreign industry standards, institutional systems, talent training modes, and curriculum content, then localizing and implementing them in Chinese vocational colleges. For example, Hangzhou Vocational Institute of Technology introduced Japanese comic-industry standards, the studio system, its talent training mode, and curriculum content, set up a modern apprenticeship class for Japanese flip animation, and adopted tiered teaching in that class [6]. Based on Sino-Japanese school-enterprise cooperation and the modern apprenticeship system, the institute has promoted innovation in international talent cultivation, increased international content, and improved the quality of talent cultivation.
Going Abroad for Training
The mode of inviting in also can be conducted in foreign advanced institutions.For instance, the cooperation between vocational colleges in China, named Jining Technical College, and the Industrial Education Authority of Singapore (ITE for short) is the case of "going out to learn from the experience".Singapore ITE has formed a mature international education consulting service [7].Jining Technical College sends its managers to Singapore for course training or students to attend international training programs in Singapore.For example, in the student internationalization training program, students gained the chance to intern in Singapore school-run enterprises or cooperative enterprises.etc.Therefore, they could learn by doing in multinational companies, study and practice there, and even could be employed by Singapore local companies after they graduated.Through the introduction of ITE programs, domestic institutions have invited higher quality educational resources from abroad, broadened the horizons of teachers and students, and improved the competitiveness of the school.Through the introduction of the foreign advanced talent training mode, domestic vocational institutions solve the problems of mismatch between the supply of local talent resources and the job demand of enterprises, as well as to helping schools broaden their enrollment channels and inject new connections of school talent training.
The "Going out" Mode in Vocational Colleges
After years of development, China's higher vocational colleges and universities have accumulated extensive experience in exploring the "going out" path and have formed two modes of international cooperation and exchange: taking advanced experience abroad and training foreign students on site, or providing education and training to foreign students in China.
Sending Chinese Experience Abroad
For sending advanced educational experience abroad, two typical successful modes are the Confucius Institute and the "Luban Workshop". The Confucius Institute, centered on Chinese language and culture, has been widely researched in recent years [2][3][8]. Unlike the Confucius Institute, the "Luban Workshop" resembles vocational education and focuses on the exchange of industrial skills and production modes; it is briefly introduced below.
The "Luban Workshop" mode is a cooperation mode named after Luban, a traditional Chinese figure known for diligence, innovation, and rigorous craftsmanship. Upholding Luban's spirit, Chinese vocational institutes export advanced educational experience abroad, especially to countries along the Belt and Road. For example, Tianjin Light Industry Vocational and Technical College, Tianjin Transportation Vocational College, Ain Shams University, and the Egypt TEDA Development Co., Ltd. jointly established the training and employment base of the Egyptian "Luban Workshop". By integrating the needs of local factories, colleges, and students, Chinese vocational education talents "go out" with advanced vocational education experience to help the host country. Unlike earlier modes focused on student and teacher exchanges, the "Luban Workshop" mode is larger in scale, mature, comprehensive, and systematic, establishing a new mode of cooperation and exchange in vocational education. It is therefore not only a new mode of Sino-foreign cooperation in running vocational colleges but also a new channel of educational assistance to Belt and Road countries. Moreover, the mode can be replicated rapidly: 20 "Luban Workshops" have been established in 19 countries and regions [9][10]. The cooperation has created more work opportunities for local people and reduced unemployment in host countries. In short, it is a self-reinforcing circle that is sustainable and beneficial to the local economy.
Cultivation of International Students in China
In the cultivation of international students in China, most contemporary researchers are mainly concerned with international students in general colleges and universities [11]. The emerging mode of Chinese-foreign cooperation in running colleges and universities has also been explored by researchers. Institutions such as Xi'an Jiaotong-Liverpool University, the University of Nottingham Ningbo, Duke Kunshan University, and the University of Michigan-Shanghai Jiao Tong University Joint Institute are Chinese-foreign joint ventures, each established by a well-known foreign university and a domestic university. These universities can not only issue foreign university diplomas but also provide unique study and living experiences in China for overseas students, thus attracting a large number of foreign students.
Regarding the cultivation of foreign students in vocational education, the relevant research is still at the stage of describing the basic situation, such as research on the scale of foreign students in vocational education, scholarship arrangements, and the difficulties faced by foreign students [4]. In recent years, international cooperation in vocational education around the "One Belt, One Road" Initiative has been growing rapidly and has attracted the attention of academics. However, there is not much discussion of program design for training international students in China [12]. Among the few studies discussing this issue, researchers have only preliminarily summarized the practices; a social science methodology is still needed [6][13][14][15]. For example, the importance of integrating industry and education is mentioned, but those studies did not clearly describe the relationship between the cooperation mode and the performance of the project, i.e., they lacked a methodology for causal inference.
Summary
There are many studies on the invite-in mode of international cooperation, with analysis and case studies of its components, operating mechanisms, and effectiveness. Projects exporting Chinese experience are growing rapidly, and research on the Confucius Institute and the Luban mode has increased as well. However, studies on foreign students in vocational education and training are still rare, and the published articles not only lack knowledge of international vocational education standards but also need improvement to support the design of cooperation programs.
China, as an emerging country, is increasingly becoming an attractive study destination for international students. As a major manufacturing country with a well-developed industrial chain, China's achievements in engineering project construction and other areas are recognized globally. How international exchanges in vocational education in China should be designed, and which modes of cooperation could make the best use of China's cultural, industrial, and educational advantages, are questions worth exploring. Identifying effective modes of international cooperation in Chinese vocational education could help China improve its education and training mechanisms, support the spread of Chinese experience to countries that need it, and enhance the effectiveness of South-South cooperation (cooperation among countries of the Global South).
Methodology and Data
This paper uses a case study approach to explore the foreign exchange and cooperation modes of a Chinese vocational higher education institute (hereafter referred to as College H). Three projects representing three cooperation modes were selected for comparison, and an attempt was made to analyze the reasons for their success or failure. Data collection: interviews were conducted with the head of the International Exchange and Cooperation Office and some students of College H. Materials such as information from the school's official website and reports from mass media were compiled, and detailed field notes were taken through participatory observation.
Basic information on the case: College H is located in the southernmost part of China, in an island tourist city; neighboring countries include the Philippines, Brunei, and Malaysia, with close interaction with Myanmar, Nepal, and India. The college was established in 2005; its foreign cooperation and exchange work started in 2011, and it began to recruit international students in the same year. It has become one of the colleges with the largest number of international students among the higher education institutions in Hainan Province. Most foreign students came from Nepal, Russia, Ukraine, Kazakhstan, Israel, India, Sri Lanka, and Thailand. College H has been committed to training "international professional craftsmen" for the local economies of foreign students' home countries, and rich experience has been accumulated in cultivating international students through a local school-enterprise cooperation mode. This efficient education mode has made the city a window for opening up to the world.
Basic information on the projects: this paper selects three projects conducted in cooperation with foreign institutions before 2019, namely cooperation projects between College H and institutional partners in Nepal, Thailand, and Belarus. All three projects concern vocational education, reflect the characteristics of Chinese experience going out, and achieved a certain effectiveness to some extent. Considering the global outbreak of the Covid-19 epidemic and the disruption of international exchanges in education, the study only selects international cooperation projects that were already underway before the epidemic began. To exclude the impact of the epidemic on the projects, the discussion of project effectiveness also ends in early 2020.
Results
The study presents three projects that College H has conducted with Nepal, Thailand, and Belarus. Their backgrounds, collaboration modes, and effectiveness are discussed, and on this basis a comparison and analysis of the keys to project success is carried out.
Background
The project between the International Institute of Tourism and Hospitality Management (IITHM) in Kathmandu, Nepal and College H in China began with negotiations in 2009, and both sides signed agreements that included sending students and teachers to each other's colleges. The first group of 27 Nepalese students majoring in Hospitality at College H arrived in 2011 and started their one-year study journey. College H held a grand opening ceremony to show its warm welcome, marking the start of formal cooperation. The school leaders of both sides have long attached importance to the project, which is essential to building solid cooperation. From 2011 to 2020, over nine years of cooperation, the president or a senior leader of the Nepalese institute came to College H with a student delegation every term, and the two sides exchanged new ideas on project development thoroughly and in detail. The high frequency of meetings between the two colleges helped the two sides build a friendly and solid cooperative relationship, which enabled the rapid growth of the program. In December 2017, the number of enrolled Nepalese students reached 781, the highest number of foreign students in the college's history.
Cooperation Mode
The project involves three parties, namely students, colleges, and local industries, which are integrated by meeting each party's needs. In detail, Sanya (the city where College H is located) is a world tourism destination, and many luxury hotels need staff who can not only speak English fluently but also possess hospitality knowledge. College H had built cooperative relationships with local hospitality businesses that could accept students as interns or employees, so students could learn professional knowledge by working in a real workplace. Students from Nepal who speak English and master hospitality knowledge are ideal staff for Sanya hotels. In addition, the supply of intern students is solid and sustainable because of the long-term and friendly cooperation between the two colleges. The local hotels could save a great deal of budget and time on staff recruitment and training by accepting the Nepalese students as interns.
Nepalese students also benefit in two ways. Firstly, they earn their tuition fees by working in the cooperating hotels. For Nepalese students, tuition fees and living expenses in China are quite high relative to their currency's buying power. Their room and board were covered by the hotel, and the salary they earned could pay the tuition fees, making study at College H financially affordable. Secondly, the working experience gained in local star hotels can be directly applied in practice when they return home. Many students who participated in the project reported that their Chinese work experience played a big role in job hunting in the hospitality field, as work experience in China was recognized and valued by local Nepalese companies.
This mode is similar to the order-form mode of Chinese vocational education applied to Chinese students [1][16]. However, applying the order-form mode to cultivating foreign students is a creative option in vocational education. The Nepal project not only adopted the order-form mode and merged with local industries but also integrated well with industries in the students' home country.
Impacts
The project worked very well and attracted a large number of foreign students to College H. According to the statistics, there were 40,000 foreign students in 21 universities and colleges in Hainan in 2019, and College H accounted for one third of the total, exceeding the foreign student numbers of most undergraduate universities. In addition, the Nepal program was so popular among Nepalese students that students from other countries such as Israel, India, Sri Lanka, and Thailand were also attracted to College H. On March 11, 2019, the Acting Consul General of the Consulate General of Nepal in Guangzhou visited College H and expressed gratitude for the great efforts and contributions College H had made to educational cooperation and exchange between China and Nepal. Because Nepalese students and their local industries benefited greatly from the project, which built a good reputation in the region, the Consulate General actively engaged with College H, further enhancing cooperation between the two colleges and even the two regions.
Background
In September 2017, in Bangkok, Thailand, in the presence of the Governor of Hainan Province, the Minister of the Public Health Department of Thailand, and others, the Hainan Province Cihang Public Welfare Foundation, the Confucius Institute of the Maritime Silk Road in Thailand, and College H signed a tripartite agreement. The agreement focused on aviation service training for Thai students. The project aimed to help train more aviation service talent in Thailand and increase friendly exchange between China and Thailand. Nearly 200 students applied for the program and 30 were selected to enter the flight attendant training class, which officially started on May 18, 2018.
Cooperation Mode
The project is an international cooperation project focused solely on airline flight attendant training. Four parties were responsible for different tasks: the cooperation agreement was signed by the Chinese and Thai governments, the Hainan Province Cihang Public Welfare Foundation provided more than 1.5 million RMB, the Confucius Institute of the Maritime Silk Road in Thailand assisted with local enrollment in Thailand, and College H was responsible for providing the three-month training course. Together they designed the setting, funding arrangement, curriculum, interview and selection process, talent training program, and various activities. In terms of teaching, the program not only designed professional courses related to cabin crew training but also arranged Chinese cultural courses such as tea art and calligraphy to improve the overall quality of the Thai students and help them engage with Chinese culture. After the students finished the training course, eight Thai students were selected by Chinese airline companies after interviews [2].
Impacts
The project was covered by prominent media such as People's Daily and obtained considerable public attention in its early stages. As flight attendant work offers a high salary and an attractive career path, many Thai students looked forward to staying in China and working in the aviation service industry.
However, with only 30 students participating, the project was not renewed or expanded at a later stage, for several reasons. Firstly, the main problem was that the investment was too high, about 50,000 RMB per student, much higher than the cost of educating domestic students; without further financial support from the Cihang Foundation, the program could not continue. Secondly, the project was initially driven by the two countries' governments and financially sponsored by the Cihang charitable foundation, so the cooperation was more of a political strategy than an educational agreement. Also, the supply of flight attendants exceeded demand in the labor market, and job vacancies at airline companies in China were limited, which made the Cihang Foundation reluctant to invest more funds in the project.
In 2018, when the project reached the point of renewal, College H found that the government and HNA Group did not show interest in moving forward.As a result, the project was put on hold at the end of the first phase.Afterward, HNA Group itself suffered from declining profits and a broken capital chain and was reorganized in bankruptcy, which also had a huge negative impact on project maintenance.
Background
With the help of teachers from College H who had studied in Belarus before, in April 2017, College H signed an international exchange and cooperation agreement with the Belarusian State University. Then in December 2017, the Hainan Education Center of the Belarusian State University was established in College H. The center is dedicated to building a training base for Russian-speaking personnel in Hainan and a base for students of Belarusian State University to study Chinese. Students from College H also study in Belarus through the cooperation arrangements.
Cooperation Mode
The Belarusian program embodies a mode of cooperation with language learning at its core. This mode originated from the personal connections of College H staff with the Belarusian university: the cooperation platform was built by Russian language teachers of College H who had studied in Belarus, and the two colleges' presidents did not meet or communicate as frequently as in the Nepal project. The cooperation between the two universities therefore rested mainly on these staff connections.
Around the platform of the Hainan Education Center of the Belarusian State University, various programs have been held. For example, the two schools established a mechanism of mutual visits and exchanges of teachers and students, a platform for sharing information resources, and academic exchange activities between teachers, aiming at barrier-free "two-university" academic exchange; two-way teacher training, study, internship, and employment opportunities; and, by making full use of modern information technology, shared information resources and mutual learning of management experience [1].
Impacts
The project attracted a certain number of international students from Belarus. By 2019, 60 Belarusian students in total had studied at College H. However, the project was more of a student exchange program, and other cooperative activities were not fully carried out.
College H benefited considerably from the project. Firstly, Chinese students majoring in Russian had the chance to pursue a higher degree in Belarus: 34 students obtained the opportunity for undergraduate or graduate study, and two of them are now pursuing doctoral degrees. Secondly, the cooperation provided a good platform for the continuing education of Russian-language teachers and students, helping to improve their qualifications as well as their overall language skills. College H also made full use of the resources of the international academic community to improve the cross-cultural communication and cooperation skills of teachers and students. As a result, College H became the institution with the largest number of students majoring in Russian among vocational institutions in the country.
However, after the expiration of the cooperation agreement, the project was not renewed. One reason is that the faculty members previously in charge of the program were no longer responsible for it due to a change in administrative positions. Another is that the two colleges' headmasters were not proactive in promoting the program, as its financial returns were unsatisfactory.
Comparisons of three projects
The three collaborative projects included in the study all focus on foreign students coming to China for exchange, but significant differences can be identified in their scale, duration, and effectiveness (Table 1). The Nepal project lasted for nine years, with a peak enrollment of nearly 1,000 students a year, and had a sustained impact on local enterprises as well as in neighboring countries. The Thailand project was implemented only once and lasted less than six months, with only thirty students involved; its social impact was limited, as the project was suspended due to funding shortages and other issues. The Belarus project was implemented for one agreement cycle of about two years, and the number of international students coming to China through the project was around 60. The project could not be renewed due to the replacement of the person in charge and the inactivity of the two colleges' headmasters.
Why did the Nepal program achieve great results and continued growth in scale, while the Thailand and Belarus programs were limited in size and difficult to renew? The study found that the biggest difference among the three projects is the degree of integration with the local economy. The Nepal program's student training was designed to integrate with local Chinese industries from the start, through program design, curriculum arrangement, and internships. Every phase closely matched the students' overall abilities with the qualification requirements of local enterprises, forming a positive chain of interests and a symbiotic relationship between students, school, and enterprises. The entire project merged seamlessly with the international tourist destination and the practical needs of its star hotels.
By contrast, the integration of the Thai and Belarusian programs with the local economy was very loose, and the benefits gained were not significant. Although the Thailand project was a professional training project, its connection was limited to airlines and the project was very costly. The Belarus project was limited to language training and student exchanges, and thus did not integrate local industries into the project design at an organizational level. As a cooperative program organized by a vocational college, the Belarusian program of College H had limited attraction for foreign students, and the overall program size did not increase significantly.
Conclusion
With its growing global influence, China is increasingly becoming a major study destination for international students, especially those from developing countries. How to improve international students' access to domestic education and how to improve the international competitiveness of Chinese education have become important issues. This article discusses these issues from the perspective of how international cooperation by vocational education institutions should be carried out. It finds that for overseas student programs in China to produce good learning outcomes and sustained growth, it is necessary to integrate industry and education well and to fully combine the programs with local industries. A program should be launched by combining the abilities, resources, and needs of students, enterprises, and schools, and establishing a systematic, self-reinforcing project requires efficient interaction between Chinese and foreign institutes. Chinese institutions should fully explore the connections between local industries and international students and provide opportunities for international cooperation projects to grow, so that the projects benefit multiple parties and gain long-lasting vitality.
International education cooperation is conducive to promoting the cross-border circulation of advanced educational experience and enhancing global human capital. The findings of this paper shed light on how to conduct international cooperation in vocational education in the context of South-South cooperation. Emerging countries like China will play a more important role in leading the Global South and take more responsibility for providing well-designed vocational education projects for overseas students.
Table 1 :
Summary of the three international cooperation projects.
Magnitude of observer error using cone beam CT for prostate interfraction motion estimation: effect of reducing scan length or increasing exposure
Objective: Cone beam CT (CBCT) enables soft-tissue registration to planning CT for position verification in radiotherapy. The aim of this study was to determine the interobserver error (IOE) in prostate position verification using a standard CBCT protocol, and the effect of reducing CBCT scan length or increasing exposure, compared with standard imaging protocol. Methods: CBCT images were acquired using a novel 7 cm length image with standard exposure (1644 mAs) at Fraction 1 (7), standard 12 cm length image (1644 mAs) at Fraction 2 (12) and a 7 cm length image with higher exposure (2632 mAs) at Fraction 3 (7H) on 31 patients receiving radiotherapy for prostate cancer. Eight observers (two clinicians and six radiographers) registered the images. Guidelines and training were provided. The means of the IOEs were compared using a Kruskal–Wallis test. Levene's test was used to test for differences in the variances of the IOEs and the independent prostate position. Results: No significant difference was found between the IOEs of each image protocol in any direction. Mean absolute IOE was the greatest in the anteroposterior direction. Standard deviation (SD) of the IOE was the least in the left–right direction for each of the three image protocols. The SD of the IOE was significantly less than the independent prostate motion in the anterior–posterior (AP) direction only (1.8 and 3.0 mm, respectively: p = 0.017). IOEs were within 1 SD of the independent prostate motion in 95%, 77% and 96% of the images in the RL, SI and AP direction. Conclusion: Reducing CBCT scan length and increasing exposure did not have a significant effect on IOEs. To reduce imaging dose, a reduction in CBCT scan length could be considered without increasing the uncertainty in prostate registration. Precision of CBCT verification of prostate radiotherapy is affected by IOE and should be quantified prior to implementation.
Advances in knowledge: This study shows the importance of quantifying the magnitude of IOEs prior to CBCT implementation.
INTRODUCTION
The use of intraprostatic gold markers has improved the accuracy of radiotherapy treatment to the prostate by providing a surrogate of the prostate position which is visible on kV or MV X-ray imaging. 1,2 However, information regarding deformation of the prostate and organs at risk is not available on 2D planar imaging. The implementation of in-room CT imaging devices has provided 3D information to quantify target motion, rotation and deformation in addition to movement of organs at risk. [3][4][5] This enables online 3D imaging, soft-tissue registration and has the potential to reduce planning target volume (PTV) margins, allowing dose escalation with the aim to improve the therapeutic ratio.
Whilst online imaging reduces the uncertainties associated with the prostate position, residual errors will remain. One source of residual error is interobserver error (IOE), which has been shown to be significant, albeit with the majority of studies exporting the cone beam CT (CBCT) and contouring the prostate on treatment planning systems, which does not replicate registration at the treatment console. [5][6][7][8][9][10] IOEs gave a standard deviation (SD) of contoured prostate volumes as great as 20% of the average prostate volume (10) and an SD of IOE of >2 mm was found. 5,8 A common finding in all studies was greater interobserver variation in the SI direction. [5][6][7][8][9][10] This is consistent with studies which investigated interobserver variation on CT images. 11-13 A few studies have compared CBCT with 2D imaging devices 14,15 and found that, compared with kV imaging of intraprostatic fiducial markers, there were differences in set-up errors of >3 mm in the SI and AP directions 14 and an additional 1 mm margin was required when using CBCT without intraprostatic fiducial markers. 15 Although IOEs were not investigated, it was postulated that these differences were, in part, due to the difficulty in visualizing the prostate on the CBCT images.
The above studies demonstrate that IOEs should be considered when using 3D imaging without gold markers for verification and defining planning treatment volume margins. It may also be appropriate to investigate the methods of reducing IOEs. Currently in our department, CBCT images for prostate verification radiotherapy are acquired using XVI v. 4.5 (Elekta Oncology Systems, Crawley, UK). The imaging options can be selected from a choice of small, medium and large, field of view (FOV) equating to approximately 26, 40 and 52 cm diameter FOV and three field lengths 10, 15 and 20 equating to 12, 17 and 26 cm length at the isocentre. The standard imaging protocol for the prostate at our centre is a 12 cm scan length, 40 cm FOV (M10) and 1644 mAs exposure resulting in a cone beam CT dose index (CTDI) of 27 mGy. All the scans for this study are medium FOV, and we refer to the length by the actual lengths, i.e. 12 cm for standard and 7 cm for the test length.
One method to reduce IOEs might be to improve the visualization of the prostate by improving CBCT image quality. We propose that the image quality of our standard CBCT images would be improved if a smaller length of tissue were imaged, thereby reducing the amount of scattered radiation, which has the added benefit of reduced integral dose. We also propose to investigate whether reducing the image scan length while increasing the X-ray exposure improves image quality and reduces IOE.
The aim of this study was, firstly, to determine the IOE in determining prostate position on CBCT images acquired using the standard protocol and, secondly, to investigate whether reducing the scan length or increasing the exposure affects the magnitude of the IOE. IOEs associated with registering the prostate position on CBCT images to planning CT scans were determined using our standard CBCT image protocol (12 cm and 1644 mAs) and compared with those obtained when using: (1) a reduced CBCT scan length with the same X-ray exposure (7 cm and 1644 mAs), giving a reduced integral dose compared with the standard protocol; the dose length product is reduced from 324 to 189 mGy*cm. (2) a reduced CBCT scan length and increased X-ray exposure (7 cm and 2632 mAs; CTDI 43.2 mGy), giving an integral dose equivalent to the standard protocol; the dose length product is 302 mGy*cm.
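The dose length products quoted above follow from multiplying the cone beam CTDI by the scan length. A quick arithmetic check of the three protocols, using only the values stated in this section (plain Python):

```python
# Dose length product (DLP) = cone beam CTDI (mGy) * scan length (cm),
# for the three imaging protocols described in the text.
protocols = {
    "12cm_standard": (27.0, 12),   # standard: CTDI 27 mGy, 12 cm, 1644 mAs
    "7cm_standard":  (27.0, 7),    # same exposure, reduced length
    "7cm_high":      (43.2, 7),    # 2632 mAs -> CTDI 43.2 mGy
}

for name, (ctdi, length) in protocols.items():
    dlp = ctdi * length
    print(f"{name}: DLP = {dlp:.1f} mGy*cm")
```

This reproduces 324 and 189 mGy*cm for the 12 cm and 7 cm standard-exposure scans, and 302.4 mGy*cm (quoted as 302 in the text) for the 7 cm high-exposure scan, confirming that the latter roughly matches the integral dose of the standard protocol.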
IOEs using CBCT were compared with independent prostate motion and the accuracy of prostate position measurement using automated software match.
METHODS AND MATERIALS
Patients referred for radical radiotherapy to the prostate and seminal vesicles were recruited for the study, which was approved by the local research and ethics committees. Patients were immobilized using the Combi Fix system (Oncology Systems Ltd, Shropshire, UK) and had been given an information sheet detailing instructions on maintaining a comfortably full bladder throughout treatment. Planning CT scans were acquired with 3-mm slice thickness. If patients had an anteroposterior rectal dimension of >4 cm at the time of CT planning, the patient was rescanned. If the rectum was distended due to faeces rather than gas, enemas were prescribed for the repeat scan and during treatment. Patients were treated using a three-field forward-planned intensity modulated radiotherapy treatment (Pinnacle; Philips) delivered at either 2 Gy per fraction in 37 fractions or 3 Gy per fraction in 20 fractions.
Prior to treatment delivery, CBCT images were acquired using a novel 7 cm length image with standard exposure (1644 mAs) at Fraction 1 (7), standard 12 cm length image (1644 mAs) at Fraction 2 (12) and a 7 cm length image with higher exposure (2632 mAs) at Fraction 3 (7H) (Figure 1). The remainder of verification images used the standard length and exposure.
Patients were positioned to skin marks and the isocentre position set according to the plan set-up. CBCT images were registered to planning CT images retrospectively by eight observers (two clinicians and six radiographers) using the treatment console software (Elekta Synergy® XVI v. 4.5; Elekta Oncology Systems, Crawley, UK). Guidelines and training were provided for the observers, which included identification and comparison of the prostate in images acquired by MRI, CT and CBCT. Observers were asked to first register the images using bony anatomy and then manually adjust, where necessary, using soft tissue. To do this, the observer defined a region of interest which was used by the software to perform automated rigid registration to bony anatomy (chamfer matching). The observer visually checked the registration of the prostate and manually adjusted the registration if necessary to obtain a closer match. Observers recorded the prostate position (the total set-up error, including both patient and prostate displacement) and whether manual adjustment had been performed. In addition, they indicated their confidence in the prostate match on a visual analogue scale of 0-10, where 0 was not confident and 10 was very confident. Patient width, lateral and anteroposterior, and the presence of gas in the images were also recorded to evaluate their effect on IOEs. In addition, one observer registered the images using the automatic dual registration software. This first registers the bony anatomy position, followed by "greyscale" registration (cross correlation) to the soft tissue using an irregular region of interest (mask) defined automatically as the clinical target volume of the prostate plus a 0.5 cm margin. It was ensured that no bony anatomy was included in the mask, since this would affect the soft-tissue registration.
Statistical analysis
For each patient, the average prostate position across all the observers provided an estimate of the "gold standard" prostate position (total set-up displacement), where left, superior and anterior carry the positive sign. To determine independent prostate motion, the bony anatomy positions were subtracted from the prostate positions, allowing comparison with the findings of other studies.
The IOE for each image was calculated as the SD of the prostate displacement recorded by all eight observers. IOEs were tested for normality using a quantile-quantile probability plot (Q-Q plot). To compare the means of the IOEs between the three different imaging protocols, a non-parametric unrelated samples test, Kruskal-Wallis, was used.
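The two computations above can be sketched in a few lines. The following is a minimal, illustrative Python version (not the SPSS analysis used in the study): the IOE for one image is taken as the SD of the eight observers' recorded displacements (the population SD is an assumption; the paper does not state the estimator), and a hand-rolled Kruskal-Wallis H statistic (midranks for ties, no tie correction, no p-value lookup) compares the IOE distributions of the three imaging protocols.

```python
from statistics import mean, pstdev

def interobserver_error(displacements):
    """IOE for one image: SD of the displacement recorded by all observers.
    Population SD is assumed; the paper does not state the estimator."""
    return pstdev(displacements)

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (midranks for ties, no tie correction)."""
    pooled = sorted(x for g in groups for x in g)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # midrank: mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    return 12 / (n * (n + 1)) * sum(
        len(g) * (mean(ranks[x] for x in g) - (n + 1) / 2) ** 2 for g in groups)
```

In practice one IOE value would be computed per CBCT image (from the eight observer registrations, per direction) and the three per-protocol lists of IOEs compared with the H statistic.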
Patient size (lateral width and anteroposterior depth) and the presence of gas in the images were also recorded, and their effect on IOEs was determined using Spearman's rank correlation coefficient and the Mann-Whitney U-test for independent samples, respectively. The relationship between the visual analogue score and IOE was assessed using Spearman's rank correlation coefficient.
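Spearman's rank correlation, used twice in this analysis, can be sketched with the no-ties shortcut formula; this is an illustrative stdlib version only (SPSS handles ties with midranks, which this sketch does not).

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via the no-ties shortcut
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)); tie-free sketch only."""
    def rank(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for position, idx in enumerate(order, start=1):
            r[idx] = position
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

For example, correlating per-image patient width against per-image IOE would call `spearman_rho(widths, ioes)`.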
To enable clinical implementation of CBCT soft-tissue imaging, we required that the uncertainty in the registration be less than the uncertainty of using bony anatomy. The non-parametric Levene's test was used to test whether the variances of the IOEs and of the independent prostate position differed.
The automatic greyscale registrations were compared to the observer registrations to determine the magnitude and frequency of manual adjustments.
Set-up errors
93 CBCT images were acquired in 31 patients (3 per patient) and the mean (SD) and median (range) of the total set-up interfraction errors (patient and prostate displacement) are shown in Table 1.
Interobserver errors
The mean IOE was the greatest in the AP direction for each of the three image protocols but the SD of the IOE was greater in the SI direction (Table 2).
No significant difference was found between the IOEs in any direction between the image protocols; therefore, the IOEs were analysed using all images from here on. There was also no significant difference between the two clinicians' results and the six radiographers' results.
The SD of the IOE was significantly less (p = 0.017) than the independent prostate motion in the AP direction only (Table 3). IOEs were not significantly different to independent prostate motion in the LR and SI directions.
The IOEs were within 1SD of the independent prostate motion in 95%, 77% and 96% of the images in the RL, SI and AP direction (Figure 2a-c). The IOE was greater when there was gas present in the CBCT image in the RL (p = 0.03) and AP direction (p = 0.01). There was no significant difference in the SI direction. The IOEs were not affected by patient dimensions.
The confidence score measured with the visual analogue scale was not correlated with the IOE in the RL and SI directions, but as confidence increased the IOE decreased (r = 0.6; p = 0.01) in the AP direction.
Comparison of observer registrations and automatic registrations
The average interobserver registration correlated strongly (Pearson's product-moment correlation coefficient) with the automatic "greyscale" match in the RL direction (r = 0.89; p = 0.01), less strongly in the SI direction (r = 0.78; p = 0.01) and in the AP direction (r = 0.58; p = 0.01).
The difference between the observer registrations and greyscale registrations was >3 mm in 5%, 21% and 15% of the images and >5 mm in 2%, 11% and 8% of the images in the RL, SI and AP directions, respectively. The registration was manually adjusted in 42% of registrations.
DISCUSSION
Increasing the dose and reducing the length of the CBCT did not have a significant effect on the IOEs. We found the IOEs of CBCT registration with planning CT to be of a magnitude that ought to be considered a component of the residual error. Residual errors can arise from geometrical uncertainties (phantom transfer error), errors with the position measurement and inaccurate couch movement, IOEs associated with CBCT and planning CT registration or patient motion. The SD of residual errors due to mechanical couch movement is reported to be in the range of 0.8-1.6 mm. 16,17 Residual errors >2 mm are generally thought to be due to prostate motion; [17][18][19] however, these studies investigated residual error with pre- and post-treatment images, and subsequent investigations have shown that large prostate motions during treatment can be transient. 20 Our study has shown that IOEs from soft-tissue registrations are similar in magnitude to other sources of residual error and therefore should be quantified and taken into account when calculating clinical target volume (CTV) to PTV margins. The SD of observer displacements was significantly less than that of the independent prostate motion in one direction only (AP).
Comparing the IOEs to a standard 2 mm tolerance used in radiotherapy, 1%, 13% and 16% of images in the RL, SI and AP directions, respectively, had IOEs of >2 mm. This suggests that for centres considering a soft-tissue CBCT match rather than bony anatomy matching, it is important that IOEs are quantified and compared with the expected independent prostate motion. If observer errors are greater, there may be no benefit from using a CBCT soft-tissue match.
The greater magnitude of manual moves in the SI direction could be explained by the known difficulty in assessing prostate position in CT scans in this direction. [11][12][13] However, the 3-mm slice thickness of the reference planning CT may also contribute to the larger discrepancies in the SI direction. In addition, the difficulty in visualizing the prostate (made worse by blurring because of gas pockets) may also have affected the registrations in all directions.
The lack of "ground truth" of the prostate motion is a weakness of this study; however, the distribution of the set-up errors of both patient and prostate is within expected ranges compared with previously published results. 1,2,21,22 Reducing the length of the scan and increasing the dose did not improve IOEs, and we suggest that other methods of decreasing IOEs are investigated. Possible solutions to aid image registration include implanting fiducial markers, which, combined with 3D imaging, would still provide the additional soft-tissue information regarding organ-at-risk position and deformation of the target compared with kV or MV planar imaging. Improved training for radiographers may decrease IOE; however, two of these observers were clinicians, and the mean of the radiographers and the mean of the clinicians were not significantly different. This agreement suggests that for routine clinical practice prostate matching does not require clinician intervention. Furthermore, routine practice involves two radiographers when checking the final registration, which has been shown to make a difference in concordance when selecting PTV for bladder patients. 23 Reducing the slice thickness of the planning CT scan may decrease the error in the SI direction. However, the findings of this study have highlighted that reducing the scan length did not increase IOEs, and therefore a reduction in scan length may benefit patients by reducing integral dose with no loss of precision.
CONCLUSION
Reducing CBCT scan length and increasing exposure did not have a significant effect on IOEs. To reduce imaging dose, a reduction in CBCT scan length could be considered without increasing the uncertainty in prostate registration. Precision of CBCT verification of prostate radiotherapy is affected by IOE which should be quantified prior to implementation.
Physical-Chemical and Microhardness Properties of Model Dental Composites Containing 1,2-Bismethacrylate-3-eugenyl Propane Monomer
A new eugenyl dimethacrylated monomer (denoted BisMEP) has recently been synthesized. It showed promising viscosity and polymerizability as a resin for dental composites. As a new monomer, BisMEP must be assessed further; thus, various physical, chemical, and mechanical properties have to be investigated. In this work, the aim was to investigate the potential use of BisMEP in place of the BisGMA matrix of resin-based composites (RBCs), totally or partially. Therefore, a series of model composites (CEa0, CEa25, CEa50, and CEa100) were prepared, made up of 66 wt% synthesized silica fillers and 34 wt% organic matrices (BisGMA and TEGDMA; 1:1 wt/wt), with the novel BisMEP monomer replacing the BisGMA content at 0.0, 25, 50, and 100 wt%, respectively. The RBCs were analyzed for their degree of conversion (DC)-based depth of cure at 1 and 2 mm thickness (DC1 and DC2), Vickers hardness (HV), water uptake (WSP), and water solubility (WSL) properties. Data were statistically analyzed using IBM SPSS v21, and the significance level was taken as p < 0.05. The results revealed no significant differences (p > 0.05) in the DC at 1 and 2 mm depth for the same composite. There were no significant differences in the DC between CEa0, CEa25, and CEa50; however, the difference became substantial (p < 0.05) with CEa100, suggesting possible incorporation of BisMEP at low dosage. Furthermore, DC1 for CEa0–CEa50 and DC2 for CEa0–CEa25 were found to be above the proposed minimum DC limit of 55%. Statistical analysis of the HV data showed no significant difference between CEa0, CEa25, and CEa50, while the difference became statistically significant after totally replacing BisGMA with BisMEP (CEa100). Notably, no significant differences in the WSP of the various composites were detected. Likewise, WSL tests revealed no significant differences between the composites.
These results suggest the possible usage of BisMEP in a mixture with BisGMA with no significant adverse effect on the DC, HV, WSP, and degradation (WSL).
Introduction
Since the 1960s, resin-based composites (RBCs), which are composed of a resin matrix, fillers, and a matrix-filler coupling agent, have been the most widely utilized biomaterials to restore dental caries and other defects [1,2]. They are the first choice in restorative dentistry for patients and practitioners due to their aesthetics and fabrication simplicity. A resin matrix comprises crosslinking monomers, a photoinitiator system, and other additives forming a dense polymeric net upon photopolymerization [3]. Typically, crosslinkers (also called multifunctional monomers) play an essential role in the final properties of the composites and are primarily of (meth)acrylic type. Bisphenol A-glycidyl methacrylate (BisGMA) is the common base monomer in the RBC matrix, favored due to its benefits to the resulting materials, including aesthetics, low shrinkage, and thermal stability [1,4]. However, its high viscosity brings some issues to the handling and application of the product. Furthermore, it prevents the use of high filler loads, which is necessary for restorative mechanical quality [5,6]. In addition to BisGMA, the matrix usually contains diluents to reduce the matrix's viscosity and, thus, overcome the BisGMA-related drawbacks. The most commonly used diluent is triethylene glycol dimethacrylate (TEGDMA). However, TEGDMA has its own disadvantages, such as higher hydrophilicity and polymerization shrinkage, susceptibility to cyclization rather than crosslinking, and possible cytotoxicity [5,7]. As a result, researchers have investigated alternatives for BisGMA [8][9][10][11], either by modifying its structure or synthesizing new di(multi)functional analogs, targeting viscosity reduction of the matrix to achieve the desired features of the final composite.
RBCs are the material of choice to restore minimally invasive cavities, not only because of their aesthetic qualities but also because of their biocompatibility and ability to adhere to tooth structures [12,13]. However, discoloration with time, poor marginal sealing, and degradation are the main disadvantages of their use and are directly related to their composition [14][15][16], including polymer matrix and filler content. Hence, the long-term existence of restorative materials in the oral environment necessitates a strong and stable product. The oral pH and temperature cycles may alter the composite components, resulting in filtration and reducing their durability [14]. Although the physical and chemical properties of RBCs may be affected by solvent uptake, the two main concerns to be firmly taken into consideration when developing resin materials are the short-term release of uncured components and the long-term elution of degradation products [17,18]. The clinical performance of a dental material is highly influenced by water sorption; therefore, it plays a crucial role in deciding clinical success, despite dental composites being considered stable and impermeable to water [19]. Water uptake is generally associated with the polymer network, which in turn is fostered by the chemical structure of the matrix [20,21]. Considering the structural properties of the composite components, a hydrophilic matrix has been reported to be the primary cause of water uptake [22]. The higher the hydrophilicity of the organic matrix, the greater the water uptake. The process may result in restorative discoloration, lower wear resistance, mechanical quality deterioration, the release of unreacted monomers, and hydrolytic degradation of bonds [13,19]. Conversely, solubility can contribute to discoloration and bulk weakening [23] of dental restoratives. It is a sign of the low reactivity of the monomers used in the matrix, the degradation possibility brought on by the composite-making process, and the degree of hydrophobicity of the contents.
The conventional resin in RBCs is commonly based on the monomers BisGMA, UDMA, TEGDMA, and others with mono-, di-, or multi-functionality. However, BisGMA is the most popular one, present at a higher rate than other monomers [24]. Chemically, BisGMA holds two hydroxyl groups, which supposedly drive its high viscosity and hydrophilicity [25]. As water diffuses into the restorative material, it may trigger chemical degradation, resulting in the formation of hydrolytic products. Hence, water also facilitates the removal of such degradation products and further contributes to water solubility via a releasing event. Water diffusion will also lead to erosion of the organic matrix due to the release of unreacted monomers, which is more prominent in the early phase after restoration and will result in mass loss of the dental composite material [26].
The mechanical properties of dental materials determine how long they endure when used in the mouth [27]. Hence, flexural resistance and hardness are two of the most studied mechanical attributes because they closely resemble the forces generated during mastication and those supported by the material [28]. The hardness of RBCs is typically linked with the degree of conversion, which in turn depends on polymerization conditions and the composite substances (type and quantity) [29,30]. It determines the material's abrasion resistance, indirectly influencing bacterial adhesion by making surfaces more easily roughened [16,17].
The Vickers hardness test is a versatile method for measuring macro and microhardness, easy to carry out, and can be applied to small areas and various types of materials [31].
The 3-(4-allyl-2-methoxyphenoxy)propane-1,2-diyl bis(2-methylacrylate), shortened as 1,2-bismethacrylate-3-eugenyl propane (BisMEP), is a newly synthesized dimethacrylated monomer containing a eugenol moiety as a pendant group. The monomer was analyzed for its structural integrity and then incorporated in place of the BisGMA matrix of experimental RBCs. Then, the composites were characterized for their thermal stability, flowability, and degree of conversion [8], resulting in promising features to be investigated further. Hence, composite stability and mechanical withstanding are crucial properties of dental composites that could be targeted for analysis.
Eugenol is a versatile bioactive molecule and an aromatic building block for obtaining bio-based monomers [32]. Additionally, it has a bright history of use in medicine as an antimicrobial, antiseptic, and anesthetic agent [33], making it an essential precursor for several transformations, including the production of adhesives and polymerizable and non-polymerizable derivatives [32,34]. Therefore, many methacrylate derivatives of eugenol have been synthesized and analyzed as adhesives, dental fillings, and orthopedic cements [34][35][36][37]. Indeed, the eugenyl moiety is an attractive substance that could retain its bioactivity and function as an effective antimicrobial agent for particular applications, including cosmetics and dentistry [38]. Additionally, its allylic double bond enables further reactivity of functional polymers [32].
Recently, trials have been conducted to modify eugenol with polymerizable functional groups, predominantly of (meth)acrylate type, to be incorporated within resin matrices where moldable, in-situ-setting fabrication with long-lived functioning is required [8,35,39]. In this project, the BisMEP difunctional monomer was applied to incrementally substitute BisGMA in experimental RBCs to assess the effect of such replacement on the microhardness, depth of cure via degree of conversion, water sorption, and water solubility properties of photocured model composites. It is hypothesized that the replacement of BisGMA by BisMEP has no significant effect on (1) degree of conversion, (2) microhardness, (3) water sorption, and (4) water solubility. Furthermore, (5) there is no effect of curing thicknesses of 1 and 2 mm on the degree of conversion, at a p-value of 0.05.
Preparation of Model Composites
Four groups of experimental composites (CEa0, CEa25, CEa50, and CEa100) were formulated by mixing the organic components (BisGMA, TEGDMA, BisMEP, DMAEMA, and CQ) with the synthesized silanized silica fillers, as summarized in Table 1. The control group consists of BisGMA as the base resin, TEGDMA as the diluent monomer, and silanized silica as the filler. The test groups were prepared by replacing 25, 50, or 100 wt% of the BisGMA with BisMEP, the monomer of interest, to obtain CEa25, CEa50, and CEa100, respectively. Typically, the monomers were manually homogenized using a stainless-steel spatula. Then, the initiator system (CQ and DMAEMA at 0.2 and 0.8 wt% with reference to the total mass of the monomers) was dissolved in the monomer mixture. After the complete dissolution of CQ, the predetermined amount of the fillers was added in portions with vigorous mixing. The composites were thus manually mixed using a spatula, then further homogenized by an asymmetric centrifugation technique in a DAC 150 FVZ speed mixer (Hauschild and Co., Hamm, Germany) three times (1 min each, with 2 min rest in between) at 3000 rpm. After that, the composites were vacuumed for 10 min at room temperature and then refrigerated at about 8 °C until used.
Degree of Conversion and Depth of Cure
The depth of cure of the photocured model composites (CEa0–CEa100) was assessed in terms of the degree of conversion using an attenuated total reflectance-Fourier transform infrared (ATR-FTIR) technique with a Nicolet iS10 FTIR spectrometer from Thermo Scientific (Madison, WI, USA). For this, samples were packed in 5 mm diameter stainless-steel disks of 1 or 2 mm thickness, covered with plastic strips on either side of the mold followed by glass slides, and then light-cured from the top side for 1 min using an LED curing unit (Bluephase, Ivoclar Vivadent, Schaan, Liechtenstein) with a light density of 650 mW/cm², a broad wavelength range of 385-515 nm, and an approximately 10 mm light guide tip. The FTIR spectra for the bottom side were collected before and after irradiation over the range from 650 to 4000 cm⁻¹, with 16 runs per spectrum and a 4 cm⁻¹ wavelength resolution. The DC was quantified by comparing the peak area of the polymerizable aliphatic C=C bonds (1638 cm⁻¹) before and after curing with reference to the peak area of C-H bending at 1451 cm⁻¹ in the matrix monomers [8,41], as given in Equation (1).
where A1638 and A1451 are the peak areas at 1638 and 1451 cm⁻¹, respectively.
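Equation (1) itself is not reproduced in this extract. The standard two-band conversion formula implied by the description is, as an assumption, DC(%) = [1 − (A1638/A1451)cured / (A1638/A1451)uncured] × 100, which can be sketched as:

```python
def degree_of_conversion(a1638_before, a1451_before, a1638_after, a1451_after):
    """DC (%) from the normalised aliphatic C=C band.

    Assumed form of Equation (1): the 1638 cm^-1 peak area is divided by
    the 1451 cm^-1 reference area before and after curing, and DC is the
    relative drop of that normalised ratio."""
    ratio_before = a1638_before / a1451_before
    ratio_after = a1638_after / a1451_after
    return (1 - ratio_after / ratio_before) * 100
```

The internal 1451 cm⁻¹ reference cancels out differences in sample contact and thickness between the two spectra, which is why the raw C=C areas are not compared directly.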
Vickers Hardness Test
The hardness of the specimens was studied using a microhardness tester (INNOVATEST Europe BV, Maastricht, The Netherlands) equipped with a diamond indenter. Disc-shaped samples of 5 mm diameter and 2 mm thickness were prepared (photo-cured from one side for 60 s, as above); after 10 min, samples were moved to plastic containers and conditioned in a 37 °C humidified environment for 24 h before testing. The measurement was carried out using 200 gf as the loading force and 15 s as the dwell time for three replicates, with three readings per specimen selected at a distance of at least 1 mm from each other. The model composites' mean Vickers hardness number (VHN) values were calculated by the machine software based on the formula in Equation (2). The indentation was monitored with the 40× magnification lens of the microscope.
where F is the applied load (kilograms-force) and D² is the indent area (mm²).
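Equation (2) is likewise not reproduced in this extract. The standard Vickers relation, HV = 1.8544 · F / d², with F in kgf and d the mean indentation diagonal in mm, matches the description; the constant 1.8544 and the use of the mean diagonal are taken from the standard test, not from the paper, so they are assumptions here.

```python
def vickers_hardness(load_kgf, mean_diagonal_mm):
    """Vickers hardness number, HV = 1.8544 * F / d^2 (standard relation;
    F in kgf, d = mean indentation diagonal in mm)."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2
```

For the 200 gf load used in this study, `load_kgf` would be 0.2, and `mean_diagonal_mm` the average of the two diagonals measured through the 40× lens.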
Water Uptake and Solubility
Water sorption and water solubility of the examined model composites were assessed in distilled water [37,42] to simulate the oral environment. Disc-shaped specimens (15 mm diameter, 2 mm thickness, n = 3) were fabricated in stainless-steel molds and light-cured using a 10 mm diameter-tip curing unit as above. After disc preparation, a drying-swelling-drying process was performed. Typically, the discs were dried in a desiccator containing anhydrous potassium sulfate maintained at 37 ± 2 °C. After every 24 h, the desiccator containing the test discs was transferred to cool at room temperature for about 2 h, and then the specimen dry weight was recorded. When the dry weight was unchanged (termed m1), samples were immersed in distilled water and incubated in the oven at 37 ± 2 °C for the water sorption test. Every 24 h, the disk temperature was brought to room temperature; then, the samples were carefully taken from the swelling water, gently swabbed, weighed again, and returned to the water, and the attained constant weight (m2) was considered the maximum swell. To assess the solubility of the materials, the swollen discs were dried again until constant weight (m3) was attained, as conducted for m1. WSP and WSL from three replicates were calculated using Equations (3) and (4).
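Equations (3) and (4) are not shown in this extract. A common mass-based formulation consistent with the reported wt% results is assumed below: sorption as (m2 − m3)/m1 and solubility as (m1 − m3)/m1, both expressed as percentages of the initial dry mass m1. The exact normalisation used in the paper may differ (e.g., ISO 4049 normalises to specimen volume), so treat this as an illustrative sketch only.

```python
def water_sorption_pct(m1, m2, m3):
    """Assumed mass-% sorption: water retained at equilibrium relative to
    the initial dry mass, (m2 - m3) / m1 * 100. Not the paper's verbatim
    Equation (3)."""
    return (m2 - m3) / m1 * 100

def water_solubility_pct(m1, m3):
    """Assumed mass-% solubility: mass lost after re-drying relative to
    the initial dry mass, (m1 - m3) / m1 * 100. Not the paper's verbatim
    Equation (4)."""
    return (m1 - m3) / m1 * 100
```

Using m3 (the re-dried mass) rather than m1 in the sorption numerator separates true water uptake from the mass simultaneously lost by leaching, which is why the three-weighing drying-swelling-drying cycle is needed.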
Statistical Analysis
Statistical analysis was conducted using IBM SPSS Statistics version 21 (IBM Corp., Armonk, NY, USA). One-way analysis of variance (ANOVA) followed by the Tukey post-hoc test and paired samples t-tests were used for evaluation, and a p-value of less than 0.05 was considered significant. Figures were prepared using Origin 2018 software (OriginLab Corporation, Northampton, MA, USA) for the mean ± standard deviation of five replicates.
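As an illustration of the first step of this analysis (the paper used SPSS), the one-way ANOVA F statistic is the ratio of the between-group to the within-group mean square; the Tukey post-hoc comparisons and p-value lookup are omitted from this stdlib sketch.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square (no post-hoc step, no p-value)."""
    grand = mean(x for g in groups for x in g)
    n = sum(len(g) for g in groups)
    k = len(groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Here each group would be the replicate measurements (e.g., VHN readings) for one composite, CEa0 through CEa100; a large F indicates that at least one group mean differs.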
Characterization
Figure 1 illustrates the working experimental route, including synthesis, characterization, and applications. The structural integrity of BisMEP and the synthesized silanized silica (S-SiO2) was confirmed and reported previously [8,40]. The chemical structures of the matrix monomeric components are shown in Figure 2. BisGMA was used as the main resin, TEGDMA was the diluent, and BisMEP was the newly introduced monomer to be tested, which, as can be seen, is incrementally employed in place of BisGMA in the target composites CEa0–CEa100. By comparing the structural properties of BisGMA and BisMEP, one can see that BisGMA is unbranched, has an aromatic core structure of bisphenol A, and involves two hydroxyl groups. This semi-linearity and the presence of two OH groups drive its high viscosity by providing strong intermolecular H-bonding interaction between molecules. BisMEP, on the other hand, has a lower molecular weight (374.4 g/mol) than BisGMA (512.6 g/mol) and has no ability to create H-bonds. Under similar conditions, the BisMEP viscosity (0.379 Pa·s) is remarkably lower than that of BisGMA (580.977 Pa·s). According to the previous study [8], composites incorporating BisMEP, which partially replaced BisGMA (up to 50 wt%) in matrices containing 50 wt% TEGDMA, exhibited better rheological properties and comparable DC to composites with a BisGMA-unreplaced matrix. The effect of such substitution of BisGMA by BisMEP on the composite's DC, depth of cure, microhardness, water sorption, and water solubility was intended to be investigated in this work.
Analysis of Curing Degree
The DC at the bottom side of the photocured CEa composites with 1- and 2-mm disc thicknesses are given in Table 2, denoted DC1 and DC2, respectively. As seen among the tested composites, the DC decreases as the BisMEP content increases. Thus, DC1 differs insignificantly due to the replacement of BisGMA by BisMEP up to 50% (CEa50). However, the difference became significant as BisMEP completely replaced BisGMA (CEa100), so the first hypothesis was rejected. This result agrees with the previous report, in which the DC was analyzed for the cured side of the disk with 1 mm thickness [8]. Analysis of DC2 indicated an earlier inhibitory effect on the curing process than DC1 due to incorporating a high quantity of BisMEP, which becomes significant above CEa25, thus confirming the rejection of the first hypothesis in such a case. By comparing the DC at 1 and 2 mm thickness (DC1 and DC2, respectively), it is found that there are no significant differences between them at low dosages of BisMEP (i.e., CEa0 and CEa25). Indeed, BisMEP is a low-viscosity monomer (0.379 Pa·s) compared to BisGMA (580.977 Pa·s), a character that supports increasing DC; on the other hand, the structural properties of BisMEP may retain some of the inhibitory character of the eugenol moiety, causing a reduction in DC [8]. Hence, it seems that the two influences balanced the DC values close to each other at low BisMEP quantities. However, on incorporation of a high quantity, e.g., close to that in CEa50 and CEa100, the inhibitory effect dominates, resulting in a significant drop in the values of DC. According to the literature, there is no consensus regarding the minimum DC required for most restoratives, but a minimum value of 55% was suggested as suitable for clinical approaches [43]. In the current course, the DC was found to be higher than 55%, the minimum limit, for CEa0–CEa50 at 1 mm thickness (DC1) and CEa0–CEa25 at 2 mm thickness (DC2), which supports the possible incorporation of BisMEP in place of BisGMA up to 25%. The 2 mm increment thickness is reported as the gold standard for placement and curing of composite [44], despite manufacturers competing to introduce materials with a higher depth of cure to address the issues correlated with the 2 mm curing process, including time consumption and technique sensitivity. Within a column of Table 2, different lowercase letters denote significant differences, p < 0.05; within a row, different uppercase letters denote significant differences between DC1 and DC2 of the same composite.
Vickers Hardness
The results of Vickers microhardness for the investigated model composites (CEa0–CEa100) are summarized in Table 2. As can be seen, the microhardness decreased insignificantly as BisMEP increased (p > 0.05), from 50.04 ± 1.23 for the control (CEa0) to 46.66 ± 3.74 for CEa50. However, the difference became statistically significant (p < 0.05) for the composite with the complete replacement of BisGMA by BisMEP (CEa100); therefore, the second hypothesis was partially rejected. This result is mainly associated with the composite materials and their DC. The DC was slightly but insignificantly reduced as BisMEP replaced BisGMA up to 50%; on approaching CEa100, the DC differed significantly from the control (CEa0). Hence, the microhardness varies with the DC; the viscosity differences between BisGMA and BisMEP could be the leading influencers. The viscosity of BisMEP was found to be more than 1500 times less than that of BisGMA [8]; the viscosity of CEa100 is more than 16 times lower than that of CEa0. The viscosity is structure-dependent; thus, as the hydrogen bonding associated with BisGMA is absent in BisMEP (Figure 2), the viscosity of the latter is lower. A closer look reveals that the historical inhibitory effects of eugenol against free-radical polymerization may still be somewhat retained, which could explain why the DC drives the process rather than the viscosity, as was previously claimed [8].
Water Sorption and Solubility
The data obtained for WSP and WSL of the investigated model composites are also listed in Table 2. As can be seen, there is no significant effect (p > 0.05) of replacing BisGMA with BisMEP on WSP and WSL, thus accepting the third hypothesis. WSP was insignificantly reduced from 2.39 ± 0.25 to 2.03 ± 0.77 wt% as the replacement progressed from CEa0 to CEa100. WSL was found to increase, but within the statistically insignificant range at the 0.05 level, supporting acceptance of the fourth hypothesis. These findings suggest material-dependent behavior, showing the tendency of the composites to take up less water and leach more substances as the BisMEP quantity in the composite increases. The decrease in WSP is supposedly a result of reduced hydrophilicity due to the replacement of BisGMA (a highly hydrophilic monomer) by BisMEP (a less hydrophilic monomer). On the other hand, as there are no significant differences between the test composites (CEa25–CEa100) and the control (CEa0) even after the total replacement of BisGMA (CEa100), other factors may also participate in the WSP mechanism. It could be suggested that the effect of TEGDMA is dominant, thus balancing the hydrophilicity decrease due to BisGMA replacement by the less hydrophilic monomer BisMEP.
WSL is mainly associated with the DC. Accordingly, the DC decreased insignificantly as the BisMEP content developed from CEa0 to CEa50; however, on complete replacement of BisGMA by BisMEP in CEa100, the DC differed significantly from that of the control (the BisMEP-free composite, CEa0). The DC is somewhat affected by the eugenol moiety, through both its free-radical scavenging and its viscosity. Therefore, the hydrophobicity of BisMEP may support less water uptake, while its lower DC promotes water diffusion. The latter case may be the cause of the insignificant decreasing trend in WSP.
The process and impact of water sorption on RBCs can be illustrated based on the physical and chemical properties of both the organic and inorganic components. Basically, hydrophilic organic molecules ('resins') have a high affinity for water. Thus, sorption and solubility occur when they come into contact with saliva ('water'). During this process, water diffuses into the material, causing gradual expansion ('swelling'). Swelling then promotes hydrolysis ('degradation') as well as diffusion ('leaching') of the hydrolysate and unreacted monomers, respectively.
The overall results for DC, depth of cure, W_SP, and W_SL can be discussed in terms of the chemical structures of BisGMA and BisMEP. BisGMA has a higher molecular weight and is more hydrophilic than BisMEP, properties that drive its high viscosity compared to BisMEP. On the other hand, BisMEP holds a eugenyl pendant group, which may decrease the molecular freedom during polymerization reactions. Even though the viscosity of BisMEP is low and could support a higher DC, the inhibitory effect of the eugenyl moiety via radical scavenging may be retained. Therefore, the insignificant difference in the DC due to the addition of BisMEP may be balanced by these two opposite characteristics, i.e., reduced viscosity and a free radical inhibitory effect. Indeed, the DC affects most other properties, including material hardness, water uptake, and water solubility. Therefore, trends close to that of the DC were observed among the investigated composites, i.e., for VHN, W_SP, and W_SL, as discussed above. The results support no change in the composite properties when BisGMA is replaced by BisMEP up to ca. 25 wt%, indicating a possible route to improving dental composites with an eugenol moiety.
Conclusions
Four model composites consisting of 66% silanized silica as fillers and 34% BisGMA/TEGDMA (1:1) as matrices were prepared, with BisGMA replaced by 0.0, 25, 50, and 100% BisMEP. Based on the data obtained, the investigated BisMEP monomer can be added to a dental composite up to approximately 25 wt% of the total matrix contents to benefit the composite's handling properties without compromising its primary desired properties, including DC, mechanical hardness, W_SP, and W_SL. The DC-based depth of cure was statistically the same at 1 and 2 mm thickness. The swelling and degradative properties of the investigated composites in distilled water showed minimal and statistically insignificant differences compared to the control (the BisMEP-free composite, CEa0). Such findings support the possible incorporation of the BisMEP monomer as a diluent for BisGMA in resin-based dental composites. However, the eugenol moiety remains attractive for further analysis as a potential contact-active antimicrobial, a case that is open for subsequent study.
"Materials Science",
"Medicine",
"Chemistry"
] |
A Study of Pattern Prediction in the Monitoring Data of Earthen Ruins with the Internet of Things
An understanding of the changes of the rammed earth temperature of earthen ruins is important for protection of such ruins. To predict the rammed earth temperature pattern using the air temperature pattern of the monitoring data of earthen ruins, a pattern prediction method based on interesting pattern mining and correlation, called PPER, is proposed in this paper. PPER first finds the interesting patterns in the air temperature sequence and the rammed earth temperature sequence. To reduce the processing time, two pruning rules and a new data structure based on an R-tree are also proposed. Correlation rules between the air temperature patterns and the rammed earth temperature patterns are then mined. The correlation rules are merged into predictive rules for the rammed earth temperature pattern. Experiments were conducted to show the accuracy of the presented method and the power of the pruning rules. Moreover, the Ming Dynasty Great Wall dataset was used to examine the algorithm, and six predictive rules from the air temperature to rammed earth temperature based on the interesting patterns were obtained, with the average hit rate reaching 89.8%. The PPER and predictive rules will be useful for rammed earth temperature prediction in protection of earthen ruins.
Introduction
Earthen ruins are ancient ruins based on mud brick, rammed earth, or any type of construction using soil as the main building material and have great historical, cultural and scientific value [1]. Lacking the very architectural devices necessary for survival as originally designed (e.g., protective roofs and plastered surfaces), earthen ruins exist in a continual state of 'unbecoming' [2]. At the micro level, earth swelling and shrinkage coupled with decohesion of the earth-silt-sand agglomerate are the fundamental mechanisms responsible for soil destabilization, which will eventually lead to various scales of damage from the loss of surface and surface finishes. Most of these earthen ruins will either collapse over time from differential erosion, or eventually stabilize as unrecognizable lumps [2]. Figure 1 shows the cracking and weathering conditions (marked by circles) in the Ming Dynasty Great Wall in Shaanxi Province of China. Research shows that dramatic rammed earth temperature changes are one of the major reasons causing the destruction of earthen ruins [3,4]. Therefore an understanding of the changes of the rammed earth temperature of earthen ruins is important for protection of earthen ruins. The technology of the Internet of Things (IoT) has become an ideal solution for monitoring earthen ruins, because it is easily deployable and suitable for long-term and real-time data collection in remote areas. Most of the existing early studies have focused on collecting the monitoring data, but lack suitable data processing procedures [5][6][7]. Subsequent research has begun to focus on IoT monitoring data processing in the field of cultural heritage [8][9][10][11][12]. 
However, it is difficult for the earthen ruins conservation experts to make intelligent and scientific based decisions for protecting earthen ruins based on collected IoT data [13] due to the following challenges: (1) It is difficult to describe the complex and interesting hidden relationship between internal earthen ruin parameters and external environmental parameters. For example, Figure 2 shows a plot of air temperature (line with ○) and rammed earth temperature of an earthen ruin (line with ×) sequences. As shown in the figure, in the area indicated by the left dashed box, both air temperature and rammed earth temperature values decrease. However, in the area indicated by the right dashed box, although the air temperature falls, the rammed earth temperature shows no significant change. In order to predict the rammed earth temperature trend based on the air temperature, we must know what kind of change in temperature is meaningful for earthen ruins conservation experts. However, not all the air temperature changes are interesting; for instance, earthen ruins conservation experts may only pay attention to air temperature changes that would cause damage in earthen ruins. (2) With the increase in the number of IoT nodes, as well as the continuous monitoring, we will face dealing with huge amounts of time-series data. For example, the Ming Dynasty Great Wall ruins are more than 6000 km in length; if monitoring nodes were deployed every meter along the Ming Dynasty Great Wall, the number of nodes would be more than 6 million. Faced with large amounts of time-series data, we expect algorithms to provide prediction results efficiently. (3) There is a hysteresis effect in rammed earth temperatures relative to air temperatures.
Therefore, in the monitoring data of earthen ruins, a change in the rammed earth temperature has a certain delay relative to the change in air temperature. The delay parameter needs to be studied to be able to predict the exact rammed earth temperature trends with time.
To solve the aforementioned challenges, we propose a method called Pattern Prediction on the monitoring data of Earthen Ruins (PPER) to acquire rammed earth temperatures of earthen ruins from IoT data. The proposed PPER can describe the interesting patterns of the earthen ruin variables by efficiently discovering the hidden relations between the time series of two correlated variables and pattern prediction. Specifically, the contributions of this paper are: (1) Some terms for interesting patterns for earthen ruin monitoring are formalized. Terms such as interesting pattern, direction, delay and variation of rule are properly defined to precisely describe the patterns and changes in IoT data. (2) The air temperature and rammed earth temperature are used as examples to show how the hidden relationships between the two earthen ruin monitoring data are discovered. Since dramatic temperature change is one major cause of the destruction of earthen ruins, we focus on the rising and falling of the temperature. PPER searches the whole dataset to find the rising and falling patterns and obtains the predictive rules to predict the rammed earth temperature. (3) Two pruning rules are proposed to reduce the number of computations. At the same time, a new data structure based on the R-tree [14] is used to group similar patterns. Both techniques can significantly reduce the time complexity when processing huge amounts of IoT data. (4) A set of experiments was conducted on the monitoring data of the Ming Dynasty Great Wall to demonstrate the effectiveness and efficiency of PPER. Interesting relationships between the air temperature and the rammed earth temperature were discovered to predict the rammed earth temperature pattern.
The rest of this paper is organized as follows: Section 2 briefly reviews the work most related to this study. In Section 3, the required definitions are presented. In Section 4, the proposed method is described in detail. Section 5 presents the experimental studies and the results. Finally, the concluding remarks and some ideas for further studies are discussed in Section 6.
Related Work
In this section, we present and discuss related studies on: (i) destructive, nondestructive and micro-destructive measurements for acquiring rammed earth temperatures of earthen ruins; (ii) IoT technologies in the cultural heritage domain, and (iii) time-series prediction methods.
Destructive, Nondestructive and Micro-Destructive Measurements for Earthen Site Monitoring
In order to acquire rammed earth temperatures of earthen ruins, there are a few options, which can be grouped into destructive, nondestructive and micro-destructive measurements. With destructive measurement methods, such as using a dynamic nuclear polarization (DNP) series of temperature probes and digital thermometers, the rammed earth temperature can be accurately obtained, but the earthen ruin will be damaged to a certain degree [15]. Nondestructive measurement methods, such as infrared imaging systems, can be used to obtain the surface temperature of rammed earth [15,16]. Using this measurement method, only the rammed earth surface temperature can be obtained, and the internal temperature of the rammed earth remains unknown. The third type is micro-destructive measurement methods, such as measurements obtained using the IoT.
Large amounts of monitoring data, such as the air temperature, can be obtained with IoT methods, and the rammed earth temperature can be acquired by IoT nodes. A prediction model for IoT monitoring data based on a linear function between the air temperature and the rammed earth temperature was proposed to predict the rammed earth temperature of an earthen ruin in a completely closed environment [4]. Sufficient information on the rammed earth temperature was obtained with as little damage to the earthen ruin as possible. However, not all earthen ruins are in a completely closed environment, and the relation between the air temperature and the rammed earth temperature cannot always be satisfied by a linear function. For example, as shown in Figure 3, there is no simple linear relationship between the air temperature and the rammed earth temperature in the monitoring dataset of the Ming Dynasty Great Wall.
IoT Technology in Culture Heritage Domain
IoT technologies have made great progress in cultural heritage applications. These technologies have two phases, the data monitoring and the data processing phase.
In the data monitoring phase, for example, Abrardo and Rodriguez-Sanchez used Internet of Things technology to collect environmental information of heritage sites [5]. In addition, the Institute of Computing Technology, Chinese Academy of Sciences, has developed an intelligent environment monitoring system that collected environmental information such as temperature, humidity, and light at the Palace Museum heritage site [6]. A new sensor network for indoor environmental monitoring has also been developed [7].
In the data monitoring and data processing phase, there have been some interesting applications in the field of cultural heritage based on IoT technology. In [8], an IoT architecture was designed to support the design of a smart museum based on an innovative model of sensors and services. An intelligent IoT system, designed with the aim of improving user experience and knowledge diffusion within a cultural space, was presented in [9]. Typical IoT smart technologies represent an effective means to support users' understanding of cultural heritage [10]. An authoring platform named FEDRO was presented to automatically generate textual, user-profiled artwork biographies, which were employed to feed a smart app for guiding visitors during an exhibition. An indoor location-aware architecture able to enhance the user experience in a museum was designed and validated [11]. In particular, the proposed system relies on a wearable device that combines image recognition and localization capabilities to automatically provide the users with cultural contents related to the observed artworks. A collaborative reputation system (CRS) was designed to establish people's reputation within cultural spaces [12].
Time-Series Prediction
Time-series prediction has many practical applications, such as weather forecasting and stock market prediction, and it has therefore attracted a great deal of attention [17]. A large number of studies on time-series forecasting have utilized statistical models, such as autoregressive integrated moving average (ARIMA) [18] or exponential smoothing [19][20][21]. These models have been widely used for financial data analyses. However, the nonlinear and irregular nature of real-world time-series data has always been a problem. A newer approach to solving this problem uses machine learning techniques to predict future results based on the knowledge learned from available data. For instance, artificial neural networks (ANN) have been widely used in time-series prediction [22][23][24]. However, disadvantages such as overfitting and a time-consuming training phase occur. Other techniques applied to this problem are support vector machines (SVM) [25] and the k-nearest-neighbor method [26].
Since many time-series variables are exposed to translation or dilatation in time, those approaches used traditionally for behavior forecasting will fail in providing useful hints about the future. A solution takes the behavior of the sequences into account rather than the exact values. For example, [27,28] proposed methodologies for predicting patterns in time series. Both of these studies suggested a new representation of data and then tried to find the most frequent patterns. However, these solutions had some problems. The data representation proposed in these solutions did not reduce the dimensionality of the data, especially for highly variant data. Therefore, a lot of data processing, like clustering, was necessary in [27], which can result in high time complexity.
Another problem was the inability to interpret the output rules and relationships. For instance, in [27] the relationship between patterns was defined using Allen's interval relations [29], which did not provide enough information for calculating delay. Reference [28] had the same problem for extracting information, such as the sensitivity and direction of the variations, since it used the symbolic aggregate approximation (SAX) representation, which summarizes the data sequences based on the average value of intervals and does not pay attention to the direction of variations.
These approaches try to learn from previously seen or predicted data and then predict future values. However, if they use predicted values as input for the next prediction step, an error accumulation problem is possible [17]. Moreover, in reality, it is highly probable that the values of a time series are influenced by variations of other groups of time series.
Studies about multivariate forecasting methods take this problem into account and aim at solving it using statistical or artificial intelligence approaches. Neural networks [30][31][32] are one of the most important techniques applied in this area. The proposed methods try to find the future value of a time series based on previously seen data. However, in many real-world situations, the upcoming trend is of more interest than the exact and specific values. To achieve this goal, some studies have been done on univariate time-series data, using a set of previously observed values, called a pattern, to predict a set of future values [33][34][35][36]. These methodologies try to find the hidden trends in the data and predict the future using the discovered patterns.
Predicting multivariate time-series data using patterns has been studied with interval data knowledge discovery [27,28,32]. Hoppner's work was aimed at solving a problem similar to the current study [27]. The author tried to find frequent patterns and use them for mining hidden rules in the data. However, their approach was different from our methodology in some aspects. They partitioned the sequences based on different data trends and coded the data using these intervals. However, this representation of the data may not reduce the complexity of the data, especially when the variations are very frequent. Moreover, clustering of similar intervals is a highly complex step, due to the large number of intervals and the sequential nature of the data.
Our proposed method solves these problems by considering only intervals of the data that are of higher importance to the user, instead of all partitions. Additionally, since Hoppner's method used Allen's interval relations [29], it did not take the pattern locations into account and could not, therefore, extract important information, such as delay and direction of the relationships, which are essential for decision making. We solve this problem using concepts such as pattern location and slope.
Definitions and Notations
In this section, we present the definitions and notation used throughout the paper. All air and rammed earth monitoring data can be represented as sequential data.
Definition 1 (Sequence). A sequence S = {s_1, · · · , s_l} is an ordered set of values, where s_i (1 ≤ i ≤ l) is a value of a sequence variable, and l is the length of the sequence.
In this regard, each value shows a data point (e.g., in time or space). For each data point, i indicates the index of the data point in the ordered set S. Time-series data are a special case of sequential data where the order is temporal. For example, S = {4.6, 4.3, 4.1, 4.4, 7.3, 8.5, 6.6, 5.3} is a daily rammed earth temperature sequence measured every three hours, with l = 8. s_1 is 4.6, which means the rammed earth temperature is 4.6 °C at 0:00, and so on.
Here k is the length of the subsequence, and the subsequence S′ is an ordered subset of S starting from start index I.
Definition 4 (Slope of sequence). The slope m of a sequence is defined as m = (s_l − s_1)/(l − 1). The slope of a sequence describes the rising or falling direction of the sequence. If the slope is positive, the sequence is rising; if the slope is negative, it is falling.
Definition 5 (Multivariate sequence). Consider n sequences S_1, S_2, · · · , S_n. A multivariate sequence Y is a set of sequences, denoted as Y = {S_1, S_2, · · · , S_n}. For example, if S_1 is a sequence of air temperature and S_2 is a sequence of rammed earth temperature, then Y = {S_1, S_2} is a multivariate sequence formed from the monitoring data. Multivariate sequential analysis is used to model and explain the interactions and co-movements among a group of sequence variables.
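As a concrete illustration, the slope definition can be sketched in a few lines of Python. The first-to-last-change-per-step formula used here is consistent with the rising/falling description above, but the paper's exact estimator (e.g., a least-squares fit) may differ, so this is an assumption:

```python
from typing import List

def slope(seq: List[float]) -> float:
    """Slope m of a sequence: overall first-to-last change per index step.

    A positive m means the sequence is rising; a negative m means it is
    falling. (Reconstructed from the textual description, not the
    authors' exact formula.)
    """
    if len(seq) < 2:
        return 0.0
    return (seq[-1] - seq[0]) / (len(seq) - 1)

# Daily rammed earth temperatures measured every three hours (Definition 1 example)
s = [4.6, 4.3, 4.1, 4.4, 7.3, 8.5, 6.6, 5.3]
print(slope(s[:3]))   # falling prefix -> negative slope
print(slope(s[3:6]))  # rising stretch -> positive slope
```

For the falling prefix {4.6, 4.3, 4.1} the sketch gives m = (4.1 − 4.6)/2 = −0.25, matching the intended "falling" classification.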
Definition 6 (Interesting pattern). Given a sequence S of length l, a subsequence S′ of S is an interesting pattern p if its variance is greater than a threshold δ_min, i.e., δ(S′) ≥ δ_min.
In this paper, we are interested in finding the rising and falling patterns in the observed air and rammed earth temperature data. We define the concept of interesting pattern based on the variance.
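A minimal check of Definition 6 can be sketched as follows, assuming δ(·) is the ordinary population variance of the subsequence (the paper's exact variance estimator is not reproduced in this excerpt, so this is an assumption):

```python
from statistics import pvariance
from typing import List

def is_interesting(subseq: List[float], delta_min: float) -> bool:
    """Definition 6: a subsequence is an interesting pattern if its
    variance reaches the threshold delta_min."""
    return pvariance(subseq) >= delta_min

s = [4.6, 4.3, 4.1, 4.4, 7.3, 8.5, 6.6, 5.3]
print(is_interesting(s[0:4], delta_min=0.5))  # near-flat window -> False
print(is_interesting(s[2:6], delta_min=0.5))  # sharp rise -> True
```

The near-flat window {4.6, 4.3, 4.1, 4.4} has variance ≈ 0.03 and is discarded, while the rising stretch {4.1, 4.4, 7.3, 8.5} has variance ≈ 3.52 and qualifies as an interesting pattern.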
Given a set of variables, variables (e.g., air temperature) that can be used for predicting other variables are called conditional variables. The other variables that can be predicted using conditional variables are called decision variables (e.g., rammed earth temperature). In this paper, we study how to predict rammed earth temperature with air temperature, so an important concept is the correlation rule.
Definition 7 (Correlation rule). Let P be a set of candidate pattern sets related to conditional variables in a dataset of n multivariate sequential data, and let P′ be another set of candidate pattern sets related to decision variables, where P and P′ are complementary with respect to the whole set of candidate pattern sets of the dataset. A correlation rule is defined in the form r = p ⇒ p′, where p ∈ P and p′ ∈ P′.
For example, since we focus on the prediction of the rammed earth temperature from the air temperature of earthen ruins, the correlation rule will take a form such as r = p_air_3 ⇒ p_earth_5. In this application, earthen ruins conservation experts are interested not only in how the combination of the conditional variable patterns influences the decision variables, but also in how a decision variable such as the rammed earth temperature responds to each single conditional variable. Therefore, we introduce the following concepts of direction, delay and variation of the rules to describe the possible relationships between a single conditional variable and the decision variable.
Definition 8 (Direction of the rule). Given rule r = p ⇒ p′, the direction of rule r is defined as the sign of the product of the slopes of p and p′, d(r) = sign(m_p · m_p′). Here, a positive result means the corresponding variables in the patterns' time intervals move together (positive correlation), while a negative value means opposite movements of the variables in the patterns' time intervals (negative correlation). Otherwise, the direction of the rule is defined as zero.
Definition 9 (Delay of the rule). Given rule r = p ⇒ p′, let p.I denote the start index of pattern p; the delay of rule r is defined as ∆(r) = p.I − p′.I. The delay explains how long it takes to see the effects of changes of one variable in the value of another variable. For r = p_air_3 ⇒ p_earth_5, the start index I of p_air_3 is 296 and the start index I of p_earth_5 is 295, so ∆(r) = 296 − 295 = 1. If the monitoring frequency is every 3 hours, then ∆(r) = 1 means the delay is 3 h.
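The direction and delay of a rule can be computed as sketched below. The delay follows the worked example (∆(r) = p.I − p′.I); the sign-of-slope-product form of the direction is an assumption consistent with the positive/negative co-movement description, and the Pattern class is a hypothetical helper, not the authors' data structure:

```python
from typing import List

class Pattern:
    """A pattern: its values plus its start index I in the source sequence."""
    def __init__(self, values: List[float], start_index: int):
        self.values = values      # assumed to contain at least two points
        self.I = start_index

    def slope(self) -> float:
        return (self.values[-1] - self.values[0]) / (len(self.values) - 1)

def direction(p: "Pattern", p2: "Pattern") -> int:
    """Definition 8 (reconstructed): sign of the product of the patterns'
    slopes; +1 = co-movement, -1 = opposite movement, 0 otherwise."""
    prod = p.slope() * p2.slope()
    return (prod > 0) - (prod < 0)

def delay(p: "Pattern", p2: "Pattern") -> int:
    """Definition 9: difference of start indices, matching the worked
    example where delta(r) = 296 - 295 = 1."""
    return p.I - p2.I

p_air = Pattern([3.0, 4.1, 5.2], start_index=296)    # hypothetical values
p_earth = Pattern([6.0, 6.4, 6.9], start_index=295)  # hypothetical values
print(direction(p_air, p_earth))  # both rising -> 1
print(delay(p_air, p_earth))      # -> 1, i.e., one 3 h sampling interval
```

With a 3 h monitoring frequency, a delay of 1 index step corresponds to a 3 h lag, as in the example above.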
The PPER Algorithm
In this section, we present the proposed PPER algorithm, which can be viewed as a three-stage process: (1) find interesting patterns; (2) generate predictive rules; and, (3) predict with these predictive rules.
The first stage of PPER is finding interesting patterns that summarize data behavior. In this study, we are specifically interested in variations of the data, so we look for rising and falling patterns. Since sequential data may contain repetitive patterns, the algorithm identifies similar patterns and groups them together. For each group of similar patterns, a representative pattern is defined; all representative patterns together form a filtered pattern set. The second stage of PPER uses conventional data mining algorithms to retrieve correlation rules from the filtered pattern set. The correlation rules are then filtered and merged into predictive rules based on earthen ruins prediction requirements. The predictive rules are used to predict the rammed earth temperature of the earthen ruins in the last stage of PPER. The remainder of this section describes the major steps of the algorithm in detail.
Finding Interesting Patterns
The first stage of PPER algorithm is to find interesting patterns that summarize the data behavior. It includes two steps: (1) identification of the candidate pattern set; and, (2) grouping of similar patterns.
Identifying the Candidate Pattern Set
For each variable in the dataset (e.g., air temperature and rammed earth temperature variables in the earthen ruin monitoring dataset), the algorithm explores all the sequences to find the rising and falling patterns. For each sequence, it starts with a sliding window with a prespecified size from the first data point of the sequence. It then calculates the variance of the subsequence based on Definition 3 and checks whether the variance is greater than a threshold (δ min ). If the variance is less than δ min , then there is no interesting pattern in the sliding window and the sliding window is discarded.
Interesting patterns may overlap; for example, the end point of the previous interesting pattern may be the starting point of the next one. Therefore, the window is slid across the sequence with partial overlap and the search continues with a new sliding window. If the sliding window meets the variance condition, the algorithm extends the window to find the pattern with the maximum possible length. The extension starts by adding the next data point of the sequence to the window. The algorithm continues adding points while the variance keeps increasing and the slope of the last added point matches the starting slope. After the extension, the data in the sliding window and its origin information (the identifier of the corresponding sequence) form a pattern. The algorithm then skips those data points and continues finding patterns in the rest of the sequence.
The algorithm is described below. In Algorithm 1, the number of variables of the dataset and the length of each interesting pattern is usually very small, so the time complexity impact caused by them can be ignored. Therefore, the time complexity of the algorithm is O(n), where n is the number of sequences. Lines 1 to 7 check whether the variance is greater than a threshold (δ min ). If the variance is more than δ min , lines 8 to 18 show how to find the longest pattern, and lines 10 to 12 show the details of the extension.
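The sliding-window search above can be sketched as follows. This is a minimal illustration in Python (the paper's implementation is in Java) using a plain population-variance test; the function and parameter names (`find_candidate_patterns`, `delta_min`, `overlap`) are ours, not the paper's Algorithm 1:

```python
# Illustrative sketch of the candidate-pattern search: slide a fixed-size
# window, keep windows whose variance exceeds delta_min, then extend them
# while the variance keeps increasing and the slope direction holds.
from statistics import pvariance

def find_candidate_patterns(seq, window=4, delta_min=5.0, overlap=1):
    patterns = []
    i = 0
    while i + window <= len(seq):
        w = seq[i:i + window]
        if pvariance(w) <= delta_min:
            i += window - overlap  # no interesting pattern; slide on
            continue
        # Extend while variance keeps increasing and slope direction holds.
        sign = 1 if w[-1] >= w[0] else -1
        j = i + window
        var = pvariance(w)
        while j < len(seq) and sign * (seq[j] - seq[j - 1]) > 0:
            new_var = pvariance(seq[i:j + 1])
            if new_var <= var:
                break
            var = new_var
            j += 1
        patterns.append((i, seq[i:j]))  # (start index I, subsequence)
        i = j - overlap  # adjacent patterns may share `overlap` points
    return patterns
```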
Grouping Similar Patterns
After discovering all the candidate patterns of different sequence groups in the dataset, we use these patterns to obtain the hidden relationships between the conditional variable (air temperature in this case) and decision variable (rammed earth temperature). However, for real earthen ruin monitoring datasets with a large number of long sequences, the number of candidate patterns may be very large. As a result, using this collection of patterns for prediction purposes (in the next stage) may not only be computationally expensive in terms of time and space requirements, but also can result in sparse and meaningless predictions. Moreover, the behavior of each sequence may be very similar in different intervals. Such similar behavior may result in the discovery of many similar patterns for one variable. Therefore, we tend to find and filter such similar patterns in the discovered pattern collection to reduce the size of the pattern set and avoid doing repetitive computations for similar patterns. In other words, for every group of similar patterns, we try to determine a representative pattern and use this representative in the next stages of the algorithm.
The first step in finding similar patterns is to generate a distance matrix between patterns. For each pair of patterns, an element of the matrix shows the distance between the two patterns. Therefore, if P is the set of candidate patterns, the distance matrix is a |P| × |P| matrix. A naïve algorithm can be used to calculate the distance between all pairs of patterns. However, the time complexity of generating the matrix is O(|P|^2) × O(f(dist)), where |P| is the number of patterns identified and O(f(dist)) is the time complexity of the distance function (e.g., dynamic time warping (DTW) or Euclidian distance). The naïve algorithm is obviously highly time consuming, regardless of the distance function, since applying distance measurements on sequential data requires exploring all the data points of the sequences. To tackle this problem we extend and utilize a new data structure based on an R-tree [14] and some pruning rules.
Data Structure
An R-tree is a tree-based data structure that is very popular for indexing spatial data [14]. Unlike Quad-tree, R-tree does not require non-spatial attribute to divide the space and can get better performance without fine tuning the tiling level [37]. The key idea of the data structure is that it represents a group of close data points determined using a minimum bounding rectangle (MBR) and then indexing the MBRs by applying a hierarchical structure. Since the original R-tree was designed for spatial data, it cannot be directly applied to the sequential data. In this study, we modified the R-tree concepts, so that we can use them for indexing subsequences.
Based on the first step of the algorithm, we have the start index I and length n of each pattern p in the candidate pattern set. We also have all the elements of the pattern subsequence. To be able to use the R-tree, the index of the sensor reading is mapped to the x coordinate and the value of the reading to the y coordinate. Therefore, for each pattern p in the candidate pattern set P, we can build an MBR whose corners are (I, min(p)) and (E, max(p)), where E is the end index of the pattern p and E = I + n − 1. For example, if the start index I of p_1 is 5 and the length n of p_1 is 8, then the end index E = 12. Figure 4 shows the MBRs of patterns p_1 and p_2.
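The MBR construction can be sketched as follows; the tuple layout for the two corners is our illustration:

```python
# Illustrative MBR construction for a pattern: map the reading index to x
# and the reading value to y, giving corners (I, min(p)) and (E, max(p))
# with E = I + n - 1. The tuple representation is ours, not the paper's.
def pattern_mbr(start, values):
    end = start + len(values) - 1          # E = I + n - 1
    return (start, min(values)), (end, max(values))
```

For a pattern with I = 5 and n = 8, this yields an MBR ending at E = 12, matching the example above.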
As a result, candidate pattern set P can be indexed using a modified R-tree, as shown in Figure 5, where each leaf is an MBR of a pattern and the intermediate entries of the R-tree index patterns with nearby MBRs.
This new data structure is used for reducing the time complexity by pruning the number of processed patterns.

Definition 11 (Minimum distance between patterns). Given the MBRs of two patterns A and B, the minimum distance between the two patterns is defined as d_min(A, B) = sqrt(d_minX(A, B)^2 + d_minY(A, B)^2), where d_minX(A, B) is the minimum distance between the projections of A and B in the x dimension and d_minY(A, B) is the minimum distance between the projections of A and B in the y dimension.
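Assuming the standard R-tree MINDIST form for Definition 11, the minimum distance between two patterns' MBRs can be sketched as follows; the MBR encoding and helper names are ours:

```python
# Illustrative minimum distance between two patterns' MBRs: the gap
# between the interval projections in each dimension, combined
# Euclidean-style. The MBR layout ((x_lo, y_lo), (x_hi, y_hi)) is ours.
from math import hypot

def interval_gap(lo1, hi1, lo2, hi2):
    """Minimum distance between intervals [lo1, hi1] and [lo2, hi2]."""
    if hi1 < lo2:
        return lo2 - hi1
    if hi2 < lo1:
        return lo1 - hi2
    return 0.0  # overlapping projections

def min_pattern_distance(mbr_a, mbr_b):
    (ax0, ay0), (ax1, ay1) = mbr_a
    (bx0, by0), (bx1, by1) = mbr_b
    dx = interval_gap(ax0, ax1, bx0, bx1)  # d_minX(A, B)
    dy = interval_gap(ay0, ay1, by0, by1)  # d_minY(A, B)
    return hypot(dx, dy)
```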
Pruning-Based Calculation of Distance Matrix
In order to quickly and efficiently calculate the distance between pattern pairs, we apply the following two pruning rules in the pruning-based calculation of the distance matrix:
(1) Pruning rule 1. If two similar patterns p_1 and p_2 do not appear in the area (I − W_s, E + W_s) of the sequences, then the distance of the two patterns is infinity. W_s is a user-specified parameter. Pruning rule 1 is based on the fact that two similar patterns far from each other are considered to be two distinct patterns. As a result, there is no need to calculate their distance, and we can simply fill the corresponding matrix element with infinity.
(2) Pruning rule 2. If pattern p_1 is falling (negative slope) and pattern p_2 is rising (positive slope), then the distance of the two patterns is infinity. Pruning rule 2 prunes on the slope of the patterns: if one pattern is falling (negative slope) and the other is rising (positive slope), they cannot be considered similar, so we do not need to calculate their similarity and simply fill the corresponding matrix element with infinity.
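The two pruning rules can be sketched as a guard in front of the exact distance computation; the pattern encoding (dicts with `I`, `E` and `slope`) and the parameter `w_s` standing for W_s are our illustration:

```python
# Illustrative check of the two pruning rules before computing an exact
# pattern distance. `inf` stands in for the matrix entry the paper fills
# when a rule fires; the data layout is ours.
from math import inf

def pruned_distance(p1, p2, w_s, exact_dist):
    """Return inf if a pruning rule fires, else the exact distance."""
    # Rule 1: patterns too far apart along the index axis are distinct.
    if p2["I"] > p1["E"] + w_s or p1["I"] > p2["E"] + w_s:
        return inf
    # Rule 2: a rising and a falling pattern cannot be similar.
    if p1["slope"] * p2["slope"] < 0:
        return inf
    return exact_dist(p1, p2)
```

Only pairs that survive both rules pay the cost of the exact distance function (e.g., Euclidian or DTW).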
The process of pruning-based calculation of the distance matrix starts from the root node of the tree. For the set of the root's children, all two-element combinations are extracted. For each pair (e.g., ch_1, ch_2), the algorithm first checks whether the pair elements are patterns or intermediate nodes. If the first element of the pair (ch_1) is an intermediate node, the pruning-based calculation is applied recursively to the combinations of ch_1's children; the same is done for the second element of the pair. We then decide whether ch_1's children should be compared with ch_2's children. To do so, the algorithm calculates the minimum distance between the MBRs of ch_1 and ch_2. If the distance is larger than the distance threshold, the two groups of patterns indexed by the MBRs of ch_1 and ch_2 either occur far from each other or have values that are not close enough. Therefore, we can skip calculating the exact distance between these patterns and fill the corresponding elements of the distance matrix with infinity according to pruning rule 1. Otherwise, when the minimum distance between the MBRs of ch_1 and ch_2 is less than the threshold, the algorithm continues with the combinations of ch_1's and ch_2's children.
The process of pruning-based calculation of the distance matrix proceeds until it reaches the leaves of the tree, which means the elements of the input pairs are patterns. It applies two pruning rules to fill the corresponding element of the distance matrix with infinity. If neither of the pruning rules works, the algorithm calculates the distance between two patterns and fills the corresponding element of the distance matrix. Algorithm 2 details the process for calculating the distance matrix. Lines 2 to 4 show how to get the children list of the root. For each pair (e.g., ch 1 , ch 2 ), line 6 checks whether the pair elements are patterns or intermediate nodes. Lines 7 to 13 show how to fill the corresponding element of the distance matrix according to the two rules.
Finding Similar Patterns
After calculating the distance matrix, the algorithm uses the matrix to decide which patterns are similar. For each pair of patterns p_i, p_j, if their distance is less than the distance threshold d, then they are similar and the more representative one needs to be kept. We consider the one with the highest number of similar patterns (e.g., p_i) as the representative because it covers a wider range of similar patterns and represents a larger population. Therefore, p_j is added to the set of similar patterns of p_i and removed from the result list. Continuing this process results in a reduced set of patterns, which can be used in the next step of the algorithm.
Algorithm 3 shows the algorithm to find similar patterns. Line 4 shows which patterns are similar if their distance is less than d. Lines 5 to 12 show how to get the longest pattern to represent other similar patterns.
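The representative-selection step can be sketched as follows; this minimal version picks as representative the pattern with the most neighbours within threshold d, as described above (the data layout and function name are ours, not the paper's Algorithm 3):

```python
# Illustrative grouping of similar patterns from a precomputed distance
# matrix: the pattern with the most neighbours within threshold d becomes
# the representative of its group.
def group_similar(dist, d):
    """dist: symmetric matrix (list of lists); returns {rep: [members]}."""
    remaining = set(range(len(dist)))
    groups = {}
    while remaining:
        # Pick the pattern with the highest number of similar patterns.
        rep = max(remaining,
                  key=lambda i: sum(1 for j in remaining
                                    if j != i and dist[i][j] < d))
        members = [j for j in remaining if j != rep and dist[rep][j] < d]
        groups[rep] = members
        remaining -= {rep, *members}
    return groups
```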
Generating Predictive Rules
The Apriori algorithm is a classic association rule mining algorithm that is simple, easy to understand and implement, and can obtain the correlation rules between conditional variables and decision variables, so we use it to obtain correlation rules on these interesting patterns. The details of the Apriori algorithm are described in [38] and are not repeated here. Since the rule mining technique does not consider the semantics of the rules, in this paper we apply filtering as a post-processing step to make sure the rules conform to domain constraints. In this step, based on the application requirements, we define a set of filters for the output rules. In the earthen ruin monitoring dataset, we have a set of patterns discovered in the air temperature data and another set of patterns from the rammed earth temperature data. In this case, the rule miner finds all the relationships between patterns regardless of the conditional variable of the patterns. It is then possible to obtain rules such as r = p_air_3 → p_air_5, where both the conditional and decision patterns come from the air temperature data. Since the target of the current study is finding relationships between patterns from air temperature and rammed earth temperature, rules such as r = p_air_3 → p_air_5 are of no use. Therefore, we define a filter to remove all rules whose patterns come from the same variable.
We then calculate the direction of each rule. We filter out the rules whose directions are −1 or 0, because these rules are often due to data anomalies or noise. We also calculate the delay and variation of each filtered rule and merge the above rules into more succinct predictive rules. A predictive rule may have the following form: if min(p_air_i) > 0 °C and +5 °C ≤ V(p_air_i) ≤ +10 °C, then +2 °C ≤ V(p_earth_j) ≤ +3 °C with delay = 3 h.
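A predictive rule of the stated form can be encoded and matched as in the following sketch; the rule encoding and the variation V(p) computed as last minus first value are our assumptions:

```python
# Illustrative predictive rule: if the air temperature pattern stays above
# 0 degrees C and varies by +5..+10 degrees C, predict a rammed earth
# temperature variation of +2..+3 degrees C with a 3 h delay.
def matches(rule, air_pattern):
    variation = air_pattern[-1] - air_pattern[0]  # V(p_air_i), our assumption
    return (min(air_pattern) > rule["min_air"]
            and rule["v_air"][0] <= variation <= rule["v_air"][1])

rule = {"min_air": 0.0, "v_air": (5.0, 10.0),
        "v_earth": (2.0, 3.0), "delay_h": 3}
```

When a conditional-variable pattern matches, the rule's `v_earth` range and delay serve as the prediction for the decision variable.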
Predicting with Predictive Rules
After obtaining the predictive rules, we can use the time series of the conditional variables to predict the time series of the decision variables. Patterns are extracted from the time series of the conditional variables and matched against the predictive rules; the matching results are used as the predicted patterns of the time series of the decision variables. The hit rate H is defined to show the performance of the proposed algorithm: H = N/M, where the numerator N is the number of air temperature patterns that accurately predict the rammed earth temperature pattern, and the denominator M is the number of all air temperature patterns in the testing dataset.
Experiments
To evaluate the proposed PPER method, we applied it to the Ming Dynasty Great Wall dataset monitored from the Ming Dynasty Great Wall in Shaanxi Province of China by an IoT network. We have deployed about 300 IoT nodes in five monitoring areas of the Ming Dynasty Great Wall in Shaanxi. The IoT based sensing infrastructure for the Ming Dynasty Great Wall is shown in Figure 7.
The sensing infrastructure includes three layers: the sensing layer, the network layer and the application service layer. In the sensing layer, different IoT nodes obtain environmental monitoring data, such as air temperature, humidity and rainfall intensity, and earthen ruin monitoring data, such as rammed earth temperature, humidity and salinity. With Zigbee transmission technology, these nodes transmit data to the gateway. In the network layer, the data finally reach the database server via GPRS. A parser is developed to parse the received packets and save the data into a database using MySQL Server. In the application service layer, users can visit the web server to query the data and view the analysis results. Some photos of the IoT node layouts in the five monitoring areas are shown in Figure 8.

In this work, we focused on obtaining rammed earth temperatures; therefore, we used the air temperature, rainfall intensity, rammed earth slope and aspect, and rammed earth temperature monitoring data of the Ming Dynasty Great Wall dataset. The dataset consists of the air temperature, the rainfall intensity and the rammed earth temperature 5 cm underground in 5 monitoring areas, collected from 29 October 2015 to 29 January 2016. It also contains the rammed earth slope and aspect of each monitoring area. The air temperature and the rainfall intensity were monitored once every five minutes, and the rammed earth temperature once every three hours. The air temperature, rainfall intensity and rammed earth temperature 5 cm underground of each monitoring area are considered as one multivariate sequence.
The algorithm was implemented in Java, and all experiments were run on a Windows 7 platform with a CPU speed of 3.4 GHz × 2, and 12 GB RAM. For the last part of the algorithm, which looks for potential relationships in the data, we used Weka library version 3.7.9.
Parameter Setting
The window size was set to 4 for the Ming Dynasty Great Wall dataset. The δ_min parameter determines whether the algorithm discards a sliding window. For the air temperature data, δ_min was set to 5, according to the minimum air temperature difference between day and night. For the rammed earth temperature data, δ_min was set to 1, according to the minimum difference of the rammed earth temperature between day and night. For the rainfall intensity, δ_min was set to 1, according to the minimum rainfall intensity difference between day and night. The overlapping length parameter was set to 1, because two adjacent interesting patterns share one overlapping point. The d parameter decides which patterns are similar: the smaller d is, the stricter the similarity criterion, and vice versa. Therefore, d was set to 18 for the air temperature sequence, 8 for the rammed earth temperature, and 0.1 for the rainfall intensity. Using the Ming Dynasty Great Wall dataset, we designed two sets of experiments to evaluate PPER.
Experiment on the Performance of Pruning Rules
To demonstrate the effect of the pruning rules in reducing the time complexity of the algorithm, we utilized the air temperature data in the Ming Dynasty Great Wall dataset. In this experiment, we evaluate the performance of the FindCandPatterns method using the Ming Dynasty Great Wall dataset. The performance of the FindCandPatterns algorithm is shown in Figure 9. As shown in this figure, the running time increases linearly with the number of sequences. The experimental results are consistent with the results of our time complexity analysis in Section 4.1.1. We then conducted some experiments to investigate the performance of the distance matrix calculation with and without pruning rules. As Figure 10 shows, the performance of the pruning-based algorithm is much better than the naïve one for both the Euclidian and DTW distance measures.
Figure 11 presents the results of another experiment, which studied the effect of two levels of pruning on the performance of the distance matrix calculation algorithm. In the figure, the blue line with ○ shows the algorithm that only applied the index and slope pruning rules without using the R-tree-like data structure, and the red line with * shows the results for the version of the algorithm that utilizes the R-tree-like data structure for further pruning. It can be seen that in almost all situations, use of the R-tree improved the performance of the calculations. However, for distance matrix calculation, using the R-tree-like structure does not lead to a huge performance improvement.
(a) Euclidian distance measure; (b) DTW distance measure. Figure 11. Effect of pruning levels on the performance of distance matrix calculation algorithms.
Experiments on the Performance of Prediction
In this work, the Apriori algorithm was used to discover correlation rules from the conditional variables patterns and the decision variable patterns in the Ming Dynasty Great Wall dataset. We selected the monitoring data of the partial nodes with the same orientation of south; half of these data were used to mine the correlation rules, and the other half were used to test the pattern prediction function of the algorithm.
In this experiment, we applied PPER to a dataset with three conditional variables, including the air temperature, the rammed earth slope and aspect, and the rainfall intensity, and obtained 42 correlation rules. Among all discovered rules, the following example correlation rule describes the relationship among the three conditional variables and one decision variable, i.e., the air temperature, the rammed earth slope and aspect, the rainfall intensity, and the rammed earth temperature: r_5 = p_air_8, p_rain_9, p_slope_1 ⇒ p_earth_9 (9), where p_air_8 = (7.9, 8.5, 10.3, 11.5, 13.9), p_earth_9 = (2.1, 2.3, 2.7, 3.5, 4.3), p_slope_1 = (170) and p_rain_9 = (0.01, 0.01, 0, 0, 0). It shows that when the air temperature increases from 7.9 to 13.9 °C, the rainfall intensity decreases from 0.01 to 0, and the rammed earth slope faces south with an aspect of 170 degrees, the temperature of the rammed earth increases from 2.1 to 4.3 °C. Because precipitation in our study area is low, the rainfall intensity data are sparse and show little variation, so few interesting patterns are extracted from the rainfall intensity, which does not provide much in-depth knowledge to the earthen ruins conservation experts. Therefore, we continued analyzing the rules with air temperature as the conditional variable to predict the rammed earth temperature with good performance. After keeping the correlation rules that describe the relationship from the air temperature to the rammed earth temperature and filtering the rules that had a negative direction, we determined 39 correlation rules with a confidence of 0.9. K-means is a commonly used clustering method that is efficient and fast, with time complexity O(n), where n is the number of data objects. In this application, we use the k-means method to analyze the variation and delay of these correlation rules.
Because we focus on the two kinds of patterns, falling and rising patterns, so the number of cluster k is set as 2. We obtained the results shown in Figure 12. In Figure 12a, all the rising rules were placed into cluster 1 with air temperature variation from 5.3 • C to 13.5 • C and rammed earth temperature variation from 1.7 • C to 4 • C; and, all falling rules were placed into cluster 2 with air temperature variation from −11.8 • C to −8.2 • C and the rammed earth temperature variation from −4.5 • C to −3.2 • C. Figure 12b shows the two clusters of the delay of the 39 rules. About 74% of the delay was an observation point of time of about 3 h. rainfall intensity decreases from 0.01 to 0, the rammed earth slope is toward the south with its aspect of 170 degrees, the temperature of rammed earth increases from 2.1 to 4.3 °C. Because precipitation in our study area is low, so the rainfall intensity data are sparse and does not have much variations, so the number of interesting patterns extracted based on the rainfall intensity are small, which does not provide too much in-depth knowledge to the earthen ruins conservation experts. Therefore, we continue analyzing the rules with air temperature as the conditional variable to predict the rammed earth temperature in order to solve the rammed earth temperature prediction with good performance. After keeping the correlation rules that describe the relationship from the air temperature to the rammed earth temperature and filtering the rules that had a negative direction, we determined 39 correlation rules with a confidence of 0.9. K-means is a commonly used clustering method which is efficient and fast with the time complexity O( ), where n is the number of data objects. In this application, we use k-means method to analysis the variation and delay of these correlation rules. Because we focus on the two kinds of patterns, falling and rising patterns, so the number of cluster k is set as 2. 
We continued to merge the above rules into more succinct predictive rules, as shown in Algorithm 4. There were six predictive rules that could be used to predict the rammed earth temperature pattern by rule matching. The hit rates of the rules are shown in Figure 13. We can observe that predictive rule 4 had the highest average hit rate (89.8%) and that predictive rule 6 had the lowest average hit rate (80.0%). In subsequent analysis, we found that predictive rule 3 merged 10 primitive correlation rules whereas predictive rule 6 merged only four, suggesting that predictive rules merging more primitive correlation rules achieve higher average hit rates.
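The clustering step above can be sketched with a minimal k-means (k = 2) on (variation, delay) pairs. The data values below are hypothetical stand-ins for illustration, not the paper's 39 rules, and the deterministic initialization is a simplification:

```python
def kmeans_2(points, iters=20):
    """Minimal k-means with k = 2 on 2-D tuples; returns (centroids, labels)."""
    # Deterministic init: lexicographic extremes of the point set.
    centroids = [min(points), max(points)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [
            min((0, 1), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                      + (p[1] - centroids[j][1]) ** 2)
            for p in points
        ]
        # Update step: move each centroid to the mean of its cluster.
        for j in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# Hypothetical (air-temperature variation in °C, delay in hours) pairs:
# rising rules have positive variation, falling rules negative.
rules = [(5.3, 3.0), (8.1, 3.0), (13.5, 6.0), (9.9, 3.0),
         (-11.8, 3.0), (-8.2, 6.0), (-10.0, 3.0), (-9.1, 3.0)]
centroids, labels = kmeans_2(rules)
```

On well-separated data like this, the rising and falling rules fall into distinct clusters, mirroring the two clusters in Figure 12a.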
Conclusions
In this paper, we investigated the problem of pattern prediction in data obtained from earthen ruins. We proposed the PPER pattern prediction method to find interesting patterns in air and rammed earth temperature sequences. In order to reduce the time complexity of the algorithm, we proposed two pruning rules and a new data structure. Using the correlation rules between the air and rammed earth temperature patterns, predictive rules were obtained. The rammed earth temperature patterns were well predicted with the predictive rules on the Ming Dynasty Great Wall dataset, which has great significance for the protection of the Ming Dynasty Great Wall. We have integrated PPER into the intelligent platform of the Ming Dynasty Great Wall and deployed a demonstration application at the Ming Dynasty Great Wall site in Zhenbeitai, Shaanxi Province, China, achieving good prediction results and providing scientific decision support for the protection of the Ming Dynasty Great Wall. PPER can be applied to multivariate problems. It can be extended to multiple pattern prediction tasks with similar rising and falling patterns of interest, such as petroleum well log data, where the production rate pattern can be predicted from the pressure pattern. In the future, we will collect more monitoring data to improve the prediction performance of PPER. At the same time, we will extend and apply PPER to other multivariate problems, and we plan to do more experiments and research to improve its efficiency and effectiveness. The generalizability of the algorithm will also be a focus of follow-up work; we plan to define more interesting patterns to address different pattern prediction problems.
Effective three-body interactions of neutral bosons in optical lattices
We show that effective three- and higher-body interactions are generated by the two-body collisions of atoms confined in the lowest vibrational states of a 3D optical lattice. The collapse and revival dynamics of approximate coherent states loaded into a lattice are a particularly sensitive probe of these higher-body interactions; the visibility of interference fringes depends on two-, three-, and higher-body energy scales, which produce an initial dephasing that can help explain the surprisingly rapid decay of revivals seen in experiments. If inhomogeneities in the lattice system are sufficiently reduced, longer-timescale partial and nearly full revivals will be visible. Using Feshbach resonances or control of the lattice potential, it is possible to tune the effective higher-body interactions and simulate effective field theories in optical lattices.
I. INTRODUCTION
The collapse and revival of matter-wave coherence is an expected consequence of two-body atom-atom interactions in trapped Bose-Einstein condensates (BECs) [1,2,3,4]. Collapse and revival of few-atom coherent states in optical lattices has been seen in a number of experiments, first in single-well lattices [5] and subsequently in double-well lattices [6,7]. In these experiments, a BEC is quickly loaded into a fairly deep 3D lattice such that the quantum state approximately factors into a product of coherent states localized to each lattice site [8,9,10]. Each coherent state, which is a superposition of different atom-number states, initially has a well-defined phase. If the lattice potential is quickly turned off before atom-atom interactions have a significant influence, the coherent states released from confinement at each site expand and overlap, resulting in interference fringes in the imaged atom density. However, if the atoms are held in the lattice for a longer duration before release, interactions will play a significant role by causing the phases of the different atom-number states in the superposition at each site to evolve at different rates. This will result in a dephasing of the coherent state, and a subsequent collapse of the interference fringe visibility after the atoms are released. For atoms in a homogeneous lattice with two-body interactions and negligible tunneling, the coherent states at each lattice site are predicted to revive when the atom-number component states simultaneously re-phase after multiples of the time t₂ = 2πℏ/U₂, where U₂ is the two-body interaction energy [1,2,3,4,5].
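The re-phasing argument can be checked numerically. The sketch below assumes the number-state energies E(n) = U₂ n(n − 1)/2 (the two-body form used later in the paper) and works in units where ℏ = 1; since n(n − 1) is always even, every number state accumulates an integer multiple of 2π by t₂ = 2πℏ/U₂:

```python
import math

hbar = 1.0   # work in units where hbar = 1
U2 = 2.0     # two-body interaction energy (arbitrary units)

def phase(n, t):
    """Phase accumulated by the n-atom number state with E(n) = U2*n*(n-1)/2."""
    return U2 * n * (n - 1) / 2 * t / hbar

t2 = 2 * math.pi * hbar / U2   # predicted revival time t2 = 2*pi*hbar/U2

# At t = t2 each number state has advanced by pi*n*(n-1); n*(n-1) is always
# even, so every component is back in phase and the coherent state revives.
residuals = [phase(n, t2) % (2 * math.pi) for n in range(8)]
```

All residual phases vanish (mod 2π) at t₂, which is the simultaneous re-phasing described above.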
In addition to the expected two-body physics described above, we show that the data in [5,6,7] should also contain strong signatures of coherent three- and higher-body interactions. In contrast to the coherent dynamics described in this paper, recent experiments have studied inelastic three-body processes, including recent observations of Efimov physics [11,12,13], by tracking atom loss from recombination [14]. There has been a growing interest in three- and four-body physics (e.g., [15,16,17,18,19,20]), and the role of intrinsic three-body interactions on equilibrium quantum phases in optical lattices has been studied in [21,22,23]. The influence of higher bands on the Mott-insulator phase transition has been analyzed in [24], and three-body interactions of fermions and polar molecules in lattices have also been explored [25].
In this paper, we use the ideas of effective field theory to show that virtual transitions to higher vibrational states generate effective, coherent three-body interactions between atoms in the lowest vibrational states of a deep 3D lattice where tunneling can be neglected. More generally, virtual excitations also generate effective four- and higher-body interactions, giving the nonequilibrium dynamics multiple energy scales. We also show that loading coherent states into an optical lattice creates a sensitive interferometer for probing higher-body interactions. In a sufficiently uniform lattice, multiple frequencies manifested as beatings in the visibility of the collapse and revival oscillations give a direct method for measuring the energy and frequency scales for elastic higher-body interactions. Remarkably, multiple-frequency collapse and revival patterns have been seen in recent experiments [26].
Three-body interactions can also explain the surprisingly rapid damping of revivals seen in [5,6,7], where the overall visibility of the interference fringes decays after roughly 5 revivals (∼3 ms for the system parameters in [5,6,7]). This short timescale cannot be explained in terms of tunneling or atom loss. For example, for the system parameters in [5,6,7], the tunneling-induced decoherence timescale has been found to be a factor of 10-100 times too long [27], and the atom loss from three-body recombination [14] appears to be negligible [26]. The latter observation is consistent with the expected three-body recombination timescales for 87Rb in a lattice [28].
The damping of revivals can be partially explained by the expected variation in U₂ over a non-uniform lattice due to an additional harmonic term in the trapping potential. Inhomogeneity in U₂ causes dephasing due to the variation in the revival times for coherent states at different sites. However, the estimated 3-5% inhomogeneity of U₂ should allow as many as 10-20 revivals, compared to the ∼5 seen in [5,6,7]. In contrast, we show below that coherent three-body interactions can cause dephasing of coherent states at each lattice site after only a few revivals.
The effective theory in this paper describes the low-energy, small scattering length, small atom number per lattice site regime, for deep 3D lattices with negligible tunneling. These approximations are reasonable for the experiments in [5,6,7]. Extensions of the analysis might include tunneling, including second-order [29] and interaction-driven [30] tunneling, and the incorporation of intrinsic higher-body interactions. Effective field theory has also proven to be an important tool in the large scattering length limit [12]. It would be particularly interesting to simulate the controlled breakdown of the effective theory developed here by increasing the scattering length or atom number, or by tuning other lattice parameters. Looking beyond the realm of atomic physics, our analysis suggests interesting possibilities for using optical lattices to test important mechanisms in effective field theory [31].
In Section II, we construct a multimode Hamiltonian Ĥ which we use to obtain an effective single-mode Hamiltonian H̃_eff. In Section III, we describe the physical processes that generate higher-body interactions. In Section IV, we estimate the effective three-body energy. In Section V, we show how the coherent three-body interactions modify the collapse and revival dynamics. Finally, we summarize our results in Section VI.
II. EFFECTIVE THREE-BODY MODEL FOR NEUTRAL BOSONS IN AN OPTICAL LATTICE
A many-body Hamiltonian for neutral bosons of mass m_a in a single spin state can be written as a sum of the single-particle lattice Hamiltonian H₀ and intrinsic m-body interaction potentials V_m. We set V_{m>2} = 0 to focus on the physics of effective interactions induced by V₂. In experiments, the effects of intrinsic and effective interactions are both present. Our goal is to construct a low-energy effective Hamiltonian H̃_eff for describing a small number of atoms in the vibrational ground state of a lattice site, while incorporating leading-order corrections from virtual excitation to higher bands. In the quantum mechanical approach, Huang et al. [32] have shown that a local regularized delta-function potential V₂(r, r′) ∝ δ⁽³⁾(r − r′)(∂/∂ℓ)ℓ, where ℓ = |r − r′|, can be used to obtain the low-energy scattering for two particles. To go beyond the two-particle case, we find it convenient to instead use the renormalization methods of quantum field theory and the non-regularized delta-function potential V₂(r, r′) = g₂ δ⁽³⁾(r − r′). We regularize the theory in perturbation theory by using a high-energy cutoff Λ in the sum over intermediate states, which is equivalent to using a regularized (non-singular) potential. We view Λ as a physical threshold beyond which the low-energy theory fails. We note that the low-energy physics does not, in the end, depend on the method of regularization, and that the physical results found below after renormalization are insensitive to Λ. The key observation is that even if a fully regularized form of V₂ is used, renormalization is still required, recognizing that the bare parameter g₂ is not the physical (renormalized) coupling strength g̃₂. (In the following we use a tilde to distinguish between bare and renormalized parameters.)
Employing renormalized perturbation theory [31], we write g₂ = g̃₂ + c, where g̃₂ is chosen to reproduce the exact, low-energy limit given in [33] for two atoms in a spherically symmetric harmonic trap, and a_scat is the scattering length at zero collisional energy. The first-order approximation to g̃₂ suffices for the calculation of the three-body energy at second order given below. The value of the counter-term c, which cancels the contributions to the two-body interaction energy that diverge with Λ, is determined by the normalization condition Eq. (3). The local Hamiltonian then follows with the counter-term and physical coupling parameter in place of the bare coupling. To develop a low-energy effective theory for a deep optical lattice, we expand the field over a set of bosonic annihilation operators â_iµ and single-particle wavefunctions φ_iµ(r), giving ψ̂(r) = Σ_iµ φ_iµ(r) â_iµ, where the indices µ = {µ_x, µ_y, µ_z} with µ_x,y,z = 0, 1, 2, ... label 3D vibrational states and i labels the lattice sites. To focus on the role of interactions, we assume a deep lattice with n_s states per spatial dimension at each site, making tunneling of atoms in the ground vibrational state µ = {0, 0, 0} ≡ 0 negligible on the timescale of interest [5]. Since we are not considering the role of tunneling, for simplicity we use isotropic harmonic oscillator wavefunctions at each site with frequency ω and length scale σ = √(ℏ/m_a ω) determined by the (approximately) harmonic confinement within a single lattice well. Note that even with tunneling neglected, anharmonicity of the lattice potential is a potentially significant effect. We also expect our model to break down or to require significant modification for very shallow lattices or near the Mott-insulator phase transition where the effects of tunneling are important [24,34,35,36].
Inserting the expansion for ψ̂ into H, interchanging the order of integration over r and summation over modes, and dropping terms that transfer atoms between sites (e.g., tunneling), we obtain for each lattice site the multimode Hamiltonian Ĥ = Ĥ₀ + Ĥ₂; for brevity we suppress the lattice site index i. The single-particle energies are E_µ = (µ_x + µ_y + µ_z)ℏω, setting the ground-state energy E₀ ≡ E_{0,0,0} = 0. The two-body interaction energy for ground-state atoms is Ũ₂ = (2π)^(−3/2) g̃₂/σ³, and A = (2π)^(−3/2) c/σ³ is the counter-term in units of energy. The matrix elements are normalized so that K_0000 = 1, and they vanish for transitions that do not conserve parity. It should be noted that, when there is a cutoff in the sum over modes, both the regularized and non-regularized delta-function potentials lead to the same Hamiltonian Ĥ and matrix elements in Eq. (8), and thus they produce the same results in the regularized (cutoff) quantum field theory. We emphasize that after the renormalization of the two-body interaction energy, the induced three-body interaction energy is insensitive to the cutoff Λ. We develop the perturbation theory in the small parameter ξ ≡ Ũ₂/ℏω. The total interaction energy for n atoms in the vibrational ground state in the single-mode-per-site approximation is E_int = Ũ₂ n(n − 1)/2. Commonly, a single-mode approximation is made based on the two-body interaction energy per particle being much less than the band gap, i.e., E_int/n = Ũ₂(n − 1)/2 ≪ ℏω, or nξ ≪ 1. For 87Rb with scattering length a_scat ≈ 5.3 nm and a lattice with ω/2π ≈ 30 kHz, we have Ũ₂/h ≈ 2.0 kHz and ξ = 0.07. We will use these as typical system parameters in the following analysis. With ξ = 0.07, the single-mode-per-site condition n ≪ ξ⁻¹ ∼ 15 is easily satisfied, and the influence of higher bands will produce only small (though important) corrections.
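As a numeric sanity check, taking ξ ≡ Ũ₂/ℏω (our reading of the definition, consistent with Ũ₂ = ξℏω used later in the text), the quoted 87Rb parameters reproduce ξ ≈ 0.07:

```python
U2_over_h = 2.0e3        # two-body energy divided by Planck's constant, in Hz
omega_over_2pi = 30.0e3  # trap frequency omega/(2*pi), in Hz

# xi = U2/(hbar*omega); dividing numerator and denominator by h makes the
# factors of 2*pi cancel, so xi is just the ratio of the two frequencies.
xi = U2_over_h / omega_over_2pi

nbar = 2.5                          # typical mean atom number per site
single_mode_parameter = nbar * xi   # must satisfy n*xi << 1
```

With these numbers ξ ≈ 0.067 and ξ⁻¹ ≈ 15, matching the single-mode condition n ≪ ξ⁻¹ ∼ 15 quoted above.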
For coherent states, for example, we show that small three-body energies can lead to large phase shifts over time, resulting in interferometric-like sensitivity to higher-body and higher-band processes.
To obtain an effective Hamiltonian H̃_eff, we use the multimode Hamiltonian Ĥ = Ĥ₀ + Ĥ₂ to compute the atom-number-dependent energy shift for atoms in the vibrational ground state. Our approach is essentially equivalent to the effective field theory procedure of summing, up to a cutoff, over all 'high-energy' modes µ with E_µ ≥ ℏω, which generates a low-energy effective theory with all consistent local interactions. We obtain an effective Hamiltonian H̃_eff for the µ = 0 mode that is valid in the low-energy regime E_int/n ∼ nŨ₂ ≪ ℏω, which is consistent with the single-mode approximation discussed above. Of course the multimode Hamiltonian Ĥ is itself an effective Hamiltonian which is only valid for energy scales E_µ + E_int/n ≪ ℏ²/(m_a a²_scat). In the case of atoms confined in a deep well, the effective Hamiltonian for Ũ₂ ≪ ℏω takes the local form given below. (Figure 1 caption: The effective two-body interaction energy Ũ₂ is given through second order by diagrams (a)-(d). Diagram (d) is the counter-term that cancels diagrams (b) and (c), fixing Ũ₂ as the physical (renormalized) two-body energy. Diagrams (f)-(i) are examples of processes contributing to the effective three-body interaction energy Ũ₃, represented by diagram (j). Diagram (g) gives the leading-order contribution, assuming U₃ = 0; it shows how an effective three-body interaction involving three distinct incoming particles arises at second order in perturbation theory. Diagrams (h) and (i) are two of the effective three-body processes that arise at third order (others are not shown). If the bare three-body vertex shown in (f) does not vanish, additional three- (and higher-) body counter-terms are also required.)
In the effective Hamiltonian, â† creates an atom in a renormalized ground vibrational state. The E₀â†â term vanishes since we set E₀ = 0. The dominant term in H̃_eff is the two-body energy, and the higher-body interaction energies scale as nŨ_m/Ũ_{m−1} ∼ nŨ₂/ℏω ∼ nξ ≪ 1. The energies Ũ_m can be computed in perturbation theory in the small parameter ξ using Ĥ to find the energy of n atoms in the ground vibrational mode. At m-th order in ξ, all local interactions up through the (m+1)-body term H̃_{m+1} = Ũ_{m+1} â†^(m+1) â^(m+1)/(m+1)! are generated. In this paper, we work to second order in ξ. Using n̂ = â†â and [â, â†] = 1, the two- and three-body terms can be written as â†²â² = n̂(n̂ − 1) and â†³â³ = n̂(n̂ − 1)(n̂ − 2); the latter expression shows explicitly that the effective three-body interaction only arises when there are three or more atoms in a well. Eigenstates of H̃_eff with n atoms have energies Ẽ(n) = Ũ₂ n(n − 1)/2 + Ũ₃ n(n − 1)(n − 2)/6.
Note that the three-body energy scales as n³ and thus its influence relative to the two-body term, though small, can be tuned by changing the number of atoms in a well.
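A short sketch of the energy hierarchy, using the Ũ₂/h ≈ 2.1 kHz and Ũ₃/h ≈ −200 Hz values quoted elsewhere in the text, makes the n³ onset of the three-body term explicit:

```python
U2 = 2.1e3    # effective two-body energy / h, in Hz (value quoted in the text)
U3 = -200.0   # effective three-body energy / h, in Hz (value quoted in the text)

def interaction_energy(n):
    """E(n)/h for n atoms in the ground mode of one well."""
    return U2 * n * (n - 1) / 2 + U3 * n * (n - 1) * (n - 2) / 6

# The three-body term switches on only for n >= 3 and grows like n^3,
# so higher number components of a coherent state feel it disproportionately.
three_body_part = {n: U3 * n * (n - 1) * (n - 2) / 6 for n in range(1, 7)}
```

Doubling the atom number from n = 3 to n = 6 multiplies the three-body contribution by a factor of 20, while the two-body contribution only grows by a factor of 5.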
III. MECHANISM FOR EFFECTIVE INTERACTIONS
We now describe the virtual processes that give rise to effective m-body interactions in a deep lattice. Writing the perturbative expansion for the energy of an n-atom state |n⟩ through second order as Ẽ(n) = E⁽⁰⁾(n) + E⁽¹⁾(n) + E⁽²⁾(n), the zeroth-order energy is E⁽⁰⁾(n) = E₀n = 0, recalling that E₀ = 0. The first-order energy shift, treating Ĥ₂ as the perturbation Hamiltonian, is the usual expression E⁽¹⁾(n) = Ũ₂ n(n − 1)/2. This is the leading-order result for the two-body interaction energy and, setting n = 2, the renormalization condition Eq. (7) shows that A = 0 to first order in Ũ₂. Figure 1(a) represents this first-order process. The second-order energy shift E⁽²⁾(n) can be written as a sum over virtual intermediate states, Eq. (14), with K_µν ≡ K_µν00 and µ ≥ ν. The O(Ũ₂²) counter-term A now appears. At this order, A is determined by the renormalization condition Ẽ(2) = E⁽⁰⁾(2) + E⁽¹⁾(2) + E⁽²⁾(2) = Ũ₂, implying that E⁽²⁾(2) = 0. The sum is over intermediate states â†_µ â†_ν â₀ â₀ |n⟩ with energy E_µν = E_µ + E_ν > 0; this excludes the µ = ν = 0 state. For regularization purposes we introduce a high-energy cutoff that limits the sum to E_µν ≤ Λ. The factor s_µν = 4 if µ ≠ ν and s_µν = 1 if µ = ν comes from the two equivalent terms â†_µâ†_ν and â†_νâ†_µ that appear in Ĥ₂. Each term in E⁽²⁾ involves a two-body collision-induced transition to a virtual intermediate state. For example, the state |1_x 1_x⟩ corresponds to two atoms both excited along the x direction with energy E₁₁ = 2ℏω (note that K²₁₁ = 1/4 for this transition), with the remaining n − 2 atoms left in the µ = 0 mode. Because collisions conserve parity, contributions from states like |1_x 1_y⟩ vanish.
The crucial observation is that the series in Eq. (14) separates into two distinct sums corresponding to two-body and three-body interactions, respectively, i.e., E⁽²⁾(n) = δU₂ n(n − 1)/2 + δU₃ n(n − 1)(n − 2)/6, where δU₂ includes the counter-term contribution A from Eq. (14). For µ ≠ 0 and ν ≠ 0 intermediate states, the contribution carries a factor of 2 resulting from Bose stimulation when both atoms transition to the same excited state. Because these terms are proportional to n(n − 1), they contribute to the two-body energy shift δU₂. A diagram representing this two-body process, with two atoms colliding, making transitions to virtual excited vibrational states, and then returning to the ground state after a second collision with each other, is shown in Fig. 1(b). The µ ≠ 0 virtual states and µ = 0 vibrational ground states are represented by dashed and solid lines, respectively. The origin of the three-body energy can be seen by examining the µ > 0, ν = 0 intermediate states. These terms generate both effective two- and three-body energies. The extra factor of (n − 1) in Eq. (16) results from Bose stimulation of an atom back into the µ = 0 state when two atoms collide but only one makes a transition to an excited state. Figure 1(c) shows the two-body process corresponding to the n(n − 1) term in Eq. (16). Figure 1(d) shows the counter-term A whose value is determined such that it cancels the contributions from Figs. 1(b) and (c), thereby maintaining, through second order, the renormalization condition that the parameter Ũ₂ is equal to the physical two-body energy. To arbitrary order, the renormalization condition determines A such that all higher-order two-body diagrams cancel, as represented by Fig. 1(e). Figure 1(g) shows the effective three-body process corresponding to the n(n − 1)(n − 2) term in Eq. (16). This process gives the leading-order contribution to δU₃ and generates a three-body interaction energy Ũ₃ = U₃ + δU₃ even if the bare U₃, represented by Fig. 1(f), vanishes.
More generally, we expect U₃ ≠ 0, but nevertheless the contribution to Ũ₃ given by δU₃ can be a significant (possibly even dominant) correction. Looking at Fig. 1(g), we see that two initial µ = 0 atoms collide, giving rise to one µ ≠ 0 atom that subsequently collides with a third, distinct µ = 0 atom. In Fig. 1(g) there are three distinct incoming atoms, resulting in an effective three-body interaction mediated by the µ ≠ 0 intermediate state. The renormalized three-body interaction energy is represented in Fig. 1(j) by a square vertex with three incoming and outgoing particles. Figures 1(h) and (i) show examples of two different processes contributing to Ũ₃ at third order in ξ; they illustrate how higher-order processes, including counter-terms, arise. Their contributions, and other third-order processes not shown, are not explicitly computed below. At third order, effective four-body interactions also arise.
Notice that there are two types of diagrams in Fig. 1: tree diagrams [e.g., Fig. 1(g)] and loop diagrams [e.g., Fig. 1(b)]. In general, in quantum field theory the contributions from some loop diagrams diverge with the cutoff Λ, necessitating renormalization, whereas the contributions from tree diagrams are finite [31]. We will see this behavior explicitly below. In fact, at m-th order in ξ, there will be a set of tree diagrams giving a finite, leading-order contribution to the effective (m+1)-body interaction energy Ũ_{m+1}. We note that even if all intrinsic higher-body interactions exactly vanish, there will be effective m-body interactions and associated energy scales Ũ_m generated by the two-body interactions. Consequently, the nonequilibrium dynamics of n atoms in the ground vibrational mode, when nξ ≪ 1, will be characterized by a hierarchy of frequencies Ũ₂/h ≫ Ũ₃/h ≫ Ũ₄/h ≫ ⋯.
IV. ESTIMATE OF THE EFFECTIVE THREE-BODY INTERACTION ENERGY
Returning to Eq. (14) for the second-order energy shift and separating it into two- and three-body parts yields expressions for δU₂ and δU₃. In the expression for δU₂ the sum is over all µ and ν (both µ > ν and ν > µ) except for the µ = ν = 0 mode. Similarly, in the expression for δU₃ all µ except for µ = 0 are summed over. As expected, the sum Σ^Λ_{µ,ν} K²_µν/E_µν corresponding to the second-order, one-loop diagram in Fig. 1(b) diverges with Λ, reflecting the divergent relationship between the bare U₂ and renormalized Ũ₂ energy parameters. In fact, the sum scales with the cutoff as Λ^(1/2). The renormalization condition that Ẽ(2) = Ũ₂ determines A by requiring that δU₂ = 0. To second order, the interaction energy of n atoms is thus Ẽ(n) = Ũ₂ n(n − 1)/2 + δU₃ n(n − 1)(n − 2)/6. After cancelling the two-body corrections with A, the remaining second-order term gives an induced three-body energy that is insensitive to Λ: the quantity Σ^Λ_{µ>0} K²_{µ0}/E_{µ0} corresponding to the second-order tree diagram in Fig. 1(g) converges. Writing this sum in terms of a dimensionless constant β, it can be solved analytically for a spherically symmetric harmonic trap in the Λ → ∞ limit, and we find β ≈ 1.34 [37]. Cutting off the sum at E_µν/ℏω ≤ Λ/ℏω = 4 already gives β ≈ 1.30, showing the rapid convergence of the series. The convergence of this sum is an example of the generic behavior that contributions from tree diagrams are finite. If the bare U₃ is zero or sufficiently small, the effective three-body energy is negative, giving attractive three-body interactions and reducing the total interaction energy for both positive and negative Ũ₂.
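The quoted numbers are consistent with a leading-order scaling δU₃ ≈ −β Ũ₂²/ℏω. The prefactor here is our reconstruction from the stated values (Ũ₃/h ≈ −200 Hz for Ũ₂/h ≈ 2.1 kHz and ω/2π = 30 kHz), not an expression reproduced from the text, so treat it as an assumption:

```python
beta = 1.34                 # converged dimensionless sum for a harmonic trap
U2_over_h = 2.1e3           # Hz
hbar_omega_over_h = 30.0e3  # Hz, equal to omega/(2*pi)

# Assumed leading-order form deltaU3 ~ -beta * U2^2 / (hbar*omega); the
# prefactor is checked only against the numbers quoted in the text.
dU3_over_h = -beta * U2_over_h ** 2 / hbar_omega_over_h
```

This evaluates to roughly −197 Hz, close to the Ũ₃/h ≈ −200 Hz quoted below.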
We expect significant corrections due to the anharmonicity of the true lattice potential. The single-particle energies of higher vibrational states are lowered on the order of the recoil energy E_R, defined as the gain in kinetic energy for an atom at rest that emits a lattice photon. This leads to a decrease of the energy denominator in Eq. (14) and, for the typical system parameters considered here, can give an estimated correction to Ũ₃ of 10% or more. The matrix elements K_µν will also have corrections. These effects can be computed numerically using single-particle band theory.
We have defined our perturbation theory around the zero-collisional-energy limit, but in a trap the collision energy of ground-state atoms is on the order of ℏω. As shown in [38], an improved treatment replaces the zero-energy scattering length by an energy-dependent effective scattering length, where the effective range r_e is on the order of the van der Waals length scale (m_a C₆/ℏ²)^(1/4) away from a Feshbach resonance, and the collision energy is ℏ²k²/m_a [39]. For 87Rb the van der Waals length is approximately 8 nm. In a trap the ground vibrational state wavevector k ≈ σ⁻¹ produces a fractional increase in scattering length on the order of (r_e/σ)ξ. By incorporating the effective scattering length model we can extend the range of validity of our model.
Even neglecting these corrections, the perturbation theory generated by Eqs. (2) and (6) does not predict the two-body energy Ũ₂ but instead uses the measured value, or the exact result calculated by other methods such as Busch et al. [33], as input from which δU₃ is obtained. Similarly, the effective theory does not yield the intrinsic three-body interaction energy U₃, and therefore Ũ₃ = U₃ + δU₃ must also be determined by either measurement or a theory of three-body physics if the intrinsic interaction energies U_{m>2} are non-zero. On the other hand, the effective theory shows that even if U_{m>2} = 0 there are significant induced three- and higher-body interactions, and if non-zero Ũ_{m>2} are measured the effective contribution from two-body processes must be taken into account before the intrinsic higher-body coupling strengths can be extracted. Note that if non-zero bare (intrinsic) parameters U_{m>2} are included in our model, additional counter-terms will be needed to cancel divergences, reflecting the need to ultimately determine any intrinsic higher-body coupling strengths via either measurement or an exact high-energy theory.
Assuming U₃ ≈ 0, Fig. 2 shows Ũ₂ = ξℏω and Ũ₃ versus ξ, including positive (ξ > 0) and negative (ξ < 0) scattering lengths. Using ξ = 0.07 for 87Rb in a 30 kHz well gives Ũ₂/h ≈ 1.9 kHz and Ũ₃/h ≈ −200 Hz. Using a Feshbach resonance [40] to change a_scat and thus ξ, or fixing a_scat and changing the trap frequency ω, it is possible to tune the relative strengths of the three-body (and higher-body) interactions. It would be interesting to explore the breakdown of the perturbative model by increasing either ξ or the atom number n, or by decreasing the lattice depth so that the influence of tunneling and higher-band effects increases. (Figure 2 caption, partial: The dashed line shows the leading-order two-body energy Ũ₂. The graph extends beyond the regime of strict validity of the perturbation theory, which requires ξn ≪ 1 where n is the number of atoms in a lattice well, to illustrate the overall scaling of the two- and three-body energies. The collapse and revival experiments in [5,6,7] have ω/2π ∼ 30 kHz and ξ ∼ 0.07, putting them well within the perturbative regime. The inset shows Ũ₃ for the range 0 < ξ < 0.1.) (Figure 3 caption: Curve (i) shows the case with neither inhomogeneities nor three-body interactions included. Curve (ii) shows the effects of ∼5% inhomogeneities in U₂. Curve (iii) shows the effects of three-body interactions with β = 1.34 but no inhomogeneities. Note that the three-body mechanism influences the visibility of revivals immediately, and it will be important even if inhomogeneities are stronger than shown in curve (ii). Curve (iv) shows the combined effects of inhomogeneities and three-body interactions.)
V. DYNAMICS AND DECOHERENCE OF ATOM-NUMBER COHERENT STATES
We now investigate the influence of effective three-body interactions on the phase coherence of an N-atom nonequilibrium state |Ψ(0)⟩ obtained by quickly loading a BEC into a lattice with M sites. To a good approximation the state can be treated as a product of coherent states [5,10], where â_i|α_i⟩ = α_i|α_i⟩ and |α_i|² = n̄_i is the average number of atoms in the i-th site. A relative phase φ_ij between sites i ≠ j exists when ⟨â†_i â_j⟩ = η e^(iφ_ij) and η ≠ 0. The initial state |Ψ(0)⟩ has η = n̄, and there are well-defined relative phases (φ_ij = 0 for all i, j in this case). In contrast, the equilibrium Mott insulator state, achieved by much slower loading [35,36], has approximate number states in each well giving η ≈ 0, though there can be some degree of short-range phase coherence [41,42,43].
Coherent states in optical lattices make natural probes of higher-body coherent dynamics because small atom-number-dependent energies can lead to significant phase shifts over time. After a hold time t_h in the lattice, the initial state evolves into a product of phase-evolved states of the individual wells, where the energy Ẽ_i(n_i) of the number components in the i-th well is given in Eq. (12), restoring the index i labeling the lattice site. Snapping the lattice off at time t_h, the wavefunctions from each well freely expand for a time t_e until they fully overlap, analogous to the diffraction of light through a many-slit grating.
For a uniform lattice, the fringe visibility is given in [5]. With no inhomogeneities and setting Ũ3 = 0, we obtain V(t_h) = e^(−2n̄[1−cos(Ũ2 t_h/ℏ)]). The visibility for n̄ = 2.5 is plotted as the thin dashed line labeled (i) in Fig. 3, showing the well-known collapse and revival dynamics with period t2 = h/Ũ2. For the 87Rb system parameters used here, t2 = 0.52 ms. The thin line labeled (ii) in Fig. 3 shows the influence of an approximate 5% variation in the two-body energy U2. We average the α_i(t_h) over a 60-lattice-site-diameter spherical distribution. While the effect of inhomogeneities is important, a larger variation in U2 than expected would be required to explain the decay of interference fringes after only 5 revivals as seen in experiments [5,6,7]. We note that the longer timescale for three-body recombination can be distinguished from the coherent, number-conserving interactions derived here by tracking changes in total atom number, and this appears to be negligible on the revival-damping timescale [26]. Other mechanisms, such as non-adiabatic loading [44,45] and collisions during expansion [46], will reduce the initial fringe visibility but do not explain the rapid decay of the visibility versus hold time t_h.
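As a quick numerical check (a sketch of ours, not code from the paper), the two-body-only visibility formula above can be evaluated directly; the quoted Ũ2/h ≈ 1.9 kHz indeed gives a revival period t2 = h/Ũ2 ≈ 0.52–0.53 ms:

```python
import math

def visibility_two_body(t_h, U2_over_h, n_bar):
    # V(t_h) = exp(-2 n̄ [1 - cos(Ũ2 t_h / ħ)]), with Ũ2/h given in Hz,
    # so the phase Ũ2 t_h / ħ equals 2π (Ũ2/h) t_h.
    phase = 2.0 * math.pi * U2_over_h * t_h
    return math.exp(-2.0 * n_bar * (1.0 - math.cos(phase)))

U2_over_h = 1.9e3          # Hz, value quoted for 87Rb in a 30 kHz well
n_bar = 2.5                # mean atoms per well, as in Fig. 3
t2 = 1.0 / U2_over_h       # revival period t2 = h/Ũ2, in seconds

print(f"t2 = {1e3 * t2:.2f} ms")                        # ≈ 0.53 ms
print(visibility_two_body(0.0, U2_over_h, n_bar))       # 1.0: full visibility at t = 0
print(visibility_two_body(t2, U2_over_h, n_bar))        # ≈ 1.0: full revival at t2
print(visibility_two_body(0.5 * t2, U2_over_h, n_bar))  # e^-10 ≈ 4.5e-5: deep collapse
```

The deep collapse halfway between revivals, V(t2/2) = e^(−4n̄) ≈ e^(−10) for n̄ = 2.5, shows why the revivals are such a sharp probe of the interaction energy.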
To compute the visibility with three-body interactions we numerically evaluate V(t_h) = |⟨η(t_h)|â|η(t_h)⟩|²/n̄. The bold (blue) dashed line labeled (iii) in Fig. 3 shows this visibility versus t_h/t2 assuming no inhomogeneities, n̄ = 2.5, and the harmonic oscillator value β = 1.34. With ξ = 0.07, U3 = 0, and ω/2π = 30 kHz, the effective three-body frequency is Ũ3/h ≈ −200 Hz, and Ũ2/h ≈ 2.1 kHz. The relatively small effective three-body interactions have a strong effect on the coherence of the state and the resulting quantum interference, showing how collapse and revival measurements can be a sensitive probe of coherent higher-body effects. The dephasing is faster than may have been expected from the small size of Ũ3 because the three-body energies scale as Ũ3 n³, versus Ũ2 n² for two-body energies, and thus have an increased influence on higher-number components of a coherent state. Similarly, coherent states with significant n > 4 atom-number components will probe the four- and higher-body interaction energies. The bold (red) solid line labeled (iv) in Fig. 3 shows the combined effect of both ∼5% inhomogeneities in Ũ2 and three-body interactions.
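The single-well expectation value can be evaluated by truncating the coherent-state number sum. The sketch below is ours, and it assumes the number-dependent energy takes the form E(n) = (Ũ2/2)n(n−1) + (Ũ3/6)n(n−1)(n−2), which reproduces the two-body-only visibility formula quoted earlier; it illustrates how a small Ũ3 already damps the first two-body revival:

```python
import cmath
import math

def visibility(t_h, U2_over_h, U3_over_h, n_bar, n_max=60):
    # V(t_h) = |<a>(t_h)|^2 / n̄ for one well, assuming (our assumption for the
    # form of Eq. (12)) E(n)/h = (U2/h) n(n-1)/2 + (U3/h) n(n-1)(n-2)/6, in Hz.
    # For a coherent state, <a>(t) = sqrt(n̄) e^{-n̄} Σ_n (n̄^n/n!)
    #   exp[-2πi ([E(n+1)-E(n)]/h) t], so V = e^{-2n̄} |Σ_n ...|^2.
    total = 0j
    weight = 1.0                      # n̄^n / n!
    for n in range(n_max):
        dE = U2_over_h * n + 0.5 * U3_over_h * n * (n - 1)  # [E(n+1)-E(n)]/h
        total += weight * cmath.exp(-2j * math.pi * dE * t_h)
        weight *= n_bar / (n + 1)
    return math.exp(-2.0 * n_bar) * abs(total) ** 2

t2 = 1.0 / 2.1e3  # two-body revival period for Ũ2/h = 2.1 kHz
print(visibility(t2, 2.1e3, 0.0, 2.5))     # ≈ 1.0: full revival without Ũ3
print(visibility(t2, 2.1e3, -200.0, 2.5))  # well below 1: revival damped by Ũ3
```

The extra phase 2π(Ũ3/2h)n(n−1)t_h is quadratic in n, so it cannot rephase at t2 for all Fock components at once, which is exactly the damping mechanism described in the text.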
The decay of the visibility in Fig. 3 is faster than what is seen in [5,6,7]. Figure 4 illustrates the sensitivity of the evolution of the visibility to the three-body energy scale by showing three cases corresponding to Ũ3/h = {−200, −150, −100} Hz. The curves have been displaced vertically for clarity. Curve (i) for Ũ3/h = −200 Hz corresponds to U3 = 0, β = 1.34, ξ = 0.07, and ω/2π = 30 kHz. Curve (ii) corresponds to a reduced Ũ3/h = −150 Hz, which could be the result, for example, of a positive intrinsic three-body energy U3/h = 50 Hz, or a change in parameters giving either β → 3β/4 or ξ → √3ξ/2. Similarly, curve (iii) corresponds to Ũ3/h = −100 Hz, which could be due to a positive intrinsic three-body energy U3/h = 100 Hz, or to a change in parameters giving either β → β/2 or ξ → ξ/√2. The collapse and revival visibilities are also very sensitive to the average atom number n̄. A smaller value of |Ũ3| appears to agree better with the initial damping seen in [5,6,7], and this may indicate the presence of a non-zero intrinsic U3. However, accurate measurement of the system parameters is necessary if a value of the intrinsic U3 is to be obtained using U3 = Ũ3 − δU3. Nevertheless, it is clear from Fig. 4 that both intrinsic and induced three-body interactions can be important on experimentally relevant timescales. Figure 4 also shows the partial and full revivals resulting from the beating between the two- and three-body frequency scales, expected if inhomogeneities are sufficiently reduced. The period for nearly full three-body revivals, t3 = h/Ũ3, gives a direct method of measuring Ũ3. Recently, long sequences of collapses and revivals showing multiple frequencies have been reported [26]; our analysis suggests that these may be used to study higher-body interactions in optical lattices.
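For orientation (simple arithmetic of ours, not a result from the paper), the near-full three-body revival times t3 = h/|Ũ3| implied by the three quoted values, measured in units of the two-body period t2 = h/Ũ2:

```python
U2_over_h = 2.1e3                        # Hz, two-body energy scale
t2 = 1.0 / U2_over_h                     # two-body revival period, seconds
for U3_over_h in (200.0, 150.0, 100.0):  # |Ũ3|/h in Hz for curves (i)-(iii)
    t3 = 1.0 / U3_over_h                 # t3 = h/|Ũ3|
    print(f"|U3|/h = {U3_over_h:3.0f} Hz -> "
          f"t3 = {1e3 * t3:4.1f} ms = {t3 / t2:4.1f} t2")
```

The three-body revivals thus fall at roughly 10 to 21 two-body periods, which is why a long revival sequence is needed to resolve the beat and extract Ũ3.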
VI. SUMMARY
We have shown that two-body induced virtual excitations of bosons to higher bands in a deep 3D optical lattice generate effective three-body and higher-body interactions. Although our methods do not yield the intrinsic higher-body interaction

[Fig. 4 caption] Curve (i) corresponds to U3 = 0, ξ = 0.07, β = 1.34, n̄ = 2.5, and ω/2π = 30 kHz. Curves (ii) and (iii) correspond to smaller three-body energies Ũ3, which could be due to a non-zero intrinsic U3, a reduction in β, or a change in other system parameters including ξ or n̄. Three-body revivals occur at multiples of t3 = h/Ũ3, providing a method for measuring the coherent three-body interaction energy.
Overcome cancer drug resistance by targeting epigenetic modifications of centrosome
The centrosome is an organelle that serves as the microtubule- and actin-organizing center of human cells. Although the centrosome is small in size, it is of great importance to cellular function: it regulates cytoskeletal organization and governs precise spindle orientation/positioning, ensuring equal distribution of cellular components during cell division. Epigenetic modifications to centrosome proteins can lead to centrosome aberrations, such as disorganized spindles and centrosome amplification, causing aneuploidy and genomic instability. Epigenetic disturbances are associated not only with carcinogenesis and cancer progression, but also with resistance to chemotherapy. In this review, we discuss mechanisms of epigenetic alteration during centrosome biogenesis in cancer. We provide an update on the current status of clinical trials that aim to target epigenetic modifications in centrosome aberrations and to thwart drug resistance.
INTRODUCTION
It has long been considered that the accumulation of genetic mutations in tumor suppressors and/or oncogenes causes cancer [1]. However, mounting evidence has emerged that alterations of every component in the epigenetic regulatory machinery also participate in carcinogenesis [2,3]. Ultimately, both genetic and epigenetic changes determine abnormal gene expression. Centrosomes play a key role in the establishment and maintenance of the bipolar mitotic spindle, which is required to accurately divide the genetic material (chromosomes) into daughter cells during cell division. Centrosome aberrations, either numerical or structural, arise when centrosome structure, duplication or segregation is deregulated. So far, it has not been shown whether structural centrosome aberrations directly trigger drug resistance. In this review, we therefore focus only on numerical aberrations of the centrosome. Acquisition of ≥ 3 centrosomes in the centrosome cycle is termed centrosome amplification. Failure to properly control centrosome number leads to aneuploidy, which is frequently found in cancer cells. Mechanistically, centrosome amplification may cause multipolar spindles or monopolar asters, resulting in chromosome missegregation [4]. Thus, centrosome amplification is a hallmark of human tumors [5][6][7][8][9][10]. The BRCA1 E3 ligase specifically ubiquitinates γ-tubulin at lysine-48 (K48), and expression of a mutant γ-tubulin protein in which K48 is mutated to arginine induces centrosome amplification [11]. However, centrosome amplification does not necessarily require DNA damage; epigenetic changes are one potential mechanistic link in the dysregulation of centrosome function. Here we discuss in detail centrosome structure, aberrations of the centrosome in cancer, the relationship of cancer drug resistance to centrosome amplification, and new drug development.
CENTROSOME ABERRATIONS, CANCER AND CANCER DRUG RESISTANCE
In this section, we briefly summarize the structure and biogenesis of centrosomes (for more detailed reviews, please refer to [12]). Morphologically, centrosomes are non-membranous organelles. Each centrosome consists of a pair of centrioles surrounded by the pericentriolar material (PCM) [Figure 1]. Although the molecular composition of the centrosome remains poorly defined, hundreds of proteins have been detected using proteomics [13,14]. As illustrated in Figure 2 [15], cycling cells begin the cell cycle with one centrosome in G1, and centriole duplication occurs once per cell cycle, in parallel with DNA replication during the S phase. Duplication of the centrosome is initiated (formation of the procentriole) with Plk4 as the dominant kinase in centriole biogenesis [16,17]. Depletion of Plk4 induces the loss of centrioles, whereas overexpression of Plk4 conversely causes the formation of multiple daughter centrioles [18].
Once centrioles are assembled, PCM proteins are recruited, nucleating more microtubules during interphase. These proteins are not only important for centrosome biogenesis but also contribute to the maintenance of cell polarity, vesicle transport, cell adhesion and cell signal transduction (reviewed in [19]). Duplicated centrosomes separate at the onset of mitosis for bipolar spindle formation, which equally segregates sister chromatids into the two daughter cells. Centrosome disjunction is modulated by NEK2a, which removes centrosomal linkers such as C-Nap1 (also known as CEP250) and rootletin [20]. Overexpression of NEK2a induces premature centrosome separation [21,22].
It is worth noting that the majority of centrosome proteins have multiple locations. Approximately 77% (n = 370) of the centrosome and microtubule-organizing center (MTOC) proteins detected in the cell atlas also localize to other cellular compartments [23]. The network plot shows that the locations most commonly shared with the centrosome and MTOC are the cytoplasm, nucleus and vesicles. For example, CTCF is associated with the centrosome from metaphase to anaphase of the cell cycle. At telophase, CTCF dissociates from the centrosome and localizes to the midbody and the newly formed nuclei.
Functionally, the centrosome in human cells acts as the MTOC, a role that has been studied widely ever since it was first described by Theodor Boveri in 1900 [24], and as the actin-organizing center [25]. Importantly, precisely duplicated and matured centrosomes ensure faithful chromosome segregation into the two daughter cells via the formation of the bipolar mitotic spindle [26]. Thus, when centrosome structure, duplication or segregation is deregulated, centrosome aberrations, either numerical or structural, arise.
Many studies have established a link between centrosome aberrations and solid tumors or hematological malignancies; the correlations are present not only in pre-invasive lesions but also during tumor progression [27,28]. Although bipolar spindles can be detected in centrosome-depleted or centrosome-over-duplicated cells owing to clustering mechanisms [29], numerical centrosome aberrations are still the most common cause of chromosome segregation errors [30]. Centrosome amplification can thus serve as a novel biomarker for personalized treatment of cancers [31]. Several oncogenic and tumor suppressor proteins, such as BRCA1 and p53, are among the best-known centrosome proteins [32,33]. Either overexpression or downregulation of centrosome proteins evokes centrosome abnormalities, resulting in tumorigenesis [Table 1].
As discussed above, NEK2 is an important centrosome protein that regulates centrosome separation and bipolar spindle formation in mitotic cells. High expression of NEK2 also mediates resistance to cisplatin or liposomal doxorubicin in myeloma [48], breast cancer [49], ovarian cancer [50] and liver cancer [51]. Nlp (ninein-like protein) is involved in centrosome maturation and spindle formation. Nlp overexpression has been detected in human breast and lung cancers [52]. By examining 55 breast cancer samples, a study found that breast cancer patients with high expression of Nlp were likely to be resistant to paclitaxel treatment. KIFC1 is a nonessential minus-end-directed motor of the kinesin-14 family and functions as a centrosome clustering molecule [53,54]. In breast cancer cells, overexpression of KIFC1 and KIFC3 confers docetaxel resistance [55].

[Figure 1 caption] Mother centrioles possess subdistal appendages and distal appendages; B: each centriole is a cylindrical structure with nine-fold microtubule triplets organized around a central cartwheel. The triplet microtubules are composed of internal A, middle B and external C microtubules. The figure is adapted and modified from [9].

These studies indicate that centrosome aberrations not only produce aneuploidy and chromosome instability leading to tumorigenesis, but also promote cancer drug resistance.
MECHANISMS OF CANCER DRUG RESISTANCE INVOLVING CENTROSOME
The mechanism underlying chemoresistance (mitotic drug resistance) is not yet clear. Several studies provide insights into the molecular basis of centrosome abnormalities that produce drug resistance [47], either directly, by dysregulation of centrosome protein levels, or indirectly, by regulating the gene expression of other proteins.

[Figure 2 caption] Plk4 binds and phosphorylates STIL and associates with SAS-6; thus, the cartwheel forms at the proximal wall of the mother centriole. Other proteins are recruited to the cartwheel, and the new daughter centriole (procentriole) begins to grow from the existing centrioles during S phase, reaching full length at the G2 phase. Duplicated centrosomes separate at the beginning of the M phase, helped by the kinases NEK2a and Plk1. The PCM matures and the cartwheels disassemble in the early M phase. After the separation of the two daughter cells, each cell inherits one centrosome. The figure is adapted and modified from [15].
Gene dosage
As discussed previously, gene dosage may affect centrosome duplication through a balance between the relative abundance of one or more proteins essential for the assembly of new centrioles and the availability of assembly sites. Thus, dysregulation of gene dosage for centrosome proteins produces aneuploidy and chromosome instability. Moreover, there is a close relationship between aneuploidy, chromosome instability and chemotherapy resistance.
The Aurora A kinase regulates centrosome maturation and separation and thereby plays important roles in spindle assembly and stability. Overexpression of Aurora A kinase induces centrosome amplification and chromosomal instability that create tumor cell heterogeneity, and it is thus associated with acquired drug resistance [56].
PLK4 is a key component of the centrosome. Dysregulation of PLK4 activity causes loss of centrosome numerical integrity. Its overexpression is responsible for centrosome amplification and contributes to resistance to tamoxifen and trastuzumab [57].
Mitotic slippage
Chemotherapy is commonly used to induce cell death or to prevent proliferation of cancer cells by impairing spindle function and chromosome segregation. However, cancer cells arrested in mitosis sometimes evade cell death [58]. Instead, these cells leave mitosis without completing a normal cell division and become tetraploid. This phenomenon is called mitotic slippage. Examples can be seen with drugs that target microtubule assembly (nocodazole, vincristine) or disassembly (taxol/paclitaxel).
Avoiding apoptosis through mitotic slippage is thought to be a major mechanism contributing to cancer drug resistance. An interesting recent study provides insight into the mechanism of mitotic slippage: BH3-only pro-apoptotic proteins are necessary to initiate the molecular process of apoptosis in cells undergoing perturbed mitosis [59]. NEK2-conferred drug resistance is associated with decreased apoptosis [60]. Furthermore, overexpression of NEK2 suppressed the expression of the BH3-only genes BAD and PUMA and upregulated the expression of the pro-survival genes BCL-xL and MCL-1, indicating a possible role of NEK2 in cancer drug resistance via mitotic slippage.
Phosphorylation at S69 of BIM, which is also a BH3-only protein, leads to its ubiquitin-dependent degradation [61]. During mitotic arrest, BIM is known to be heavily phosphorylated by Aurora A kinase, which could result in mitotic slippage.
The centrosome protein CEP55 was found to have a role in promoting mitotic slippage, which again is mediated by Bcl2 family proteins in breast cancer [62]. In breast cancer patients, high-level expression of CEP55 is associated with chemotherapeutic resistance, particularly to docetaxel. Similarly, while docetaxel induces spindle multipolarity, higher KIFC1 expression might counteract this effect to prevent cell death and enable bipolar spindle formation through centrosome clustering [55].
These studies demonstrate that centrosome proteins regulate the expression of apoptotic/anti-apoptotic genes to induce cancer drug resistance.
Regulation of drug transporters by centrosome proteins
A recent study demonstrates that overexpression of NEK2 and drug resistance are closely correlated in other cancers through activation of efflux drug pumps [57,60]. Overexpression of NEK2 upregulated ABC transporter family members, including ABCB1 (p-glycoprotein, MDR1), the multidrug resistance protein ABCC1 (MRP1), and the breast cancer resistance protein ABCG2 (BCRP). High expression of NEK2 promoted a higher efflux of the hydrophilic eFluxx-ID Gold fluorescent dye from cancer cells. Verapamil, an ABC transporter inhibitor, was able to abrogate part of the NEK2-induced drug resistance, as shown by a decrease in colony formation. Downregulation of NEK2 by shRNA decreased the expression of phosphorylated PP1, AKT, nuclear β-catenin, and ABC transporters.
However, it is also worth noting that drug resistance can be secondary, since chemotherapy or radiation may induce centrosome abnormalities. This is associated with tumor cell heterogeneity. Accumulation of centrosome aberrations after nilotinib and imatinib treatment in vitro is associated with mitotic spindle defects and genetic instability [63].
EPIGENETIC REGULATION OF CENTROSOME PROTEIN EXPRESSION
Epigenetic mechanisms that alter functional gene dosage through hyper- or hypo-methylation, and consequently the abundance of key centrosome precursor molecules, may result in centrosome abnormalities, spindle defects, aneuploidy and polyploidy.
Although genetic aberrations of centrosome proteins contribute to tumorigenesis, alterations of epigenetic gene regulation are found more frequently as cancer drivers; these include widespread alterations of CpG island methylation, histone modifications, and dysregulation of DNA-binding proteins that disrupt normal patterns of gene expression.
Phosphorylation
Many kinases (e.g., CDKs, Aurora A, Polo-like kinases) participate in the regulation of centrosome duplication, even though they themselves are also controlled by phosphorylation and dephosphorylation. For instance, the phosphorylation status of CKAP2 during mitosis is critical for controlling both centrosome biogenesis and bipolar spindle formation [64]. It has been shown that the cyclin-dependent kinase (CDK)-activating phosphatase CDC25B localizes to the centrosome and is involved in the centrosome duplication cycle and in microtubule nucleation [65]. The activity of CDC25B is positively or negatively regulated by several kinases, including Aurora A and CHK1 [66,67]. The phosphorylation of CDC25B by Aurora A participates locally in the control of the onset of mitosis [68]. Abnormal expression of CDC25B in numerous human tumors might have a critical role in centrosome amplification and genomic instability [69].
The activities of Aurora A (AurA) in its cellular functions are regulated by different protein-protein interactions and posttranslational modifications. It has been established that Twist1 has a critical role in promoting EMT and drug resistance. AURKA phosphorylates Twist1 at three positions (S123, T148 and S184). AURKA-mediated phosphorylation of Twist1 is crucial for EMT, the cancer stem cell phenotype and drug resistance to gemcitabine [70]. On the other hand, activation of AurA at centrosomes occurs through autophosphorylation at the critical activating residue Thr288 [71]. This autophosphorylation is regulated by PLK1 [72] and TPX2 [73].
PLK4 is also a substrate of itself (via autophosphorylation). Autophosphorylation of PLK4 results in its ubiquitination and subsequent destruction by the proteasome [76][77][78]. It has been shown that mutagenesis of Asp-154 in the catalytic domain causes centrosome amplification above background levels when the mutant is overexpressed [16]. This may be due to the loss of PLK4 self-destruction.
Acetylation
Acetylation and deacetylation are highly common posttranslational modifications. Several studies demonstrate that acetylation/deacetylation play a role in the regulation of centrosome duplication and in the induction of abnormal centrosome amplification. KAT2A/KAT2B function as histone acetyltransferases or lysine acetyltransferases. Fournier and her colleagues showed that KAT2A/2B acetylate the PLK4 kinase domain on residues K45 and K46 [79]. Impairing KAT2A/2B acetyltransferase activity results in diminished phosphorylation of PLK4 and in excess centrosome numbers in cells. Therefore, KAT2A/2B acetylation of PLK4 prevents centrosome amplification. On the other hand, by focusing on the deacetylases, Fukasawa's group found that deacetylation negatively controls centrosome duplication and amplification. Of the 18 known deacetylases (HDAC1-11, SIRT1-7), ten possess the activity to suppress centrosome amplification, and their suppressing activities are strongly associated with their abilities to localize to centrosomes. Among them, HDAC1, HDAC5 and SIRT1 show the highest suppressing activities, but each suppresses centrosome duplication and/or amplification through its own unique mechanism [80].
Methylation
G9a is a histone methyltransferase, also known as euchromatic histone-lysine N-methyltransferase 2 (EHMT2) [81]. G9a catalyzes the mono- and di-methylated states of histone H3 at lysine residue 9 (i.e., H3K9me1 and H3K9me2) and lysine residue 27 (H3K27me1 and H3K27me2). G9a plays a critical role in regulating centrosome duplication. Knockdown of G9a significantly reduces di- and trimethylation of H3K9, resulting in disruption of centrosome amplification and chromosome instability in cancer cells [82]. Furthermore, silencing G9a leads to down-modulation of gene expression, including that of p16INK4A. It has been shown that cells lacking p16INK4A activity exhibit phenotypes associated with malignancy [83]. p16INK4A is a Cdk4- and Cdk6-specific inhibitor [84]. The observations on the effects of G9a silencing support the studies linking cyclin D1/Cdk4 with centrosome amplification [85,86]. Initiation of tumorigenesis has been found upon loss of p16INK4A through hypermethylation of its promoter [87][88][89]. Thus, it has been postulated that loss of p16 expression coupled with increased γ-tubulin contributes to centrosome amplification and breast cancer progression [90].
Promoter
There is a functional link between the centrosome and transcription factors. NF-κB can induce abnormal centrosome amplification by upregulating CDK2 [98]; a functional NF-κB binding site has been located in the CDK2 promoter.
Methyl-CpG binding protein 2 (MeCP2) localizes at the centrosome. Its loss causes deficient spindle morphology and microtubule nucleation. In addition, MeCP2 binds to histone deacetylases and represses gene transcription [99].
E2Fs affect the expression of proteins including Nek2 and Plk4; thereby, deregulation of E2Fs induces centrosome amplification in breast cancer [100]. A further example showed that arsenic induces centrosome amplification via SUV39H2-mediated epigenetic modification of E2F1 [101].
DDX3 regulates epigenetic transcriptional and translational activation of p53 and colocalizes with p53 at the centrosome during mitosis to ensure proper mitotic progression and genome stability, which supports the tumor-suppressive role of DDX3 [102]. DDX3 knockdown suppressed p53 transcription through activation of DNA methyltransferases, along with hypermethylation of the p53 promoter and the recruitment of repressive histone marks to the p53 promoter.
During tumor development, especially in most solid tumors, cancer cells are often subjected to hypoxia [103].
A recent study showed that, via upregulation of HIF1, proteins whose overexpression drives centrosome amplification (such as cyclin E, Aurora A, and PLK4) are upregulated [104].
MiR-128 inhibits NEK2 expression, and miR-128 is silenced by DNA methylation. Up-regulation of NEK2 through miR-128 methylation is associated with poor prognosis in colorectal cancer [107].
Ubiquitination
Dysfunction of ubiquitin-proteasome degradation has been implicated in several forms of cancer drug resistance. MDM2 is an E3 ubiquitin-protein ligase that mediates ubiquitination and degradation of p53 [108,109]. Increased levels of MDM2 inactivate the functions of p53 to a similar extent as deletion or mutation of p53, and are found in a variety of human tumors [110]. Several studies have demonstrated that MDM2 overexpression increases the drug resistance of tumors [111,112]. Mind bomb (Mib1) was identified as the E3 ubiquitin ligase of PLK4 [113]. Recently we found that HECTD1, a HECT-type E3 ubiquitin ligase, is a novel centrosome protein whose deficiency induces centrosome amplification and promotes epithelial-mesenchymal transition [114,115].
These results indicate that ubiquitination is one of the important epigenetic modifications of the centrosome.
NEW DRUGS IN TRIAL
Abnormalities in the size, number and microtubule nucleation capacity of the centrosome result from genetic disorders or epigenetic disturbances of gene expression. Epigenetic modifications are temporally dynamic and reversible changes. The development of small molecules targeting epigenetic regulators is a promising anticancer strategy, involving the elimination of cancer cells with chromosome instability and aneuploidy in combination with targeting centrosome proteins to overcome mitotic slippage and to induce apoptosis in cancer cells. Drugs focused on centrosome amplification may provide possibilities to treat cancer or to overcome some forms of drug resistance. Clinical trials of inhibitors targeting kinases that function as centrosome regulators are currently under way for hematologic malignancies and solid tumors. We summarize the development of therapies targeting these mechanisms.
Although much progress has been made in treating cancer, most cancer chemotherapeutics eventually face (secondary) drug resistance that limits the efficacy of treatment. This has happened even for the newly approved NTRK inhibitors [116]. Thus, there is a significant need to target drug resistance to improve cancer therapeutics. Several clinical trials, of single agents or drug combinations, have focused on blocking centrosome clustering to combat drug resistance [Table 2].
Since increased levels of MDM2 inactivate the functions of p53 and thereby induce centrosome amplification and drug resistance, inhibition of MDM2 is a promising strategy to fight cancer drug resistance. Several clinical trials have been set up in a variety of human tumors, such as leukemia, myeloma, brain tumors, solid tumors and lymphoma [Table 2].
As mentioned above, PLK4 is important in centrosome biogenesis and regulates mitotic progression. PLK4 has therefore been identified as a candidate anticancer target. With directed virtual screening using a ligand-based focused library, several leads were identified, and CFI-400945 was generated through further optimization [117]. CFI-400945 is a potent and selective small-molecule inhibitor of PLK4 [118].
Monopolar spindle 1 (Mps1/TTK) kinase is essential for safeguarding proper chromosome alignment and segregation during mitosis [119]. Its overexpression contributes to more aggressive and drug-resistant breast tumors, but reduction of Mps1 levels can sensitize several tumor cell types to paclitaxel [120]. In two ongoing clinical trials, Mps1 inhibitors are being tested along with paclitaxel in triple-negative breast cancer patients [Table 2].

Plk1, polo-like kinase 1, supports the functional maturation of the centrosome and the establishment of the bipolar spindle. Overexpression of Plk1 is often observed in cancer cells [121]; this protein is therefore a potential drug target in cancer [122]. Several inhibitors of PLK1 have been developed, and promising results have been obtained in clinical trials [123]. For example, compared to administration of cytarabine (a chemotherapy medication used to treat acute myeloid leukemia) alone, a combination of BI 6727 with cytarabine increased the total remission rate from 13% to 31% [124].
Cyclin-dependent kinases (Cdks) are a family of protein kinases that regulate the centrosome cycle; deregulation of Cdks by oncogenes and tumor suppressors results in centrosome amplification [125]. More than 30 small-molecule Cdk inhibitors have been developed [126], and many of them have been used in clinical trials for the treatment of various cancers [Table 2] [127]. Flavopiridol was the first Cdk inhibitor used in clinical trials [128] and has shown success in the treatment of AML and chronic lymphocytic leukemia [129,130].
The Aurora kinases (AURKs) are involved in different aspects of mitotic control during the cell cycle. Importantly, PLK1 is activated by AURKA/B. Therefore, AURKs are potential centrosome-directed targets for cancer therapy. More than 30 AURK inhibitors have been developed and used in clinical studies [131]. For example, the inhibitor MLN8237 (alisertib), which targets AURKA, showed promising efficacy in several solid tumors [132]. AZD1152 (barasertib) is a selective inhibitor of AURKB and has been effective in AML patients, with an overall response rate of 25%, but without effective results in patients with solid tumors. In addition, the AURKB/AURKC kinase inhibitor GSK1070916A is currently being tested in patients with solid tumors, and phase I of the clinical trial has been completed.
CONCLUSION
The consequences of numerical aberrations/centrosome amplification leading to tumorigenesis have been studied extensively. In contrast, studies on mechanisms of cancer drug resistance in relation to centrosome aberrations have received little attention. Epigenetic modifications in centrosome biogenesis have important implications for the origin of some malignant tumors and play a role in cancer drug resistance. The current review discussed the connection between epigenetic changes causing centrosome aberrations and cancer drug resistance. Clinically, the ultimate goal is to identify effective cancer therapies. So far, most clinical trials targeting possible drug resistance that were registered at ClinicalTrials.gov (NIH) are still monotherapies and in early stages of development. One important factor we should not forget is that the regulatory pathways of epigenetic modifications of the centrosome, such as positive or negative feedback signaling circuits involved in cancer drug resistance, are more complex than once thought. The selection of drugs, together with other treatments such as immunotherapy, for combination therapy may improve efficacy and thwart drug resistance. In the future, the molecular mechanisms by which the trafficking of centrosome proteins between the centrosome and the nucleus determines the expression/subcellular localization of downstream signaling molecules, such as the Bcl2 family proteins and ABC transporters, need to be addressed. Further understanding of centrosome biology, including the basic cell biology and pathobiology of epigenetic control of the centrosome, will provide the potential to establish translatable strategies for cancer treatment and to prevent drug resistance.
An Investigation into the Application of Deep Learning in the Detection and Mitigation of DDOS Attack on SDN Controllers
Software-Defined Networking (SDN) is a new paradigm that revolutionizes the idea of a software-driven network through the separation of the control and data planes. It addresses the problems of traditional network architecture. Nevertheless, this architecture is exposed to several security threats, e.g., the distributed denial of service (DDoS) attack, which is hard to contain in such software-based networks. The concept of a centralized controller in SDN makes it a single point of attack as well as a single point of failure. In this paper, deep learning-based models, long short-term memory (LSTM) and convolutional neural network (CNN), are investigated, illustrating their feasibility and efficiency in detecting and mitigating DDoS attacks. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated based on accuracy, recall, and true negative rate, and was compared with that of classical machine learning models. We further provide details on the time taken to detect and mitigate the attacks. Our results show that RNN LSTM is a viable deep learning algorithm that can be applied to the detection and mitigation of DDoS attacks on the SDN controller. Our proposed model produced an accuracy of 89.63%, which outperformed linear-based models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN, a linear-based model, outperformed our proposed model (achieving an accuracy of 99.4%), our proposed model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, it was observed that the split ratio of the training and testing datasets can affect the performance of a deep learning algorithm in a given task. The model achieved its best performance with a 70/30 split, compared with the 80/20 and 60/40 split ratios.
Introduction
With the current surge in the number of devices with networking capabilities, complex management strategies are required to provide a good quality of service (QoS). Achieving a good QoS becomes a hurdle in current traditional networks due to the vertical integration of the control and data planes. Furthermore, network optimization becomes difficult due to a high dependence on vendor-specific hardware and software.
Software-Defined Networking (SDN) is a new paradigm that solves the issues existing in traditional Internet architectures. It provides flexibility in management by making the network programmable from a logically centralized control point. SDN decouples the control plane from the data plane present in traditional networks and deploys it in a remote device called the controller or control layer, as shown in Figure 1. It offers the benefits of centralized control functionality, applications running on the network operating system, a unique global view of the architecture, open northbound and southbound interfaces, and dynamic programmability in packet forwarding. Devices in the data plane, such as switches, forward packets according to the control decisions or rules sent from the controller. The controller communicates with the application layer through the northbound application programming interface (API) and with the data plane through the southbound API. Controller-switch communication is carried out using the OpenFlow protocol [1]. Due to the flexibility in network control it offers, SDN has become an alternative approach for traditional security infrastructures. However, the security of the entire system is at stake if the SDN framework itself gets compromised. The controller is always prone to a single point of failure; hence, an attack on the controller can lead to the failure of the entire network [3].
Major security problems in SDN are issues of unauthorized controller access (intrusion), man-in-the-middle attacks, and flow rule changes that modify packets. Other pertinent issues are malicious packets hijacking the controller, denial of service by switch-controller communication floods, and configuration problems. Distributed denial of service (DDoS) is one of the most common and dreadful threats, aimed at disrupting regular traffic from arriving at the controller. The attack is achieved by flooding the controller with more malicious packets than it can accommodate, thus rendering it inoperable. The attack is made possible by using multiple compromised switches (bots) to produce malicious packets. The attacker forms a botnet, a group of bots, from the switches connected to the controller and then gains control over the entire network after rendering the controller inoperable, as depicted in Figure 2. It is, therefore, necessary to implement a system that addresses this security threat. Traditional methods are insufficient, so machine learning-based DDoS detection techniques have received more attention. In this paper, the feasibility and efficiency of applying variants of deep neural networks, namely the convolutional neural network (CNN) and long short-term memory (LSTM), in training an ML model to detect and mitigate DDoS attacks on SDN controllers are investigated. LSTM is an artificial recurrent neural network (RNN) architecture which is well-suited for data classification, processing, and prediction. According to the literature, many machine learning algorithms such as Support Vector Machine (SVM), K Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Naïve Bayes (NB) have been explored for detecting DDoS attacks in the various layers of the SDN architecture. However, only the deep reinforcement learning-based algorithm has been applied in the application layer of the SDN to mitigate such attacks.
The main contributions of this work include:
• A new dataset comprising normal and malicious (DDoS) traffic, developed using Mininet and the Floodlight controller, is collated.
• A DDoS defence mechanism based on the trained model for the identification and mitigation of DDoS attacks on the SDN controller is introduced.
• The performance of the selected deep learning candidate is compared with that of other machine learning linear models: k-nearest neighbor (KNN), logistic regression, linear support vector classifier (LinearSVC), support vector classifier (SVC), decision tree, random forest, gradient boosting, Gaussian naïve Bayes (NB), Bernoulli NB, and multinomial NB. These models and the selected candidate model are trained on the same generated dataset.
• The performance of linear-based ML and neural network models in the detection and mitigation of DDoS flood attacks is analyzed using various train-test split ratios (60/40, 70/30, and 80/20).
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the proposed model and methodology. Section 4 discusses the results. Section 5 concludes the paper.
Related Work
In the field of network security, the advent of SDN has provided researchers unparalleled control over network infrastructure by establishing a single control point for the data flows that traverse the entire network [5]. A range of literature relevant to this work was reviewed, and highlights are stated in the subsequent text. To handle the issue of DDoS attacks in SDN, researchers have proposed and implemented DDoS identification mechanisms based on artificial intelligence, mainly machine learning.
In [6], the authors used KNN, SVM, and Naïve Bayes to detect DDoS packets. KNN was the most suitable, with 97% accuracy, while SVM had 82% and Naïve Bayes 83%. The authors in [7] used a Support Vector Machine (SVM) together with their own proposed algorithm, idle timeout adjustment (IA), and showed that their approach outperformed previous methods. Neural network, Naive Bayes, and SVM models were used in [3]; the neural network and Naive Bayes models provided 100% accuracy, while SVM achieved 99% accuracy.
In [8], the authors used a support vector machine (SVM), and their results showed an average accuracy of 95.24%. The authors in [9] used linear regression, Naïve Bayes, KNN, decision tree, random forest, SVM, and ANN. Their linear regression model achieved the highest accuracy, precision, and recall at 98.65%, while Naïve Bayes showed the worst result at 97.45%; all other models fell between the two. In [10], the authors used Naïve Bayes and obtained an average precision of 0.98 for the training dataset with all features included, and 0.81 with seven of the features removed. SVM has also been used to detect DDoS attacks; the model produced an accuracy of 99.8% [11].
In [12], the authors used Naïve Bayes, SVM, and Neural network. Naïve Bayes had an accuracy of up to 70% while SVM and the neural network had the same accuracy of 80%. The authors in [13] used SVM for DDoS attack detection. It was observed that the SVM algorithm achieved more than 98% accuracy on both the attacker and victim side for SYN flooding, ICMP flooding, and DNS reflection attacks. In [14], the authors used a deep neural network signature-based Intrusion Detection System (IDS). Their results show that the collaborative detection mechanism developed produced a true-positive rate of more than 90% with less than 5% false positives.
In [15], the authors worked on a reinforcement learning-based smart DDoS flood mitigation agent. Their findings demonstrate that the agent could effectively mitigate DDoS flood attacks of various protocols. Deep learning algorithms have also been used in SDN-based architectures to solve the problem of intrusion detection [16][17][18]. Other deep learning algorithms [19,20] have been applied in non-SDN architectures to detect DDoS and intrusion detection.
From the related works discussed, it is evident that machine learning has been used to identify DDoS attacks at all levels of the SDN architecture. Deep learning has been used in both SDN and non-SDN architectures for intrusion detection but not for DDoS classification in SDN [17,18]. This circumstance makes it necessary to explore the feasibility and efficiency of applying CNN or RNN LSTM algorithms in the identification and mitigation of DDoS attacks on the controller. Table 1 shows a summary of the related works, including the following entries:
• Detection of distributed denial of service attacks using machine learning algorithms in software-defined networks [12]. Models: Naïve Bayes, SVM, neural network. Findings: Naïve Bayes had an accuracy of up to 70%, while SVM and the neural network had the same accuracy of 80%. Limitations: two-feature processed dataset; used only TCP flood.
• Multi-SDN based cooperation scheme for DDoS attack defence [13]. Model: SVM. Findings: the SVM algorithm achieved more than 98% accuracy on both the attacker and victim side for SYN flooding, ICMP flooding, and DNS reflection attacks. Limitations: used TCP, UDP, and ICMP floods; training and testing split ratio not mentioned.
Methodology
Our anomaly detection technique is based on gathering certain parameters of the network both when operating normally and when subjected to a DDoS attack. The following assumptions were made in this research:
• The normal operation of the network is constant (the exchange of information between nodes has a particular profile), which forms the basis of our anomaly detection and defence mechanism.
• The training of the detection engine is done off-device; the model is only exported and used on the controller.
Architecture
A three-tier architecture consisting of seven switches, eight hosts (two hosts per switch) and an external controller (the single host connected to a switch) was used in this research. Figure 3 shows the three-tier topology implementation in Mininet.
Simulation Test Bed
The work was simulated using Mininet with Floodlight as an external controller. The Mininet SDN simulator was used to create the three-tier data center topology shown in Figure 3. The Floodlight controller and OpenFlow switches were deployed on a virtual machine running Ubuntu. After setting up the network, a specialized tool known as hping3 [21] was used to generate data traffic. Using hping3, we simulated normal TCP, UDP, and ICMP traffic between two endpoints in the network; afterwards, we simulated DoS TCP, UDP, and ICMP flood attacks, and the statistics of the various switches were collected. The three kinds of traffic generated by hping3 were UDP, TCP, and ICMP. The initial regular data were labeled as normal traffic; the malicious traffic subsequently generated with hping3 for the UDP, TCP, and ICMP floods was labeled as malicious. In total, 10,031 data points were collected, 4270 (approximately 43%) being malicious traffic and 5761 (approximately 57%) normal traffic.
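To make the traffic-generation step concrete, the sketch below assembles hping3 command lines for the three protocols, in both normal and flood mode. This is a minimal sketch: the target address and port are illustrative assumptions, not the exact commands used by the authors.

```python
# Build hping3 argument lists for the three traffic types used in the paper.
# '--flood' switches between normal-rate probes and DoS flood traffic;
# the target IP and port below are hypothetical.
def hping3_cmd(proto, target, flood=False, port=80):
    cmd = ["hping3"]
    if proto == "tcp":
        cmd.append("-S")            # TCP with the SYN flag set
    elif proto == "udp":
        cmd.append("--udp")         # UDP mode
    elif proto == "icmp":
        cmd.append("--icmp")        # ICMP echo mode
    else:
        raise ValueError("unsupported protocol: " + proto)
    if flood:
        cmd.append("--flood")       # send packets as fast as possible
    if proto != "icmp":
        cmd += ["-p", str(port)]    # destination port (not used for ICMP)
    cmd.append(target)
    return cmd

if __name__ == "__main__":
    for proto in ("tcp", "udp", "icmp"):
        print(" ".join(hping3_cmd(proto, "10.0.0.2", flood=True)))
```

In practice such commands would be launched from the Mininet hosts (e.g., via subprocess) while the switch statistics are being collected and labeled.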
Scenarios Considered
The data gathered were used to build binary classification models using the following ML models: k-nearest neighbor (KNN), logistic regression, linear SVC, SVC, decision tree, random forest, gradient boosting, and the Naïve Bayes classifiers (Gaussian, Bernoulli, and multinomial), as well as the main algorithms being investigated in this work, namely RNN LSTM and CNN. The model summaries for LSTM and CNN are shown in Figures 4 and 5, respectively. The performance of each model was evaluated based on the following key performance indicators: recall, accuracy, true negative rate, and the time taken to identify and mitigate the DDoS attacks. Accuracy is the proportion of correct predictions over the total dataset. Recall is the percentage of data predicted as normal against the total amount of normal data presented. The true negative rate measures the sum of true negatives against the total number of negative cases; it reflects the test's ability to detect malicious data against the total amount of malicious data presented.
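The three indicators can be computed directly from confusion-matrix counts. The sketch below treats normal traffic as the positive class, matching the definitions above; the example counts are made-up numbers, not the paper's results.

```python
# Key performance indicators for a binary DDoS classifier.
# Positive class = normal traffic, negative class = malicious traffic.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def recall(tp, fn):
    # samples predicted normal over all actually-normal samples
    return tp / (tp + fn)

def true_negative_rate(tn, fp):
    # samples detected malicious over all actually-malicious samples
    return tn / (tn + fp)

# Hypothetical counts for a 10,031-sample run (5761 normal, 4270 malicious).
tp, fn = 5500, 261      # normal samples: correctly / wrongly classified
tn, fp = 4100, 170      # malicious samples: correctly / wrongly classified
print(round(accuracy(tp, tn, fp, fn), 4))        # overall accuracy
print(round(recall(tp, fn), 4))                  # recall on normal traffic
print(round(true_negative_rate(tn, fp), 4))      # malicious detection rate
```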
Three scenarios were considered in the research. In the first scenario, 80% of the data were used in training and the remaining 20% for testing. The second scenario utilized 70% of the data for training and 30% for testing. In the third scenario, 60% of the data were used for training, while 40% were used for testing.
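Assuming a simple random hold-out split (the exact splitting procedure is not stated in the text), the three scenarios can be reproduced with a stdlib-only sketch:

```python
import random

def train_test_split(rows, train_frac, seed=0):
    """Shuffle rows and split them into train and test partitions."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # deterministic shuffle for repeatability
    n_train = int(len(rows) * train_frac)
    return rows[:n_train], rows[n_train:]

data = list(range(10031))               # stand-in for the 10,031 collected samples
for frac in (0.8, 0.7, 0.6):            # the 80/20, 70/30, and 60/40 scenarios
    train, test = train_test_split(data, frac)
    print(int(frac * 100), len(train), len(test))
```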
Detection and Defence Mechanism
The model was exported after the evaluations were made and used in an application that runs on the controller. The application was used to measure the detection time when the controller was subjected to a DDoS attack. If the model detects a DDoS attack, its output is fed into the defence engine. The defence/mitigation engine is built on top of NetFilterQueue [22], a Linux system implementation that matches packets as accepted, dropped, altered, or marked. The rules to match packets normally have to be set manually, which does not scale. Leveraging this, we implemented the defence mechanism by automatically matching packets based on the output of the detection algorithm.
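The decision logic of the mitigation engine can be sketched as a small class that turns the detector's output into a per-packet verdict. This is a simplified sketch under stated assumptions: in the deployed system the verdict would drive NetfilterQueue's per-packet drop/accept calls rather than return a string, and the names used here are illustrative.

```python
# Simplified mitigation engine: once the detector flags a flow as a DDoS
# attack, every subsequent packet from that source IP is dropped.
# In the real system this verdict would map to NetfilterQueue's
# drop/accept decision on each queued packet.
class MitigationEngine:
    def __init__(self):
        self.blocked = set()            # source IPs flagged as attackers

    def verdict(self, src_ip, detector_says_attack):
        if detector_says_attack:
            self.blocked.add(src_ip)    # auto-install the matching rule
        return "drop" if src_ip in self.blocked else "accept"

engine = MitigationEngine()
print(engine.verdict("10.0.0.5", False))   # benign host -> accept
print(engine.verdict("10.0.0.9", True))    # detected attacker -> drop
print(engine.verdict("10.0.0.9", False))   # stays blocked afterwards -> drop
```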
Results and Discussion
The performance of the models in terms of precision and recall is shown in Figures 6-11. In terms of precision, the linear-based models (GradientBoosting, KNN, DecisionTree, GaussianNB, MultinomialNB, SVC, and LinearSVC) outperformed the LSTM model (a marginal difference of 1.4%) for the split ratios considered. Nonetheless, the LSTM model achieved a good tradeoff between a high recall and precision, as shown in Figures 12-14, which makes it suitable for DDoS classification. A summary of the performance of the models is shown in Table 2. The values in bold indicate the highest accuracy, recall, and true negative rate of each model.
Detection of DDoS Attack Using LSTM Model
This section of the experiment was aimed at creating scenarios of DDoS in the form of ICMP, UDP, and TCP flood attacks on the controller and determining how long it took the trained LSTM model to detect the attacks. Ten scenarios were considered, and Figure 15 shows the time it took the trained model to detect each flood attack using a 60/40 train-test ratio. In Figure 15, the highest time it took the LSTM model to detect the TCP DDoS flood among the 10 attempts was 18.70 s and the lowest was 12.83 s. It took 11.73 and 15.90 s, respectively, as the lowest and highest times to detect the UDP DDoS flood attack, and 11.76 and 15.73 s, respectively, as the lowest and highest times to detect the ICMP flood attack among the 10 scenarios considered. Figure 16 shows the detection time of the LSTM model when a 70/30 train-test ratio was used. In Figure 16, the highest time it took the LSTM model to detect the TCP flood among the 10 scenarios was 18.71 s and the lowest was 12.84 s. It took 11.81 and 15.89 s, respectively, as the lowest and highest times to detect the UDP flood attack, and 11.68 and 15.76 s, respectively, for the ICMP flood attack. Figure 17 shows the detection time of the LSTM model when an 80/20 train-test ratio was used. In Figure 17, the highest time it took the LSTM model to detect the TCP flood was 18.55 s and the lowest was 12.67 s. It took 11.62 and 15.70 s, respectively, as the lowest and highest times to detect the UDP flood attack, and 11.56 and 15.63 s, respectively, for the ICMP flood attack. Table 3 gives a summary of the detection times of the LSTM model.
Mitigation of DDoS Attack Using LSTM Model
This section of the experiment was aimed at creating scenarios of DDoS in the form of ICMP, UDP, and TCP flood attacks on the controller and determining how long it took the trained LSTM model to mitigate the attack after detection. After the controller detects the DDoS attack, the IP, protocol type, and destination port are sent to the mitigation engine, which instantly drops all packets from that particular source IP. Ten scenarios (attempts) were considered, with train-test ratios of 60/40, 70/30, and 80/20. Figure 18 shows the mitigation time of the LSTM model when a 60/40 train-test ratio was used. In Figure 18, the highest time it took the LSTM model to mitigate the TCP DDoS flood attack was 4.75 s and the lowest was 3.01 s. It took 3.28 and 4.51 s, respectively, as the lowest and highest times to mitigate the UDP flood attack, and 3.18 and 4.45 s, respectively, for the ICMP flood attack. Figure 19 shows the mitigation time of the LSTM model when a 70/30 train-test ratio was used. In Figure 19, the highest time it took the LSTM model to mitigate the TCP flood attack was 4.68 s and the lowest was 2.99 s. It took 3.28 and 4.54 s, respectively, as the lowest and highest times to mitigate the UDP flood attack, and 3.16 and 4.45 s, respectively, for the ICMP flood attack. Figure 20 shows the mitigation time of the LSTM model when an 80/20 train-test ratio was used. In Figure 20, the highest time it took the LSTM model to mitigate the TCP flood attack was 4.86 s and the lowest was 3.16 s. It took 3.42 and 4.65 s, respectively, as the lowest and highest times to mitigate the UDP flood attack, and 3.36 and 4.57 s, respectively, for the ICMP flood attack. Table 4 gives a summary of the mitigation times of the LSTM model.
Comparison of the LSTM Model with the Best Performing Linear-Based ML Models
We compared the best performing linear-based ML models with the LSTM model in terms of detection and mitigation time. Among the three split ratios, the 80/20 split had the best detection times for DDoS attacks; hence, these values were compared with those of the best performing linear models at the same split ratio, as shown in Figures 21-24. It was observed that the LSTM model took longer to detect DDoS attacks on the SDN controller; however, it was within 4 s of the highest time recorded by the linear-based models, which is a good result.
In addition, from the mitigation times for all three split ratios, it was observed that the 70/30 split had the best mitigation times; hence, these values were compared with those of the best performing linear models at the same split ratio, as shown in Figures 25-28. In Figure 28, all the classification models (both the linear models and the LSTM model), after detecting a DDoS flood attack, took almost the same time to mitigate it. Hence, it can be concluded that the LSTM model performs just as well as the linear models in the mitigation of DDoS attacks on the SDN controller. It was also observed that, aside from KNN and GB, the RNN LSTM model performed better on some protocols than the three remaining linear models. Table 5 shows a comparison of the classification models from this research and other related works. In Table 5, it can be observed that the LSTM model (which achieved an accuracy of 89.63%) compares well with the linear-based ML models. Furthermore, the LSTM model has a good tradeoff between precision and recall, which makes it a good classification model for DDoS detection (Figures 12-14).
Conclusions and Future Work
In this research, we demonstrated that RNN LSTM is a viable deep learning algorithm that can be applied to the detection and mitigation of DDoS attacks on the SDN controller. In addition, it was observed that the split ratio of the training and testing dataset can affect the performance of a deep learning algorithm in a given task; a 70/30 split produced better model accuracy than the 80/20 and 60/40 split ratios. It can be concluded that RNN LSTM is a good model for the identification and mitigation of DDoS attacks in the SDN architecture. The software-defined network used in this work was designed and tested within a virtual environment that simulates software running on a set of network devices. Future work may therefore be carried out on a real SDN architecture to test how this application works in real time. Future work will also explore gathering a larger dataset, with in-depth feature selection analysis and hyper-parameter tuning, to achieve better performance with the neural network models.
Data Availability Statement:
The data used in this study are available at https://github.com/jayluxferro/SDN-DoS/blob/master/README.md.
Conflicts of Interest:
The authors declare no conflict of interest.
"Computer Science"
] |
An UPLC-ESI-MS/MS Assay Using 6-Aminoquinolyl-N-Hydroxysuccinimidyl Carbamate Derivatization for Targeted Amino Acid Analysis: Application to Screening of Arabidopsis thaliana Mutants
In spite of the large arsenal of methodologies developed for amino acid assessment in complex matrices, their implementation in metabolomics studies involving wide-ranging mutant screening is hampered by their lack of throughput, sensitivity, reproducibility, and/or wide dynamic range. In response to the challenge of developing amino acid analysis methods that satisfy the criteria required for metabolomic studies, improved reverse-phase high-performance liquid chromatography-mass spectrometry (RP-HPLC-MS) methods have recently been reported for large-scale screening of metabolic phenotypes. However, these methods focus on the direct analysis of underivatized amino acids and therefore suffer from insufficient retention and resolution due to the hydrophilic nature of amino acids. It is well known that derivatization renders amino acids more amenable to reverse-phase chromatographic analysis by introducing highly hydrophobic tags at their carboxylic acid or amino functional groups. Therefore, an analytical platform that combines the 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) pre-column derivatization method with ultra-performance liquid chromatography-electrospray ionization-tandem mass spectrometry (UPLC-ESI-MS/MS) is presented in this article. For numerous reasons, typical amino acid derivatization methods would be inadequate for large-scale metabolomic projects. However, AQC derivatization is a simple, rapid and reproducible way of obtaining stable amino acid adducts amenable to UPLC-ESI-MS/MS, and the applicability of the method for high-throughput metabolomic analysis in Arabidopsis thaliana is demonstrated in this study. Overall, the major advantages offered by this amino acid analysis method are high throughput and enhanced sensitivity and selectivity, characteristics that showcase its utility for the rapid screening of preselected plant metabolites without compromising the quality of the metabolic data.
The presented method enabled thirty-eight metabolites (proteinogenic amino acids and related compounds) to be analyzed within 10 min, with detection limits down to 1.02 × 10−11 M (i.e., attomole level on column), which represents an improvement in sensitivity of 1 to 5 orders of magnitude compared with existing methods. Our UPLC-ESI-MS/MS method is one of the seven analytical platforms used by the Arabidopsis Metabolomics Consortium. The amino acid dataset obtained by analyzing Arabidopsis T-DNA mutant stocks with our platform is captured and open to the public in the web portal PlantMetabolomics.org. The analytical platform described herein could find important applications in other studies where rapid, high-throughput and sensitive assessment of low-abundance amino acids in complex biosamples is necessary.
Introduction
In the post genomics era where the plant biology community is facing the challenge of identifying the functionalities of genes of unknown function (GUFs), metabolomics offers a link between biochemical phenotype and gene function [1]. However, the use of metabolomics for the prediction of the function of plant genes faces technical challenges due to the large size (between 100,000 to 200,000 biocompounds) [2][3][4], the chemical complexity and the different abundance levels of a plant metabolic pool. These challenges are currently being tackled by combining targeted and non-targeted metabolic analyses to characterize and compare changes in metabolic networks [1,2,[5][6][7]. Those combined strategies are a partial solution to the lack of universality of a single analytical technique, as they exploit the power of current separation technologies and the various dynamic ranges and sensitivities offered by the arsenal of commercially available analytical detectors to cover a larger portion of the metabolome than any single platform alone [2,[5][6][7][8].
Currently, combined metabolomics technologies are being tested as functional genomic tools for the annotation of Arabidopsis thaliana GUFs [1,7,9]. Usually, high throughput biochemical screening methods are employed to first identify previously uncharacterized Arabidopsis mutants affecting a variety of metabolic pathways. The screening is carried out by targeted analysis of specific groups of compounds or metabolic subsets (glucosinolates, fatty acids, phytosterols, isoprenoids, amino acids, among others) across a large population of mutagenized Arabidopsis lines. Once new loci involved in plant metabolism are identified further work is performed in those particular mutants using non-targeted analysis in order to characterize metabolite changes more broadly. Identification of metabolites that are discriminatory between the knockout plant compared to the wild-type help fill up the gaps in our understanding of plant-specific regulatory and biosynthetic pathways and determine the function of the GUFs [1,7,9].
Because of the central role that amino acids play in plant biochemistry, screening methods that quantify free levels of this class of metabolites in plant tissue are in demand. Despite the numerous methods available for amino acid analysis, many lack the suitability for metabolomic studies. Three aspects are vital in developing an effective targeted metabolite analysis platform for large-scale mutant screening: (i) reduction of sample preparation and analysis time, (ii) collection of high-quality data, and (iii) broad dynamic range [10].
Chromatographic separation methods (gas chromatography, GC, and liquid chromatography, LC) combined with tandem mass spectrometric (MS/MS) detection dominate the field of metabolomics. Although considerable work has been done on the development of LC-MS methods for the analysis of both underivatized and derivatized amino acids in complex matrices, the former are particularly being implemented in metabolomic research and employ either ion-pairing (IP) reversed-phase (RP) LC [10,11] or hydrophilic interaction chromatography (HILIC) [12,13]. Although these methodologies are attractive because they eliminate the sample derivatization step, they suffer from several problems.
In IPRPLC, for example, the occurrence of system peaks during gradient elution disturbs the quantitation of amino acids. System peaks are caused by the low volatility of ion-pairing reagents (for example, pentadecafluorooctanoic acid, PDFOA) and their adsorption on the column support surface [14,15]. In addition, long equilibration times between runs and column regeneration after a few injections are needed in order to avoid degraded chromatography and retention time drift for amino acids caused by accumulation of the ion-pairing reagent on the column surface. Equilibration times from 9 to 105 min [15][16][17][18] and column flushing from 3 to 30 min are reported in the literature [14,15,19,20]. Another drawback associated with the use of ion-pairing reagents in LC-ESI-MS analysis is the decrease in the ionization efficiency of amino acids due to interference by these easily ionized mobile-phase modifiers [21]. The occurrence of undesirable reactions between ion-pairing reagents and salts present in biological samples can also contribute to this problem. Armstrong et al. [20] reported the formation of a sodium adduct of tridecafluoroheptanoate (TDFHA) during the analysis of 25 physiological amino acids and one peptide in plasma samples by IPRPLC coupled to time-of-flight (TOF) MS, which caused significant signal suppression of the alanyl-glutamine dipeptide and valine. A cation-exchange cleanup step had to be added to the sample preparation in order to decrease the abundance of the TDFHA adduct and improve the accuracy and precision of the analysis [20]. Last but not least, surfactant impurities can make the eluent particularly noisy in the m/z range corresponding to underivatized amino acids, affecting the sensitivity of the analysis [15,22].
Alternatively, when the HILIC separation mode is used instead of a reversed-phase system, underivatized amino acids are retained without any mobile-phase modifier, and the above-mentioned drawbacks associated with the use of ion-pairing reagents can be avoided. Despite that, column care (i.e., installation of an in-line filter and guard column [23]) and long equilibration times (usually about 10 min in order to ensure retention time repeatability [23]) are essential in HILIC analysis. Furthermore, HILIC columns suffer from poor separation efficiency compared with the RPLC technique [24,25].
Due to the above, it is necessary to explore the possibility of applying LC-MS methods for the analysis of derivatized amino acids to large-scale mutant screening in metabolomic studies. It is undeniable that derivatization brings several advantages to LC-MS amino acid analysis in complex biological samples. First, derivatization of amino acids improves their chromatographic properties (symmetric peak shape, better retention and resolution) in RPLC techniques [22]. In addition, if the amino acid derivatives are amenable to electrospray ionization-tandem mass spectrometry (ESI-MS/MS), better ionization efficiency and increased detection sensitivity can be obtained due to the shift of the molecular ion masses and the enhanced hydrophobicity caused by derivatization [22]. Despite these advantages, it is recognized that not all of the typical amino acid derivatization methods are amenable to "omic"-scale projects.
In this study, an analytical platform that combines ultraperformance liquid chromatography with tandem mass spectrometry (UPLC-MS/MS) for targeted amino acid analysis in Arabidopsis thaliana leaf extracts is presented. Our method uses the commercially available amino acid derivatization reagent 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC). Since its introduction as derivatization reagent, AQC has shown interesting features. Reaction of AQC with primary and secondary amino acids is a simple, straightforward process that occurs within seconds and produces stable derivatives; in contrast the hydrolysis of the excess reagent is a much slower reaction [30,41]. The only disadvantages reported in the literature are related to the use of HPLC separation with fluorescence or UV detection: long analysis time (25-65 min), low sensitivity (UV only), peak interference by AQC hydrolysis product and intramolecular quenching [41][42][43][44][45]. An analytical platform that exploits the greater chromatographic capacity and throughput of UPLC and the sensitivity and selectivity of MS/MS would overcome those drawbacks. The applicability of a UPLC-MS/MS method coupled with AQC precolumn derivatization for targeted amino acid analysis in large-scale metabolomics studies is demonstrated.
Development of an Infusion Protocol for ESI-MS/MS Parameter Optimization of AQC Amino Acid Derivatives
Derivatization with AQC offers a simple and reproducible conversion of amino acids into their stable adducts amenable for RPLC [41]. Although the superior throughput and resolution of the UPLC technology can now be combined with UV, fluorescence (FL), or photodiode array (PDA) detection of AQC amino acid derivatives thanks to the commercial availability of the AccQ•Tag Ultra Chemistry package (Waters Corp.) [46,47], the possibility of using UPLC-MS technology has not received enough attention even though the eluents used for AccQ•Tag UPLC amino acid analysis and the AQC adducts are amenable for MS. Armstrong et al. [20] pointed out that although the preparation of samples for LC-MS analysis using amino acid kits simplifies the derivatization step, the non-volatile buffers included in those kits (such as the Waters AccQ•Tag) are not readily compatible with ESI-MS, bringing disadvantages to the LC-MS approach.
Our preliminary studies in the optimization of MS parameters for the analysis of amino acids derivatized with the AccQ•Tag kit proved that signal suppression was particularly problematic during direct infusion of the adducts into the mass spectrometer. As indicated by Armstrong et al. [20], this problem is attributed to the non-volatile borate buffer provided with the AccQ•Tag derivatization kit, which is used for optimum pH adjustment of the reaction solution in order to obtain maximum product yields [41].
To overcome the drawback presented by the borate buffer in direct infusion experiments, an alternative buffer for the AQC derivatization of amino acids is needed in order to facilitate the optimization of critical MS parameters (cone voltage and collision energy) that affect the selectivity and sensitivity of LC-MS/MS amino acid analysis.
Evaluation of the Effect of Buffer System Type, pH and Concentration on the AQC-Amino Acid Derivatization
In order to find a suitable alternative buffer for AQC amino acid derivatization, several factors affecting the outcome of the reaction, such as the chemical nature, concentration, and pH of the reaction medium, were investigated. Two MS-friendly volatile buffers, namely ammonium formate and ammonium acetate, were studied. For comparison purposes, control experiments using the well-established borate buffer system (pH 8.8) as the reaction medium were carried out in parallel. An initial judgment on the suitability of the media under evaluation was made based on the physical appearance of their respective amino acid standard solutions following derivatization with AQC. The use of ammonium formate buffer (pH 7.6) produced dark-yellowish solutions upon AQC amino acid derivatization, possibly indicating the formation of unwanted byproducts. The ammonium acetate buffer (pH 9.3), on the other hand, yielded clear colorless solutions, similar to the borate buffer system, and was selected for further experiments.
The effect of the buffer concentration on the derivatization reaction was investigated next, while keeping the pH constant at 9.3. Six concentrations of ammonium acetate buffer (10, 20, 50, 100, 200 and 500 mM) were tested. All six concentrations yielded clear colorless solutions upon AQC amino acid derivatization. Nevertheless, subsequent UPLC-ESI-MS/MS analysis revealed a decrease in ion intensity with increasing buffer concentration. Evidently, high buffer concentrations led to increased salt deposits on the sample cone surface, decreasing the signal intensity. Signal intensity was particularly affected at buffer concentrations of 100 mM and higher. Increased LC-MS/MS signal suppression with increasing buffer concentration has been reported by other authors [48]. Ammonium acetate buffer concentrations of 50 mM or less did not show significant signal suppression and were found appropriate for AQC amino acid derivatization.
Using a constant ammonium acetate buffer concentration of 50 mM, the pH was then adjusted to 9.0, 9.3 and 10.3. Buffered amino acid solutions at pH 9.0 turned slightly yellowish upon AQC derivatization. At pH 9.0, lowering the buffer concentration from 50 mM to 20 mM produced even darker yellowish solutions, further indicating that both the pH and the buffer concentration affect AQC amino acid derivatization. Ammonium acetate buffer concentrations greater than 50 mM at pH 9.0 were not tested, based on our previous results showing a decrease in ion intensity with increasing buffer concentration. Derivatization at pH 10.3 also proved suitable for AQC adduct formation, and no differences were observed compared to the results obtained at pH 9.3 (data not shown). All further infusion experiments were performed using the 50 mM ammonium acetate buffer system at pH 9.3.
Evaluation of Derivative Stability and Reproducibility of the AQC Derivatization Reaction Using the Alternative 50 mM Ammonium Acetate Buffer (pH 9.3)
The applicability of the 50 mM ammonium acetate buffer (pH 9.3) in the preparation of AQC amino acid derivatives for direct infusion experiments was evaluated. Derivatized amino acid standard solutions (1 × 10 −2 g/L) were infused into the Xevo TQ mass spectrometer. Multiple reaction monitoring (MRM) transitions were determined for 26 amino acids, and the optimal cone voltage and collision energy associated with each transition were established (Table 1). Unlike previous direct infusion experiments performed with the borate buffer, signal suppression and source contamination were not observed with this alternative buffer system, even after 78 consecutive infusions. AQC amino acid derivatives were stable for more than three weeks when stored at room temperature in the dark, further advocating the effectiveness of this buffer for the derivatization reaction (data not shown). Table 1. MRM transitions, cone voltage (CV) and collision energy (CE) determined for AQC-derivatized standard amino acids buffered with ammonium acetate (50 mM, pH 9.3). Experimental conditions: Waters XEVO TQ mass spectrometer; direct infusion at 20 µL/min; final amino acid concentration after derivatization was 1 × 10 −2 g/L. The reproducibility of the derivatization method with the 50 mM ammonium acetate buffer (pH 9.3) was confirmed by UPLC-ESI-MS/MS analysis. The peak area of the isotopically labeled amino acids derivatized with AQC in ammonium acetate medium was measured in nine replicates (final concentration of adducts = 4 × 10 −4 g/L) (Table S1). As shown in Table S1, the relative standard deviation (RSD) of the peak area for all isotopically labeled amino acids was below 9%, indicating high reproducibility of the derivatization reaction.
The efficiency of the reaction in the alternative buffer was further studied by evaluating the linearity of the detector response for standard amino acid solutions over the concentration range from 250 μM to 3.05 pM. Figures S1A and S1B (supplementary information) show typical internal calibration curves of phenylalanine obtained by UPLC-ESI-MS/MS analysis under the conditions described in section 3.5. The response factors for these calibration curves were calculated using relative peak areas, in which the area of phenylalanine was divided by the area of the internal standard, 4-hydroxyphenyl-2,6-d 2 -alanine-2-d 1 (present at a constant concentration of 4 × 10 −4 g/L after derivatization). Figure S1A displays the internal calibration curve for phenylalanine obtained with the conventional borate buffer, whereas Figure S1B shows the internal calibration curve obtained with the alternative 50 mM ammonium acetate buffer (pH 9.3). Clearly, both internal calibration curves exhibit similar response factors, correlation coefficients and slopes, providing additional evidence for the suitability of the ammonium acetate buffer for AQC derivatization of amino acids. It should be mentioned, however, that when the calibration curves were built using absolute peak areas, rather than relative peak areas, the overall peak areas measured with the ammonium acetate buffer were lower than those measured with the borate buffer. One plausible explanation for this observation is that although both borate and ammonium acetate buffers are suitable for the reproducible formation of stable AQC amino acid adducts, lower yields of these derivatives are attained with the ammonium acetate buffer system.
In summary, our results show that 50 mM ammonium acetate buffer (pH 9.3) can be effectively used for AQC amino acid derivatization in direct infusion experiments. The use of this alternative buffer allowed the optimization of mass spectrometric parameters specific for AQC derivatized amino acids (such as cone voltage and collision energy) necessary for LC-MS/MS method development, which could not be otherwise obtained with the borate buffer system (Table 1).
It is worth mentioning that the signal suppression observed with borate-buffered amino acid derivatives during direct MS infusion experiments was not manifested during their UPLC-ESI-MS/MS analysis. This is mainly because during UPLC analysis the sample itself undergoes dilution with the mobile phase. The ammonium acetate buffer therefore simply offers an MS-friendly alternative medium for direct MS infusion experiments in order to optimize the MS parameters necessary for AQC amino acid derivative analysis (a one-time process necessary for method development). The ion suppression observed with the borate buffer during direct infusion of AQC amino acid adducts should not prevent us from combining a rugged derivatization chemistry such as the AccQ•Tag Ultra method with the LC-ESI-MS/MS analytical approach, especially in metabolomics applications where the gain in sensitivity and specificity offered by MS analysis (in the MRM mode) of derivatized amino acids is highly desirable. Therefore, once these MS parameters are optimized, the specific derivatization chemistry of the AccQ•Tag kit (i.e., using the borate buffer) is used for the derivatization step prior to the UPLC-ESI-MS/MS analysis of amino acids in the Arabidopsis mutants. As mentioned before, using a commercially available derivatization kit is preferred because it simplifies the derivatization step; most importantly, the specific chemistry of the AccQ•Tag kit offers higher yields of amino acid adducts. Both factors are necessary for large-scale metabolomics projects.
AccQ•Tag UPLC-ESI-MS/MS Method Development and Evaluation
In our experiments, the UPLC-ESI-MS/MS determination of AQC derivatives of 38 amino acids and 15 labeled internal standards was achieved by operating the mass spectrometer in the MRM mode. The main product from the collision-induced dissociation of all AQC adducts was the ion m/z 171, derived from the cleavage at the ureide bond formed upon derivatization. Therefore, the MRM-MS method was developed to include the transition m/z [M + H] + > 171 for each derivatized amino acid at the corresponding optimized collision energy and cone voltage (Table 1). To increase the overall performance, the MRM-MS method was built to monitor only one amino acid transition per timed function (time windows ranging from 0.42 to 1.03 min).
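As a back-of-the-envelope illustration (not part of the published method), the precursor m/z of such a transition can be estimated from the free amino acid's monoisotopic mass, under the assumption that a single AQC tag adds approximately 170.048 Da and that the precursor is the protonated molecule:

```python
# Sketch: estimate the MRM precursor m/z of an AQC amino acid derivative.
# The tag mass and proton mass below are illustrative assumptions.
AQC_TAG = 170.0480   # monoisotopic mass added by one AQC label (assumption)
PROTON = 1.00728     # mass of a proton

def parent_mz(aa_monoisotopic_mass):
    """Approximate [M + H]+ m/z of a singly AQC-tagged amino acid."""
    return aa_monoisotopic_mass + AQC_TAG + PROTON

# Phenylalanine (monoisotopic mass 165.0790) gives a precursor near m/z 336,
# which fragments to the common AQC product ion at m/z 171.
print(round(parent_mz(165.0790), 2))
```

Each MRM channel then pairs this precursor with the shared m/z 171 product ion, which is why chromatographic separation (rather than mass selection alone) must resolve isomers such as Leu/Ile.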
Although the tandem mass spectrometer provides excellent specificity when operated in the MRM mode, complete resolution of chromatographic peaks corresponding to isomers, isobars and/or isotopomers is desirable for satisfactory quantitation of amino acids in their native or derivatized form [14,19,22,49]. In our study the AccQ•Tag Ultra column, under the gradient conditions described in section 3.5, performed very well and provided good chromatographic resolution for unequivocal peak identification by MS/MS analysis of AQC amino acid derivatives. All the targeted compounds (38 amino acids) and their respective internal standards (15 labeled amino acids) were resolved within 10 min.
The improvement in sample throughput and chromatographic separation brought by UPLC to the analysis of AQC derivatized amino acids was also previously demonstrated by Boogers et al. [46] in their UPLC-PDA method. In their comparative study, 16 amino acids were separated within 8 min (total cycle time = 10 min), which resulted in a reduction in time analysis by a factor of 2.5 compared to the Pico•Tag method (a kit from Waters Corporation which uses the PITC as derivatization reagent). In our study a larger number of amino acids were analyzed without compromise in the separation.
Other authors [10,11,49] have reported problems separating and quantifying some of these problematic amino acid sets in their underivatized form using HPLC-MS/MS. Jander et al. [11], for example, could not differentiate between Ile/Leu, and unsatisfactory resolution between Lys/Gln adversely affected quantitation in Arabidopsis seed extracts since the tail of the considerably more abundant Gln peak masked the signal from Lys. Using the ion-pairing approach, Gu et al. [10] reported irreproducible chromatographic resolution of Glu/Gln and Asp/Asn, and, therefore, the contributions of the naturally occurring heavy isotopomer of Asn to the Asp channel and of Gln to the Glu channel had to be evaluated. Additionally, although the authors could separate Ile and Leu when standard solutions were analyzed, the amino acid pair was only partially resolved in Arabidopsis seed extracts. In the same set of experiments, the pair Thr and HSer always coeluted during chromatography regardless of the type of solution analyzed. To solve these problems, Gu et al. [10] used alternative MRM transitions for these pairs, but they were not as specific and sensitive for the respective amino acids as the transitions involving the most abundant fragment ions (for example, refer to Figure 3 in ref [10]). It was demonstrated before by Petritis et al. [49] that selection of less abundant fragment ions caused a four- to six-fold loss of sensitivity in the LC-MS/MS analysis of native amino acids. It is worth noting that the lack of baseline separation [22] and irreproducible amino acid separation between standards and biological samples [36] have also been observed in the HPLC-ESI-MS/MS analysis of amino acids derivatized with FMOC, butanol, PrCl [22] and TAHS [36].
As a result of the reproducible and satisfactory chromatographic separation of AQC amino acid derivatives obtained with the AccQ•Tag Ultra column in our studies, the development of our MRM-MS method was not complicated by overlapping elution of critical sets of amino acids, in contrast to previous observations with HPLC separation of native and derivatized amino acids. For example, it was not necessary to account for the cross-talk of 13 C isotopes of Asn and Gln into the MRM channels of Asp and Glu, respectively, in order to accurately quantify these amino acids. Furthermore, there was no need to select additional fragment ions for the isomers/isobars, which may be less predominant and decrease the sensitivity of the transition channel used for MS detection of the corresponding amino acid. Additionally, reproducible chromatographic separation was obtained in both amino acid standard solutions and sample extracts (see Figure 1 for an example) and, therefore, quantitation of amino acids was straightforward.
According to these results, the combination of AQC pre-column derivatization with the superior performance of UPLC technologies allows reproducible separation of several critical amino acid pairs before MS/MS analysis, which is a necessity because of their similar nominal masses or identical fragmentation. This adds maximum selectivity and sensitivity to the amino acid analysis.
Method Evaluation
The performance of the UPLC-ESI-MS/MS method for the analysis of AQC-derivatized amino acids was evaluated by measuring the repeatability, linearity, and sensitivity of the analysis. The repeatability of the method was determined by examination of the retention time and peak area ratios (i.e., Area amino acid /Area internal standard ) after intraday injections of standard solutions of derivatized amino acids. The relative standard deviations (RSDs, in %) of the retention times were always less than 2% (n = 30) for the AQC-amino acids (Table 2 and Table S2). RSD values for peak areas ranged from 0.19 to 7.47% (Table 2). These results compare well with the precision studies for the HPLC-ESI-MS analysis of AQC-derivatized amino acids performed by Hou et al. [50]. With their method, the RSD% of the peak area ratios was in the range of 1.1 to 4.0% using a mixed standard of 17 AQC-amino acids at a concentration of 100 μM (n = 6). Repeatability of retention time was not given in their study. It is important to point out that the excellent stability of the retention time was observed in our study with injection of calibration standards and Arabidopsis extracts without any particular column care, indicating the advantage of our technique over the ion-pairing approach in terms of repeatability of the method. Table S2 shows the repeatability of the retention time at two different time points within the chromatographic column lifetime. Retention time shifts were lower than 0.06 min. In the ion-pairing approach, retention time migration of underivatized amino acids after a few consecutive assays is especially problematic due to accumulation of the ion-pairing reagent on the surface of the column material [19,20]. Retention time shifts for native amino acids of as much as 1 or 1.5 min have been reported in the literature for IPRPLC-MS based studies [19,20].
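The RSD figures reported above follow the usual definition (sample standard deviation divided by the mean, times 100). A minimal sketch with hypothetical retention times:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation: sample SD / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical intraday retention times (min) for one AQC derivative
rts = [4.51, 4.52, 4.50, 4.51, 4.53, 4.51]
print(round(rsd_percent(rts), 3))
```

The same function applies to peak area ratios; values below 2% for retention times correspond to shifts of only a few hundredths of a minute at these elution times.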
Therefore, although intra-day RSD values for HPLC retention times found by IPRPLC-MS/MS methods could prove comparable to the values reported in this study (for example, > 3.8% [17], > 1.3% [10]), caution must be exercised when making a direct comparison since, in some cases, retention time stability, and therefore reproducible amino acid separation, in IPRPLC-MS/MS approaches is contingent on frequent column flushing with pure organic solvent after a few assays.
The evaluation of the method was continued with data collection from the analysis of twenty solutions containing 38 derivatized physiological amino acids with concentrations ranging from 25 μM to 48 fM and 15 stable-isotope-labeled amino acids at a fixed concentration of 4 × 10 −4 g/L. The data was used to create an internal calibration curve for each amino acid using the respective internal standard, as given in Table S3. Using the internal standardization method, plots of relative peak area versus amino acid concentration were generated using the TargetLynx software and were used to calculate the linearity (correlation coefficient and dynamic range) and detection limits shown in Table 3. Linear regression analysis of the calibration curves showed correlation coefficients (R 2 ) between 0.9810 and 1.000, within the low and high limits of linearity specific for each amino acid, as presented in Table 3. In addition, the overall process efficiency was calculated as PE (%) = 100 (area spiked before extraction /area standard solution ) as described in Gu et al. [10]. The area standard solution corresponds to the area of the internal standard in the neat standard solutions used to prepare the calibration curves, and area spiked before extraction corresponds to the peak area in the sample extract. The overall process efficiencies ranged from 65.0 to 99.4%. As stated by Gu et al. [10], process efficiencies greater than 100% occur when coeluting species present in the sample matrix contribute to the detected signal of the amino acid.
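The process-efficiency formula quoted from Gu et al. [10] is a simple area ratio; a sketch with made-up peak areas:

```python
def process_efficiency(area_spiked_before_extraction, area_standard_solution):
    """Overall process efficiency:
    PE (%) = 100 * (area spiked before extraction / area standard solution)."""
    return 100.0 * area_spiked_before_extraction / area_standard_solution

# Hypothetical internal-standard peak areas
print(process_efficiency(6500.0, 10000.0))
```

A value above 100% would indicate that matrix species coeluting with the internal standard add to its detected signal, which is why the observed range of 65.0 to 99.4% is consistent with an absence of such interference.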
There was no evidence of such contribution according to the results presented in Table S4. Subsequently, the limits of detection (LOD) were established by using the method of the blank and were calculated as three times the standard deviation of the peak areas observed from the blank signals, divided by the slope of the calibration curve obtained for the given amino acid. The LOD values obtained (Table 3) ranged from 1.02 × 10 −11 to 1.06 × 10 −8 M, suggesting that the analytical method presented in this study is 1 to 5 orders of magnitude more sensitive than other existing LC-MS and LC-MS/MS approaches [14,15,17,18,21,22,[36][37][38][39][49][50][51][52][53] for the analysis of native or derivatized amino acids, as shown in Table 4. The LOD values reported by Shimbo et al. [37] for 20 amino acids derivatized with TAHS are comparable to those obtained in our study; however, our UPLC analysis of AQC-amino acids has higher throughput (38 amino acids and 15 internal standards separated three times faster). Table 4 also shows the faster separation time (5 times shorter chromatographic run) and better sensitivity (3 orders of magnitude lower LOD) added to the analysis of AQC-amino acids by the combination of UPLC with tandem mass spectrometry operated in the MRM mode.
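The blank-based LOD described above can be written directly as three times the standard deviation of the blank responses divided by the calibration slope; a sketch with hypothetical numbers:

```python
import statistics

def lod_from_blank(blank_areas, calibration_slope):
    """LOD = 3 * SD(blank peak areas) / slope of the calibration curve.
    If the slope is in area units per mol/L, the LOD comes out in mol/L."""
    return 3.0 * statistics.stdev(blank_areas) / calibration_slope

# Hypothetical blank peak areas and a hypothetical slope (area per mol/L)
print(lod_from_blank([120.0, 130.0, 125.0, 128.0], 3.0e12))
```

Each amino acid gets its own LOD because both the blank variability and the calibration slope are transition-specific.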
Method Application to Screening of Arabidopsis Mutants
Currently, metabolomic studies require high-throughput and sensitive targeted analytical platforms for screening of a large number of genetic variants. Thus, after its evaluation, our AccQ•Tag-UPLC-ESI-MS/MS method was used for the quantitative amino acid determination in A. thaliana leaf extracts (9 wild-type samples and 75 mutants; 6 biological replicates each; 504 samples in total) to demonstrate its applicability as a targeted approach for metabolomics analysis (for the complete list of A. thaliana mutant stocks used in this study refer to Ref. [7]). Our method is among the seven analytical platforms employed by the Arabidopsis metabolomics consortium [54], which aims to evaluate the power of using a combination of untargeted metabolomics platforms and targeted profiling methods for key metabolite sets in order to identify the function of GUFs.
Thirty-five out of the 38 targeted amino acids were identified and detected above their LODs in the A. thaliana leaf extracts. Figure 2 shows the amino acid profiles of two mutant stocks carrying T-DNA mutant alleles in genes of known function (GKFs) and GUFs. Quantitation was based on relative peak areas (as response) of each compound using the calibration curves that were constructed employing the internal standard method. Asparagine (Asn), serine (Ser), glutamine (Gln), arginine (Arg), glycine (Gly), ethanolamine (MEA), aspartic acid (Asp), threonine (Thr), L-alanine (L-Ala), γ-amino-n-butyric acid (Gaba), proline (Pro), lysine (Lys), valine (Val), and isoleucine (Ile) were among the most abundant amino acids in the extracts. 3-Methyl-histidine (3-Mehis), 1-methyl-histidine (1-Mehis), creatinine (Cr), cystathionine (Cysthi), cystine (Cys-S-S-cys), cysteine (Cys), and homocysteine (Hcy) were not detected (below LOD) in any of the samples studied (wild-type and mutants). Details on the statistical data processing were already published in two previous papers by the Arabidopsis Metabolomics Consortium [1,7] and will not be covered in this paper. A data quality check performed to determine the variability in amino acid concentration between different biological replicates showed correlation coefficients between 0.61-1.00. Correlation coefficients were > 0.7 in the majority of the cases, indicating the high reliability between the replicates obtained with our amino acid profiling platform. Figure S3 shows the data quality plot for the analysis of amino acids in the mutant SALK_021108 (AT1G52670). Data quality plots for all the mutants analyzed with our AccQ•Tag-UPLC-MS/MS platform can be found in the web portal of the consortium [54].
It is obvious that amino acid profiling alone is not enough to represent the metabolic effect of gene knockout in the group of T-DNA mutant stocks selected in the initial three metabolomic experiments (E1, E2 and E3) and, therefore, interpretation of the biological significance of the data is outside the scope of this manuscript. However, the combination of our AccQ•Tag-UPLC-ESI-MS/MS platform with other targeted and untargeted methods gives a more holistic view of changes in the metabolome. The statistically evaluated data compiled by the consortium of laboratories (including our research group) is publicly available through the web-based project database [54] in order to encourage its use by the metabolomics community for the formulation of hypotheses about the function of GUFs. A discussion of exemplary datasets was already published elsewhere by members of the consortium [7].
Chemicals and Reagents
The L-amino acids kit (Sigma-Aldrich, Co., St. Louis, MO, USA) was used for direct infusion experiments and a commercial mix of amino acids and related compounds (Sigma-Aldrich, Co., St. Louis, MO, USA) was employed in the preparation of calibration standards. Asparagine, glutamine and homoserine were purchased separately (Sigma-Aldrich, Co., St. Louis, MO, USA) since they are not included in the commercial mix. Stable-isotope-labeled reference compounds (L-asparagine- 15
Plant Material
Seed stocks of Arabidopsis thaliana mutants were obtained from ABRC and propagated by the central lab of the Arabidopsis Metabolomics Consortium at Iowa State University. This paper focuses on the results obtained by targeted amino acid analysis on leaf extracts of 69 mutant lines selected for three metabolomic experiments (E1, E2, and E3) designed by the consortium. Six biological replicates of each mutant line were provided along with control samples (Columbia (Col-0) ecotype) for each metabolomic experiment.
The list of T-DNA knock-out mutants, the rationale for their selection, plant growth conditions, and protocol for plant harvesting are published elsewhere [1,7] and also available in the project database [54]. Plant material was stored at −80 °C upon arrival.
Amino Acid Extraction from Arabidopsis Samples
Amino acids were extracted from 5 mg (dry weight) of Arabidopsis leaf sample with 125 μL of 50% (v/v) methanol:water solution spiked with isotopically labeled internal standards at 4 μg/mL. Samples were ground in a mixer mill for 60 s, incubated on dry ice for 5 min, and sonicated in a water bath for 1 min. Two cycles of buffer extraction, grinding, dry ice incubation, and sonication were completed. At the end of each cycle, the debris was removed by centrifugation at 13,000 rpm and 4 °C for 8 min in a Beckman-Coulter refrigerated benchtop centrifuge. The extract was transferred each time to a limited-volume vial.
AccQ•Tag Ultra Amino Acid Derivatization
The AccQ•Tag Ultra derivatization kit (Waters Corp.) was used in all derivatization procedures, unless otherwise noted. AccQ•Tag Ultra borate buffer was replaced with the ammonium acetate buffer only for direct infusion mass spectrometry experiments. Following the protocol provided by the manufacturer, 10 μL of either a standard amino acid mix solution or an Arabidopsis leaf extract was mixed with 70 μL of AccQ•Tag Ultra borate buffer (pH = 8.8). The derivatization was carried out by adding 20 μL of reconstituted AccQ•Tag Ultra reagent (3 mg/mL of AQC in acetonitrile) to the buffered mixture. The sample was immediately vortexed followed by incubation for 15 min at 55 °C.
To maintain consistency between the time of extraction and time of analysis due to the large scale of the project, derivatized samples were prepared and analyzed by UPLC-ESI-MS/MS in daily batches. Eluent A was AccQ•Tag Ultra solvent A (acetonitrile (10%), formic acid (6%), ammonium formate in water (84%)), eluent B was 100% AccQ•Tag Ultra solvent B (acetonitrile), and the column flow rate was 0.7 mL/min. The autosampler temperature was set at 25 °C and the column temperature at 55 °C. The sample injection volume was 1 μL.
UPLC-ESI-MS/MS Analysis
MS method development started with the direct infusion of individual AQC-derivatized amino acids (1 × 10 −2 g/L) into the ESI source of the mass spectrometer at the default infusion rate (20 μL/min). MRM transitions with their respectively optimized cone voltage and collision energy values were determined for each metabolite using the Waters IntelliStart software. The common main product from the collision-induced dissociation of all the AQC adducts was the ion m/z 171, derived from the cleavage at the ureide bond formed upon derivatization. Using the MS parameters fine-tuned by IntelliStart, derivatized standard amino acid solutions (25 μM) were injected into the UPLC-ESI-MS/MS system to determine their retention times.
The final MRM-MS method employed for the quantitation of amino acids and internal standards was composed of 53 ESI+ timed functions properly segmented over the 10 min chromatographic run. The time segment of each function was selected based on the retention times observed for the metabolites and reference compounds, and ranged from 0.42 to 1.03 min. To increase the overall performance, the MRM-MS method was built to monitor only one transition channel per MRM function. The most sensitive parent-daughter ion transition of each derivatized amino acid (i.e., m/z [M + H] + > 171) was selected for quantitation.
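Scheduling one transition per timed function amounts to centering an acquisition window on each observed retention time; a sketch of that bookkeeping (the retention time and window width below are hypothetical, though the actual windows spanned 0.42 to 1.03 min):

```python
def mrm_window(retention_time_min, width_min):
    """Return the (start, end) in minutes of a timed MRM function
    centered on the observed retention time, clamped at zero."""
    half = width_min / 2.0
    return (max(0.0, retention_time_min - half), retention_time_min + half)

# Hypothetical peak at 4.51 min with a 0.6 min window
print(mrm_window(4.51, 0.6))
```

Restricting each function to a narrow window keeps the dwell time per transition high, which is what preserves sensitivity when 53 functions share a 10 min run.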
The following ionization source settings were used: capillary voltage, 1.99 kV (ESI+); desolvation temperature, 600 °C; desolvation gas flow rate, 1000 L/h; source temperature, 150 °C. The analyzer settings were as follows. For quadrupole 1, the low mass resolution was 2.91387 and the high mass resolution was 15.1501; while for quadrupole 2, the values were 2.97214 and 14.7422, respectively. Argon was used as collision gas at a flow rate of 0.15 mL/min.
The UPLC-ESI-MS/MS system control and data acquisition were performed with the Waters Corporation MassLynx TM software. Data analysis was conducted with the TargetLynx TM software (Waters Corporation).
UPLC-ESI-MS/MS Method Evaluation and Applicability
Method evaluation involved the determination of linearity (regression coefficient and dynamic range), sensitivity (detection limits), and reproducibility (relative standard deviations of retention times and peak areas) of the analysis for each amino acid. Working standards with concentration range from 250 μM to 476.8 pM were prepared by serial dilutions of a 500 μM amino acid mix solution spiked with isotopically labeled internal standards at 4 × 10 −3 g/L. The serial dilutions were performed in a Biomek 2000 Beckman Coulter laboratory automation workstation (Fullerton, CA) using a solution containing the internal standards at 4 × 10 −3 g/L in a 50% (v/v) methanol:water mixture in order to keep their concentration constant. After derivatization the concentrations of amino acids were decreased 10-fold and the concentration of all internal standards was maintained constant at 4 × 10 −4 g/L. Calibration curves were obtained by replicate injection of each of the derivatized working standards and were constructed as plots of relative peak area (Area amino acid /Area internal standard ) versus amino acid concentration using the TargetLynx software. The assignments of internal standards are given in Table S4.
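The internal calibration described above reduces to an ordinary least-squares fit of relative peak area against concentration (TargetLynx performed this step in the actual workflow); a self-contained sketch with invented data points:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration points: concentration (µM) vs. A_aa / A_IS
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
rel_area = [0.11, 0.20, 0.41, 0.79, 1.62]
slope, intercept = fit_line(conc, rel_area)
print(round(slope, 4), round(intercept, 4))
```

Unknown concentrations are then read off as (relative area − intercept) / slope, which is why keeping the internal standard concentration constant across all dilutions matters.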
The applicability of the UPLC-ESI-MS/MS method for sensitive, high-throughput analysis of amino acids was evaluated by determining their concentrations in derivatized Arabidopsis thaliana leaf extracts obtained as described in Sections 3.3 and 3.4.
Conclusions
An AccQ•Tag-UPLC-ESI-MS/MS method that uses stable-isotope-labeled internal standards and scheduled MRM functions was presented for reliable and sensitive quantitation of amino acids. The major advantage offered by this method was the enhanced sensitivity of the analysis, which allowed detection of amino acids at concentration levels down to 1.02 × 10⁻¹¹ M (i.e., 10.2 attomole on column). This represents an improvement in sensitivity for amino acid analysis of 1 to 5 orders of magnitude compared to existing methods. The AccQ•Tag-UPLC-ESI-MS/MS method was successfully applied to the analysis of 504 Arabidopsis leaf extracts and could be easily implemented for the analysis of amino acids under a typical workflow for metabolomics research. The analysis of the plant extracts by the AccQ•Tag-UPLC-ESI-MS/MS method was completed with minimum column care, high repeatability, and reproducible separation, which is in sharp contrast to existing HILIC and IPRPLC approaches. Contrary to a common misconception with respect to precolumn derivatization methods, the AQC derivatization worked well for all the amino acids tested and the AccQ•Tag-UPLC-ESI-MS/MS method gave reliable data for metabolomic studies.

(Table residue: internal-standard assignments, e.g., valine-d8 for valine, leucine-d10 for leucine, phenyl-d5-alanine for phenylalanine, tryptophan-d5 for tryptophan; see Table S4.)
Digital Correlation Microwave Polarimetry : Analysis and Demonstration
The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in earth remote sensing is presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method is successfully compared with results using a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrates that the polarimeter is highly useful for ocean wind-vector measurement.
I Introduction
Recent advances in the interpretation of polarimetric microwave thermal emission from the Earth's oceans and atmosphere have prompted the study of new retrieval techniques for near-surface ocean wind vectors and mesospheric temperature profiles [1]. These techniques are facilitated by a more complete characterization of the polarization characteristics of the upwelling radiation field than obtainable using conventional single-or dual-polarization radiometers.
As an example of these techniques, polarimetric measurements have been shown to greatly facilitate the retrieval of ocean surface wind direction [2,3].
The quantity used to fully describe the second-order statistics of the quasi-monochromatic radiation field is the modified Stokes vector, given in (1).
The parameters T_v and T_h can be measured using standard linearly-polarized total power radiometers [5]. Detection of the third and fourth Stokes parameters, however, requires two additional measurements to effectively perform the correlations in (1). The various types of polarimetric radiometers fall into two basic categories: adding polarimeters (AP) and direct correlating polarimeters (DCP). The adding polarimeter uses measurements of the brightness temperature of at least two additional polarization states, e.g., 45° slant-linearly polarized (T_45°) and either left- or right-hand circularly polarized (T_l or T_r). From the four measured brightness temperatures and using the Stokes parameter rotational transformation [6], the third and fourth Stokes parameters can be determined, e.g., via (2). The direct correlating polarimeter estimates T_U and T_V by cross-correlating the instantaneous voltage signals of the vertical and horizontal channels.
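The adding-polarimeter arithmetic can be illustrated with the standard rotational-transformation relations T_U = 2·T_45° − (T_v + T_h) and T_V = 2·T_lc − (T_v + T_h). The sketch below assumes these textbook forms, since the paper's equation (2) is not reproduced in this extract.

```python
def stokes_from_adding(Tv, Th, T45, Tlc):
    # Third and fourth Stokes parameters from four adding-polarimeter
    # brightness temperatures (assumed textbook rotational relations):
    #   T_45 = (Tv + Th + T_U) / 2   and   T_lc = (Tv + Th + T_V) / 2
    TU = 2.0 * T45 - (Tv + Th)
    TV = 2.0 * Tlc - (Tv + Th)
    return TU, TV

# Example: Tv = 150 K, Th = 100 K, with T_U = 20 K and T_V = 4 K folded
# into the slant-linear and left-circular measurements.
TU, TV = stokes_from_adding(150.0, 100.0, 135.0, 127.0)
```

The direct correlating polarimeter obtains the same T_U (and T_V) from the cross-correlation of the v- and h-channel voltages instead of from extra radiometric looks.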
The actual correlation can be performed by either analog or digital multiplying circuitry.
If the time-varying voltages v_v(t) and v_h(t) are assumed to be stationary and ergodic [7], then the covariance estimate is

R̂_vh = (1/τ) ∫₀^τ v_v(t) v_h(t) dt,   (4)

where τ is the integration time. Since the IF voltages are related to the incident field quantities by the receiving antenna's effective area and the receiver's signal transfer characteristics, measuring R̂_vh is equivalent to measuring T_U (5), where ρ = R_vh/(σ_v σ_h) is the correlation coefficient and T_sys,α are the system temperatures of the radiometer channels. Several mechanisms can contribute to calibration errors in (4) and (5), such as antenna cross-polarization coupling, the amount of which must be known. One method for comprehensive calibration of the first three modified Stokes parameters uses a rotating polarized calibration standard [8]. The polarized standard presents to the receiver a strongly polarized but precisely determined radiation field and allows complete determination of the gains and offsets for the first three Stokes parameters. Calibration of the fourth Stokes parameter channel can be accomplished by insertion of an appropriate 90° shift in the RF path using, e.g., a quarter-wave plate [9]. Use of the polarized standard in space, however, requires additional hardware beyond the conventional ambient and cold blackbody standards that are commonly used.
In the implementation of (4) and (5) it is desirable to design a system that requires a minimal amount of calibration hardware. While an analog correlator can be used to determine T_U or T_V, its response will generally require the in-situ identification of relatively large leakage gains g_Uv and g_Uh from T_v and T_h, viz.:

v_U = g_Uv T_v + g_Uh T_h + g_UU T_U + g_UV T_V + o_U,   (6)

as well as the offset term o_U. While leakage gains can be minimized by proper tuning and balancing, elimination of long-term drift in detection and video components (their root cause) can be prohibitively expensive.
A solution to the above problem of precise measurement of either T_U or T_V can be found through digital correlation.
Here the radio-frequency (RF) or intermediate-frequency (IF) signals are sampled at the Nyquist rate, the digital samples are cross-correlated using fast multiplication circuitry, and the correlation integral (4) is performed via digital accumulation. Provided that the digitized signal contains no DC component and the A/D conversion is linear and unbiased, the correlation coefficient ρ̂ can be obtained without leakage or offset. A further advantage of using a digital correlator with more than one bit (or two levels) of discretization is that in-situ calibration can be performed using only conventional ambient and cold unpolarized views, for example, an ambient blackbody target and cold space.
A Mean Statistics
The input signals to a correlator, v_a(t) and v_b(t), are modeled as jointly-Gaussian stationary random processes with root mean square (RMS) voltages σ_va and σ_vb and correlation coefficient ρ = R_vavb/(σ_va σ_vb).
If the processes are sampled with period T at or below their Nyquist rate, then the sample sequences consist of independently and identically distributed pairs with a joint Gaussian probability density function (pdf) (7). The three-level quantization performed on the input signals by the A/D converter can be modeled by a nonlinear transfer function (8), where the quantities ±v_th,α are the threshold levels of the A/D converters (see also Figure 2), with the subscript α denoting either channel a or b. For typical CMOS or ECL logic, v_th,α ≈ 0.05 to 0.50 volts; therefore, the necessary microwave signal power can range from -12 to +8 dBm in a standard 50 Ω system.
These three statistical parameters (the digital variances ŝ_a², ŝ_b², and the digital covariance r_ab) are measured by accumulation of the outputs of a simple digital circuit such as shown in Figure 3.
The statistics of ŝ_a², ŝ_b², and r_ab and their relationship to T_a, T_b, and T_U are obtained by integrating the right-hand sides of (9) and (10) against the pdf (7). The expected value of the digital variance is

⟨ŝ_α²⟩ = 2[1 - Φ(θ_α)],   (11)

where θ_α = v_th,α/σ_vα and Φ(·) is the normal cumulative distribution function. Figure 4 is a plot showing the relationship between the digital variance and RMS input voltage at a fixed threshold voltage. As will be shown in Section B, for maximum sensitivity in T_U the value of θ_α should be close to 0.61. Inverting (11) yields a simple estimate of the signal standard deviation for a measured digital signal variance (12) or, in terms of antenna brightness temperatures, (13) and (14), where R_0 is the system impedance, B is the bandwidth, G_0 is the system gain, and T_REC,α is the receiver noise temperature. In general, the parameters (v_th,α, G_0) and T_REC,α are slowly time varying and represent system gains and offsets that must be identified via periodic calibration. The relationship between the input correlation coefficient ρ and the expected value of the digital covariance r_ab is similarly straightforward and can be obtained by integrating the right-hand side of (10) against the joint pdf over two dimensions. The problem can be reduced, however, to an integration over one dimension using Price's theorem [17,18]. Price's theorem relates the covariance of the input signals to the digital correlation coefficient: because the derivative of the three-level transfer function is a pair of Dirac impulses, h'(v_α) = δ(v_α - v_th,α) + δ(v_α + v_th,α), where δ is the Dirac impulse function, the derivative ∂r_ab/∂R_vavb reduces to a sum of the joint pdf f evaluated at the threshold points (±σ_va θ_a, ±σ_vb θ_b) (15). The input covariance can be related to the input correlation coefficient using the chain rule of differentiation:
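Equation (11) is easy to check numerically: the total-power counter simply records how often |v| exceeds the threshold, so a Monte-Carlo draw of Gaussian samples should reproduce 2[1 − Φ(θ)]. A minimal sketch (the sample count and seed are arbitrary choices, not from the paper):

```python
import math
import random

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def digital_variance(theta, n=200000, seed=1):
    # Fraction of unit-variance Gaussian samples whose magnitude exceeds
    # the normalized threshold theta -- what the counter accumulates.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(rng.gauss(0.0, 1.0)) > theta)
    return hits / n

theta = 0.61                      # near-optimal threshold (Section B)
simulated = digital_variance(theta)
predicted = 2.0 * (1.0 - phi(theta))   # eq. (11)
```

Inverting the same relation gives the threshold (and hence σ_v) estimate used later for calibration.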
∂r_ab/∂ρ = (∂r_ab/∂R_vavb)(∂R_vavb/∂ρ) = σ_va σ_vb ∂r_ab/∂R_vavb.   (16)

Thus, the digital correlation coefficient is a one-dimensional integral of the pdf over ρ (17). In practice θ_a and θ_b are taken to be the estimates θ̂_a and θ̂_b from (13). The relationship between the input correlation coefficient and the digital covariance is plotted in Figure 5 for a fixed threshold level. For a given r_ab, the correlation estimate ρ̂ is determined by nonlinear inversion of (17).
The inversion technique must be carefully chosen so that systematic errors arising from the approximation are not larger than the statistical uncertainty of the estimate. This requirement is quite stringent. For example, from (5), a radiometer with a system temperature of T_sys = 500 K and a noise requirement of ΔT_rms = 0.1 K for the third or fourth Stokes parameter requires a measurement of ρ with absolute error less than 0.1 K/(2 × 500 K) = 1 × 10⁻⁴. The two existing inversion techniques for three-level correlators are based upon power series inversions of either the bivariate normal integral [19] or the one-dimensional integral (17) [20]. In the former method [19] the inversion was derived for the cross-correlator, while for the latter method it was derived for the auto-correlator. Both share similar convergence characteristics; e.g., third-order expansions are required to obtain 0.1% accuracy, or an absolute error of 10⁻⁴, for |ρ| < 0.6. The latter technique, however, is mathematically simpler and permits an analysis of the effects of system nonidealities (considered in Section A). Since this expression was originally derived for the autocorrelator, a new and more accurate expression tailored to the cross-correlator is presented here (the derivation is given in Appendix A). First the integrand of (17) is approximated by a Taylor series about ρ' = 0. Next, the series is integrated to obtain the inversion polynomial, whose coefficients c₁, c₃, and c₅ are given by (20). Acceptable inversion errors for Earth-science polarimetry are attainable using this fifth-order power series.
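The one-dimensional integral (17) and its polynomial inversion can be sketched numerically. Using Price's theorem as stated above, dr/dρ is a sum of the bivariate normal pdf evaluated at the four threshold points, and an odd fifth-order polynomial fitted to the resulting r(ρ) curve inverts it well below the 10⁻³ level for |ρ| ≤ 0.6. This is a numerical illustration, not the paper's closed-form series:

```python
import math
import numpy as np

def f2(x, y, rho):
    # bivariate standard normal pdf
    d = 1.0 - rho * rho
    return math.exp(-(x * x - 2.0 * rho * x * y + y * y) / (2.0 * d)) / (
        2.0 * math.pi * math.sqrt(d))

def r_of_rho(rho, theta=0.61, steps=400):
    # Integrate dr/drho' from 0 to rho (midpoint rule).  The quantizer
    # derivative is a pair of Dirac impulses at +/- theta, so dr/drho'
    # is the pdf summed over the four threshold sign combinations.
    h = rho / steps
    total = 0.0
    for i in range(steps):
        rp = (i + 0.5) * h
        total += 2.0 * (f2(theta, theta, rp) + f2(theta, -theta, rp)) * h
    return total

rhos = np.linspace(-0.6, 0.6, 61)
rs = np.array([r_of_rho(r) for r in rhos])
coef = np.polyfit(rs, rhos, 5)          # fifth-order inversion polynomial
err = float(np.max(np.abs(np.polyval(coef, rs) - rhos)))
```

The fitted even-power coefficients come out near zero, reflecting that r(ρ) is an odd function, consistent with the odd-power series in Appendix A.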
B Sensitivity
A radiometer's fundamental sensitivity is limited by the available bandwidth, observation time, and receiver noise temperature. The radiometric sensitivity of a polarization correlating radiometer is given by (21), where σ_ρ̂ is the standard deviation of the estimate ρ̂. For continuous (analog) correlation using N independent samples and small values of ρ it can be shown that lim_{ρ→0} σ_ρ̂ = 1/√N [11]; the analog sensitivity then follows from (5) and (21) as (22). For the three-level system with balanced channels (θ_a = θ_b = θ), the sensitivity for vanishingly small correlation is derived in Appendix A. The impact of quantization noise can be minimized by proper selection of the threshold voltages v_th,α. The optimal value of θ (determined numerically) is 0.61, with a corresponding sensitivity of

ΔT_U,rms = 2.47 √(T_sys,v T_sys,h) / √N.   (25)

Comparing this expression to (22) we find that the 1.6-bit digital correlator achieves 81% of the sensitivity available from an ideal analog correlator.
The total power channels are useful for normalized threshold level estimation as well as measurements of the first two Stokes parameters. The sensitivity of a total power channel can be calculated in a similar fashion from (26). With the threshold levels θ_a = θ_b = 0.61 (i.e., set for optimal cross-correlator sensitivity), the total power channels have a fundamental sensitivity of (see Appendix B)

ΔT_α,rms = 2.20 T_sys,α / √N.   (27)

The ideal (analog) total power radiometer has a sensitivity of T_sys/√N. Thus, a three-level digital total power radiometer can achieve 41% of the sensitivity of an ideal analog radiometer when the threshold voltages are optimized for the cross-correlation channel.
It is noted that in (27) the optimal sensitivity for the total power channels is not attained because the threshold voltages were chosen to optimize the cross-correlation channel. In other words, the threshold level value of 0.61 is the optimum value for small cross-correlations; however, this value is not optimal for the total power channels. This choice is acceptable, however, because in the polarization correlating radiometer the total power channels are primarily used to measure the relative threshold level values. If the thresholds were to be set for optimum total-power sensitivity, the digital total-power radiometer could achieve 78% of the sensitivity of the analog radiometer with θ_α = 1.58.
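As a numeric illustration of (25) and (27) (the 2.47 and 2.20 coefficients are taken from the text; N = Bτ independent samples is assumed):

```python
import math

def delta_T_U(Tsys_v, Tsys_h, bandwidth, tau):
    # eq. (25): three-level cross-correlator sensitivity for T_U
    N = bandwidth * tau               # independent samples
    return 2.47 * math.sqrt(Tsys_v * Tsys_h) / math.sqrt(N)

def delta_T_total_power(Tsys, bandwidth, tau):
    # eq. (27): total-power sensitivity with thresholds set for the
    # cross-correlation channel (theta = 0.61)
    return 2.20 * Tsys / math.sqrt(bandwidth * tau)

# Example: 500 K system temperatures, 100 MHz bandwidth, 1 s integration
dtu = delta_T_U(500.0, 500.0, 100e6, 1.0)          # = 0.1235 K
dtp = delta_T_total_power(500.0, 100e6, 1.0)       # = 0.110 K
```

With these example numbers the T_U channel meets the 0.1 K class requirement quoted earlier only after slightly longer integration, which is why the threshold optimization matters.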
Systematic Errors
Two sources of systematic errors in a polarimetric radiometer are analyzed in this section: (1) A/D converter threshold asymmetry and (2) correlator gain-attenuating effects such as hysteresis and timing skew. Threshold asymmetry is modeled by shifting each threshold by a small offset v_δα in the transfer function (28) (see Figure 6). Relations (9) and (10) can now be recomputed to reveal the effects of threshold offsets.
A.1 Correlation channel
The digital correlation coefficient (17) including offsets becomes (29). Equation (29) can be considered equivalent to (17) but with small gain and offset perturbations of order δ_a and δ_b. We show here that the gain error is negligible if the input correlation coefficient is small. In contrast, the offset error is found to be an order of magnitude larger than the gain error.
This correlation offset, however, is parameterized in terms of the threshold level offset and may be compensated via calibration using two unpolarized standards.
The correlator offset error arises from the constant of integration r_ab|_{ρ=0} in (29). This constant was not explicitly shown in (17) because ideally it is zero. The constant can be evaluated by taking the expected value of (10) with ρ = 0 and using the modified definition of h(v - v_δα) in (28). Clearly, when either threshold level is ideal (i.e., δ_α = 0) the above term vanishes. A shift in both threshold levels, however, causes the offset error to become non-zero, the expected value of which can be separated into a product of two expected values since v_a and v_b are statistically independent when ρ = 0. The resulting correlation offset is given by (31). Assuming δ_a and δ_b are small allows (31) to be approximated using Taylor series expansions about ±θ_a and ±θ_b. In the first term of the product (32), the δ² terms cancel to leave an odd-valued function. The linear behavior of (32) makes the threshold asymmetry a significant source of error. The constant of integration in (29) is the product of two such terms (33), where π_δ is the normalized threshold offset product. The threshold asymmetries thus affect the digital correlation offset by an amount proportional to the normalized offset product, expressed using voltages in (36). This product is generally a slowly time-varying hardware constant, but as will be shown in Section IV, it can be estimated using a conventional two-look unpolarized calibration.
The correlator gain perturbation is found by expanding the integrand of (29) in a three-dimensional power series in ρ', δ_a, and δ_b, then integrating the resulting expansion with respect to ρ'. The algebra involved (see Appendix C) is cumbersome, although the result can be expressed as a sum of two series. The first series, r_ab|_{δa=δb=0}, is the ideal relationship between ρ and r given by (17). The second series is an error series δr_ab(δ_a, δ_b, ρ) caused by nonzero threshold offsets δ_a and δ_b. Collecting these terms gives (37), truncated at O(ρ⁴) and O(δ³). Assuming that the nominal threshold levels are equal to the optimal value (θ_α = 0.61), the error series becomes (38): a sum of components that are O(δ²ρ), O(δ²ρ²), and O(δ²ρ³), respectively.
To determine which components of the series are significant we assume that ρ = 0.1 and evaluate each term. To render these error terms insignificant, the quantity δ² must be sufficiently small. Using the previous criterion that all errors in ρ of magnitude ≲ 10⁻⁵ are negligible, the normalized threshold offsets should be no larger than 10⁻², that is, v_δα ≤ 10⁻² σ_vα. This is readily attainable using precision electronics for σ_vα ≈ 0.5 V. If threshold offsets are not small enough, then the offsets should at least be controlled to render insignificant the higher-order terms (e.g., ρ², ρ³, ...), in which case only a correctable gain error occurs. For this latter case, it is sufficient that v_δα ≤ 10⁻¹ σ_vα to make the magnitude of the ρ² and ρ³ terms less than 10⁻⁵. The remaining error is linear in ρ and can be modeled as an effective change in the correlator gain (39). Note that the ideal correlator output r_ab|_{δa=δb=0} is implicitly included in (39). Typically the threshold offsets are small enough that the gain perturbation is only a few percent.
A.2 Total power channel
The effect of threshold asymmetry on the total power channels is a perturbed system gain and offset along with a residual nonlinearity that we show to be negligible. Consider the expected value of the total power output (40): this expression is a simple extension of (11) but includes the threshold asymmetry.
Applying a series expansion, (40) can be written as (44): there is a gain term affecting the total power channel output by a factor of (1 - ½δ_α²) and an offset of approximately ½δ_α². This additional system gain and offset is easily identified via a standard two-look calibration. The nonlinear residual is given by (41). Assuming the optimal value for the threshold levels (θ_α = 0.61), the residual is small; if δ_α ≤ 10⁻², the nonlinear residual term becomes ~10⁻⁶, which is insignificant for either total power or threshold estimation.
B Other Correlator Gain-Attenuating Sources

Analog-to-digital converter hysteresis acts to reduce the correlation output by an amount proportional to the magnitude of the hysteresis. This effect has been modeled by D'Addario et al. [19] assuming a uniformly distributed region of uncertainty about the nominal threshold.
However, this statistical model underestimates the attenuation effect because the hysteresis is treated as a process that is statistically independent of the signal. Rather, hysteresis is a nonstationary process in which the current threshold level depends upon the previous value of the input signal.
To make a more accurate assessment of hysteresis, a Monte-Carlo simulator was constructed to demonstrate the effect on the gain of the correlation channel. The simulator is based upon an A/D converter transfer function of the form (45), where v_hys,α is the hysteresis voltage.
The transfer function is graphically illustrated in Figure 7. Input correlation coefficients in the range -0.1 < ρ < 0.1 were tested with varying levels of hysteresis using 2¹⁴ Monte-Carlo samples for each case; the results are shown in Figure 8. A related gain-attenuating mechanism is timing skew between the channels. The input signals can be decomposed into components v_a⁰(t), v_b⁰(t), and v_c(t) that are mutually uncorrelated and wide-sense stationary, with Δt an additional path delay or timing skew. If v_c(t) is bandlimited, then the cross-correlation function is attenuated by a sinc factor of the delay, where B is the bandwidth or bandlimiting cutoff frequency of v_c(t). For example, a 10° or 20° phase difference will cause a 1.5% or 6% reduction in the correlation coefficient, respectively.
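A Monte-Carlo hysteresis experiment along these lines can be sketched as follows. The state-dependent threshold model and the hysteresis voltage used here are illustrative assumptions, not the paper's exact transfer function (45):

```python
import numpy as np

def quantize3(v, t):
    # ideal three-level (1.6-bit) quantizer
    return np.where(v > t, 1.0, np.where(v < -t, -1.0, 0.0))

def quantize3_hyst(v, t, vhys):
    # Three-level quantizer whose comparator thresholds stick to the
    # previous output state by +/- vhys/2 -- a simple illustrative
    # hysteresis model (note the sample-to-sample memory).
    out = np.empty_like(v)
    state = 0.0
    for i, x in enumerate(v):
        up = t - vhys / 2.0 if state > 0 else t + vhys / 2.0
        dn = -t + vhys / 2.0 if state < 0 else -t - vhys / 2.0
        state = 1.0 if x > up else (-1.0 if x < dn else 0.0)
        out[i] = state
    return out

rng = np.random.default_rng(0)
n, rho, t = 200000, 0.3, 0.61
va = rng.standard_normal(n)
vb = rho * va + np.sqrt(1.0 - rho * rho) * rng.standard_normal(n)
r_ideal = float(np.mean(quantize3(va, t) * quantize3(vb, t)))
r_hyst = float(np.mean(quantize3_hyst(va, t, 0.1) * quantize3_hyst(vb, t, 0.1)))
gain = r_hyst / r_ideal   # correlation gain under hysteresis
```

Because the memory term couples each output to the previous input sample, the hysteretic process is nonstationary in exactly the sense described above, which is why a sequential simulation (rather than an independent-noise model) is needed.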
IV Calibration
Calibration of a digital polarimeter entails the periodic identification of slowly time-varying system hardware parameters. For the total power channels, these constants are the system gain and offset.
For the polarization correlating channel, the threshold-offset product (36) in Section A.1 as well as any other additive offsets (such as those originating from correlated LO noise) must be identified. As shown below, these parameters can be estimated using the simple hot and cold views of unpolarized blackbody standards as obtained during conventional total power channel calibration.
A Total power channel calibration
Identification of the gain and offset of the total power channels in (14) allows an antenna temperature estimate to be made. The output of the total power channel is related to the antenna temperature estimate as in (14), where the left-hand side is the linearized digital variance, g_α is the radiometer system gain in K⁻¹, and the receiver temperature T_REC,α is the system offset.
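Solving this linear model for the gain and offset from two unpolarized looks is a standard two-point calibration; the hot/cold values below are hypothetical:

```python
def two_look_calibration(s2_hot, s2_cold, T_hot, T_cold):
    # Solve s2 = g * (T_A + T_REC) for the gain g [1/K] and the
    # receiver temperature T_REC from hot and cold unpolarized looks.
    g = (s2_hot - s2_cold) / (T_hot - T_cold)
    T_rec = s2_cold / g - T_cold
    return g, T_rec

# Synthetic example: g = 0.002 /K, T_REC = 300 K,
# hot look at 295 K and cold look at 77 K (liquid nitrogen)
g, T_rec = two_look_calibration(1.19, 0.754, 295.0, 77.0)
```

The same two looks also supply the pair of correlation measurements used next for the correlation-channel offsets.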
This system is easily solved for ĝ_α and T̂_REC,α. For the correlation channel, c₁, c₃, and c₅ are given by (20), and π_δ is the offset product (36). The fifth-order term c₅ρ₀⁵ can be ignored if ρ₀ < 0.1, which is usually the case when T_U = 0.
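With the c₅ρ₀⁵ term dropped, recovering the correlation bias from a measured covariance reduces to finding the real root of a cubic, as in the calibration below. The coefficient values here are assumed for illustration (the paper's c₁ and c₃ come from (20)):

```python
import numpy as np

def invert_cubic(r_meas, c1, c3):
    # Solve c3*rho^3 + c1*rho - r_meas = 0 for rho.  With c1, c3 > 0 the
    # cubic is monotonic, so there is exactly one real root (the other
    # two roots form a complex-conjugate pair, as noted in the text).
    roots = np.roots([c3, 0.0, c1, -r_meas])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return float(real[0])

# Round-trip check with assumed coefficients and a known correlation bias
c1, c3 = 2.28, 0.9
rho_true = 0.05
r_meas = c1 * rho_true + c3 * rho_true ** 3
rho_est = invert_cubic(r_meas, c1, c3)
```

In the actual calibration the constant term also carries the hot/cold correlation measurements and the offset product, but the root-selection logic is the same.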
The two calibration targets provide unpolarized emission at two different radiation intensities. Sequential views of the hot and cold targets provide the digital correlation measurements r_ab^hot and r_ab^cold for the hot and cold looks, respectively.
Using these two measurements a system of equations can be formed. The coefficients c^hot and c^cold are computed using the relative threshold values θ^hot and θ^cold, respectively.
Using only a third-order expansion in ρ₀ allows the above system to be solved analytically. An estimate of the threshold-offset product can be found from the two looks, and an estimate of the correlation bias is a root of the resulting cubic. The solution of a cubic equation is given in [21, (3.8.2)]. For this particular cubic there is typically one real root and a pair of complex conjugate roots; the real root is the desired solution for ρ₀, with q and r the auxiliary quantities of the cubic formula.

The polarimeters were demonstrated in aircraft-based studies of land and ocean emission (Figure 9). The radiometer operated successfully in a conical scanning configuration to measure the first three Stokes parameters over the wind-driven ocean at 10.7 GHz (X-band) and 37.0 GHz (Ka-band) [3]. In-situ calibration was accomplished using unpolarized hot and ambient temperature blackbody calibration targets and verified using a ground-based polarimetric calibration target [8].
A Hardware
The 10.7 GHz radiometer was a superheterodyne single-sideband (SSB) system with a low-noise front end. Estimates of second-order digital signal statistics were made by squaring and cross-multiplying the A/D converter outputs, then accumulating using digital counters. The total power, or variance, of an individual channel was measured by counting the number of times the input signal exceeded either the positive or negative threshold levels as in (9). The correlation coefficient was similarly determined by separately counting the number of positive and negative correlation counts. A total of eight AND/NAND gates composed the entire three-level multiplier circuit. The outputs of the digital multiplier were accumulated in four 24-bit counters providing 16.8 ms of integration time.
The initial 1-Gbit/s multiplier outputs were prescaled using high-speed 8-bit ECL ripple counters.
The system clock was distributed differentially to the counters using 50 Ω odd-mode coupled microstrip lines. The high-speed ECL signals exhibit transition times shorter than 250 ps; therefore, the digital signals have spectral content above 4 GHz. On-chip and interconnect propagation delays within the multiplier circuit were compensated with clock delays generated by programmable delay chips. To save power, the outputs from these counters were carried to 16-bit TTL counters. The most significant 16 bits were buffered and read by computer. The circuit was fabricated on a six-layer G-10 fiberglass circuit board. Microstrip interconnects were placed on the outer two layers of 1/2-oz copper and power was distributed via the internal layers of 2-oz copper.
Redundant analog total-power channels were implemented in parallel with the digital radiometers. The same IF signals fed to the digital correlators were coupled to square-law detectors at a ~ -23 dBm power level. Video amplifiers following the square-law detectors used integration times of 8 ms. The video amplifier output ranged from 0-10 V and was sampled by a 12-bit A/D converter. An analog offset was added to the video amplifier output to maintain the signal level within the operating voltage range of the A/D.
B Calibration
The unpolarized hot and ambient method of Section IV was used to identify the correlation offset and the threshold-offset product system parameters. Microwave foam absorber at ambient and liquid-nitrogen temperatures provided unpolarized radiation fields (Figure 11). By visual inspection, T_v and T_h mixing into T̂_U appears to be nonexistent, but the correlator output is attenuated ~25% at 37 GHz and ~70% at 10.7 GHz compared to T_U.
Using these data, the gains and offsets for the 10.7 and 37.0 GHz T_U channels were calculated according to the methods in [8] and are presented in Table 2. The calibration exercise also yielded analog and digital total-power measurements that were compared for consistency.
One hundred twenty-five samples with T_B = 80 to 290 K were compared. The mean and standard deviation of the analog-digital measurement differences are tabulated in Table 3. In all, the digital total-power radiometer tracks the analog system quite well.
VI Discussion
The design techniques and radiometer hardware described here demonstrate the utility and technological feasibility of the digital polarimetric radiometer for earth remote sensing applications.
Other polarimeter topologies such as the analog correlating polarimeter or the analog adding polarimeter are possibilities. Such systems, however, can exhibit polarization cross-coupling beyond that caused by the antenna system that is not easily identifiable without sophisticated calibration techniques.
By contrast, the digital polarimeter, if built to the proper design specifications, has the distinct advantage of negligible Stokes parameter cross-coupling and affords in-flight periodic calibration of all polarimetric channel parameters. Further, use of a three-level digital correlator provides a simple means of calibrating both correlation offsets as well as total power measurements.
To reiterate, the design rules developed in Section III for the A/D converter parameters required to limit offset and gain perturbations are:
1. v_δα ≤ 0.01 σ_vα to minimize the correlator offsets r_ab|_{ρ=0} and δr_ab;
2. minimize hysteresis, timing skew, and phase differences to maximize the correlator gain.
Adherence to these rules is highly desirable in order that the radiometer can be used to make accurate measurements of the third and fourth Stokes parameters. However, if these design specifications cannot be met, the radiometer may still be used pending regular calibration using a polarimetric calibration standard or a similar method such as correlated noise injection to compensate for gain and offset variations.
The PSR/D hardware demonstration confirms the ability to fabricate, operate, and calibrate a digital polarimetric radiometer in the field and exemplifies its utility in earth remote sensing, particularly in the observation of ocean surface winds. As a follow-on to the PSR/D demonstration, an implementation suitable for satellite deployment using lower-power space-qualified CMOS logic at comparable sample rates is feasible and is currently under development [30]. An incidental consequence of this work also exists in the application of digital correlators to synthetic aperture interferometric radiometry (e.g., [31]). Such systems will be susceptible to the same effects of nonlinearity, threshold asymmetry, timing skew, A/D converter hysteresis, and phase errors, all of which have been discussed here.
ACKNOWLEDGMENTS
The authors extend their appreciation to W.
A Correlation Coefficient Inversion
The digital correlation coefficient can be computed as follows. Rewriting the expression for r_ab using the bivariate normal pdf yields an integral whose integrand must be expanded in a Taylor series and then integrated. The integrand can be expanded in a Taylor series in terms of ρ':
I(ρ') = I(0) + I⁽¹⁾(0)ρ' + (1/2)I⁽²⁾(0)ρ'² + (1/6)I⁽³⁾(0)ρ'³ + (1/24)I⁽⁴⁾(0)ρ'⁴ + ...
The algebra involving the derivatives is quite cumbersome, and the computer algebra package Maple V was used to evaluate them. The derivatives of the pdf at (θ_a, -θ_b) are easily found by substituting -θ_b for θ_b in the above. Because the first and third derivatives are odd functions of θ_a and θ_b, it is immediately seen that the Taylor series terms with odd powers of ρ will cancel, leaving only the even powers of ρ. Adding the appropriate derivatives yields the integrand, and integrating yields the final series.

This appendix contains a derivation of the sensitivities of both the cross- and autocorrelating channels of the digital polarimeter. These sensitivities are assumed to be optimized with respect to the A/D converter threshold level.
A Cross-correlator Sensitivity
The sensitivity of the third Stokes parameter cross-correlating channel is found using the chain rule: the derivative in the denominator is expanded, and the derivative ∂r_ab/∂ρ evaluated for small ρ can be computed using (15) and (16). The resulting expression can be written in terms of θ by substituting (7) and (11).

B Total-power Sensitivity
The sensitivity of the total-power channel is found similarly. Once again, the denominator is expanded using the chain rule: the first term in the product is the differential relationship between the input voltage variance and the output of the total-power channel of the digital correlator, obtained from (11). Forming the quotient (97) and taking its Taylor series, then substituting these results into (115), the integrand terms are expanded using the partial derivatives of the bivariate normal pdf, evaluated at ρ = 0, at the offset threshold locations (±θ_a + δ_a, ±θ_b + δ_b); collecting the a- and b-channel terms yields the error series of Section III.

Table 1: Digital correlating polarimeter system parameters found using hot and ambient unpolarized calibration targets.

Table 2: Gain and offset terms for the 10.7 and 37.0 GHz digital polarimeters as measured using the polarized calibration standard.

Table 3: Comparison of digital and analog total-power radiometer measurements.

Figure 1: Block diagram of a typical digital polarimetric radiometer. This direct correlating polarimeter utilizes a dual-polarized antenna, a dual-channel superheterodyne receiver, and a three-level digital correlator.
The IF signals are also coupled to traditional square-law detectors and video amplifiers.
Evaluation of an Efficient Approach for Target Tracking from Acoustic Imagery for the Perception System of an Autonomous Underwater Vehicle
This article describes the core algorithms of the perception system to be included within an autonomous underwater vehicle (AUV). This perception system is based on the acoustic data acquired from side scan sonar (SSS). These data should be processed in an efficient time, so that the perception system is able to detect and recognize a predefined target. This detection and recognition outcome is therefore an important piece of knowledge for the AUV's dynamic mission planner (DMP). Effectively, the DMP should propose different trajectories, navigation depths and other parameters that will change the robot's behaviour according to the perception system output. Hence, the time in which to make a decision is critical in order to assure safe robot operation and to acquire good quality data; consequently, the efficiency of the on-line image processing of acoustic data is a key issue. Current techniques for acoustic data processing are time and computationally intensive. Hence, it was decided to process the data coming from a SSS using a technique originally developed for radars, due to its efficiency and its amenability to on-line processing. The engineering problem to solve in this case was underwater pipeline tracking for routine inspections in the off-shore industry. An automatic oil pipeline detection system was therefore developed, borrowing techniques from the processing of radar measurements. The radar technique is known as Cell Averaging Constant False Alarm Rate (CA-CFAR). With a slight variation of the algorithms underlying this radar technique, which consisted of precomputing accumulated partial sums, a great improvement in computing time and effort was achieved. Finally, a comparison with previous approaches over images acquired with a SSS from a vessel in the Salvador de Bahia bay in Brazil showed the feasibility of using this on-board technique for AUV perception.
Introduction
Perception is one of the key issues in autonomous robotics. It usually involves robot self-perception (position, attitude, remaining energy, faulty situations), as well as perception of the environment (obstacle avoidance, mapping, objects, special waypoints). Hence, the perception system is essential for the robot to succeed in executing any field mission. Particularly in the hostile and unknown underwater world, a high quality perception system is necessary in order to build an AUV robust enough to withstand the main oceanic perturbations. Other important and necessary systems are the dynamic mission planner and the guidance and control systems [1][2][3][4][5][6].
The use of AUVs has been growing in the last decade, as they are a good tool for the sustainable exploitation of oceanic resources, for example, exploration in the deeper seas. Missions like underwater pipeline inspection and maintenance, prospection studies, mine detection, and debris or other object recognition are among the preferred tasks to be automated for modern AUVs [7][8][9]. As seen in the literature, the technology for such task automation has shown strong improvement in three main areas: 1) AUV technology; 2) perception devices for the underwater world, i.e., SONAR (Sound Navigation And Ranging); 3) novel acoustic image processing techniques. Regarding point (1), AUVs have undergone great improvement in constructive aspects and new materials, control algorithms and powerful computation tools [1][2]; [5]; [8][9]. For point (2), many devices like the multi-beam echo-sounder (MBE), side scan sonar (SSS) and synthetic aperture sonar (SAS) are able to acquire high resolution data [10][11]. SSS is preferred due to its very good quality/cost trade-off. It has been tested in deep water conditions and is one of the most adequate choices for the detection task in underwater environments. A conventional SSS provides acoustic lines of 200 to 2000 samples each; note that a larger number of samples implies greater computational effort. Finally, with respect to point (3), there is still a great deal of work to be done. In effect, while AUV and sonar technology is mature enough for the aforementioned automated tasks, and even though many approaches to acoustic image processing are currently available, they still require a strong on-line computational effort to achieve self and environmental perception. These acoustic image processing approaches can be analysed from different points of view regarding their speed, efficiency, resources needed, precision and robustness [12][13][14][15][16][17][18][19][20][21][22][23][24].
Sonar and radar (Radio Detection And Ranging) technologies share similar features in their processing. In addition, radars are used to detect and recognize vehicles with faster dynamics, like airplanes, dealing also with electromagnetic waves that are faster than acoustic ones [25][26]. The key concept of the present approach is to migrate radar techniques to sonar acoustic data processing. A group of target detection techniques widely used in radar technology is known as CFAR (Constant False Alarm Rate), described in detail in [25]. This group of techniques maintain a constant false alarm rate computed from the last n samples of the digitalized echoes power, also known as interference power. In this way, an adaptive detection threshold is adjusted to maintain a probability of expected false alarm (Pfa) by estimating the average of the interference power values of the adjacent n cells. This approximation is called Cell Averaging-Constant False Alarm Rate, or CA-CFAR for short [27].
Underwater pipeline and cable tracking is an interesting case study of AUV application with intensive on-board image processing for automatic and autonomous task development. To fulfil this objective, it is necessary to detect the pipeline first, then track it while obtaining other useful information like the pipeline situation (if buried with freespan, with corrosion, with near debris and others).
This article will describe in detail a CA-CFAR based algorithm for the acoustic image processing of SSS data for quantitative analysis of its feasibility to on-line processing. The main objective is to determine if it is efficient enough to be used for on-board and on-line processing in an AUV as an essential input for the AUV's dynamic mission planner. Using a set of data taken from SSS acoustic images of the seafloor of Salvador de Bahia, Brazil, it will be shown that with a refinement in computation, CA-CFAR could make a drastic reduction in time and computational resources.
This work is organized in the following way: section 2 shows the acoustic input data formation. Then, in section 3, an automatic processing chain is presented for a pipeline detection system, focusing on each of the processes. Section 4 shows the basic concepts of the detection theory with CA-CFAR and accumulated CA-CFAR. In section 5, the experimental result, analyses and comparisons with traditional CA-CFAR [27] and partial sums CA-CFAR [28] are presented. Finally, section 6 discusses the conclusions obtained from the work.
Acoustic Image Forming from a SSS
The SSS is a very interesting tool for high-resolution mapping of the seabed due to its excellent cost/quality trade-off [10]; [46]. It has been tested in deep water with satisfactory results [29][30][31][32]. Though SAS provides higher-quality imagery and has been used in numerous works [33][34][35][36][37], it is not yet clear that it is better for automated target detection and recognition purposes. Reports about the use of MBE to explore the sea floor in detail are also given in [29]; [38][39][40]. The SSS is formed by a group of transducers that are mounted on both sides of the AUV. In each data acquisition cycle, these transducers scan sideways and downward, constituting a plane that advances in the direction in which the vehicle travels, the along-track. The direction perpendicular to the vehicle's straight movement is called the across-track. Figure 1 shows an idealized representation of the operation of a SSS mounted on an AUV. The transducers on both sides of the sonar send out oblique acoustic signals in the shape of a fan. These acoustic pulses normally oscillate between 100 and 500 kHz. The port (left) and starboard (right) sides of the images are scanned separately. The acoustic pulses travel through the water column, hit the seafloor, and the echo, also named backscattering, is returned to the reception sensor where its amplitude is quantified. This amplitude depends on the angle of incidence and the cover of the seafloor. The echoes coming directly from the seafloor constitute the true returned signal. There are also multiple bounces off the seafloor or the sea surface that constitute reverberation or undesired echoes (multi-path). The regions under (nadir) and above (zenith) the sonar correspond to points of low and high reflection off the surface of the seafloor and the surface of the sea, respectively.
The data acquired are projected on a line traced along the seafloor. This scanning line is known as a swath. The acoustic data associated with this exploration line represents an observation of the reflected intensity depending on the range of the SSS and the relative angle between the AUV and the seafloor. If the vehicle is moving in a straight line at a steady speed, the deployment of successive swaths will build an acoustic image of the seafloor [11].
Underwater pipeline detection system
An automatic processing chain is applied to each of the acoustic lines acquired by the SSS. This processing chain consists of a group of serial processes, where the input to one process is the output of the previous one. Figure 3 shows a block diagram of the simple processing chain utilized in the implemented detection system. As shown, the inputs to the whole processing chain are the acoustic lines or swaths provided by the SSS, and its output is a list of geo-referenced (NED) coordinates of the pipeline position.
Before applying the target automatic detection processes, the acoustic data are pre-processed with the objective of improving the input to the detection process.
The first process consists of geometrical correction of the distortion caused by the inclination. The SSS acoustic images are prone to numerous unexpected problems, geometrical and natural, which interfere with the detection process [11]; [41][42]. The distortion by inclination corresponds to differences between the relative position of features in the acoustic image and the actual pipeline location on the sea floor. This distortion is overcome by a correction process with two adjustments, one in the across-track direction and another in the along-track direction. To carry out this correction, a simple trigonometric relation is applied [10], utilizing navigation data such as the altitude, the slant range and the angle of incidence, which is directly proportional to the ground range (see Figure 1). There are also other sources of distortion that should be considered, such as the AUV's attitude or the water salinity; these factors were not taken into account in this first approach. Hence, the following strong suppositions were assumed for this research [43] for a complete correction: 1) the seafloor is plane and horizontal; 2) the acoustic pulse propagates through the water at a constant speed; 3) the roll angle of the vehicle is null, because it does not contribute to the geometric distortions; 4) the vehicle is immobile from the moment the acoustic pulse is emitted until the return at maximum range is received. Even though these assumptions are rarely fully satisfied, the correction at this stage yields a much better image for the subsequent processing. Thus, an acoustic image is defined as a two-dimensional function of discrete finite values I[m, n], where [m, n] are the coordinates of the image matrix [15]. This intensity, or backscatter strength of the sea floor, is defined as s(x, y), where x and y are part of a rectangular coordinate system (x, y, z) defined on the sea floor, as illustrated in Figure 2 [43].
This coordinate system is defined as follows: y is aligned with the along-track direction and positive x represents the starboard across-track direction. Let the original image and the height of the water column, in pixels, at the n-th line be given; the slant-range correction then maps each across-track sample of the original image to its ground-range position in the corrected image, whose size is given by the maximum number of along-track lines and the maximum number of pixels per across-track line. Since the slant-range coordinate corresponding to a given ground-range position will in general be non-integer, this mapping assumes the use of an appropriate technique for interpolating the lines of the original image at non-integer coordinates. Therefore, a linear interpolation was used in this work.
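Under the four suppositions listed above, the slant-range correction reduces to resampling each swath at non-integer coordinates. The following is a minimal Python sketch (the authors' implementation was in MATLAB/C++; the function name, the altitude given in pixels and the pull-based resampling layout are illustrative assumptions, not the paper's code):

```python
import numpy as np

def slant_range_correct(swath, altitude_px):
    """Project one across-track swath from slant range to ground range.

    Assumes a flat, horizontal seafloor and constant sound speed, as in
    the simplifying suppositions above. `swath` holds per-sample echo
    intensities indexed by slant range; `altitude_px` is the water-column
    height in pixels. The slant range that maps onto ground range x is
    sqrt(x^2 + h^2); since that value is generally non-integer, each
    output sample is filled by linear interpolation over the input.
    """
    n = len(swath)
    out = np.zeros(n)
    ground = np.arange(n, dtype=float)            # desired ground ranges
    slant = np.sqrt(ground**2 + altitude_px**2)   # where each one came from
    valid = slant < n - 1                         # inside the recorded swath
    out[valid] = np.interp(slant[valid], np.arange(n), swath)
    return out
```

Each ground-range sample pulls its value from slant range sqrt(x^2 + h^2) by linear interpolation, matching the interpolation choice stated above.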
The next process consists of the elimination of irrelevant information. The acoustic images contain, in the centre, a black track: a blind spot corresponding to the nadir, which is inherent to this type of sensor. This irrelevant information must be substituted, because it generates a high-contrast zone in the image that would otherwise produce false detections in the next processing stage.
To avoid this, in each acoustic line the greatest shadow limit is detected on both the port and starboard sides. The line is entered at its centre and traversed both to the right and to the left until the greatest sharp variation of shades is found. Finally, a threshold value is calculated, which represents the brightness of the blind-spot limit. Thus, the limits of each acoustic line are obtained, delimiting a zone in which no further processing will be carried out. (The square brackets in the image coordinates above are used to indicate that they are discrete.)
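The centre-outward search for the blind-spot limits can be sketched as follows (a Python illustration; the `jump` threshold and the function name are assumptions for the sketch, not values from the paper):

```python
import numpy as np

def blind_spot_limits(line, jump=50):
    """Find the nadir blind-spot limits in one acoustic line.

    Starting from the centre sample, walk towards port (left) and
    starboard (right) until the first sharp variation of shade, i.e. an
    intensity jump larger than `jump`, is found. Samples between the two
    returned limits belong to the blind spot and are skipped later.
    """
    c = len(line) // 2
    left, right = 0, len(line) - 1
    for i in range(c, 0, -1):                       # centre -> port
        # int() avoids wrap-around when the line is stored as uint8
        if abs(int(line[i - 1]) - int(line[i])) > jump:
            left = i
            break
    for i in range(c, len(line) - 1):               # centre -> starboard
        if abs(int(line[i + 1]) - int(line[i])) > jump:
            right = i
            break
    return left, right
```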
Another important issue is image enhancement, on which there exists extensive literature [15][16]; [41]. However, most traditional image enhancement techniques are not adequate for, or cannot be directly migrated to, acoustic image processing. Thus, it must be determined for each particular case whether this process is actually useful. For the present approach, very good results were obtained without resorting to this processing stage.
The next step, shown in Figure 3, is automatic detection. This consists of labelling the image pixels by classifying the acoustic intensity around a discriminating threshold. In this way, pixels with acoustic intensity above this threshold are labelled with a saturating value (255 in an 8-bit quantization) and with the minimum value (0 in an 8-bit quantization) if they are below it [46]. As this automatic detection within the processing chain described is a core contribution of this work, it will be explained in further detail in the next section.
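The labelling step itself is a simple binary quantization, for instance (Python sketch, illustrative only):

```python
import numpy as np

def label_pixels(image, threshold):
    """Label pixels around a discriminating threshold: intensities above
    it saturate to 255 and the rest drop to 0 (8-bit quantization)."""
    return np.where(image > threshold, 255, 0).astype(np.uint8)
```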
The final step in this processing chain is the correlation between adjacent lines, which mainly consists of removing false detections and, consequently, determining the target's position. This is achieved through a normalized correlation of a set of preprocessed swaths, whose number depends on the technological and physical features of the application; the resulting sub-image is then processed accordingly. With two of these geo-referenced points as neighbours, a vector was constructed. This vector pointed from the older geo-referenced detection point to the more recent, successive one. The vector was given as a reference to the guidance system of the AUV.
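As a simplified stand-in for the normalized correlation of adjacent lines, false alarms that do not persist across neighbouring swaths can be suppressed by voting over a block of detection masks (Python sketch; the voting rule and the `min_hits` parameter are illustrative assumptions, not the paper's normalized-correlation formula):

```python
import numpy as np

def confirm_detections(masks, min_hits):
    """Suppress isolated false alarms by requiring agreement across a
    block of adjacent swaths: a column is kept as a target position only
    if it is flagged in at least `min_hits` of the stacked detection
    masks (one row per swath, one column per across-track position)."""
    votes = np.sum(masks, axis=0)   # per-column detection count
    return votes >= min_hits
```

A detection that appears in only one swath, such as speckle or an isolated bright return, is discarded, while a pipeline that persists across swaths survives.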
Automatic Detection using CA-CFAR
The problem of detection was summed up by analysing each sample with the purpose of detecting the presence or absence of a target. Detection techniques are generally implemented by analysing the information of adjacent samples. In [27], two hypotheses were defined for this analysis: 1) the sample is the result of interference only (H0), in this case acoustic reverberation; 2) the sample is the result of a combination of interference and echoes of a target (H1), in this case reverberation and backscattering, respectively. Consequently, the detection consisted of examining each sample and selecting whichever of the two hypotheses fitted best. If hypothesis H0 was the most appropriate, the detection system declared that the target was not present; if hypothesis H1 was the most appropriate, the detection system declared that the target was present. Since the signals are described statistically, the choice between these two hypotheses represents an exercise in statistical decision theory [44].
In the particular case of acoustic images, it was assumed that man-made structures on the sea floor are usually more reflective than the surrounding sediment [46]. For this reason, one of the detection alternatives centred on finding the maximum backscattering intensities, also called the acoustic highlight, which varies considerably according to the relative orientation of the sonar to the target. In fact, it can fall below the detection threshold, causing the target to appear invisible to the sonar.
On the other hand, an additional relevant characteristic of SSS images is that objects standing out above the seafloor generate shadows; that is to say, areas where the echo intensity is frequently lower than the level coming from the seafloor. Shadow length depends on the vertical height of the object. Thus, there are other detection alternatives that utilize these shadows. Because the data are acquired from a moving vehicle, the sonar geometry with respect to the target is variable; in this case, a shadow can be present even when the acoustic highlight is not. It is therefore desirable to combine both detection approximations, as proposed in this work. The dark grey cells in Figure 4 represent the neighbouring data that will be averaged to estimate the noise parameters; these are the reference cells. Note that each cell represents one pixel. Also in Figure 4, a row vector of (1×n) cells is depicted; the length of this row vector depends on the resolution of the SSS. The lighter grey cells, immediately next to the test cell, are called guard cells and are excluded from the average. The reason for this is that, if a target is present, the cells surrounding the test cell will contain acoustic-highlight values similar to the test cell's own, and including them would bias the estimate: the acoustic highlight of the target would tend to inflate the estimation of the reverberation parameters.
The total number of reference and guard cells is calculated using equations (2) and (3) (see Figure 4). The procedure for determining the detection threshold T is described below, for the case of Gaussian reverberation with a square-law detector. The probability density function (pdf) of any cell x_i then has only one free parameter, the mean reverberation power β, and the process estimates this mean at the test cell using the adjacent cells' data. It is supposed that the contents of the N reference cells neighbouring the cell under test x_T are used to estimate β, and that the reverberation samples are independent and identically distributed (i.i.d.); the joint probability density function for the vector x̄ = (x_1, x_2, ..., x_N) of neighbouring cells is then the product of the individual exponential densities. Equation (5) is the likelihood function Λ for the vector of observed data x̄. The maximum likelihood estimate of β is obtained by maximizing equation (5) with respect to β [44]; mathematically, it is equivalent to, and generally easier to, maximize the log-likelihood function [27]. Differentiating equation (6) with respect to β and equating it to 0 yields the sample mean, β̂ = (1/N) Σ x_i. The required detection threshold is estimated as a scalar multiple α > 0 of this reverberation power: T = α β̂. An adaptive threshold can thus hold the probability of false alarm constant even though the reverberation levels vary. Because β̂ is computed from data, the threshold T and the probability of false alarm P_fa are random variables; the detector is a CFAR detector if the expected probability of false alarm does not depend on the current value of β.
Combining equations (7) and (8) yields the expression for the estimated threshold, T = (α/N) Σ x_i, and using a standard result of probability theory together with equation (4) yields the pdf of β̂: it follows the Erlang density with parameters N and N/β. The observed P_fa with the estimated threshold is itself a random variable, and its expected value is computed by integrating over this density. Completing the standard integral and carrying out some algebraic manipulation gives the final result E[P_fa] = (1 + α/N)^(−N). For a desired expected P̄_fa, the required value of the multiplier α is obtained by solving equation (13): α = N (P̄_fa^(−1/N) − 1). Note that E[P_fa] does not depend on the reverberation power β, but only on the number N of neighbouring cells and the threshold multiplier α; thus, the cell-averaging technique exhibits the CFAR behaviour. This is significant because a drastic reduction in computation time can be obtained, as will be demonstrated experimentally in the following section.
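The derivation above condenses into a direct, cell-by-cell implementation. The following Python sketch (illustrative; the paper's code was MATLAB/C++) applies the standard multiplier α = N(P_fa^(−1/N) − 1) with a sliding window of reference and guard cells:

```python
import numpy as np

def ca_cfar(x, n_ref, n_guard, pfa):
    """Naive 1-D CA-CFAR over one acoustic line of square-law samples.

    For each test cell, the reverberation power is estimated as the mean
    of N = 2*n_ref reference cells, skipping n_guard guard cells on each
    side of the cell under test. The threshold multiplier follows the
    standard CFAR result alpha = N*(pfa**(-1/N) - 1), so the expected
    false-alarm rate does not depend on the reverberation power itself.
    Returns a boolean detection mask (edges are left undetected).
    """
    n = len(x)
    N = 2 * n_ref
    alpha = N * (pfa ** (-1.0 / N) - 1.0)
    half = n_ref + n_guard
    det = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        lead = x[i - half : i - n_guard]           # leading reference cells
        lag = x[i + n_guard + 1 : i + half + 1]    # lagging reference cells
        noise = (lead.sum() + lag.sum()) / N       # estimated power
        det[i] = x[i] > alpha * noise
    return det
```

Note that every test cell recomputes two window sums, i.e. on the order of 2·n_ref memory accesses per sample; this is exactly the cost that the accumulated variant in the next section removes.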
Accumulated Cell Average Constant False Alarm Rate ACA-CFAR
As demonstrated in [28], it is possible to achieve pipeline detection from the acoustic images of a SSS with the standard CA-CFAR; in addition, a variation of this approach, Partial Sums CA-CFAR, was introduced there and tested experimentally. In the present work, a further refinement of CA-CFAR was introduced and evaluated with field data: the Accumulated Cell Average Constant False Alarm Rate, or ACA-CFAR. It relies on an accumulated (running) sum of the cell values with which to calculate the threshold T. Within each step, a window of reference cells and a window of guard cells are taken and averaged using equation (7); with this value, the threshold is estimated and the check for the presence of a target is conducted. In the standard formulation, a window of N_r reference cells must slide over all samples until the process is complete; consequently, for each estimation of the adaptive threshold, every sample to be analysed requires N_r memory accesses for the sum of the reference cells and further memory accesses for the guard cells. This calculation requires considerable computational resources and time over the entire sample data. For this reason, the proposed improvement focuses on the calculation of the sums of reference and guard cells (see Figure 4): by first building an accumulated-sum vector, the summation over any window of cells can be obtained with at most two memory accesses. This computation is done prior to threshold estimation and detection checking, and the same method is applied to compute the summation of the guard cells.
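The accumulated-sum refinement can be sketched in Python as follows (illustrative; the detector is the same as CA-CFAR, with each window sum replaced by a difference of a precomputed cumulative sum):

```python
import numpy as np

def aca_cfar(x, n_ref, n_guard, pfa):
    """ACA-CFAR: CA-CFAR where every window sum is read from a
    precomputed cumulative-sum vector with two memory accesses, so the
    per-cell cost no longer grows with the window size."""
    n = len(x)
    N = 2 * n_ref
    alpha = N * (pfa ** (-1.0 / N) - 1.0)
    csum = np.concatenate(([0.0], np.cumsum(x)))   # csum[j] = sum of x[:j]
    half = n_ref + n_guard
    det = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        # sum over [i-half, i+half] minus the guard-and-test block
        total = csum[i + half + 1] - csum[i - half]
        guard = csum[i + n_guard + 1] - csum[i - n_guard]
        det[i] = x[i] > alpha * (total - guard) / N
    return det
```

Each window sum is now two array reads and one subtraction, independent of n_ref and n_guard, which is why the instruction count of this variant depends only on the number of samples.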
Experimental Results
The algorithms were originally developed in MATLAB and were then ported to C++, taking advantage of the data structures within OpenCV. The algorithms were executed on a PC with a 2 GHz Intel(R) Core(TM) 2 Duo CPU and 2 GB of RAM, running Linux. The SSS was a StarFish 450F, utilizing advanced digital CHIRP acoustic technology. Although the AUV's on-board computer was a FitPC-2 with different resources, these experiments constituted the preliminary phase of a comparison study among different detection approaches; it was expected that the best one would be selected to be ported to the run-time environment of the ICTIOBOT AUV prototype [1], travelling at an almost constant speed of 2 m/s.
Data
The experimental data employed in this work were acoustic images of a SSS taken from a vessel on the seafloor of Salvador de Bahia, Brazil, where an exposed pipeline has been laid down. For SSS detection, it is necessary that the pipeline be fully or partially exposed. If buried, the perception sensor would have needed a magnetic tracker or a sub-bottom profiler.
The pipeline tracking had two stages. The first stage, which concluded at latitude -12° 51' 33.28'', collected 50,500 lines of valid acoustic data, yielding 101 images of 1000×500 pixels for testing the algorithms. The second stage collected 47,000 lines of acoustic data, totalling 94 images of the same size as those obtained for the first test stage. Figure 6 shows three examples of original SSS images in (a), the output after applying the automatic detection in (b), and the final result after the correlation of adjacent lines in (c). These images have been cropped for better presentation. In each case, the pipeline can be found on the right side of the SSS image. In Figure 6.1, a straight and well-defined pipeline can be observed. In Figure 6.2, the pipeline is slightly curved and a lot of sediment has accumulated at the top of the image, which may have produced false detections. Figure 6.3 exhibits an intermittently buried pipeline. In column (c), a red circle denotes the detection points for tracking obtained by the algorithm. Details about these detection points are also given in Table 1. As can be seen, the result of this automatic detection consists of spatial coordinates (row and column), the absolute latitude and longitude of the acoustic line, and the pipeline position (the detection point for tracking).
(Figure labels: acoustic data samples vector; accumulated acoustic data samples vector; summation vector for each cell of acoustic intensity.)
Table 1: spatial coordinates (columns 2 and 3), absolute coordinates of the acoustic line (columns 4 and 5) and absolute coordinates of the pipeline position (columns 6 and 7).
Equations (18) and (19) give, respectively, the number of algorithm instructions for the partial sums CA-CFAR presented in [28] and for the ACA-CFAR introduced in this work. For the latter,
I_ACA-CFAR = c · N_s    (19)
for a constant c. Note that the performance index of equation (19) is constant for the same image, depending only on the number of samples N_s. Table 2 shows the settings for the automatic detection process with CA-CFAR, PSCA-CFAR and ACA-CFAR. These all present similar detection results; however, the performance difference in the number of CPU instructions is remarkable.
Comparisons
A graphical comparison of the algorithms' performance is shown in Figure 7. As can be seen, the ACA-CFAR maintained a constant number of instructions even though the number of reference or guard cells varied. In other words, if the number of neighbouring or contextual cells was increased, this novel technique maintained the same number of CPU instructions, depending only on the number of samples. This is a very significant advantage over previous CFAR techniques, whose performance does depend on the number of reference or guard cells, which slows them down.
Conclusions
The main contribution of the work presented here is the proposal of a novel automatic acoustic image processing technique. It was experimentally tested for pipeline detection using acoustic data obtained with a SSS in Salvador de Bahia, Brazil. The image processing technique called cell averaging constant false alarm rate (CA-CFAR) was borrowed from the radar domain and was strongly improved by changes in the computing algorithm for on-line processing and detection. The accumulated CA-CFAR, or ACA-CFAR for short, gives the same detection results as CA-CFAR, with a significant decrease in computational effort and time.
This preliminary comparison study was conducted to select the best approach for programming the on-board perception system of the AUV prototype ICTIOBOT. This perception system will be applied in the off-shore industry for pipeline tracking using images with a higher resolution. These results also showed that migrating concepts from radar to sonar is a good idea: the efficient ACA-CFAR image processing technique is a good choice for achieving efficient on-line performance in the acoustic domain as well.
These features are essential for perception feedback in the dynamic mission planner, the guidance and the control and navigation systems of the aforementioned AUV prototype.
Acknowledgements
This work was carried out thanks to financing from the following projects:
Search for New Physics Using Quaero: A General Interface to DØ Event Data
We describe quaero, a method that i) enables the automatic optimization of searches for physics beyond the standard model, and ii) provides a mechanism for making high energy collider data generally available. We apply quaero to searches for standard model WW, ZZ, and tt̄ production, and to searches for these objects produced through a new heavy resonance. Through this interface, we make three data sets collected by the DØ experiment at √s = 1.8 TeV publicly available.
It is generally recognized that the standard model, a successful description of the fundamental particles and their interactions, must be incomplete. Models that extend the standard model often predict rich phenomenology at the scale of a few hundred GeV, an energy regime accessible to the Fermilab Tevatron. Due in part to the complexity of the apparatus required to test models at such large energies, experimental responses to these ideas have not kept pace. Any technique that reduces the time required to test a particular candidate theory would allow more such theories to be tested, reducing the possibility that the data contain overlooked evidence for new physics.
Once data are collected and the backgrounds have been understood, the testing of any specific model in principle follows a well-defined procedure. In practice, this process has been far from automatic. Even when the basic selection criteria and background estimates are taken from a previous analysis, the reinterpretation of the data in the context of a new model often requires a substantial length of time.
Ideally, the data should be "published" in such a way that others in the community can easily use those data to test a variety of models. The publishing of experimental distributions in journals allows this to occur at some level, but an effective publishing of a multidimensional data set has, to our knowledge, not yet been accomplished by a large particle physics experiment. The problem appears to be that such data are context-specific, requiring detailed knowledge of the complexities of the apparatus. This knowledge must somehow be incorporated either into the data or into whatever tool the non-expert would use to analyze those data.
Many data samples and backgrounds have been defined in the context of sleuth [1], a quasi-model-independent search strategy for new high-p_T physics that has been applied to a number of exclusive final states [2,3] in the data collected by the DØ detector [4] during 1992-1996 in Run I of the Fermilab Tevatron. In this Letter we describe a tool (quaero) that automatically optimizes an analysis for a particular signature, using these samples and standard model backgrounds. sleuth and quaero are complementary approaches to searches for new phenomena, enabling analyses that are both general (sleuth) and focused (quaero). We demonstrate the use of quaero in eleven separate searches: standard model WW and ZZ production; standard model tt̄ production with leptonic and semileptonic decays; resonant WW, ZZ, WZ, and tt̄ production; associated Higgs boson production; and pair production of first generation scalar leptoquarks. The data described here are accessible through quaero on the World Wide Web [5], for general use by the particle physics community.
The signals predicted by most theories of physics beyond the standard model involve an increased number of predicted events in some region of an appropriate variable space. In this case the optimization of the analysis can be understood as the selection of the region in this variable space that minimizes σ_95%, the expected 95% confidence level (CL) upper limit on the cross section of the signal in question, assuming the data contain no signal. The optimization algorithm consists of a few simple steps: (i) Kernel density estimation [6] is used to estimate the probability distributions p(x|s) and p(x|b) for the signal and background samples in a low-dimensional variable space V, where x ∈ V. The signal sample is contained in a Monte Carlo file provided as input to quaero. The background sample is constructed from all known standard model and instrumental sources.
(ii) A discriminant D(x) = p(x|s) / (p(x|s) + p(x|b)) is constructed; the semi-positive-definiteness of p(x|s) and p(x|b) restricts D(x) to the interval [0, 1] for all x. (iii) The sensitivity S of a particular threshold D_cut on the discriminant function is defined as the reciprocal of σ_95%. D_cut is chosen to maximize S. (iv) The region of variable space having D(x) > D_cut is used to determine the actual 95% CL cross section upper limit σ_95% [8].
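The steps above can be sketched numerically. The example below is a minimal one-dimensional illustration, not the quaero implementation: the toy signal and background samples are invented, the discriminant uses the form D(x) = p(x|s)/(p(x|s)+p(x|b)) consistent with the stated [0, 1] bound, and the simple s/√b figure of merit stands in for the true 1/σ_95%.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy 1-D signal and background samples (in quaero these would be the
# Monte Carlo signal file and the standard-model background estimate).
signal = rng.normal(loc=3.0, scale=0.7, size=2000)
background = rng.exponential(scale=2.0, size=2000)

# Step (i): kernel density estimates of p(x|s) and p(x|b).
p_s = gaussian_kde(signal)
p_b = gaussian_kde(background)

def discriminant(x):
    """Step (ii): D(x) = p(x|s) / (p(x|s) + p(x|b)), bounded in [0, 1]."""
    ps, pb = p_s(x), p_b(x)
    return ps / (ps + pb)

def sensitivity(d_cut):
    """Step (iii) proxy: signal efficiency over sqrt(background efficiency),
    standing in for the reciprocal of the expected sigma_95%."""
    eff_s = np.mean(discriminant(signal) > d_cut)
    eff_b = np.mean(discriminant(background) > d_cut)
    return eff_s / np.sqrt(max(eff_b, 1e-9))

# Scan thresholds and keep the most sensitive one; events with
# D(x) > d_cut define the selected region of step (iv).
cuts = np.linspace(0.05, 0.95, 19)
d_cut = max(cuts, key=sensitivity)
```

In higher dimensions the same machinery applies unchanged, since `gaussian_kde` accepts multidimensional samples.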
When provided with a signal model and a choice of variables V, quaero uses this algorithm and DØ Run I data to compute an upper limit on the cross section of the signal. Instructions for use are available from the quaero web site.
Table I shows the data available within quaero, and Table II summarizes the backgrounds. These data and their backgrounds are described in more detail in Ref. [3]. The final states are inclusive, with many events containing one or more additional jets. Kolmogorov-Smirnov tests have been used to demonstrate agreement between data and the expected backgrounds in many distributions. The fraction of events with true final state objects satisfying the cuts shown that satisfy these cuts after reconstruction is given as an "identification" efficiency (ε_ID). Because electrons are more accurately measured and more efficiently identified than muons in the DØ detector, the corresponding muon channels µ /E_T 2j and µµ 2j have been excluded from these data.
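A data-background comparison of the kind mentioned above can be performed with a two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic numbers, not DØ data, purely to show the mechanics.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Synthetic stand-ins: an "observed" distribution of some kinematic
# variable and a larger background-model sample drawn from the same shape.
data = rng.normal(loc=50.0, scale=10.0, size=500)
background = rng.normal(loc=50.0, scale=10.0, size=5000)

# The KS statistic is the maximum distance between the two empirical CDFs;
# a large p-value means no significant data/background disagreement.
stat, p_value = ks_2samp(data, background)
```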
To check standard model results, we remove WW and ZZ production from the background estimate and search (i) for standard model WW production in the space defined by the transverse momentum of the electron (p_T^e) and missing transverse energy (/E_T) in the final state eµ /E_T, and (ii) for standard model ZZ production in the space defined by the invariant mass of the two electrons (m_ee) and two jets (m_jj) in the final state ee 2j. Removing t t production from the background estimate, we search for this process (iii) in the final state e /E_T 4j, using the two variables laboratory aplanarity (A) and p_T^j, and (iv) in the final state eµ /E_T 2j, using the two variables p_T^e and p_T^j, assuming a top quark mass of 175 GeV. Including all standard model processes in the background estimate, we look for evidence of new heavy resonances. We search (v) for resonant WW production in the final state e /E_T 2j, using the single variable m_eνjj after constraining m_eν and m_jj to M_W, and (vi) for resonant ZZ production in the final state ee 2j, using the variable m_eejj after constraining m_jj to M_Z. In both cases we remove events that cannot be so constrained. To obtain a specific signal prediction, we assume that the resonance behaves like a standard model Higgs boson in its couplings to the W and Z bosons. Constraining m_eν to M_W and m_jj to M_Z, we use the quality of the fit and m_eνjj to search (vii) for a massive W′ boson in the extended gauge model of Ref. [13]. Using m_eν4j after constraining m_eν to M_W, we search (viii) for a massive narrow Z′ resonance with Z-like couplings decaying to t t. Non-resonant new phenomena are also considered. The variables m_jj and either m_T^eν or m_ee are used to search for a light Higgs boson produced (ix) in association with a W boson, and (x) in association with a Z boson. Finally, we search (xi) for first generation scalar leptoquarks with mass 225 GeV in the final state ee 2j, using m_ee and S_T, the summed scalar transverse momentum of all electrons and jets in the event. The numerical results of these searches are listed in Table III. Figures 1 and 2 present plots of the signal density, background density, and selected region in the variables considered.

Table II. Standard model backgrounds (often produced with accompanying jets) to the final states considered. VV denotes WW, WZ, and ZZ; "data" indicates backgrounds from jets misidentified as electrons, estimated using data. Monte Carlo programs (isajet [9], pythia [10], herwig [11], and vecbos [12]) are used to estimate several sources of background.

Table III. Limits on cross section × branching fraction for the processes discussed in the text. All final states are inclusive in the number of additional jets. The fraction of the signal sample satisfying quaero's selection criteria is denoted ε_sig; b is the number of expected background events satisfying these criteria; and N_data is the number of events in the data satisfying these criteria. The subscripts on h, W′, Z′, and LQ denote assumed masses, in units of GeV.
We note slight indications of excess in the searches for t t → e /E_T 4j and t t → eµ /E_T 2j (corresponding to cross section × branching fractions of σ × B = 0.39 +0.21/−0.19 pb and 0.14 +0.15/−0.08 pb) that are consistent with our measured t t production cross section of 5.5 ± 1.8 pb [14] and known W boson branching fractions. Observing no compelling excess in any of these processes, limits on σ × B are determined at the 95% CL. As expected, we find these data insensitive to standard model ZZ production (with predicted σ × B ≈ 0.05 pb), and to associated Higgs boson production (with predicted σ × B ≲ 0.01 pb). As a check of the method, quaero almost exactly duplicates a previous search for LQLQ → ee 2j [15].
quaero is a method both for automatically optimizing searches for new physics and for allowing DØ to make a subset of its data available for general use. In this Letter we have outlined the algorithm used in quaero, and we have described the final states currently available for analysis using this method. quaero's performance on several examples, including both standard model and resonant WW, ZZ, and t t production, has been demonstrated. The limits obtained are comparable to those from previous searches at hadron colliders, and the search for W′ → WZ is the first of its kind. This tool should in-
FIG. 1. The background density (a), signal density (b), and selected region (shaded) (c) determined by quaero for the standard model processes discussed in the text. From top to bottom the signals are: WW → eµ /E_T, ZZ → ee 2j, t t → e /E_T 4j, and t t → eµ /E_T 2j. The dots in the plots in the rightmost column represent events observed in the data.
Table I. A summary of the data available within quaero, including the selection cuts applied and the efficiency of identification requirements. The final states are inclusive, with many events containing one or more additional jets. Reconstructed jets satisfy p_T^j > 15 GeV and |η_det^j| < 2.5, and reconstructed electrons satisfy p_T^e > 15 GeV and (|η_det^e| < 1.1 or 1.5 < |η_det^e| < 2.5), where η_det is the pseudorapidity measured from the center of the detector.
Selection and justification of priority tasks of biogas plant management taking into account technological risks
This article is devoted to the selection and justification of priority applied control tasks in the automation of a biogas plant (BP), taking into account the risks of loss of control. The types of technological risks in the management of fermentation processes in methane tanks are considered. A reasoned analysis of quantitative risk assessments using the pairwise comparisons method is carried out. The use of the method of consistency of expert opinions in the algorithm for solving the selection and justification problem made it possible to conduct a strict analysis of the consistency of expert opinions and to identify whether the obtained estimates are random or not. Using the obtained information models for the regime parameters of anaerobic digestion processes, the relevance of developing a better control system for the optimal temperature regime of substrate heating, temperature stabilization in the methane tank, and the rational end time of the fermentation process is justified. The results obtained are consistent with improving the efficiency of BP process management.
Introduction
The development of modern society is provided, first of all, by the energy base. The threat of an energy crisis on a global scale makes the problem of developing and popularizing renewable energy sources urgent. Even today, in many countries, the active use of such sources is one of the main priorities of energy policy. Inefficient waste management and growing problems with environmental pollution are the result of the irrational use of natural resources around the world. While by 2010 biomass contributed up to 12 % of total world energy consumption, forecasts suggest that biomass as a source of renewable energy will reach 23.8 % by 2040 [1]. Biogas plants are one of the most important means of developing bioenergy. Currently, various biogas plant schemes have been developed and successfully applied around the world.
Biogas plants (BP) are of particular interest for livestock enterprises. Almost all the waste of these enterprises, especially that of large farms, is of organic origin and can be disposed of by anaerobic digestion. The use of biogas technologies makes it possible to utilize organic waste while producing biogas and high-quality fertilizer, which has a less aggressive effect on plants than unprocessed waste, helps to reduce the emission of unpleasant odors, and improves the environmental situation.
By now, many technological BP schemes have been developed. Farmers who want to use a BP need to analyze all the available technologies in depth to determine the most cost-effective solution suitable for the climatic conditions of their region. This is significantly complicated by the lack of ready-made solutions for the automation of these plants, which hinders the implementation of biogas technologies that take into account the peculiarities of the region. A well-founded set of tasks to be solved by automation and automatic control systems (ACS) for fermentation processes helps to ensure a modern technical level, quality and efficiency of BP operation.
For a long time, agriculture was not an attractive area for the application of information technology (IT) in automation, owing to the long production cycle, exposure to natural risks and large crop losses during cultivation, the inability to use high-quality information, and the small amount of quantitative information available for operational decision-making. The use of IT in the agro-industrial complex was limited to computers and software, mainly for financial management and tracking commercial transactions [2,3]. Only recently have farmers begun to use digital technologies to monitor agricultural crops and support decision-making in applied problems of agricultural production, for example, to determine the rational area of a cultivated land plot [4]. BP automation is becoming a new class of tasks to be solved, with a practical focus on the development of alternative energy sources. As a result, it is relevant to choose and justify priority tasks and to make decisions on the automation and management of BP based on advanced IT.
The scientific novelty of the problem-solving technology used is as follows:
- justification of a new class of tasks to be solved in the context of a constant increase in prices for basic energy carriers and the depletion of the Earth's hydrocarbon resources, given the presence of cheap raw materials and the accumulation of waste in the agro-industrial complex;
- the technology used does not require the involvement of a large number of experts;
- increased stability of the result of solving the problem is ensured by applying the method of consistency of expert judgments developed by us in the solution algorithm [5].
Justification of the chosen method of solving the problem
The suggested method of selecting the priority tasks of automatic control and regulation of the BP is based on an approach that makes it possible to take into account many factors. Since there are several different methods of quantitative and expert evaluation, there is uncertainty in the choice of the solution method and, therefore, a risk of making an irrational decision when choosing problems. To reduce this risk, it is necessary to eliminate the uncertainty in the choice of the solution method, that is, to remove the inconsistency between the results obtained by different methods and thus ensure the stability of the result. To increase the stability of the result, it is suggested to use the method of pairwise comparisons, which processes information by increasing the consistency of expert opinions and conducting their hierarchical evaluation, yielding the priority of tasks for their solution.
In an article by specialists of the German company Sartorius BBI Systems GmbH [6] on risk analysis and management, it is noted that risk management, together with the fermentation process management system, makes it possible to increase the efficiency of management decisions. At the same time, the analysis of existing developments shows that at the design stage of automated control systems (ACS), when choosing and justifying the tasks of regulation and management, the possible risks from loss of control of the BP are usually not considered. Consequently, it is not possible to identify the impact of each decision on the overall effectiveness, because of their aggregate influence. To increase the validity of the choice of the most significant tasks of managing the interconnected BP processes in the designed ACS, we use a pairwise comparison of the risks from loss of control.
Comparative analysis of quantitative risk assessments in the management of a biogas plant
The process of anaerobic digestion is a complex technological process. For the normal course of fermentation, it is necessary to maintain optimal conditions in the methane tank, which are provided by the functioning of control systems at different interrelated stages of biogas technology. If the modes deviate from the optimal ones, risks are possible at each of the stages. The main ones are the following:
- risks associated with choosing a non-optimal anaerobic digestion regime (R11);
- risks associated with the choice of a non-optimal mode of substrate supply to methane tanks (R12);
- risks associated with violations of the heating system of the methane tank and its insulation (R13);
- risks associated with disruption of the biogas collection system (R14);
- risks associated with violations of the biogas discharge technology, the cleaning system and the operation of other auxiliary equipment (R15).
A reasoned choice of a rational process-management decision is associated with identifying the most significant risk among the above groups. The comparison of risk assessments was carried out by pairwise comparison using the analytic hierarchy process (AHP) [7]. For a pairwise comparison of indicators, we use a scale that contains numerical indicators from 1 to 9 and their inverse values [7]. Expert judgments are expressed in the integers of this scale. A positive aspect of the AHP is the ability to check expert assessments for the inconsistency that may appear when the expert fills in the comparison matrix (CM). Taking into account the rating scale, the comparison of the impact of the main characteristics on efficiency was carried out according to the following principles [8]:
- 1: the considered risk characteristic affects the effectiveness to the same extent;
- 3: the considered risk characteristic slightly reduces the effectiveness compared to the other;
- 5: the considered risk characteristic noticeably reduces the effectiveness compared to the other;
- 7: the considered risk characteristic significantly reduces the effectiveness compared to the other;
- 9: the considered risk characteristic greatly reduces the effectiveness compared to the other;
- 2, 4, 6, 8: the corresponding intermediate values.
To establish the consistency of expert estimates, the consistency index CI = (λ_max − n)/(n − 1) is computed, where λ_max is the maximum eigenvalue of the comparison matrix and n is the number of items compared; the consistency ratio is then CR = CI/E(CI), where E(CI) is the expected value of CI for a random matrix of the same order. Expert opinions are considered to be consistent when CR ≤ 10 %. The consistency of the matrices of pairwise comparisons is achieved by adjusting their eigenvalues [5]. A positive aspect of this adjustment method is that there is no need to revise all the values of the matrix to improve its consistency. As a result of the expert survey, a matrix of risk comparisons was obtained; the results for the group of risks associated with violations in the systems for ensuring the optimal mode of the fermentation process are shown in Table 1.
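The priority-vector and consistency-ratio computation described above can be sketched as follows. The 5×5 comparison matrix here is invented for illustration (the paper's actual expert matrix is in its Table 1), and the random-index values E(CI) are Saaty's standard ones.

```python
import numpy as np

# Saaty's random consistency indices E(CI) by matrix order n;
# the n = 5 entry is what matters for the five risks R11..R15.
RANDOM_CI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_priorities(cm):
    """Principal-eigenvector priorities and consistency ratio of a
    reciprocal pairwise-comparison matrix (analytic hierarchy process)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.shape[0]
    eigvals, eigvecs = np.linalg.eig(cm)
    k = np.argmax(eigvals.real)
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalized priority vector
    ci = (lam_max - n) / (n - 1)          # consistency index CI
    cr = ci / RANDOM_CI[n]                # consistency ratio CR, must be <= 0.10
    return w, cr

# Illustrative reciprocal 1-9 scale matrix for risks R11..R15
# (entries made up for this sketch, not taken from the paper).
cm = [[1,   2,   3,   5,   7],
      [1/2, 1,   2,   4,   6],
      [1/3, 1/2, 1,   3,   5],
      [1/5, 1/4, 1/3, 1,   2],
      [1/7, 1/6, 1/5, 1/2, 1]]
weights, cr = ahp_priorities(cm)
```

The option with the largest component of `weights` is the priority one, mirroring how R11 dominates in the paper's results.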
The option with the maximum value of the priority vector is considered a priority. It can be seen from (1) that the first variant really surpasses the other analogues in terms of basic characteristics.
The resulting weight coefficients show the significance of the risks R11 and R12, since their total weight is 77.4 % (47.8 % + 29.6 %). The consequences of risks R13, R14 and R15 are considerably less significant than those of R11 and R12, which is explained by the fact that these operations are conducted after the start or end of the process, during the operation of support equipment. The results obtained agree well with the essence of the applied problems of industrial BP management and the requirements for their solution. The suboptimality of the fermentation modes (risk R11) is determined by violations of the factors affecting gas production. From a technical point of view, these factors most often include the supply systems, heating of the substrate and mixing, temperature and pH, and the composition of the substrate mixture. The supply of the substrate s to the apparatus (risk R12) serves as an effective control action on the fermentation process. The substrate supply should be adjusted slowly, excluding both the supply of large portions of cold substrate and changes in the composition of the substrate. The factors affecting gas production are most often the substrate supply systems (loading the fermenter with too much substrate (over-feeding) or too little (under-feeding)), heating of the substrate, and the temperature and mixing of the medium in the apparatus. Management of fermentation in many cases reduces to maintaining the concentration of the substrate within s_min ≤ s ≤ s_max by introducing certain portions as the composition of the substrate mixture changes during fermentation.
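The band-keeping feed rule just described (hold s within s_min ≤ s ≤ s_max by dosing small portions, never over-feeding) can be sketched as a toy simulation; all names and numbers here are ours, not the paper's.

```python
def dose(s, s_min, s_max, portion):
    """Add a feed portion only when the concentration drops below s_min,
    capped so the result never exceeds s_max (avoiding over-feeding)."""
    if s < s_min:
        return min(portion, s_max - s)
    return 0.0

# Simulate consumption at a constant rate with a dosing decision
# at every control interval.
s, s_min, s_max = 2.0, 1.5, 3.0
history = []
for _ in range(50):
    s -= 0.1                                   # substrate consumed this interval
    s += dose(s, s_min, s_max, portion=0.5)    # slow, small-portion feeding
    history.append(s)
```

A real controller would of course also account for substrate temperature and composition, which the paper identifies as the co-dominant factors.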
Let us proceed to the analysis of the consequences of risks directly related to the fermentation processes in methane tanks. The risks associated with the loss of control of the fermentation process, as a rule, reduce the yield of gas and fertilizer and, consequently, reduce the economic efficiency of the BP. For the control and regulation of the fermentation process, the following risks are identified as the main ones:
- risks associated with poor control of the heating temperature of the supplied substrate in the fermenter (R21);
- risks associated with the loss of accuracy of controlling the temperature of the medium in the fermenter (R22);
- risks associated with the loss of control of the fermentation time (R23);
- risks associated with the lack of control of changes in the composition of the substrate in the fermenter (R24);
- risks associated with insufficient speed of obtaining information about the concentration of sulfur and ammonia when the composition of the substrate mixture changes before the start of the fermentation process, and about the formation of fatty acids or the concentration of H+ ions in the produced gas, especially when the composition of the substrate mixture changes (R25).
Using expert information for these risks, a comparison matrix is obtained and the results are summarized in Table 2.
The option with the maximum value of the priority vector is considered a priority. It can be seen from (2) that the first option really surpasses the other analogues in terms of basic characteristics.
As a result of the analysis of this group of risks, it was revealed that during the fermentation process, the most significant losses will be from poor-quality control of the heating temperature of the supplied substrate in the fermenter (R21); risks associated with the loss of accuracy of control of the medium temperature in the fermenter (R22); risks associated with the loss of control of the duration of the fermentation process -the time of the optimal end of the process (R23). Their total weight is 89.3 % (48.2 % + 27 % + 14.1 %).
The following considerations confirm the estimates obtained.
The temperature of the substrate in the fermenter must be regulated with high accuracy, since anaerobic microorganisms are very sensitive to its sharp fluctuations. This is due to the limited rate of adaptation of the biomass to new conditions. Large temperature fluctuations can slow the process of anaerobic biochemical oxidation and, in critical conditions, even completely stop the formation of biogas, owing to the death of the methane-synthesizing anaerobic bacteria [8]. Plants with a psychrophilic mode of operation, because of the low temperature of 23 °C, have a long fermentation period and a relatively small gas capacity. It is known [9] that the higher the temperature of the process, the more sensitive the bacteria are to its fluctuations; this follows from the relatively narrow maximum of the curve for the thermophilic regime.
Practice has shown that microorganisms are harmed, first of all, by a rapid change in temperature; conversely, methanogenic microorganisms can adapt to different temperature values if the change is slow. Therefore, for the stability of the technological process, the absolute temperature is less important than the constancy of the temperature. When choosing the temperature regime, it is necessary to provide reliable insulation suitable for the climatic conditions of the region where the plant is installed, together with a system for automating the operation of the biogas plant and the heating system, in order to avoid sudden temperature fluctuations [10].
The temperature at which fermentation takes place has a significant effect on the duration of the process. It is known that the higher the process temperature, the faster decomposition occurs. At the same time, the combined effect of the fermentation temperature and the fermentation time on the amount of gas produced should be noted. A short fermentation under anaerobic conditions leads to incomplete processing of the substrate, while an excessively long one reduces the mass of waste disposed of and leads to economic losses [11]. Therefore, one of the main tasks of managing the BP regime is to determine the optimal duration of waste processing in the methane tank.
Control of the substrate supply should, as far as possible, exclude changes in the composition of the substrate. The latter is not an element of the automatic mode, since it concerns changing the diet of the animals before the waste is fed to the fermenters; this is confirmed by the low risk value of R24 = 7.3 %. The inability to automatically influence the concentration of sulfur and ammonia in the substrate mixture before starting the fermentation process, and the formation of fatty acids in the produced gas, is characterized by the lowest risk value of R25 = 3.3 %.
Conclusion
The most important tasks of BP management, identified taking into account technological risks and the modular principle of the organization of biogas technology, are control of the feed process and of the set heating rate of the substrate fed to the reactors, stabilization or control of the optimal temperature regime in the reactor, and control of the mixing of the medium in the fermenter. Solving these problems requires a high degree of reliability and accuracy of the functioning ACS. The effective functioning of such ACS, combined with the management of technological risks, will ensure the economical disposal of various types of organic waste. Less suitable for solution in automatic mode are the tasks related to ensuring the reliability of the heating system of the methane tank and its thermal insulation, as well as the system for collecting and unloading biogas, its purification, and the operation of other auxiliary equipment (filters, gas tanks). These tasks are solved with the use of closed piping arrangements and filters.
If the management of the BP does not eliminate the risks associated with violations of the conditions for preparing the raw materials, the heating system and substrate supply, the temperature regime during fermentation, and the operation of the gas tank, then the funds spent are, quite literally, flushed away. When choosing BP management tasks, one should take into account the above-mentioned critical operations and strive to minimize the risks associated with them.
Since the priorities of the management tasks of existing BPs are not obvious a priori, the chosen method of paired comparisons is applicable in this case: it makes it possible to determine the relative importance of different options and provides a basis for comparing each option with the others, helping to rank them. The numerical results obtained by this method agree well with the essence of the applied problems of BP management and the requirements for their solution.
Experimental Study on the Effect of Limestone Powder Content on the Dynamic and Static Mechanical Properties of Seawater Coral Aggregate Concrete (SCAC)
The development of island construction concrete can serve as a basis for the development and utilization of island resources. Complying with the principle of using local materials to configure seawater coral aggregate concrete (SCAC) that is able to meet the requirements of island and reef engineering construction could effectively shorten the construction period and cost of island and reef engineering construction. In this paper, quasi-static mechanical experiments and dynamic mechanical experiments were carried out on SCAC with different limestone powder contents. High-speed photography technology and Digital Image Correlation (DIC) were used to monitor the dynamic failure process and strain field of SCAC, and the influence of limestone powder content on the dynamic and static mechanical properties of SCAC was investigated. The results showed that, when the limestone powder content was 20% and 16%, the quasi-static compressive strength and quasi-static tensile strength exhibited the best improvement. Additionally, with increasing limestone powder content, the dynamic tensile strength of SCAC first showed an increasing trend and then a decreasing trend, reaching its maximum value when the limestone powder content was 16%. Moreover, the maximum strain value of SCAC with the same limestone powder content increased with increasing strain rate grade, showing an obvious strain rate effect.
Introduction
The design and construction of island engineering projects such as island airports, island buildings, and docks serve as a basis for the development of marine resources [1][2][3]. However, in the process of island construction, transportation costs and construction durations will undoubtedly be increased if all building materials (especially concrete) need to be transported by land [4][5][6]. In addition, due to the special environment of islands, island buildings and constructions inevitably face the threat of dynamic loading resulting from phenomena such as earthquakes and explosions [7,8]. Therefore, developing methods for producing seawater coral aggregate concrete (SCAC) that meets the requirements of island and reef engineering construction from local materials, so as to reduce economic costs and shorten construction periods while satisfying the needs of island and reef engineering construction projects, is a key issue in island and reef construction.
The research on SCAC can be traced back to World War II [9]. With the development of marine resources, the performance of SCAC is becoming a hot topic in research on the development of marine and island resources. Over nearly half a century, researchers have carried out research to differing extents on various aspects of SCAC performance (such as corrosion resistance, durability, mechanical properties, etc.), and some progress has been made [10]. Studies have shown that, due to the low strength, ease of crushing, high porosity and high permeability of coral aggregate particles, the strength of SCAC mixed with coral aggregate and seawater is not ideal [11]. The original coral concrete exhibited a relatively low compressive strength of approximately 30 MPa [10], which is not able to satisfy the requirements of island and reef engineering construction [12]. Therefore, current research on SCAC is focused on ways of improving the performance of SCAC, including its mechanical properties. Studies have shown that the mechanical properties of concrete are closely related to the composition and structure of concrete [13][14][15][16]. Some scholars have explored ways of improving the strength of SCAC by optimizing the mix ratio [8,17,18], while others have attempted to improve the performance of SCAC by adding fibers to SCAC. Some scholars have attempted the addition of plant fibers (such as sisal fibers) to SCAC to improve its performance [7,19,20]. Xu et al. [21] added glass fibers to SCAC and developed a method for analyzing the development of internal cracks in glass-fiber-reinforced polymer-sea sand concrete composites, and the strength enhancement effect of glass fibers on SCAC was studied. Liu et al. 
[22] studied the effect of the addition of carbon fibers on the mechanical properties and microstructure of carbon-fiber-reinforced coral concrete (CFRCC) by means of mechanical experiments, X-ray diffractometry, digital microscopy and scanning electron microscopy, finding that the addition of carbon fibers was able to improve the compressive strength and splitting tensile strength of concrete. Methods for improving the mechanical properties of SCAC by incorporating additives are also receiving attention [23]. Cheng et al. [24] studied the effects of the addition of waste ash (FA), blast furnace slag (BFS) and metakaolin (MK) on the mechanical properties, drying shrinkage, carbonation and chloride ion permeability of coral sand concrete (CSC), and compared the results with ordinary Portland cement (OPC) and natural aggregate concrete (NAC), concluding that the compressive strength of CSC was slightly lower, but possessed better chloride ion permeability. Islands possess abundant reef limestone resources, and the main component of reef limestone is CaCO3 [25], which is the main raw material for the production of limestone powder. Studies have shown that limestone powder has a positive effect on improving the mechanical properties of concrete [26][27][28][29]. However, there is still a lack of reports studying the addition of limestone powder to SCAC. Whether limestone powder can also improve the mechanical properties of SCAC, and the effect of limestone powder content on the mechanical properties of SCAC, deserve further exploration.
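As a sketch of how SHPB signals are reduced to specimen-level quantities, the snippet below applies the standard one-dimensional two-wave analysis (strain rate from the reflected pulse, stress from the transmitted pulse). All pulse shapes and bar/specimen parameters are synthetic placeholders, not values from the experiments discussed here.

```python
import numpy as np

# Standard two-wave SHPB data reduction (one-dimensional wave theory).
# All numbers below are illustrative placeholders.
E_bar = 210e9                      # Young's modulus of the pressure bars, Pa
c0 = 5100.0                        # elastic wave speed in the bars, m/s
A_bar = np.pi * 0.05**2 / 4        # bar cross-section (50 mm diameter), m^2
A_spec = np.pi * 0.048**2 / 4      # specimen cross-section, m^2
L_spec = 0.025                     # specimen length, m

t = np.linspace(0.0, 200e-6, 400)              # time axis, s
dt = t[1] - t[0]
eps_r = -4.0e-3 * np.sin(np.pi * t / 200e-6)   # synthetic reflected strain pulse
eps_t = 0.4e-3 * np.sin(np.pi * t / 200e-6)    # synthetic transmitted strain pulse

strain_rate = -2.0 * c0 / L_spec * eps_r       # specimen strain rate, 1/s
strain = np.cumsum(strain_rate) * dt           # specimen strain (time integral)
stress = E_bar * A_bar / A_spec * eps_t        # specimen stress, Pa
```

With these placeholder values the peak strain rate lands inside the 10^1 to 10^4 s^−1 window quoted above for SHPB testing.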
In addition, the problem of how to improve the dynamic mechanical properties of SCAC when faced with the risks of dynamic loading resulting from phenomena such as earthquakes or explosions has also drawn the attention of researchers. Ma et al. [30] conducted dynamic impact loading tests using a split Hopkinson pressure bar (SHPB) system with a diameter of 100 mm, and the effects of strain rate on the uniaxial compressive strength, energy dissipation, fractal dimension, and failure morphology of SCAC were studied. Ma et al. [20,31] proposed sisal-fiber-reinforced CASC (SFCASC) with a compressive strength of 77.3 MPa. Dynamic mechanical experiments were performed on the SFCASC by SHPB, and the SFCASC was found to exhibit an obvious strain rate effect. It can be seen from the above research that a number of researchers have used the SHPB test system to investigate the dynamic mechanical properties of SCAC. In fact, SHPB experimental technology has been widely used in the study of the dynamic mechanical properties of concrete materials [32] due to its good performance when testing dynamic mechanical properties at strain rates in the range of 10^1~10^4 s^−1 [33]. Moreover, with the development of SHPB technology, researchers have combined high-speed photography [34], coupled static-dynamic loading [35], Digital Image Correlation (DIC) [36] and other technologies with the traditional SHPB experimental system, greatly expanding the use scenarios of SHPB test systems. In addition, DIC technology has attracted the attention of researchers because the strain field of a specimen can be measured directly, using non-contact methods, during mechanical experiments [37]. Therefore, in this paper, in order to explore the influence of limestone powder content on the dynamic and static mechanical properties of SCAC, SCAC specimens with different limestone powder contents were prepared.
Quasi-static and dynamic mechanical experiments were carried out on SCAC with different limestone powder contents using an RMT testing machine, an SHPB system and a high-speed camera. The properties tested in the dynamic and static experiments included static compressive strength, static tensile strength, dynamic tensile strength, DIF, dynamic strain field and failure pattern. The influence of limestone powder content on the dynamic and static mechanical properties of SCAC was studied.
Raw Materials
Reef limestone (Figure 1) and coral sand (Figure 2) collected directly from the reef were selected as the coarse aggregate and fine aggregate, respectively, of the SCAC. The basic physical properties of the reef limestone and coral sand shown in Table 1 were tested in accordance with the Chinese standard GB/T 14685-2022 [38]. Figure 2b shows the particle size distribution of the coral sand, tested using the method described in the literature [39]. The binding materials used in the experiment mainly included cement, limestone powder and slag powder (Figure 3). The cement was P.O 52.5 Portland cement, produced by Zhuchengyangchun Cement Co., Ltd. (Weifang, China), which satisfied the requirements of the Chinese standard GB 175-2007 [40]. The polycarboxylate superplasticizer, produced by the Hongxiang Construction Admixture Factory in Laiyang City, Shandong Province, satisfied the requirements of the Chinese standard GB 8076-2008 [41]. Artificial seawater, prepared in accordance with the literature [12], was used rather than fresh water in the formulation of the SCAC.
Mix Proportion and Sample Preparation
The mix proportions of SCAC in this study, shown in Table 2, were calculated using Equation (1), in accordance with the Chinese standard JGJ 51-2002 [42]. The specimens subjected to quasi-static mechanical testing and dynamic mechanical testing were Φ50 mm × 100 mm and Φ65 mm × 35 mm cylindrical specimens, respectively, with 3 molded specimens in each group. The manufacturing process of the concrete specimens is shown in Figure 4. The concrete was compacted by vibration during pouring to ensure the uniformity of the coarse aggregate in the concrete. All SCAC specimens underwent 28 days of curing under the same conditions, following the curing method described in the literature [43]. To ensure the smoothness of the specimen surfaces, the upper and lower surfaces of each specimen were polished using a grinding machine after curing.

f_cu,o = f_cu,k + 1.645σ (1)

where f_cu,o and f_cu,k represent the trial strength of the lightweight aggregate concrete and the standard cube compressive strength value of the lightweight aggregate concrete, respectively; σ represents the standard deviation of the strength of the lightweight aggregate concrete.

Note: "L" represents limestone powder, while "S" represents slag powder, and the number after each letter indicates the dosage of the corresponding admixture. For example, "L8S32" denotes SCAC with 8% limestone powder and 32% slag powder.
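As a minimal sketch of the trial-strength calculation in Equation (1), assuming the JGJ 51-2002 form f_cu,o = f_cu,k + 1.645σ (the input values below are illustrative, not data from this study):

```python
# Trial (target) mix strength per Equation (1): f_cu,o = f_cu,k + 1.645 * sigma.
# The example inputs are assumptions for illustration only.

def trial_strength(f_cuk: float, sigma: float) -> float:
    """Return the trial strength f_cu,o (MPa) from the standard cube
    compressive strength f_cu,k (MPa) and the strength standard deviation
    sigma (MPa)."""
    return f_cuk + 1.645 * sigma

# Example: a 40 MPa grade mix with an assumed standard deviation of 5 MPa.
print(trial_strength(40.0, 5.0))  # 48.225
```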
Static Compressive and Tensile Strength Experiment
Quasi-static compressive and tensile testing of the SCAC was carried out using an RMT-150B experimental machine (Figure 5, Wuhan Institute of Geotechnical Mechanics, Wuhan, China). The RMT-150B rock mechanics test system consists of four parts: the host, the hydraulic system, the servo control system, and the computer control and processing system [44]. The Brazilian disc method [45] was applied to test the tensile strength of the specimens, in which a diametral compressive load induces tensile stress across the loaded diameter of the disc.
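The Brazilian disc conversion from peak load to splitting tensile strength can be sketched as follows, assuming the standard relation σ_t = 2P/(πDB); the peak load value is an illustrative assumption, while the disc dimensions follow the Φ65 mm × 35 mm specimens described above:

```python
import math

# Brazilian disc splitting tensile strength: sigma_t = 2P / (pi * D * B).
# The 25 kN peak load is an assumed example value, not measured data.

def brazilian_tensile_strength(peak_load_N: float, D_m: float, B_m: float) -> float:
    """Return the splitting tensile strength (Pa) for a disc of diameter D (m)
    and thickness B (m) failing at the given peak load (N)."""
    return 2.0 * peak_load_N / (math.pi * D_m * B_m)

# Example: 25 kN peak load on a 65 mm x 35 mm disc -> roughly 7 MPa.
sigma_t = brazilian_tensile_strength(25e3, 0.065, 0.035)
print(f"static tensile strength: {sigma_t / 1e6:.2f} MPa")
```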
Dynamic Mechanical Properties Experiment
The Split Hopkinson Pressure Bar (SHPB) is a test system that can be used to effectively test the dynamic mechanical properties of materials at strain rates in the range of 10^1~10^4 s^−1, and it has been widely used for testing the dynamic mechanical properties of rock, concrete and other geotechnical engineering materials [46,47]. The SHPB consists of a launcher, a bullet, an incident bar, a transmission bar, a buffer bar, strain gauges attached to the bars, a speed test system, a dynamic strain meter, and an analysis system. A schematic diagram of the SHPB test device is presented in Figure 6.
The transmission of the stress wave in the SHPB test is shown in Figure 7. During the SHPB test, under the impetus of high-pressure gas, the bullet leaves the launcher and impacts the incident bar, generating an incident stress wave. When the incident stress wave propagates between the incident bar and the specimen, the specimen is compressed in the direction along the bar. Because of the difference in wave impedance between the bar and the specimen, part of the incident stress wave is reflected as a reflected stress wave, while the remainder penetrates into the transmission bar as a transmitted stress wave. These three waves are measured by resistance strain gauges attached to the incident bar and the transmission bar, respectively. Finally, the electrical signals collected by the strain gauges are output by the computer acquisition system, and the impact data of the material are obtained.
Two assumptions should be satisfied when analyzing the SHPB test results [48]: (1) One-dimensional stress wave assumption: the wavelength of the propagating stress wave must be much larger than the diameter of the compression bar, the bar must remain elastic and may undergo only axial deformation, and the stress wave may propagate only along the axial direction; (2) Stress uniformity assumption: the specimen must be small enough that the stress and strain state inside it is evenly distributed during loading. The formulas used for data processing can be derived on the basis of these two assumptions (Equation (2)) [30], and the dynamic tensile stress of the specimen can be obtained from the data measured in the SHPB experiment, in line with the principle of the dynamic Brazilian disc splitting experiment (Equation (3)) [49].
where A and E represent the cross-sectional area and elastic modulus of the bar, respectively; D and B represent the diameter and thickness of the specimen, respectively; ε_i(t), ε_r(t) and ε_t(t) represent the incident strain, reflected strain and transmitted strain, respectively; ε_s(t), σ_s(t) and ε̇_s(t) represent the strain, stress and strain rate of the specimen, respectively.
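A minimal sketch of the dynamic Brazilian-disc data reduction, assuming the standard two-wave form in which, under stress uniformity, the transmitted force is P(t) = A·E·ε_t(t) and the dynamic tensile stress is σ_t(t) = 2P(t)/(πDB); the bar properties and the strain history below are illustrative assumptions, not this study's measured signals:

```python
import math

# Dynamic Brazilian-disc processing sketch (two-wave assumption):
# P(t) = A * E * eps_t(t);  sigma_t(t) = 2 * P(t) / (pi * D * B).
# A 50 mm steel bar and a synthetic transmitted-strain history are assumed.

A = math.pi * (0.05 / 2) ** 2   # bar cross-sectional area, m^2 (50 mm bar assumed)
E = 210e9                       # bar elastic modulus, Pa (steel assumed)
D, B = 0.065, 0.035             # disc diameter and thickness, m

def dynamic_tensile_stress(eps_t: list[float]) -> list[float]:
    """Map a transmitted strain history to a dynamic tensile stress history (Pa)."""
    return [2.0 * A * E * e / (math.pi * D * B) for e in eps_t]

# The peak dynamic tensile strength is the maximum of the stress history.
eps_t_history = [0.0, 2e-5, 6e-5, 9e-5, 5e-5, 1e-5]   # assumed signal
sigma_t = dynamic_tensile_stress(eps_t_history)
print(f"peak dynamic tensile strength: {max(sigma_t) / 1e6:.1f} MPa")
```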
Digital Image Correlation Method
The DIC method (Figure 8) can be used to analyze the information at a specific point based on the change in the shape and position of the speckles on the surface of the specimen when force is applied to the object [50]. In order to ensure that the image conditions satisfy the recognition requirements, the specimen needs to be sprayed in a 'scattered spot' (speckle) manner (Figure 8b). The basic principle of the digital image correlation method (Figure 8d) is to select a square image sub-region, where the center of the sub-region is the pixel point to be tracked.
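The sub-region matching idea can be sketched in pure Python using the zero-normalized cross-correlation (ZNCC) coefficient, one common similarity measure in DIC; the tiny synthetic "images" and the 3 × 3 subset size are assumptions for illustration only:

```python
import math

# Minimal DIC sketch: a square subset centred on a pixel in the reference
# image is compared against candidate subsets in the deformed image using
# ZNCC; the best-matching location gives the subset displacement.

def zncc(a: list[list[float]], b: list[list[float]]) -> float:
    """Zero-normalized cross-correlation between two equally sized subsets."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) * sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0

def subset(img, r, c, h):
    """Extract a (2h+1)-pixel square subset centred at row r, column c."""
    return [row[c - h:c + h + 1] for row in img[r - h:r + h + 1]]

# Synthetic speckle at (2, 2) in the reference; it moves one pixel right.
ref = [[0, 0, 0, 0, 0, 0],
       [0, 0, 9, 0, 0, 0],
       [0, 9, 9, 9, 0, 0],
       [0, 0, 9, 0, 0, 0],
       [0, 0, 0, 0, 0, 0]]
defo = [[0, 0, 0, 0, 0, 0],
        [0, 0, 0, 9, 0, 0],
        [0, 0, 9, 9, 9, 0],
        [0, 0, 0, 9, 0, 0],
        [0, 0, 0, 0, 0, 0]]
target = subset(ref, 2, 2, 1)
scores = {c: zncc(target, subset(defo, 2, c, 1)) for c in (1, 2, 3, 4)}
best = max(scores, key=scores.get)
print(f"subset displacement: {best - 2} pixel(s)")  # speckle moved +1 column
```

A full DIC implementation would repeat this search for every subset of a dense grid and at sub-pixel resolution, yielding the displacement and strain fields used in the dynamic tests.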
Static Test Results and Analysis
The quasi-static compressive strength and quasi-static tensile strength of SCAC with different mixing ratios of limestone powder and slag powder were statistically analyzed, and the results are shown in Figure 9. Figure 10 shows the failure morphology of SCAC under static compressive and static tensile tests with different dosage ratios of limestone powder and slag powder.
It can be seen from Figure 9a that the addition of limestone powder and slag powder influenced the static compressive strength of SCAC, and the ratio of limestone powder to slag powder had a significant effect on the quasi-static compressive strength of the coral sand concrete. Compared with the SCAC (L0S0) without the addition of limestone powder and slag powder, the static compressive strength of SCAC with 8~20% limestone powder and 20~32% slag powder increased with increasing dosage of limestone powder and decreasing dosage of slag powder. For the different dosage ratios of limestone powder to slag powder (2:8, 3:7, 4:6, 5:5), the quasi-static compressive strength of SCAC increased by 9.53%, 12.94%, 14.75% and 17.97%, respectively, reaching a maximum value of 57.4 MPa when the limestone powder dosage was 20% and the slag powder dosage was 20%. However, when the limestone powder content was greater than 20%, the quasi-static compressive strength of SCAC showed a downward trend with increasing limestone powder content, and when the limestone powder content was 32% and the slag powder content was 8%, the addition of limestone powder and slag powder caused the quasi-static compressive strength of SCAC to decrease by 3.48%.
By comparing Figure 9a,b, it can be found that the limestone powder and slag powder contents also affected the quasi-static tensile strength of the SCAC, and it can be observed that the tensile strength first increased and then decreased with increasing limestone powder dosage.
It can be observed from Figure 10 that the failure morphology of SCAC specimens under quasi-static compression loading is dominated by shear failure accompanied by intermediate tensile failure. The expansion and penetration of cracks are the main factors that led to the failure of the SCAC specimens. Different limestone powder and slag powder contents affected the failure morphology and post-failure morphology of the SCAC. With increasing limestone powder content, the axial cracks of the SCAC gradually decreased, the inclined cracks gradually increased, and the failure morphology gradually developed from complete crushing/splitting failure to oblique shear failure with a tensile effect. When the limestone powder content was 20% and the slag powder content was 20% (L20S20, Figure 10e), the SCAC underwent typical oblique shear failure. The specimen broke into two main fragments along the shear surface, and the specimen after failure still had a certain bearing capacity. However, when the limestone powder content exceeded 20%, the failure morphology of SCAC began to develop into the form of tensile failure. The expansion of multiple parallel axial cracks led to a decrease in the bearing capacity of the fragments after the failure of the specimens, and the degree of fragmentation increased. After the static tensile test, the SCAC underwent typical radial splitting failure, and the fracture end surfaces were relatively flat. Different limestone powder and slag powder contents affected the static tensile failure form and post-failure form of the SCAC. With increasing limestone powder content, more than one fracture surface began to appear. The increase in limestone powder content and the decrease in slag powder content affected the crack resistance of the SCAC.
Based on the experimental phenomena described above, it can be seen that limestone powder and slag powder contents between 8~20% and 20~32%, respectively, facilitated the improvement of the quasi-static compressive strength and tensile strength of the coral concrete. Studies have shown that the interaction between sulfate and Ca(OH)2 affects the strength of concrete during the hydration process, and the addition of limestone powder and slag powder to concrete can effectively alleviate the effect of sulfate and Ca(OH)2 [29]. In the early stage of cement hydration, CaCO3 particles in the limestone powder act as crystal nuclei for the Ca(OH)2 and C-S-H produced by cement hydration, accelerating the hydration of clinker minerals such as C3S [51], thus effectively improving the early strength of the concrete. In the later stage of cement hydration, a higher degree of reaction of C3S may relatively decrease the content of C2S, which may be responsible for the subsequent strength development [52].

Figure 11 shows the typical dynamic stress equilibrium verification of SCAC specimens with various ratios of limestone powder and slag powder (LS:SG = 2:8, 4:6, 5:5, 8:2) in the SHPB experiment. The validity of the experimental data was determined on the basis of the dynamic stress balance in the SHPB experiment, according to the method described in the literature [53]. It can be observed from Figure 11 that ε_t(t) and ε_i(t) + ε_r(t) followed a similar trend in the SHPB experiments on the SCAC, meaning that the dynamic stress equilibrium conditions were basically satisfied. The satisfaction of the dynamic stress equilibrium conditions provided favorable evidence for constant strain rate loading and verified the validity of the experimental results.
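The equilibrium verification amounts to checking that the force on the incident face, proportional to ε_i(t) + ε_r(t), tracks the force on the transmitted face, proportional to ε_t(t). A minimal sketch of that check, using assumed synthetic strain histories rather than measured data:

```python
# Dynamic stress equilibrium check sketch: compare (eps_i + eps_r) against
# eps_t sample by sample and report the peak relative mismatch.
# All signals below are assumed synthetic histories for illustration.

def equilibrium_error(eps_i, eps_r, eps_t):
    """Return the peak mismatch between (eps_i + eps_r) and eps_t,
    normalized by the peak transmitted strain."""
    front = [i + r for i, r in zip(eps_i, eps_r)]
    peak = max(abs(t) for t in eps_t)
    return max(abs(f - t) for f, t in zip(front, eps_t)) / peak

# Assumed signals: the reflected strain is negative, so eps_i + eps_r ~ eps_t.
eps_i = [0.0, 5e-5, 1.0e-4, 8e-5, 3e-5]
eps_r = [0.0, -2e-5, -4.0e-5, -3.5e-5, -1.3e-5]
eps_t = [0.0, 3e-5, 5.8e-5, 4.5e-5, 1.7e-5]
err = equilibrium_error(eps_i, eps_r, eps_t)
print(f"peak equilibrium mismatch: {err:.1%}")  # small => stress balance satisfied
```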
Stress-Strain Curve
The dynamic tensile stress-strain curves with different limestone powder and slag powder contents under different strain rates were obtained by processing the original waveform data, as shown in Figure 12. It should be noted that the "low", "medium" and "high" strain rate levels mentioned here are only used to facilitate their naming, and do not express the same concepts as "low strain rate", "medium strain rate" and "high strain rate", in the strict sense [33]. It can be observed that the dynamic tensile stress-strain curve of SCAC has similar characteristics to those of other concrete materials in the SHPB experiment, showing four stages: a compaction stage, an elastic stage, a crack development stage, and a failure stage.
• Compaction stage (I): because there are fine cracks inside the concrete that close under the action of external forces, the curve shows a slow, strain-hardening upward trend.
• Elastic stage (II): the specimen undergoes elastic-like deformation, and the curve grows in a nearly linear manner.
• Crack generation and propagation stage (III): microcracks begin to appear inside the specimen, and as the stress increases, the concrete specimen is damaged; the cracks inside the concrete form rapidly, their density gradually increases, the stress reaches its maximum value, and the concrete reaches its maximum bearing capacity.
• Fracture and failure stage (IV): the strain continues to increase, while the bearing capacity of the concrete decreases. At this stage, the micro-cracks in the concrete gradually penetrate until the specimen is completely destroyed.
Strain Rate Effect
In addition to the characteristics of the stress-strain curve, SCAC also showed an obvious strain rate effect, similar to other concrete materials [36,54,55]. It can be seen from Figure 13a that the tensile stress-strain curves of SCAC indicated an increase in peak stress with increasing strain rate level.
Dynamic Increase Factor (DIF) [56] (Equation (4)), a common index, can be used to investigate the sensitivity of materials to strain rate. The peak stress and DIF of the SCAC in the experiment are shown in Figure 13b.
where σ t and σ s represent the dynamic tensile strength and static tensile strength, respectively, of SCAC. It can be observed from Figure 13a that the dynamic tensile strength of SCAC mainly fluctuates in the range of 7.8 MPa to 46.01 MPa. The dynamic tensile strength of SCAC with the same ratio increased with increasing strain rate grade. In addition, with increasing strain rate, the DIF of the SCAC under different ratio conditions also showed an increasing trend, with the DIF of the SCAC varying from 1.39 to 6.91. In order to better observe the effect of varying limestone powder and slag powder contents on the dynamic tensile strength and DIF of SCAC, the dynamic tensile strength and DIF of SCAC with different strain rates and different contents were determined, and the results are statistically shown in Figures 14 and 15.
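Equation (4) is a simple ratio of dynamic to static tensile strength. A minimal sketch follows; the static strength value used in the example is an assumption chosen only so that the sample numbers fall in the reported DIF range, not a value measured in this study.

```python
# Sketch of Equation (4): DIF = sigma_t / sigma_s, with sigma_t the dynamic
# tensile strength at a given strain rate and sigma_s the static tensile
# strength of the same mix. The 5.6 MPa static strength below is assumed
# for illustration only.

def dynamic_increase_factor(sigma_t: float, sigma_s: float) -> float:
    """Ratio of dynamic to static tensile strength (dimensionless)."""
    if sigma_s <= 0:
        raise ValueError("static tensile strength must be positive")
    return sigma_t / sigma_s

# With an assumed static strength of 5.6 MPa, the lowest reported dynamic
# strength of 7.8 MPa corresponds to a DIF near the lower reported bound.
print(round(dynamic_increase_factor(7.8, 5.6), 2))
```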
It can be observed from Figures 14 and 15 that under the same strain rate level, the dynamic tensile strength of SCAC increased at the beginning and then decreased with increasing limestone powder content, reaching its maximum value when the limestone powder content was 16% and the slag powder content was 24%. However, the addition of limestone powder did not completely guarantee an improvement in the dynamic tensile strength of SCAC. At all strain rate grades, the dynamic tensile strength of SCAC decreased under the condition of 32% limestone powder content and 8% slag powder content, and this attenuation effect was more obvious at low strain rate levels (171.12~153.85 s −1 ). At the same strain rate level, the DIF of SCAC did not show the same trend as dynamic tensile strength. With increasing limestone powder content, the DIF of SCAC exhibited a fluctuation phenomenon around a specific value, which was 1.57, 3.58 and 6.25 at different strain rate levels. In addition, at all strain rate levels, the DIF of SCAC reached its maximum value under the condition of 24% limestone powder content and 16% slag powder content, and this maximum value increased with increasing strain rate level. These rules seem to correspond to the failure pattern of SCAC in the SHPB experiment (Figure 16). It can be seen from Figure 16 that under dynamic tensile conditions, although the SCAC specimens with typical splitting failure did not break completely, the failure morphology of SCAC specimens showed a trend in which the particle size of the broken slag decreased, with the amount of broken slag increasing, as well as breaking more thoroughly, with increasing strain rate grade.
As mentioned above, the addition of limestone powder to SCAC affects the strength of the SCAC by affecting the C-S-H in the concrete.
The addition of limestone powder affects the quasi-static strength of the SCAC; under dynamic loads of high intensity and short duration, however, this effect was not obvious. C-S-H can fill the pores and micro-cracks in SCAC and reduce the porosity of the concrete, and this bonding effect of the limestone powder is what appears macroscopically. However, once the limestone powder content had exceeded a certain threshold, the resulting C-S-H was not only unable to continuously fill the pores and microcracks, but also tended to increase the porosity [57]. This can be proved by performing both quasi-static and dynamic mechanical experiments on SCAC. In addition, under dynamic load, the rapid tensile effect caused by the load input in a short time was much greater than the enhancement of the bonding ability of SCAC caused by C-S-H. Therefore, in the dynamic splitting test, the dynamic tensile strength of SCAC with different limestone powder contents at the same strain rate level showed obvious differences, and these differences increased with increasing strain rate grade.
Failure Process and Strain Field of SCAC in the SHPB Experiment
The failure process of SCAC in the SHPB experiment was observed by means of high-speed camera technology, and the strain field change in the SCAC during the dynamic tensile process was observed by DIC technology. Figure 17 shows the strain field distribution of the SCAC with different limestone powder contents along the Y direction under typical strain rate conditions (301.1~343.17 s −1 ). The upward stress is positive, and the downward stress is negative. In addition, in order to better compare the strain field distribution of the SCAC under impact loading under different strain rate conditions, in Figure 18, the strain field of the SCAC in the failure stage was quantified under different strain rate conditions. It can be seen from Figure 17 that the development process of the strain field of the SCAC with different limestone powder contents possessed a unified trend. From left to right are the images of the initial loading stage, the crack generation stage, the crack propagation stage, and the complete failure stage of the specimen. In the early stage of impact loading, the deformation and strain accumulation of the SCAC first appeared in the area of contact with the bar. As the impact load continued to act on the SCAC specimen, symmetrical cracks began to appear along the symmetrical direction in the SCAC strain field, symmetrically distributed from the midline to the edge of the specimen, which began to expand.
In the later stage of impact loading, the propagation and interconnection of cracks caused penetrating cracks to appear, followed by the failure of the specimens. The above phenomenon is consistent with the results of the numerical calculation of dynamic splitting in SCAC carried out by Ma et al. [31] using LS-DYNA software, where it was found that the external failure of the SCAC specimen was greater than the internal failure, and the central failure was greater than the edge failure. In addition, when comparing the strain field of the SCAC in the failure stage under different strain rate conditions (Figure 18), the maximum strain of SCAC showed an increasing trend with the increase in strain rate level, thus demonstrating obvious strain rate sensitivity. However, at the same strain rate grade, the change trend of the maximum strain value of SCAC with different limestone powder contents was not consistent with the change trend for stress.
The Focinator - a new open-source tool for high-throughput foci evaluation of DNA damage
Background: The quantitative analysis of foci plays an important role in many cell biological methods such as counting of colonies or cells, organelles or vesicles, or the number of protein complexes. In radiation biology and molecular radiation oncology, DNA damage and DNA repair kinetics upon ionizing radiation (IR) are evaluated by counting protein clusters or accumulations of phosphorylated proteins recruited to DNA damage sites. Consistency in counting and interpretation of foci remains challenging. Many current software solutions describe instructions for time-consuming and error-prone manual analysis, provide incomplete algorithms for analysis or are expensive. Therefore, we aimed to develop a tool for costless, automated, quantitative and qualitative analysis of foci.
Methods: For this purpose we integrated a user-friendly interface into ImageJ and selected parameters to allow automated selection of regions of interest (ROIs) depending on their size and circularity. We added different export options and a batch analysis. The use of the Focinator was tested by analyzing γ-H2.AX foci in murine prostate adenocarcinoma cells (TRAMP-C1) at different time points after IR with 0.5 to 3 Gray (Gy). Additionally, measurements were performed by users with different backgrounds and experience.
Results: The Focinator turned out to be an easily adjustable tool for automation of foci counting. It significantly reduced the analysis time of radiation-induced DNA-damage foci. Furthermore, different user groups were able to achieve a similar counting velocity. Importantly, there was no difference in nuclei detection between the Focinator and ImageJ alone.
Conclusions: The Focinator is a costless, user-friendly tool for fast high-throughput evaluation of DNA repair foci. The macro allows improved foci evaluation regarding accuracy, reproducibility and analysis speed compared to manual analysis.
As an innovative option, the macro offers a combination of multichannel evaluation, including colocalization analysis, and the possibility to run all analyses in a batch mode. Electronic supplementary material: the online version of this article (doi:10.1186/s13014-015-0453-1) contains supplementary material, which is available to authorized users.
Background
Radiotherapy (RT) is a mainstay in modern cancer treatment. To evaluate the efficacy of IR alone or in combination with chemotherapy or drugs inducing DNA damage and targeting DNA repair, radiation biologists usually count fluorescence-labeled protein foci in the nucleus using fluorescence microscopy. For this purpose the proteins of interest or their specific phosphorylated isoforms are visualized by immunofluorescence using protein-specific (e.g. p53 binding protein 1 (53BP1)) or phospho-protein-specific (e.g. phospho-histone 2.AX (γ-H2.AX)) antibodies directly linked to a fluorophore or detected by using a secondary fluorophore-labeled antibody. Another possibility is to fuse the proteins of interest with fluorescent proteins, such as green fluorescent protein (GFP) [1]. This method takes advantage of the fact that many repair proteins and repair-associated proteins, such as γ-H2.AX, 53BP1 and RAD51, accumulate and co-localize at the site of DNA damage [2][3][4][5][6][7][8]. To evaluate formation and processing of these DNA damage foci, a reliable and accurate image analysis is required.
Due to the wide use of methods retrieving images of foci and cells, multiple evaluation procedures have been developed. However, the programs currently available for counting and analysis of nuclei are often based on manual analysis. Several publications showed that manual counting of foci is time-consuming, frequently inaccurate and subject to investigator-related bias. Conversely, automated computer-based foci analysis is considered to yield better sensitivity, comparability and consistency of the data [9][10][11][12][13]. However, some current software solutions for automated analysis are unsatisfactory, as they provide limited algorithms, are stand-alone tools or are simply expensive [13,14]. For example, Böcker et al. developed a software package based on a cost-intensive program, ImageProPlus (Media Cybernetics Inc., US) [12]. Another commercially available package is IMARIS (Bitplane AG) [13,15]. Moreover, not all existing tools support the complete range of file formats commonly used for image acquisition [16]. The FociCounter, a freely available, non-customizable stand-alone tool, does not support all formats, for example files used by Zeiss (CZI and ZVI) and by Leica (LIF). Moreover, the FociCounter only allows manual selection of cells [17]. However, integration of automated cell selection and a batch mode performing automated analysis of various pictures would result in desirable time-saving steps for data analysis. TRI2 and CellProfiler are stand-alone tools written with the programming language Python [18][19][20]. One disadvantage of stand-alone tools can be the lack of updates by an established platform. In contrast, the platform of ImageJ offers support, frequent updates and the possibility to change the source code or to link it with additional programming tools [9-13, 21, 22].
ImageJ-based solutions have already been described by several authors and institutions, but these solutions frequently provide incomplete algorithms or macros not suited for immediate use [23,24]. For example, Cai and colleagues published the source code for an ImageJ macro without an interface, i.e. without a menu and buttons [25], and Du and colleagues developed a tool for foci picking without batch mode or automated foci selection [26]. The FindFoci plugin for ImageJ supports self-learning parameters but does not support multi-channel analysis [10]. Thus, there was a demand for the development of an easy-to-use, customizable and reliable software solution with an intuitive interface combined with an automated open-source platform, like ImageJ [13]. To overcome these limitations, we have developed an automated, adjustable and user-friendly macro based on ImageJ, named "Focinator", for quantitative and qualitative analysis of nuclei, γ-H2.AX foci and other biological foci, with the possibility of easy data export and processing. In addition, we integrated an option for multi-channel analysis, e.g. 53BP1 foci and γ-H2.AX foci in one image file, and implemented the option for colocalization studies. This option enables the determination of absolute numbers and percentages of colocalized foci. We used ImageJ as an established platform, as it is an image processing software that is routinely used by many investigators to analyze western blots, fluorescence cell images [13,27], immunohistochemical probes [28], DNA double-strand break repair [29] and cell size [30], and to quantify soft tissue in tomography images [21] or wound healing [31]. We adapted the Focinator based on algorithms published by the Light Microscopy Core Facility - Duke University and Duke University Medical Center by adding additional setting preferences [24,25]. To further facilitate data analysis, a program for automated analysis and data export into a spreadsheet was integrated.
Chemicals, antibodies and drugs
Antibodies linked with Alexa Fluor 647 against γ-H2.AX protein were obtained from Becton Dickinson (Heidelberg, Germany). Hoechst 33342 from Invitrogen (Eugene, USA) and DAKO Fluorescent mounting medium from Dako North America Inc. (Carpinteria, USA) were used. All other chemicals were purchased from Sigma-Aldrich (Deisenhofen, Germany) if not otherwise specified.
γ-H2.AX immunofluorescence
Cells were irradiated with 3 Gy and fixed and permeabilized (3 % para-Formaldehyde (PFA) and 0.2 % Triton X-100 in PBS buffer; 15 min; room temperature) at different time points (30 min, 1, 2, 4, 6, 8 and 24 h) after irradiation. After washing, cells were blocked overnight with 2 % goat serum in PBS buffer. Staining with the Alexa Fluor 647-conjugated anti-γ-H2.AX antibody was performed for one hour at a 1:75 dilution in blocking buffer. Samples were washed three times with PBS and stained for 30 min in the dark with 0.2 % (w/v) Hoechst 33342 in PBS. Samples were again washed three times with PBS, mounted with DAKO mounting medium and stored at 4°C in the dark. Single layer fluorescence images were taken with a Zeiss AxioCam MRm (1388 × 1040 pixels) mounted at a Zeiss Axio Observer Z1 fluorescence microscope with Plan-Apochromat 63x/1.40 Oil M27 lens, 49 DAPI filter, 78 HE ms CFP/YFP filter (γ-H2.AX AF-647 detection) and "ApoTome" transmission grid (High Grid: PH/VH with 5 phase images) (Carl Zeiss, Goettingen, Germany). Images were taken with exposure times of 500 ms for the DAPI channel and 1500 ms for the Alexa Fluor 647 antibody. The pictures were saved as 16-bit Zeiss Vision Image ZVI files with no further editing.
Software and programming
The "Focinator" was programmed as a macro for automated quantitative and qualitative analysis of foci with the open-source software ImageJ, a public-domain Java image processing program developed at the National Institutes of Health (NIH) [14,32]. ImageJ is designed with an open architecture and provides extensibility via Java plugins and automation with macros. Custom-built tools can be developed to solve image processing or analysis problems [21,22]. ImageJ is available for Windows, Mac OS, Mac OS X and Linux. It has its own ImageJ macro language that is able to control ImageJ procedures and to automate series of actions including variables and user-defined functions. The macro, instructions and support are obtainable at http://www.focinator.oeck.de.
In addition, a tool for batch mode and import of data from foci-count and ROI analysis was developed: the batch mode was programmed using R, a free software environment for statistical computing [33]. The R script allows automated opening of images, foci evaluation and the direct export of foci data into a Microsoft Excel spreadsheet. Excel spreadsheets enable further statistical analysis as well as easy export into other statistical software.
Foci analysis methods
For evaluations of foci counting, the respective groups counted γ-H2.AX foci formed in TRAMP-C1 cells at different time points after exposure to 3 Gy. Additionally, one experiment was performed with 0.5 to 3 Gy. In total 24,858 nuclei in 3361 images were counted. Two trained investigators performed manual foci-counting using a standard manual counter and two trained investigators analyzed foci with ImageJ. Three different user groups (in total 6 investigators) tested the Focinator's user-friendliness: two programmers of the Focinator with prior knowledge of ImageJ and image processing software, two biologists with basic knowledge of image processing in a scientific context and two users without scientific background or prior knowledge of image processing. All investigators used the same workstations. The investigators did not receive any prior training in this software, but had to read the software's instruction manual (Additional file 1: Supplement). The images were fully blinded before analysis; therefore investigators had no information about exposure details, dose, time points or type of analyzed cells. Manual counting, analysis with ImageJ and the Focinator were performed independently. Results from manual counting were not available for the investigators performing computational counting.
The software-based analysis for accuracy, comparability, validity and velocity was done by the primary developers of the Focinator (SO and NMM). Consequently, these people had prior knowledge of the image processing software.
Statistical analysis
Data represent mean values of at least 3 independent experiments ± standard deviation (SD). Data analysis was performed by two-way ANOVA with Bonferroni post-test for pairwise comparison and determination-coefficient calculation using Prism 5™ software (GraphPad Inc., La Jolla, USA). P values ≤ 0.05 were considered significant.
Results and discussion
The Focinator
To develop the Focinator as a tool for automated quantitative and qualitative analysis of foci with ImageJ, we first integrated a user-friendly interface. The interface (Fig. 1) includes eight buttons, a menu as well as nine shortcuts for the following commands: <F1> Automated Mode, <F2> Options, <F3> Thresholding, <F4> Separation, <F5> Selecting ROIs, <F6> Thresholding and Selecting ROIs, <F7> Analyzing - Foci Count, <F8> Open Next Image in the folder. The menu also includes further information under About The Focinator and an instruction manual under Help. The second step for the development was an automated selection of the regions of interest (ROIs), such as cells or nuclei, depending on their appearance (Fig. 2). Moreover, automated detection of foci and the analysis of ROIs and foci were included (Fig. 3).
When running the Focinator, the selected ROIs are measured, including the area as well as mean, minimal and maximal grey values within the selection. The foci are detected based on the "Find maxima…" command of ImageJ.
Using the "Find maxima…" command [24], ImageJ identifies signal peaks of the 16-bit grey scale of an image compared to the grey-scale values of the surrounding pixels. Testing of the Focinator was performed by capturing fluorescence images for detection of γ-H2.AX foci. In this case, higher fluorescence intensities of a putative focus correlate with an increase in grey values in the image file. Importantly, the thresholds or contrasts of the foci images are not altered. The user might add a value for the noise level to the "Find maxima…" command, to disregard lower grey values caused by background noise. Background noise can be caused by unspecific staining due to unspecific antibody binding, insufficient blocking or washing. The maximal, mean and minimal densities as well as the localization and determination of ROI size and intensity are also measured (Fig. 3).
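As an illustration of the peak-finding idea described above (a hedged sketch, not ImageJ's actual "Find maxima…" implementation): a pixel counts as a focus only if it is the maximum of its 3×3 neighbourhood and rises above the local background by at least the noise level, so dim background fluctuations are ignored.

```python
# Illustrative local-maximum foci count with a noise tolerance.
# Not ImageJ's algorithm: just the core idea of comparing each pixel's grey
# value against its 3x3 neighbourhood and a user-set noise level.
import numpy as np

def count_foci(img: np.ndarray, noise: float) -> int:
    """Count local intensity peaks in a 2-D grey-scale array."""
    h, w = img.shape
    count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            centre = img[y, x]
            neighbours = np.delete(patch.ravel(), 4)  # the 8 surrounding pixels
            # Strict local maximum, and sufficiently above local background.
            if centre > neighbours.max() and centre - neighbours.min() >= noise:
                count += 1
    return count

# Two bright peaks on a dark background plus one dim bump below the noise level.
img = np.zeros((9, 9))
img[2, 2] = 100.0
img[6, 6] = 80.0
img[4, 4] = 5.0
print(count_foci(img, noise=20))  # the 5-grey-value bump is ignored
```

Lowering the noise level admits the dim bump as a third "focus", which mirrors why the noise setting matters for images with unspecific background staining.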
The Focinator can either be run in an automated mode or in a semi-automated mode (shortcuts <F3> to <F8>), with the possibility of manual addition or deletion of ROIs for better control and adjustments. After starting the Automated Mode via the button or <F1>, the macro selects the ROIs automatically using a preset threshold. When choosing "active separation", it separates the cells. After this, the foci are counted and the results are saved in the chosen directory (Fig. 1.1 and 1.2). After having tested and adjusted the parameters on several pictures, it is possible to run a batch mode. We recommend testing parameters on multiple pictures before running the batch mode [10]. The batch mode analyzes all pictures in a selected folder including all subfolders. After completion of the batch analysis, all retrieved values are summarized and means are calculated.
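The batch behaviour described above can be sketched as follows. This is an illustrative Python sketch, not the actual macro or R script; `analyse_image` is a hypothetical stand-in for the per-image foci evaluation.

```python
# Hedged sketch of a batch mode: walk a folder and all subfolders, analyse
# every image file, then summarise the per-image foci counts.
import os
import tempfile

def batch_analyse(root, analyse_image, extensions=(".zvi", ".tif", ".czi")):
    """Analyse every image under `root` (subfolders included), return results."""
    results = {}
    for dirpath, _dirnames, filenames in os.walk(root):  # recurses into subfolders
        for name in sorted(filenames):
            if name.lower().endswith(extensions):
                path = os.path.join(dirpath, name)
                results[path] = analyse_image(path)
    n = len(results)
    summary = {
        "images": n,
        "total_foci": sum(results.values()),
        "mean_foci": sum(results.values()) / n if n else 0.0,
    }
    return results, summary

# Demo on a throw-away folder tree; a constant stands in for the real
# per-image evaluation, and the .txt file is correctly skipped.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "subfolder"))
    for rel in ("a.tif", "b.zvi", os.path.join("subfolder", "c.tif"), "notes.txt"):
        open(os.path.join(root, rel), "w").close()
    _results, demo_summary = batch_analyse(root, lambda path: 5)
print(demo_summary["images"], demo_summary["total_foci"])
```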
Adjusting preferences in Focinator options
Preferences of the Focinator can be changed in Options <F2> (Fig. 1.3). At first, the analysis mode has to be chosen, e.g. multi-channel analysis or separate pictures for each channel. The basic multi-channel analysis uses one channel for ROI selection and one or two foci channels, e.g. based on different stainings. The criteria for ROI selection can be defined in ROI Settings, a sub-paragraph of the Options window. The aforementioned window also includes preferences for the threshold level of the picture, the size and circularity of included particles, separation of overlapping cells or exclusion of areas being cut by the frame. For detection of foci a suitable noise level can be set. Finally, the last two dialogs of the Options window offer the opportunity to change the saving directory and the file format.
Fig. 1: The ImageJ-based interface of the Focinator offers options to adapt the evaluation parameters to distinct image characteristics. Figure 1 shows ImageJ with the Focinator macro installed as start-up macro after opening a multi-channel image. This microscope image with the file format ZVI 16-bit includes three fluorescence channels. The main window of the Focinator is implemented into the ImageJ window. It consists of a menu (2), buttons (1) and Focinator Options (3 and 4). The Focinator Options windows offer several preferences for the user to adapt the macro's behavior to individual requirements. Picture Settings: the first step is to tell the macro the input folder and whether a multi-channel image or several single pictures will be opened. In the second step, the user chooses in which channel the foci have to be counted and where the ROIs should be selected. In our example, the γ-H2.AX foci are in channel number 2 (on top after opening the image). The macro will use the setting "1st foci channel = front channel" for all pictures automatically. If no second foci channel is used, the setting should be changed to "inactive". ROI Settings (3): depending on image quality, size and magnification, it is recommended to set the threshold and the size filters for ROIs. Alternatively, the choice of automated thresholding is possible. It is possible to exclude objects that are partially outside of the image. If there are objects to exclude because they are not circular enough or too small, it is possible to exclude them via circularity filters or size filters. "Use fill holes" should be activated if the ROI selection left holes in the cells. Overlapping ROIs (cells, nuclei) might be separated by choosing "watershed". Regarding the batch mode, "check selection" offers the possibility of stopping during the selection process. "Invert images" should be checked when working with images with a light background. For the automated batch mode (4), output directories need to be chosen to save the results. An important step of evaluation is to choose the right noise level. Noise level values can be set independently in multi-channel analysis to exclude background artifacts. By defining the cut-off, foci with intensities below a certain value are deleted, which excludes background noise. The value for area correction is dependent on the mean size of the analyzed nuclei. The factor corrects the foci number divided by the individual area of each nucleus. The usage of the percentile option enables the user to delete outliers, such as cells with false γ-H2.AX foci induced by replication. Colocalization analyses are also possible. This option compares the localization of two foci in two different channels with a selectable tolerance.
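The size and circularity filters described above can be sketched as follows. This is illustrative Python, not the macro's code; ImageJ's particle analyser defines circularity as 4π·area/perimeter² (1.0 = perfect circle), and the ROI measurements below are hypothetical.

```python
# Hedged sketch of ROI filtering: keep a candidate region only if it passes
# the size window, the circularity minimum, and (optionally) the edge filter.
import math

def keep_roi(area, perimeter, min_size, max_size, min_circ,
             touches_edge, exclude_edges=True):
    """Return True if a measured region passes the ROI filters."""
    if exclude_edges and touches_edge:
        return False  # object cut by the image frame
    circularity = 4.0 * math.pi * area / perimeter ** 2
    return min_size <= area <= max_size and circularity >= min_circ

# (area, perimeter, touches_edge) for three hypothetical objects:
rois = [
    (1200.0, 125.0, False),  # round nucleus           -> kept
    (1200.0, 260.0, False),  # elongated debris        -> rejected (circularity)
    (1500.0, 140.0, True),   # nucleus cut by the edge -> rejected
]
kept = [r for r in rois if keep_roi(r[0], r[1], 500, 5000, 0.5, r[2])]
print(len(kept))
```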
For data export, we chose MS Excel because this program is widely used for spreadsheet calculation including the scientific context. Moreover, it enables further statistical analysis, presentation in graphs and charts, as well as easy export into other statistical software.
Comparison of the Focinator to manual analysis and counting with ImageJ without automation
We tested the Focinator by counting radiation-induced γ-H2.AX foci in TRAMP-C1 cells at different time points after exposing the cells to 3 Gy. The results of the Focinator-analysis were compared to manual analysis as visual method and ImageJ-based counting via manual ROI marking and "Find Maxima" function as described by the Light Microscopy Core Facility -Duke University and Duke University Medical Center (Fig. 4) [24]. Manual counting of foci from images was chosen in the present study. By processing 35 multi-channel images, we counted 439 nuclei. Our software significantly reduced the analyzing time by a factor of approximately 23, from 132.07 ± 13.44 min for manual analysis to 5.61 ± 0.67 min with the Focinator (Fig. 4a). Surprisingly, evaluation with ImageJ without automation via macro needed more time than analysis with the Focinator or even the manual analysis (Fig. 4a). Nevertheless, analysis by ImageJ allowed the acquisition of more information about foci and nuclei than manual analysis. Importantly, there was no difference in nuclei detection between ImageJ-based methods and manual counting (Fig. 4b). Image acquisition was not part of the analyzing time; as fluorescent stainings are not stable, it is necessary to save image files for permanent documentation of the results with different counting methods, Fig. 2 The macro automates the setting of the threshold and the contains an automated ROI selection. Figure 2 shows the calculation process frozen at the point of completed ROI selection. The ROI selection is necessary for the measurement of ROI area, intensity information (mean, maximum and minimum) and the foci count of each ROI (e.g. nucleus). Adjusting the threshold is the first step of ROI selection. The ROIs are marked by signal intensity-triggered selection of the areas. 
This selection and ROI marking is based on ImageJ's "Create Selection" algorithm, with options including filters for edge-ROI exclusion, minimum and maximum size, watershed separation of overlapping objects and consideration of circularity, in both manual and automated mode. Moreover, image files can be used for more convenient manual foci counting, with the option to mark counted foci in the software to avoid mistakes; manual counting from images was chosen in the present study for comparison, in line with the routine protocol of Moquet et al. [19]. These data reveal that the new macro allows for highly accurate ROI selection, meaning that the nuclei or cells were selected in a valid and fast way. Moreover, we provide evidence that the Focinator outperforms manual analysis concerning time, effort for image processing and informative content, thereby corroborating findings by others. The integration of an automatic ROI selection proved to be a valid and time-saving step, which is what makes the Focinator superior to other software solutions [13,17,20].
Validation of the Focinator
For validation of the Focinator, TRAMP-C1 cells were irradiated with 3 Gy and foci were analyzed before and 0.5, 1, 2, 4, 6 and 24 h after irradiation. Again, the results of the Focinator were compared to ImageJ-based analysis and manual counting. All three methods showed a uniform time-dependent decrease of γ-H2.AX foci after the initial maximum at 30 min post-irradiation, proving the validity of the developed macro for reliable foci counting (Fig. 5a). Counting of foci after irradiating cells with different doses (0.5, 1.5 and 3 Gy; 30 min after irradiation) was performed to validate the use of the Focinator at different amounts of DNA damage. The dose response curve shows a linear relationship between the number of foci per cell and the clinically relevant radiation doses used in the present study, thereby concurring with previously published literature [35][36][37][38]. Moreover, there was a strong similarity of foci numbers counted in 439 nuclei manually or with the Focinator (Fig. 5c, R² = 0.9670). The comparison of ImageJ-based analysis with the Focinator also achieved a high correlation, indicating that the results of both analysis programs were very similar (Fig. 5d, R² = 0.9914).

Fig. 3 The Focinator counts foci for each pre-selected ROI automatically. Figure 3 demonstrates the calculation process stopped at the automated foci-finding step for all ROIs. The image shows the selected foci in ROI 4. This part of the automation is based on the user's noise level settings and on the previously marked ROIs, which are directly imported into the foci channel. Foci counting is followed by the closing of all channels and the immediate export into data files. The ROI information is imported into the export files in the order the ROIs were displayed in the ROI Manager window and named with numbers starting from one.
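Method agreement of the kind reported here (e.g. R² = 0.9670 between manual and Focinator counts) can be computed from paired per-nucleus counts. The following is a minimal illustrative sketch in Python with hypothetical numbers, not the study's data or its actual regression procedure; it uses one common definition of the coefficient of determination:

```python
# Illustrative sketch: coefficient of determination (R^2) of one
# foci-counting method against another, using the common
# 1 - SS_res / SS_tot definition. The counts below are hypothetical.

def r_squared(reference, candidate):
    """R^2 of candidate counts against reference counts."""
    n = len(reference)
    mean_ref = sum(reference) / n
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    ss_res = sum((r - c) ** 2 for r, c in zip(reference, candidate))
    return 1.0 - ss_res / ss_tot

manual = [12, 9, 15, 7, 11, 14]      # hypothetical foci counts per nucleus
focinator = [11, 9, 14, 7, 12, 14]   # hypothetical automated counts
print(round(r_squared(manual, focinator), 3))  # → 0.934
```

A value close to 1 indicates that the automated counts track the manual reference closely.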
Though a slight underestimation of counted foci was observed one hour after irradiation when using ImageJ and the Focinator compared to manual analysis, this effect was not significant. A similar phenomenon has previously been described by others and has been attributed to the increasing amount of overlapping foci at high foci numbers per nucleus, as well as at high irradiation doses and shorter repair times yielding increased foci size [12]. However, it has been suggested that the falsification of the results by high numbers of overlapping foci can be minimized by an additional analysis of foci intensity [25]. In contrast to manual analysis, the Focinator provides the opportunity to quantify the nuclei size as well as the minimal, mean and maximal intensity of the foci and the nuclei, and is thus superior to manual analysis. Another advantage of the Focinator is the opportunity to measure the ROI area size. Accordingly, foci can be counted per area and not only per nucleus.
Our results support the conclusion that computational analysis is well suited to replace manual analysis with high accuracy; moreover, it is time-saving and offers the opportunity to acquire further valuable parameters, such as nuclei size or the intensity of the foci. These values can be used for further normalization as explained in other publications [18,25,39]. In contrast, manual analysis is highly dependent on the experience of the investigator and requires extensive training [9][10][11][12][13]. Potential errors in manual analysis include multiple counting of single foci, counting of regions without foci and rare selection of less intense foci [10]. Finally, manual counting is not always reproducible [9][10][11]. Therefore, we and others recommend automated analysis to overcome the limitations of manual evaluation [9][10][11][12][13].
Moreover, manual analysis provides only a quantification of the number of foci, and there is no possibility of gathering additional information such as the size of nuclei or the intensity of the foci.
Applicability of the Focinator for different users
To prove the Focinator's user-friendliness, the macro was tested by three different Focinator user groups, namely by the programmers of the Focinator (n = 2), by biologists (n = 2) and by users with no scientific background (n = 2) (Fig. 6). For the evaluation of applicability, all groups counted γ-H2.AX foci generated in response to different radiation doses in TRAMP-C1 cells at different time points post-irradiation, using predefined parameters adjusted by an experienced scientist. In total, 24,858 nuclei in 3361 images were counted. All users were able to use the Focinator after reading the software's instruction manual. The data obtained by the programmers of the Focinator, the biologists and the users with no scientific background did not vary significantly in the mean evaluation times (Fig. 6). While the programmers needed 1.2 s per nucleus, the biologists needed 1.0 s and the users with no scientific background needed 1.54 s per nucleus (Fig. 6).

Fig. 4 Use of the Focinator macro reduces counting times compared to ImageJ-based counting and manual evaluation. TRAMP-C1 cells were irradiated with 3 Gy. The cells were fixed and permeabilized for 15 min with 3 % PFA and 0.2 % Triton X-100 at different time points after irradiation. The nuclei were stained with Hoechst 33342. DSB foci were labeled with Alexa Fluor 647-linked anti-γ-H2.AX antibodies. The evaluation time for the same 35 multi-channel images containing 439 nuclei was compared between analysis with the Focinator, ImageJ-based counting via manual ROI marking and the "Find Maxima…" function, and manual counting. a Evaluation times using the different counting methods. b Comparison of detected nuclei numbers by ImageJ-based analysis, Focinator batch mode and manual counting, shown as overall ROI count.

Users with no scientific background needed 21 min for counting 553 nuclei in 90 images on the first try.
However, after the second analysis round, the untrained users needed only 14 min and 7 s for 653 nuclei in 80 images. The fastest analysis was executed in 10 min and 1 s for 461 nuclei in 82 images and was performed by the programmers. Although the measurements were performed by users with different professional backgrounds, all investigators were able to successfully perform the analysis rapidly, and in particular faster than by manual analysis.
The results obtained were rather similar, confirming the reproducibility of the data. Evidently, the Focinator achieves user-friendliness by redundancy of controls, such as buttons and shortcuts, as well as by a menu and an implemented instruction manual, as shown in Fig. 1.1 and 1.2. Taken together, the Focinator is a user-friendly program. Moreover, high comparability and consistency are achieved by automated computer analysis at increased velocity. This automated analysis is independent of the investigator's prior knowledge, provided that parameter setting is performed by an experienced researcher.
Actual limitations of the Focinator and potential solutions
As outlined above, the Focinator is a valid open-source tool based on ImageJ for the non-experienced and experienced user of scientific image processing alike. The Focinator offers advantages over manual analysis and already established software solutions. However, there are also limitations to its application. Since foci size and number vary depending on radiation dose and repair time, detection of overlapping foci can be difficult, a problem also recognized in manual analysis or when using alternative software [12]. Because computational analysis offers measurement of qualitative parameters, a correction of these overlapping foci is possible by taking the intensity of foci into account. The Focinator counts foci based on signal intensity. This provides the opportunity to set a noise level to exclude foci with low intensity from further calculation, thereby strengthening the results [12,25].

Fig. 5 The Focinator's accuracy is comparable to manual counting and evaluation with ImageJ alone. ImageJ-based counting, manual counting and the usage of the Focinator macro were compared. To evaluate the repair time-dependent decrease of γ-H2.AX foci after irradiation, TRAMP-C1 cells were irradiated with 3 Gy, incubated at 37°C and fixed 0.5, 1, 2, 4, 6 and 24 h after irradiation. The cells were permeabilized and stained with an Alexa 647-linked anti-γ-H2.AX antibody. A total number of approximately 40 nuclei per time point was evaluated. a Development of the mean foci count per nucleus from three independent experiments at the stated time points after irradiation. b A dose response curve depicts the foci count after different doses (0.5, 1.5 and 3 Gy) 30 min after irradiation. A direct correlation between the different scoring methods with respective correlation values (R²) at the time points 0.5, 1, 2, 4, 6 and 24 h after irradiation is shown for Focinator-based evaluation in comparison to using ImageJ alone in (c) and compared to manual counting in (d).
Cai and colleagues also suggested including watershedding of foci to separate overlapping foci [25]. Watershedding is a procedure offered by ImageJ which can be used for the segmentation of overlapping objects, such as cells or even foci, in greyscale images [40]. We decided against this procedure because watershedding of foci requires 8-bit formatting, which would cause the information about the signal intensity to be lost. Plugins such as 3D Object Counter [41], the top-hat filter in Fast Filters [42] and FociPicker 3D [26] are alternative approaches to solving the problem of overlapping foci by taking size, intensities and algorithms into account. Though these options are more suited for the advanced user, implementation of FociPicker 3D into the Focinator's source code is feasible. Nevertheless, it is still possible that a single focus is too small to be detected with a confocal microscope, because the resolution of the microscope might be too low to display small foci separately [12,43,44]. To counter this problem, the macro offers the option of noise level adjustment. Lowering the noise level might result in higher foci numbers due to noise artifacts being recognized as foci. The real number of foci can be validated by measuring the intensities of foci and taking these into consideration. As other authors have shown, not only the foci count but also the foci intensity plays an important role and correlates well with the absorbed radiation dose [12,25]. The use of an implemented cut-off can further improve the results by deleting foci with a value below a chosen intensity to eliminate background signals. Intensities and XY-localization of each focus are exported into Excel. Setting a cut-off for each channel and performing colocalization analyses are possible with these exported values.
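The cut-off step described above amounts to filtering the exported focus records by a per-channel intensity threshold. A hedged Python sketch follows; the record fields and the cut-off value are assumptions for illustration, not the Focinator's actual export schema:

```python
# Hypothetical exported foci records: XY-localization plus intensity,
# as the Focinator writes to Excel. Field names are assumed.
foci = [
    {"x": 10.2, "y": 33.1, "intensity": 180},
    {"x": 14.8, "y": 35.6, "intensity": 42},   # below cut-off: background
    {"x": 21.0, "y": 40.2, "intensity": 95},
]

CUTOFF = 60  # assumed minimum intensity for a focus to be counted

# Keep only foci at or above the cut-off, eliminating background signals.
valid = [f for f in foci if f["intensity"] >= CUTOFF]
print(len(valid))  # → 2
```

In a real analysis the cut-off would be chosen per channel from the intensity distribution of the exported values.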
Another limitation of the Focinator can be high cell density, which might result in overlapping cells and, thus, in overlapping nuclei. Therefore, it is not recommended to use overcrowded images. The result without use of watershedding or manual correction of the selection would combine two overlapping cells into one ROI with a larger area size. However, when using the Focinator the overall foci count will not be affected by a high cell density, because the foci count can be normalized by the area of the ROI.
However, it is not currently possible to analyze three-dimensional or multilayer images with the Focinator/ImageJ, while the IMARIS software (Bitplane AG) offers this possibility [13,15].
Though the Focinator is intuitive, the user needs to adjust parameters, such as the noise level, on their own; in contrast, FindFoci is able to learn the parameters on its own [10]. Although it is possible to perform analysis with predefined settings and an automated threshold, parameter setting for the individual cell type is a major step and can only be validly performed by people with sufficient background knowledge, e.g. of the specific foci-related protein of interest. Parameters should be set according to image quality and the corresponding values found in published data, such as 15-19 foci per nucleus per 1 Gy [35][36][37][38].
Another limitation of our macro is the restriction of multi-channel analysis to three channels only. This limits the macro to two foci channels with distinct fluorescence labeling (for example, using γ-H2.AX and 53BP1 antibodies with different secondary antibodies), since one channel is needed to select the ROI (e.g. using DAPI, 4′,6-diamidino-2-phenylindole, to mark the nucleus).
Advantages of Focinator-based foci evaluation
The Focinator is an inexpensive alternative to commercial packages. In contrast to the limited file formats accepted by some of the commercially available software solutions, the Focinator supports all file formats of ImageJ including TIFF, PNG, GIF, JPEG, BMP, DICOM and FITS, as well as raw formats. It is also possible to use stacked images and device-specific formats such as Zeiss' AxioVision ZVI or Leica's LIF [16].

Fig. 6 The Focinator is a user-friendly method that can be used without long-term training. In Fig. 6, three different groups of users are compared. Programmers of the Focinator (n = 2), biologists (n = 2) and users with no scientific background (n = 2) evaluated ten different cell lines. For each cell line, about 80 pictures containing a total of about 500 nuclei were evaluated by the different users with the Focinator. The graph shows the calculated evaluation times per nucleus, including a correction based on the number of pictures that had to be opened.

The Focinator provides the possibility of adjusting multiple parameters for better image processing. Automated selection of cells or nuclei as ROIs is possible. For advanced users with prior knowledge of image processing who wish to adjust their analysis, the software offers further preferences and the choice of running the analyses in an automated mode or a semi-automated mode with the possibility of adding or deleting ROIs manually. It is even possible to analyze overlapping ROIs, a problem occurring in other automated solutions [13,20].
Adjustability of the Focinator parameters is achieved without manipulating the images. The Focinator ImageJ macro exports the intensity and XY-localization of each focus. This step allows further processing of the exported foci values, such as setting a cut-off for each channel and performing colocalization analyses. The raw data of the pictures are not changed for analysis by filters such as contrast adjustment, blurring or sharpening. This prevents the results from being manipulated [45]. We consider counting one ROI at a time and displaying the results an advantage and an improvement in quality. Moreover, it is possible for the user to observe problems or incorrect preferences in ROI selection or foci counts [13,15]. Being developed as a macro for ImageJ, the Focinator allows the implementation of new algorithms in order to customize the macro [14]. This is an important advantage of the Focinator compared to standalone solutions. Due to the open-source nature of ImageJ, it is also possible to change the macro's source code, to use the functions and plugins of ImageJ, and to program additional macros to solve the array of problems associated with image processing in a scientific context [13,21,22].
Further advantages compared to other established software solutions include the option to run the analysis in a batch mode and data export into a Microsoft Excel spreadsheet for further statistical and graphical evaluation. The Focinator Batch is programmed in R, enabling modification by the user.
The possibility of observing the tool while it selects ROIs and foci in the batch mode is very useful for additional analysis security, as it enables the user to recognize aberrant data and to adjust the settings accordingly.
Conclusions
The Focinator is a costless, reliable and user-friendly open-source tool for fast, automated, high-throughput quantitative and qualitative analysis of DNA damage-induced foci formed by repair-associated proteins such as γ-H2.AX at the DNA damage sites, with high accuracy and reproducibility. The Focinator is based on ImageJ with additional data export to Microsoft Excel. In comparison to manual analysis, it overcomes investigator-related bias and significantly reduces analyzing time. Moreover, it delivers a valid, fast and automated selection of nuclei and cells. Furthermore, it enhances the speed and reliability of analysis, and provides additional options for qualitative foci analysis such as the area size of nuclei and the intensity of foci. Importantly, the Focinator offers analysis of multi-channel pictures and colocalization. Its self-explanatory features make it possible to use the Focinator without prior training, and the batch mode enables the user to analyze data in their absence. Data export into different output files with consecutive export into a spreadsheet is available, thus enabling further data processing and analysis. With the option to run data analysis in a batch mode, we think that the Focinator is a valid tool for efficient preclinical testing of the efficacy of new drugs targeting DNA repair, alone and in combination with radio(chemo)therapy. For differing scientific aims, using further functions and plugins of ImageJ or programming additional macros is possible.
Implementing Data Distribution Management System Using Layer Partition-Based Matching Algorithm
Simulation has become a popular tool for studying a broad range of systems. The growing number and quality of simulation software packages requires expertise for their evaluation. In the High Level Architecture paradigm, the Runtime Infrastructure (RTI) provides a set of services. The Data Distribution Management (DDM) service reduces message traffic over the network: DDM services are used to reduce the transmission and receiving of irrelevant data and are aimed at reducing the communication over the network. Currently, there are several main DDM filtering algorithms. This paper describes a practical test simulation for the DDM service of a battleground simulation with dynamic fighters using the layer partition-based matching algorithm. The layer partition-based matching algorithm is based on a divide-and-conquer approach. It selects a dynamic pivot by detecting the distribution of regions in the routing space. The system detects the movement of the fighter objects and searches for overlap between each fighter object and every battalion (extent). It targets large-scale distributed simulation and aims to minimize subsequent computations and algorithm complexity. The developed system can be used not only for research purposes but also in real-world distributed applications. It provides low computational time and exact matching results.
I. INTRODUCTION
HLA (High Level Architecture) is a software architecture which can be reused for the execution of distributed simulation applications. HLA consists of rules, an interface specification and an object model template. The interactions between the federation and federates are governed by the interface specification with the Runtime Infrastructure. The Object Model Template (OMT) documents major information about simulations.
HLA presents a framework for modeling and simulation within the Department of Defense (DoD). The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. HLA allows interconnection of simulations, devices, and human operators in a common federation. It builds on composability, letting designers construct simulations from pre-built components. Each computer-based simulation system is called a federate and the group of interoperating systems is called a federation. The HLA specifications, incorporated as the IEEE 1516 standard, were developed to provide reusability and interoperability.

Manuscript received October 19, 2019; revised January 28, 2020. Nwe Nwe Myint Thein is with the University of Information Technology, Myanmar (e-mail: nwenwemyintthein@uit.edu.mm).
HLA was developed by the Defense Modeling and Simulation Office (DMSO) of the Department of Defense to meet the needs of defense-related projects, but it is now increasingly being used in other application areas. The Department of Defense's policy is to disseminate information about the HLA as widely as possible, both inside and outside the US, and even to provide free supporting software to help new users to evaluate and use the HLA as easily and inexpensively as possible [1], [2].
A complex simulation can be considered a hierarchy of components of increasing levels of aggregation. The lowest level is the model of a system component. This may be a mathematical model, a discrete-event queuing model, a rule-based model, etc. The model is implemented in software to produce a simulation. When this simulation is implemented as part of an HLA-compliant simulation, it is referred to as a federate. HLA simulations are made up of several HLA federates and are called federations. There can be multiple instances of a federate, for example several Boeing 747 simulations or F-16 simulations, and the number of instances can change as the simulation continues. Simulations that use HLA are modular in nature, allowing federates to join and resign from the federation as the simulation executes. Previous and recent developments of distributed simulation systems have been carried out mainly in the area of military applications such as commercial war games. In war gaming, a tank may regularly publish its position, and another object, e.g. an aircraft, may be interested in detecting the position of the tank by subscribing to receive the position update data published by the tank [3], [4].
Efficient data distribution is an important issue in large-scale distributed simulations with several thousands of entities. The broadcasting mechanism employed in the Distributed Interactive Simulation (DIS) standards generates unnecessary network traffic and is not suitable for large-scale and dynamic simulations. An efficient data distribution mechanism should filter the data and forward only those data to the federates that need them. Several filtering mechanisms have appeared in the literature and some of them have been implemented in RTI [5], [6].
The goal of the DDM module in RTI is to make the data communication more efficient by sending the data only to those federates who need the data, as opposed to the broadcasting mechanism employed by Distributed Interactive Simulation (DIS). The approaches used by DDM are aimed at reducing the communication over the network, and the data set required to be processed by the receiving federates [7].
The implementation of distributed interactive simulation requires the specification of a routing space and a number of update and subscribe regions. This implementation needs to search for the actual overlap information between the update and subscribe regions. The overlapping result also needs to be obtained in a time-saving manner for the participants of the distributed simulation [8].
Modeling and simulation have always been a major part of human history. Modeling can be defined as "the process of producing a model; a model is a representation of the construction and working of some system of interest", and a simulation can simply be defined as "the operation of a model of the system". Modeling and Simulation (M&S) is used to simulate a real system's objectives by modeling components, simulation steps and processes and implementing the produced models in a time flow. The area of M&S has been extended from war games to task request, weapon acquisition, decision, analysis and military training. Also, efficient operation of massive simulations and interoperability between complex systems have been studied [9].

The remainder of the paper is organized as follows. Section II describes the theoretical background of DDM and matching algorithms. The layer partition-based matching algorithm is explained in Section III. Section IV presents the simulation model using the layer partition-based matching algorithm. Finally, Section V offers the conclusion.
II. LITERATURE REVIEW
Data Distribution Management (DDM) is a set of services defined in HLA to distribute information in distributed simulation environments. HLA's Run Time Infrastructure (RTI) is a software component that provides commonly required services to simulation systems. There are several groups of services provided by RTI to coordinate the operations and the exchanges of data between federates (simulations) during a runtime execution. The interaction of object instances is supported by the functions of RTI, which acts like a distributed operating system. The evolution of the DDM service provides solutions to these problems by using a filtering mechanism that is suitable for large-scale simulation. These services rely on the computation of the intersection between "update" and "subscription" regions. When calculating the intersection between update regions and subscription regions, high computation overhead can occur. Therefore, many DDM filtering algorithms have been proposed [10].
A. Region-Based Algorithm
The region-based algorithm checks all the pairs of regions until an intersection is found for each pair of update and subscription regions or the end of the region list is reached. The implementation of this algorithm is straightforward, but its performance varies greatly. If there are N update regions and M subscription regions, there are N*M pairs to check in the worst case [11]-[13].
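The brute-force pairing above can be sketched in a few lines. This is an illustrative Python sketch (the paper's own implementation is in Java); regions are modeled as axis-aligned rectangles (x1, y1, x2, y2):

```python
# Region-based DDM matching: every update region is compared against
# every subscription region, i.e. N*M overlap tests in the worst case.

def overlaps(a, b):
    """True if two axis-aligned rectangles (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def region_based_match(updates, subscriptions):
    matches = []
    for i, u in enumerate(updates):            # N update regions
        for j, s in enumerate(subscriptions):  # M subscription regions
            if overlaps(u, s):                 # one test per pair
                matches.append((i, j))
    return matches

updates = [(0, 0, 4, 4), (10, 10, 12, 12)]
subs = [(3, 3, 6, 6), (20, 20, 22, 22)]
print(region_based_match(updates, subs))  # → [(0, 0)]
```

The quadratic number of pair checks is exactly why the filtering algorithms discussed next try to avoid comparing all pairs.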
B. Grid-based Algorithm
In the grid-based approach, the routing space is partitioned into a grid of cells. Each region is mapped onto these cells. If a subscription region and an update region intersect with the same grid cell, they are assumed to overlap with each other. Although the overlapping information is not exact, the grid-based algorithm has lower computation complexity than the region-based algorithm. The amount of irrelevant data communicated in grid-based filtering depends on the grid cell size, but it is hard to define an appropriate size for the grid cells [12]-[14].
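The cell mapping can be sketched as follows. This is an illustrative Python sketch under an assumed cell size (choosing that size well is precisely the difficulty the text notes):

```python
# Grid-based filtering: map each rectangle (x1, y1, x2, y2) to the grid
# cells it touches; any update/subscription pair sharing a cell is
# reported as overlapping, which may include false positives.
from collections import defaultdict

CELL = 5.0  # assumed grid cell size

def cells_of(region):
    """Yield (col, row) indices of all grid cells a rectangle touches."""
    x1, y1, x2, y2 = region
    for cx in range(int(x1 // CELL), int(x2 // CELL) + 1):
        for cy in range(int(y1 // CELL), int(y2 // CELL) + 1):
            yield (cx, cy)

def grid_based_match(updates, subscriptions):
    grid = defaultdict(lambda: ([], []))  # cell -> (update ids, sub ids)
    for i, u in enumerate(updates):
        for c in cells_of(u):
            grid[c][0].append(i)
    for j, s in enumerate(subscriptions):
        for c in cells_of(s):
            grid[c][1].append(j)
    # Report every pair sharing a cell (inexact but cheap filtering).
    return {(i, j) for ups, sbs in grid.values() for i in ups for j in sbs}

print(grid_based_match([(0, 0, 4, 4)], [(3, 3, 6, 6)]))  # → {(0, 0)}
```

With a large cell size, two regions in the same cell need not actually intersect, which is the source of the irrelevant data mentioned above.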
C. Hybrid Approach
The hybrid approach is an improvement approach over the region-based and the grid-based approaches. The matching cost is lower than the region-based approach, and this advantage is more apparent if the update frequency is high. It also produces a lower number of irrelevant messages than that of the grid-based approach using large cell sizes. The major problem is that it has the same drawbacks as the grid-based approach: the size of the grid cell is very crucial to the behavior of the algorithm [14], [15].
D. Sort-Based Algorithm
The sort-based algorithm uses a sorting algorithm to compute the intersection between update and subscription regions. The algorithm's performance degrades when the regions are highly overlapped, and the sorting data structure needs to be optimized for efficient matching [16].
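One common realization of the sort-based idea, sketched here in Python for a single dimension as an illustration (not the specific variant of [16]), sorts region endpoints and sweeps across them while maintaining the set of currently open regions:

```python
# Sort-based matching in one dimension: sort endpoints, then sweep.
# Each region is a 1-D interval (lo, hi); touching endpoints count
# as overlapping in this sketch.

def sort_based_match_1d(updates, subscriptions):
    events = []  # (coordinate, is_end, kind, id); starts sort before ends
    for i, (lo, hi) in enumerate(updates):
        events += [(lo, 0, "u", i), (hi, 1, "u", i)]
    for j, (lo, hi) in enumerate(subscriptions):
        events += [(lo, 0, "s", j), (hi, 1, "s", j)]
    events.sort()

    open_u, open_s, matches = set(), set(), set()
    for _, is_end, kind, rid in events:
        if is_end:
            (open_u if kind == "u" else open_s).discard(rid)
        elif kind == "u":
            open_u.add(rid)
            matches |= {(rid, j) for j in open_s}  # overlaps all open subs
        else:
            open_s.add(rid)
            matches |= {(i, rid) for i in open_u}  # overlaps all open updates
    return matches

print(sort_based_match_1d([(0, 4), (10, 12)], [(3, 6), (20, 22)]))
```

The cost is dominated by the sort; when nearly all regions overlap, the per-event set unions approach pairwise enumeration again, which matches the degradation noted above.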
E. Binary Partition-Based Algorithm
The binary partition-based algorithm takes a divide-and-conquer approach. It performs binary partitioning, which divides the regions into two partitions that entirely cover those regions according to the midpoint (pivot value) of the routing space. It matches and compares the pivot partition with the left and right partitions. The overall overlap information can be obtained by combining the overlap information of each dimension. The selection of the pivot partition is a determinant factor for the performance of the algorithm. The overlap information of all regions is obtained via the pivot partition. If there is no overlapping at all in these partitions, comparing the pivot partition with the left and right partitions is the most time-consuming operation. With a greater overlap rate, the binary partition-based algorithm performs well. The partitioning process of the algorithm is illustrated in Fig. 1 [17], [18].

III. LAYER PARTITION-BASED MATCHING ALGORITHM

The layer partition-based matching (LPM) algorithm supports searching the overlapping information for data distribution management in HLA. It executes dimension by dimension. The algorithm accepts all regions in the routing space; the regions are generated randomly and then passed to the LPM algorithm. The LPM algorithm first chooses the optimal pivot to define the matching area. The efficiency and performance of divide-and-conquer approaches depend on the choice of the pivot value. Some algorithms choose the middle point as the pivot value. The LPM algorithm instead accepts the projected region list and selects as the pivot the point of that list at which the most subscriber and updater regions converge [19].
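The midpoint-pivot partitioning described for the binary partition-based algorithm can be sketched in one dimension as follows; this is an illustrative Python sketch, not the original implementation:

```python
# Binary partitioning of 1-D intervals (lo, hi) by the midpoint pivot:
# regions entirely on one side go to the left or right partition; regions
# spanning the pivot form the pivot partition, which must be matched
# against both sides.

def binary_partition(regions, lo, hi):
    """Split intervals by the midpoint pivot of the extent [lo, hi]."""
    pivot = (lo + hi) / 2.0
    left, spanning, right = [], [], []
    for r in regions:
        if r[1] <= pivot:
            left.append(r)        # entirely left of the pivot
        elif r[0] >= pivot:
            right.append(r)       # entirely right of the pivot
        else:
            spanning.append(r)    # pivot partition: overlaps the pivot
    return left, spanning, right

left, pivot_part, right = binary_partition(
    [(0, 3), (4, 6), (7, 9)], lo=0, hi=10)
print(left, pivot_part, right)  # → [(0, 3)] [(4, 6)] [(7, 9)]
```

Left and right partitions can then be recursed into independently, while the pivot partition is compared against both, which is why a pivot that captures the dense area keeps the recursion cheap.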
To define the exact matching area, a region distribution detection algorithm is mainly used in the first layer of the layer partition-based matching algorithm. The LPM algorithm first calculates the region distribution. Then, the partitioning among regions is performed based on the result of choosing the pivot by region detection, and the matching area is defined so that it entirely covers all regions which need to be matched with the regions at the pivot point. The process of optimal pivot choosing and defining the exact matching area is shown in Fig. 2. The algorithm guarantees low computational overheads for the matching process and reduces irrelevant messages among federates [20]. The LPM algorithm promises a lower number of pivot point selections and reduces the number of matching operations between the updater regions and the subscriber regions of the routing space: by defining the exact matching area, it cuts the matching process roughly in half, and it assures a lower number of pivot choices for partitioning the routing space.
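The distribution-driven pivot choice described above can be illustrated in one dimension: instead of the midpoint, pick the projected point covered by the largest number of regions. This is a hedged Python sketch of the idea only, not the authors' Java implementation or their exact detection procedure:

```python
# LPM-style pivot choice (illustrative): project regions onto one
# dimension as intervals (lo, hi) and select the endpoint overlapped by
# the most regions, so the pivot partition captures the densest area.

def choose_pivot(regions):
    """Pick the region endpoint overlapped by the most 1-D intervals."""
    candidates = {e for r in regions for e in r}

    def coverage(p):
        return sum(1 for lo, hi in regions if lo <= p <= hi)

    return max(candidates, key=coverage)

# With intervals (0,4), (3,6), (5,9), any point in [3,6] is covered by
# two regions, while the midpoint heuristic would ignore the density.
print(choose_pivot([(0, 4), (3, 6), (5, 9)]))
```

A denser pivot partition means more overlap information is resolved at the pivot and less work remains for the left and right partitions.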
In the second layer, a specific decision on region selection is performed to calculate the matching data between the three sets. This layer also supports the subtracted region lists; these lists are subtracted from the input region set for the next matching calculation. The classification of regions is carried out in the region classifier algorithm. The actual matching between the updater regions and the subscriber regions is executed in the intersection calculation algorithm. The subtracted region lists are used to reduce the next calculation. The final overlapping information is produced by observing the result of the two matrices for two-dimensional routing spaces. The LPM algorithm is complete when all dimensions are covered [20]. The process of the layer partition-based matching algorithm is shown in Fig. 3.

For the matching algorithms of DDM, the impact of network speed on the algorithms is not considered, and there are no messages transferred in the network in any of the approaches. Thus, a single computer was used to conduct the experiments. One of the important experimental parameters is the number of regions. The overlap rate is defined as the proportion of the scene volume occupied by the regions, as shown in equation (1):

overlap rate = (∑ area of regions) / (area of routing space)    (1)

where ∑ area of regions = number of regions × height of region × width of region. The performance of the LPM algorithm was analyzed on same-size regions and different-size regions, which were generated randomly. The execution time for same-size regions can be reduced by about two thirds compared with the other matching algorithms when the overlapping rate is 1. The LPM algorithm is more efficient than the existing matching algorithms of DDM at any overlapping degree using different-size regions. The main advantage of this algorithm is that it supports scalability very well when the overlapping rate is large.
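The overlap-rate definition, total region area relative to the routing-space area, amounts to a one-line computation. A small Python sketch with hypothetical dimensions:

```python
# Overlap rate per equation (1): total area of the generated regions
# relative to the area of the routing space. All values are hypothetical.

def overlap_rate(num_regions, region_w, region_h, space_w, space_h):
    total_region_area = num_regions * region_w * region_h
    return total_region_area / (space_w * space_h)

# 100 regions of 10x10 in a 1000x1000 routing space -> rate 0.01
print(overlap_rate(100, 10, 10, 1000, 1000))  # → 0.01
```

Varying this rate (e.g. up to 1 and beyond) is how the experiments above control how densely the randomly generated regions cover the routing space.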
To analyze the computational complexity of the LPM algorithm, suppose that there are N regions in a multidimensional space with d = 2 dimensions. The optimal pivot algorithm requires O(N) computation for a region size of M, and the first-layer partition algorithm also requires O(N) computation. The matching algorithm performs O(log N) recursions in total. The second-layer partition algorithm needs O(n) computation, and the matching process that compares the intersections of regions between partitions requires O(n^2) computation, where n is the number of regions in each partition. The complexity of the intersection calculation procedure is proportional to n, so the decisive factor is how small the exact matching partitions are; the number of regions per partition, n, is the determinant. Because the overlap information of all regions is obtained through the pivot partition, it is not necessary to compare overlap information between the left and right partitions of the pivot partition. Therefore, the computational complexity of the LPM algorithm is O(n^2 × N × log N). Since n is normally very small in a large-scale spatial environment, the LPM algorithm should be very efficient; the actual complexity depends on how well the exact matching partition is achieved. The proposed algorithm is also evaluated by first calculating the matching result for the X dimension and then modifying the input regions of the Y dimension based on the matching result of the previous dimension. The proposed algorithm is more efficient at most overlapping degrees, but in some cases it has a longer execution time than the original layer partition-based algorithm [21], [22].
IV. SIMULATION
The proposed system focuses on battleground simulation using the new layer partition-based matching algorithm for the data distribution management system. The overall design and architecture of the distributed data management system for the battleground, using the layer partition-based matching algorithm, are shown in Fig. 4 and Fig. 5. The simulation is implemented in the Java programming language. The system consists of four distributed federates that communicate with each other via the RTI.
A. Scope and Architecture of the Proposed System
The system consists of a centralized FighterController, OverlapDetector, and Coordinator, and distributed local DDM_Managers for each battalion group. The system comprises four distributed groups. In the proposed system, there are two kinds of entities: fighter objects and federates (battalions). The system checks the movement of each fighter at every time step and runs the overlapping algorithm. When an overlap is detected, it informs the corresponding battalion.
In order to perform the overlapping detection, the system comprises three sub-models: • FighterController: controls the movement of the user-defined fighter objects. • OverlapDetector: uses the layer partition-based matching algorithm to detect overlaps between fighter objects and the 12 battalions.
• Coordinator: handles message transport between the components.
B. Overview of the Proposed System
The system implements a simulation of dynamic fighter objects moving across a battleground, travelling at constant speeds in various directions. The objects are initially placed at a user-specified boundary. It is assumed that each fighter moves with constant velocity and constant direction for several time steps. The position and movement of a fighter are tracked by the FighterController. The regional battalions act as subscribers and the fighter objects act as updaters. To find the overlap information between fighters and regional battalions, the system uses the layer partition-based algorithm. Once the OverlapDetector has detected the position of a fighter, it distributes the fighter's information, by means of the Coordinator, to each overlapped battalion that needs to know it.
For simplicity, each fighter starts at a random position on one of the four user-defined boundary sides of the battleground: North, East, South, and West. While a fighter object is moving towards the destination boundary, the central regiment server detects the fighter's position and searches for overlaps between the fighter and the 12 battalions. Once a fighter object reaches the opposite side of the battleground boundary, it is no longer considered in fighter detection and overlap searching. The simulation terminates when all fighter objects have passed through the boundary of the battleground. In the proposed system, the twelve battalions are grouped in order to perform the relevant actions, and these groups are assumed to be distributed. Fig. 6 shows the twelve regiments divided into four cooperating groups.
C. Operational Components of the Proposed System
The system implements a simulation of detecting enemy fighter objects in motion using the layer partition-based algorithm. The fighter objects are simulated by the FighterController sub-model, which is responsible for creating and deleting simulated fighters in the runtime infrastructure. It also controls the movement of the simulation objects within the routing space, calculates the boundary point for each fighter, and governs each fighter object's movement. The fighter objects move from random starting positions in the user-assigned direction, and at each time step the system checks whether they have met the boundary of the battleground area. When fighter objects reach the other side of the battleground boundary, they are removed from the runtime infrastructure. The sequence diagram for the interaction between the fighter objects, the FighterController, and the user is shown in Fig. 7. The OverlapDetector oversees the data filtering strategy in this system. Since the positions of the fighter objects may change at every time step, the OverlapDetector searches for overlaps between the battalions and the fighters using the LPM algorithm. When it obtains the overlap information, it contacts the local DDM_Manager of each affected regional battalion (federate) and sends the command directly to the overlapping federates. If an overlap occurs, it produces the name of the actual overlapped battalion and sends the information to that battalion, so that the battalion can take the required action in a timely manner. In this sub-model, the OverlapDetector communicates through the Coordinator. Fig. 8 shows the sequence diagram of the interaction between the Fighter, OverlapDetector, Coordinator, and local DDM_Manager: the fighter objects continually send their positions to the OverlapDetector, which calculates the overlap information. If the OverlapDetector detects overlaps between battalions and fighter objects, it sends the list of all overlapping fighters to the Coordinator. Once fighters have passed through the boundary of the battleground, they are no longer of interest to the OverlapDetector and are removed from the runtime infrastructure. Fig. 9 shows the algorithm incorporated in the system for performing the time-stepped simulation of the battleground. The Coordinator sub-model is responsible for communication between the battalions and their local DDM managers. In the proposed system, the four federate groups are assumed to be distributed, each containing three battalions, and the Coordinator connects them using RMI.
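The time-stepped control flow described above (advance the fighters, drop those that leave the battleground, detect overlaps, and notify the affected battalions) can be sketched as follows. The actual system is implemented in Java over RTI/RMI; this Python sketch only illustrates the loop structure, and all names are hypothetical:

```python
def run_simulation(fighters, battalions, bound, detect_overlaps, notify):
    """Time-stepped loop: advance every fighter, drop fighters that have
    left the battleground, detect fighter/battalion overlaps, and notify
    the affected battalions. Terminates when no fighters remain."""
    step = 0
    while fighters:
        for f in fighters:
            f["x"] += f["vx"]  # constant velocity and direction
            f["y"] += f["vy"]
        # fighters past the boundary are no longer considered
        fighters = [f for f in fighters
                    if 0 <= f["x"] <= bound and 0 <= f["y"] <= bound]
        for fighter, battalion in detect_overlaps(fighters, battalions):
            notify(battalion, fighter)  # via the Coordinator in the real system
        step += 1
    return step
```

Here `detect_overlaps` stands in for the LPM-based OverlapDetector and `notify` for the Coordinator's message transport to the local DDM_Managers.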
The proposed algorithm selects the pivot dynamically by detecting the distribution of regions over the routing space. It first defines the left set, the right set, and the pivot set. Before matching the left and right sets against the pivot set, the system defines the exact matching area of the regions in the left and right sets and computes a subtract set for each of them based on that area. The algorithm then matches only the exact matching areas of the left and right sets against the pivot set. After this matching, the two subtract lists are removed from the input region lists for the next iteration of the LPM. In this way, the overlapping results for the battleground simulation of a large-scale distributed simulation are detected and produced while minimizing subsequent computations and algorithm complexity.
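The first-layer step described above (a dynamic pivot chosen from the region distribution, then classification into left, pivot, and right sets) can be sketched as follows. The original system is written in Java; this Python sketch is illustrative only, representing each region by its (low, high) extent along one dimension:

```python
def choose_pivot(regions):
    """Dynamic pivot from the region distribution: the median of the
    region midpoints along one dimension."""
    mids = sorted((lo + hi) / 2 for lo, hi in regions)
    return mids[len(mids) // 2]

def classify(regions, pivot):
    """First-layer partition: split regions into a left set, a pivot set
    (regions covering the pivot point), and a right set. Regions are
    (low, high) extents on one dimension; all names are illustrative."""
    left = [r for r in regions if r[1] < pivot]
    right = [r for r in regions if r[0] > pivot]
    pivot_set = [r for r in regions if r[0] <= pivot <= r[1]]
    return left, pivot_set, right
```

Since a region entirely to the left of the pivot cannot overlap one entirely to the right, the left and right sets only need to be matched against the pivot set (and recursively among themselves), which is the source of the matching-cost reduction claimed for the LPM algorithm.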
V. CONCLUSION
Efficient data distribution is an important issue in large-scale distributed simulations with several thousand entities. The broadcasting mechanism employed in the Distributed Interactive Simulation (DIS) standards generates unnecessary network traffic and is unsuitable for large-scale and dynamic simulations. This paper presents the performance of the layer partition-based matching algorithm, in which each dimension builds on the previously calculated one. The proposed algorithm provides the minimum matching cost between the updaters and the subscribers within the routing space. The system can be used not only for research purposes but also in military and civil applications. It was developed to find the overlapping information between data updaters and data subscribers on top of a time-stepped simulation infrastructure. Thanks to the layer partition-based algorithm, the implementation can decrease message traffic over the communication network and increase the efficiency of distributing the updaters' information. It can be used in any large-scale distributed simulation. With this algorithm, the simulation system achieves a short execution time for the matching process, especially when the overlapping degree is low.
CONFLICT OF INTEREST
The author declares no conflict of interest.
AUTHOR CONTRIBUTIONS
The author, Nwe Nwe Myint Thein, studied High Level Architecture services thoroughly, with a particular interest in the Data Distribution Management (DDM) service. The author first applied and investigated existing DDM algorithms, then proposed a new algorithm for DDM services, analyzed it, and compared it with the existing algorithms, publishing several papers on the results. During the research, the author was advised by her supervisor, Dr Nay Min Tun. The author approved the final version.
Breaching Subjects' Thoughts Privacy: A Study with Visual Stimuli and Brain-Computer Interfaces
Brain-computer interfaces (BCIs) started being used in clinical scenarios and are nowadays reaching new fields such as entertainment or learning. Using BCIs, neuronal activity can be monitored for various purposes, one of them being the study of the central nervous system's response to certain stimuli, as is the case of evoked potentials. However, due to the sensitivity of these data, the transmissions must be protected, with blockchain being an interesting approach to ensure the integrity of the data. This work focuses on the visual sense and its relationship with the P300 evoked potential, where several open challenges related to the privacy of subjects' information and thoughts appear when using BCIs. The first and most important challenge is whether it is possible to extract sensitive information from evoked potentials. This aspect becomes even more challenging and dangerous if the stimuli are generated when the subject is not aware or conscious that they have occurred. There is an important gap in this regard in the literature, with only one existing work dealing with subliminal stimuli and BCIs, and its methodology and experimental setup are unclear. As a contribution of this paper, a series of experiments, five in total, have been created to study the impact of visual stimuli on the brain in a tangible way. These experiments have been applied to a heterogeneous group of ten subjects. The experiments show familiar visual stimuli and gradually reduce the sampling time of known images, from supraliminal to subliminal. The study showed that supraliminal visual stimuli produced P300 potentials about 50% of the time on average across all subjects. Reducing the sample time between images degraded the attack, while the impact of subliminal stimuli was not confirmed. Additionally, younger subjects generally presented a shorter response latency. This work corroborates that subjects' sensitive data can be extracted using visual stimuli and the P300.
Introduction
Technology is closely linked to our daily lives, making it impossible to think about performing some tasks without its direct or indirect help. This is mainly due to the constant evolution of technology and the current trend to make it more user-friendly. Consequently, new technologies based on human-computer interaction, such as Kinect devices [1] or brain-computer interfaces (BCIs), have gained relevance over the last decades. BCIs provide a bidirectional channel between the brain and external devices, enabling two modes of use [2]. On the one hand, BCIs can stimulate or inhibit neuronal activity to treat neurodegenerative diseases. On the other hand, they can also monitor brain activity to diagnose diseases or control external devices.
BCIs can be mainly classified into two categories depending on their invasiveness level in the human body [3]. On the one hand, there are invasive interfaces, for which a surgical operation is necessary.
This is the case of brain implants [4], which can either pass through the cerebral cortex to measure the activity of single neurons or be implanted on the surface of the cerebral cortex to measure the activity of groups of neurons. The application scenarios for this type of interface are usually clinical due to the impact on the subjects' physical integrity. On the other hand, noninvasive BCIs have electrodes placed on the surface of the head to capture the transmission of electrical impulses during brain activity, known as electroencephalography (EEG). In this case, the obtained signal is the aggregation of the neurons located in the area close to the electrodes. In summary, noninvasive BCIs provide less accurate measurements than invasive ones, as the skull weakens the signal and adds noise. However, avoiding surgery and the lower price justify why noninvasive BCIs are much more widespread than invasive ones in entertainment scenarios.
Thanks to the evolution of the technology associated with BCIs, they have gone beyond the medical field and reached other sectors such as entertainment and video games [5], where the purpose is to give players a greater sense of immersion. In the same way, the clinical sector has also advanced and expanded the use of these interfaces. For example, the literature has proposed gamification processes that seek to know the subject's emotional state by subjecting it to a training process called mindfulness [6]. Another field of application where BCIs have been tested is industrial robotics, where the control of robots by brain activity is sought to implement high-precision tasks [7]. This last application is one of the most promising due to the increase in life expectancy in society, which is associated with a progressive increase in the number of people in a situation of dependence. The evolution towards aging societies demands new solutions to assist the elderly, who require help to carry out the activities of daily life. In this sense, BCI systems can be extremely beneficial, as they offer a new way of interacting with the different devices existing in their environments [8]. Therefore, BCIs contribute to increasing dependent people's autonomy, improving their quality of life and their integration into society.
One of the most well-known application scenarios of BCI is the study of bioelectric potentials produced as a response of the central nervous system to certain stimuli.
This is known as an "evoked potential" (ERP) [9]. There are several types of evoked potentials, such as visual, auditory, or sensory.
Thanks to the study of these evoked potentials, diverse information can be obtained from the subject. For example, the subject's intelligence has been related to the latency of appearance of the evoked potential: the higher the intelligence quotient, the shorter the latency of the evoked potential [10]. The evoked potential P300 [11] is a response of the brain that occurs about 300 ms after a "significant" event has taken place (hence the name P300: "P" because it is a positive increase in brain voltage and "300" because it occurs about 300 ms after the event). This potential is mainly observed in the occipital and parietal areas of the cerebral cortex. The events that provoke this wave can be visual or auditory; however, this work focuses on visual stimuli.
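As a concrete illustration of how a P300 might be located in a recorded epoch, the sketch below searches for the largest positive deflection in a post-stimulus window. The 250-500 ms window bounds and the function name are assumptions for illustration, not part of the framework described later:

```python
def find_p300(epoch, fs, window=(0.25, 0.5)):
    """Locate the largest positive deflection in the post-stimulus window
    of a single-channel EEG epoch.

    epoch: list of voltage samples starting at the stimulus onset.
    fs: sampling rate in Hz.
    window: (start, end) of the search window in seconds; 0.25-0.5 s is
    an assumed bracket around the typical ~300 ms P300 latency."""
    start = int(window[0] * fs)
    stop = int(window[1] * fs)
    segment = epoch[start:stop]
    amplitude = max(segment)
    latency = (start + segment.index(amplitude)) / fs  # seconds post-stimulus
    return latency, amplitude
```

A shorter returned latency for younger subjects would correspond to the age effect reported later in the experiments.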
This potential can be intentionally provoked by following the Oddball paradigm (among others), which is based on randomly showing a series of known stimuli within a set of unknown stimuli. One of the most famous scenarios where this paradigm has been tested is the P300 Speller which, in very abbreviated form, is a matrix of letters illuminated by rows and columns, whose purpose is to guess the character on which the user has placed their attention [12].
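An Oddball stimulus sequence of the kind just described can be generated as in the sketch below, where known (target) stimuli appear with low probability among unknown distractors. The 20% target probability is an illustrative choice, not a value taken from the paper:

```python
import random

def oddball_sequence(targets, distractors, n_trials, target_prob=0.2, seed=0):
    """Build a random stimulus sequence in which known (target) images
    appear rarely among unknown distractors, following the Oddball
    paradigm. The seed makes the sequence reproducible across runs."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_trials):
        if rng.random() < target_prob:
            sequence.append(("target", rng.choice(targets)))
        else:
            sequence.append(("distractor", rng.choice(distractors)))
    return sequence
```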
In particular, the P300 [11] potential appears when the subject already knew the stimulus presented to him/her. Therefore, malicious use can cause privacy implications that must be taken into account when using BCIs. The first and most important is to study whether it is possible to obtain sensitive information from the subject through evoked potentials. For example, images can reveal private personal information, such as ideology or sexual orientation.
This could apply to people of high relevance, such as a Prime Minister, whose data would be highly valued. On the other hand, if less prominent users are targeted, the attacks could obtain banking information for financial gain. In this sense, Martinovic et al. [13] carried out a series of experiments to obtain bank details or known locations through the P300. The experiments concluded that it was possible to detect when a subject knew a specific visual stimulus. This cybersecurity issue becomes even more dangerous when the stimuli are produced without the subject being aware of them. This is the case of subliminal images or, in other words, images that subjects are not conscious of seeing but that may nevertheless have been processed by the brain. Moreover, the data obtained and transmitted by BCIs are critical. Because of that, BCI technologies could benefit from the application of existing cybersecurity mechanisms such as blockchain [14], a promising solution to improve the security of the data and prevent attacks affecting the integrity of the P300. However, the subliminal aspects of these stimuli are not clear in the literature and are, therefore, an open challenge. Since there is only one paper dealing with the impact of subliminal attacks (Frank et al. [15]), many questions remain open about the methodology of these experiments. Some of the issues to be clarified are: (1) how long the images should be shown, (2) how much the sampling time of stimuli should be reduced, (3) whether images should be entirely invisible, and (4) whether the reaction is uniform across subjects, among others. In this context, different parameters should be considered when experimenting in this field, for example, the sampling time of the images and the way they appear. Similarly, these images should be shown to a larger number of subjects of different ages and genders.
After this, some new and strong conclusions indicating whether the results are similar to those obtained in the literature experiments should be provided.
To address some of the previous challenges, the first contribution of this work is the implementation of a BCI framework able to acquire, process, and store the EEG signal to detect a P300 wave. This framework also displays the EEG signal visually, which allows detecting whether there is a P300 or not. Once the framework is implemented, the second and main contribution is the creation of a set of experiments in which different videos show known and unknown visual stimuli (following the Oddball paradigm) to different subjects wearing a BCI headset, whose EEG is acquired and managed by the BCI framework to show the P300 wave graphically. The experiments gradually reduce the sampling time of the known stimuli to study how this aspect affects the generation of the P300 wave. They begin with a visible image, shown for 500 ms, and end with an invisible, or subliminal, image shown for 10 ms. It is also important to mention that these experiments have been carried out with ten subjects of different ages and genders to obtain the most reliable results. After performing the experiments, it was determined that, for the supraliminal experiments (experiments one to four), a P300 potential is generated about 50% of the time on average across all subjects. Regarding how subject characteristics affect the generation of the P300, younger subjects generate P300 potentials with a slightly shorter latency. Concerning the fifth experiment, based on subliminal stimuli, no evidence was obtained that they have an impact on the subject's brain. The remainder of the paper is structured as follows. Section 2 introduces the related work existing in the literature. After that, Section 3 describes the architecture of the framework built to deal with data acquisition, processing, and visualization, followed by an analysis of the experiments and the results obtained. Finally, Section 5 highlights conclusions and future work.
Related Work
This section reviews the existing literature concerning cybersecurity in BCIs, studying possible attacks and emphasizing those focused on attacking the integrity of the user's sensitive information. After that, it analyzes the impact of subliminal stimuli from a psychological perspective, indicating how this issue has been approached over the years and how effective these stimuli are.
Oddball Paradigm and Potential P300.
The most common way to induce the generation of a P300 potential has been through the Oddball paradigm [16], based on presenting a known stimulus within a larger set of unknown stimuli [17]. Following this pattern, numerous experiments based on the Oddball paradigm and the generation of P300 potentials have been performed. These experiments study how the parameters used to present the stimuli affect the P300 wave [18].
Moving to the application scenarios in which Oddball and P300 have been used, numerous studies propose many applications for this paradigm (see Figure 1). For example, Campanella et al. [19] performed an experiment where they combined both visual and auditory stimuli to increase the clinical sensitivity of P300 modulations such as amplitude or latency. Other studies have focused on how different physiological factors of the subject affect the generation of the P300, as in Kamp [20], which focused on the age of the subjects. This paradigm has been extensively used in the medical field, specifically for the detection of mental illnesses such as Parkinson's disease [21], Alzheimer's disease [22], schizophrenia [23], locked-in syndrome [24], or disorders such as anxiety [18].
Thanks to the progress made by BCIs in recent years, these scenarios have been able to go beyond the medical sector and reach other areas. For instance, the video game sector aims to obtain greater immersion while playing [25]. Another area where they have had a great impact is robotics, based on the control of robots through the P300 potential [7], which is useful for people with psychomotor problems, allowing the use of robotic limbs or prostheses. Moreover, Arrichello et al. [26] proposed an assistant robot, controlled by the P300 potential, to perform manipulation tasks that may help in daily life operations. Finally, the generation of the potential using the Oddball paradigm has been used for user authentication, for example, based on the recognition of a set of faces [27].
Cybersecurity on BCI.
Although BCIs have progressed significantly in recent years, studies addressing cybersecurity in this area are scarce, and it remains an open challenge [28]. Recently, some works have partially studied specific aspects of cybersecurity in BCI, such as the possibility of disrupting neuronal signaling during neurostimulation [4]. One of these studies [29] proposed a classification of attacks depending on the field of application of BCI: neural applications, user authentication [30], entertainment and video games, and smartphone applications.
A work by Landau et al. [31] studied the feasibility of obtaining information about a subject's personality while playing video games using a BCI. This experiment used different machine learning algorithms to classify the data captured during the game. Once the data were available, they were compared with those obtained when the subject was resting. With this, they obtained 73% accuracy, enough to demonstrate that they could violate the privacy of the subjects who had undergone the experiment.
Meng et al. [32] created an experiment focused on the integrity of the captured data, applying deliberate modifications to them. In the same way, Sundararajan [33] developed a laboratory scenario where the author ran tests with different attacks against the proposed scenario. These attacks were as follows: (1) passive eavesdropping, intercepting the data without the user being aware; (2) active interception, collecting the data and discarding or resending them; (3) denial of service; and (4) data collection, modification, and retransmission to obtain a different response.
Focusing on possible attacks through visual stimuli [34], Martinovic et al. [13] showed four different types of images in their experiment: (1) automated teller machines (ATMs), (2) debit cards, (3) geographical points, and (4) famous people. With these images, the authors sought to obtain private information about the user, especially the user's place of residence, using the geographic points. The images appeared randomly throughout the experiment, and each image was shown for 250 ms. Before starting the experiments, the subjects had to pass a training phase in which each user's P300 was classified. For this training, random numbers were displayed, and the user was asked to count the number of occurrences of each number, which was used as a calibration.
Frank et al. [15] tested the effectiveness of subliminal attacks in extracting private information, following the same protocol as for supraliminal stimuli. The procedure consisted of an initial calibration phase of the BCI using a series of random numbers. The protocol then consisted of showing a 15-minute video, taken from the film "The Gold Rush" (1925), with inserted images of former President Barack Obama and another unknown person. They concluded that the results were similar to those obtained previously with supraliminal images, being able to extract information from the subjects without them knowing they were being studied. Table 1 compares the parameters used in this work with those in the literature. Since these parameters can differ depending on the objective of the studies, the comparison is restricted to works with a similar purpose, namely compromising the integrity of the user's data. The most relevant parameters have been considered, such as how long the images are displayed and the task the subject has to perform.
Although there are studies in the literature on subliminal and supraliminal attacks, the scarcity of studies and the different methodologies used do not allow comparison. Because of this, the next section analyzes subliminal stimuli from a psychological point of view to determine if they are biologically possible.
Subliminal Stimuli.
Subliminal stimuli are extremely controversial, with researchers disagreeing on whether they affect human behavior or not. The first time these stimuli were used was in 1972 by the market analyst Karremans [35]. In this experiment, he showed the phrase "Drink Coca-Cola" on a cinema screen and, according to his results, the sales of Coca-Cola increased by 20%. Years later, he admitted that he had never conducted this experiment. Such stimuli had also been studied earlier, for example, by Freud [36], who determined that stimuli could have a small influence on factors such as sleep or wakefulness. Lundy and Tyler [37] indicated that there are subliminal effects in audiences exposed to sound or visual messages when the voice volume is lowered, because this change in perception demands greater, sometimes extreme, attention from the listener. As their perceptual capacity is enhanced, they generally pick up what is meaningful, which is what has been previously encoded with this intention.
Another use of this type of stimulus is called priming [38], an effect related to implicit memory, whereby exposure to certain stimuli influences the response to subsequently presented stimuli [39]. This concept is widely used in the educational field, for example, when a teacher asks students to read the topic before the lesson. This first read improves later attention in class due to the priming effect [40]. Following this methodology, several experiments have proposed this approach to influence users' decisions [39]. Nevertheless, many other experiments deny that this has a real impact on the user [41].
In general, the effects of subliminal visual stimuli are not well established, and numerous studies claim they are useless. Pratkanis et al. [42] reviewed more than 200 works and concluded that none of them provided reliable evidence that subliminal messages influence behavior. Many of these works did not find the desired effect, while those finding an effect suffered from methodological flaws. In this regard, Moore [43] stated: "there is no empirical evidence for more substantial subliminal effects, such as eliciting specific behaviors or changes in motivation". There is some evidence of subliminal perception (not persuasion), i.e., minimal processing of information that escapes the conscious mind. An example of this is the so-called cocktail party phenomenon [44]. This situation consists in the increase of a subject's attention after identifying his/her name, even when the attention was focused on different tasks.
This indicates that the brain processes information without the subject being aware of it. However, so far, no studies have demonstrated effects on motivation and behavior similar to those claimed by advocates of subliminal persuasion.
To emphasize the ineffectiveness of subliminal stimuli, Pratkanis et al. [42] conducted a study of retail audio tapes containing subliminal messages aimed at improving either self-esteem or memory. Neither self-esteem nor memory improved in any of the subjects. This experiment was repeated two more times, and the results showed that the subliminal stimuli did not have any effect.
BCI Framework: Generation, Acquisition, Processing, and Visualization of P300
This section describes the design and implementation details of each element of the proposed BCI framework, which is graphically represented in Figure 2. The first phase is responsible for EEG acquisition and hosts the software in charge of connecting the BCI device to the computer; it also saves the acquired data for later processing. Once the data are stored in a file, the EEG processing procedures are performed. The purpose of this phase is to facilitate the study of the P300 in the next stage. For this, it is necessary to follow a series of steps, such as filtering, editing (e.g., downsampling to resample the signal), and labeling the data. Such labeling allows marking the points of interest, such as stimulus appearance, stimulus disappearance, and time of appearance of the P300. Finally, the last step of our proposal is the visualization of the P300. For this, the framework graphically displays the EEG to allow the identification of the P300 potential, the main goal of this work. Each of the processes that integrate the framework and the functions they perform is detailed below.
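The editing and labeling steps of the processing phase can be sketched as follows. The decimation factor and matching tolerance are illustrative assumptions, and a real pipeline would apply a low-pass filter before downsampling:

```python
def downsample(samples, factor):
    """Naive decimation: keep every factor-th sample. A real pipeline
    would low-pass filter first to avoid aliasing."""
    return samples[::factor]

def label_events(timestamps, stimulus_times, tolerance=0.002):
    """Mark which EEG sample timestamps coincide with a stimulus onset,
    within a small tolerance in seconds (an illustrative value)."""
    return [1 if any(abs(t - s) <= tolerance for s in stimulus_times) else 0
            for t in timestamps]
```

The resulting labels mark the points of interest (stimulus appearance and disappearance) that the visualization stage relies on to locate the P300.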
BCI Headset.
One of the fundamental pillars of this work is the BCI headset, which makes it possible to obtain the brain waves of a subject and develop the necessary experiments. Therefore, the choice of the BCI device is of vital importance for the objective pursued. The equipment used in this work is the OpenBCI UltraCortex Mark IV EEG Headset [45], which is oriented towards the academic sector. This BCI has several advantages, such as setup speed, as the BCI configuration is straightforward, and its low cost compared to other devices with similar specifications. However, it cannot be used for medical purposes due to the low number of channels and the limited accuracy of its electrodes. Additionally, the BCI kit adds electrodes that can be attached to the body, which allows performing ECG to measure cardiovascular activity or even studying muscle activity (EMG).
The OpenBCI UltraCortex Mark IV EEG Headset integrates its intelligence in the Cyton biosensing board. In addition, the BCI is connected to the computer via Bluetooth, where the computer uses a USB receiver, enabling the reception of data. The BCI headset follows the international 10-20 System and offers 35 placement positions, which can be seen in Figure 3. Although the interface can use up to 16 electrodes simultaneously, this work considers 8 electrodes. In particular, the locations selected are FP1, FP2, C3, C4, P7, and P8. Additionally, we have used O1 and O2, because they correspond to the occipital area of the brain, responsible for reacting to visual stimuli.
Once the BCI is ready, the next step is to configure the presentation of both known and unknown images. For this, the framework implements a Python script that displays the images and marks the timestamp at which each one has been exposed. In particular, the script saves, in Unix timestamp format, the starting and ending times of the experiment and the exact time of each stimulus appearance.
This will be used in subsequent processes for the labeling of the EEG signal. It is also relevant to mention that the script has several configurations for different experiments, adapting the sample rate of the target images (more details in Section 4). Finally, it is essential to note that, as it is not a static video but a process running on the machine, it may inevitably suffer delays. The idea is to reduce as much as possible the background processes running on the machine and assign real-time priority to the process, both for memory and CPU.
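As a rough illustration (not the authors' actual script), the timestamp-logging part of such a presentation script could look like the following Python sketch; the display call is stubbed out, and all names are hypothetical:

```python
import time

def run_presentation(image_sequence, sample_time_s):
    """Present a sequence of images and log Unix timestamps.

    `image_sequence` is a list of ("target"/"nontarget", name) tuples;
    the display itself is stubbed out, since only the timing log matters here.
    """
    log = {"start": time.time(), "stimuli": []}
    for kind, name in image_sequence:
        shown_at = time.time()          # Unix timestamp of stimulus onset
        # display(name) would go here in a real script
        if kind == "target":
            log["stimuli"].append(shown_at)
        time.sleep(sample_time_s)       # hold the image on screen
    log["end"] = time.time()
    return log
```

In a real run, `sample_time_s` would take the per-experiment values (0.5, 0.25, 0.1, 0.05, 0.01 s) and the log file would later be matched against the EEG recording during labeling.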
EEG Acquisition.
The software used by the framework to acquire the EEG signal is the OpenBCI GUI, offered by OpenBCI. This application is the one that best adjusts to our needs, and the simplest one, although it also has some disadvantages. Among the advantages, it allows seeing graphically and in real time the data received from each electrode, making it easier to perform the initial calibration and check that the electrodes are touching the scalp correctly. Another advantage is the ability to record the experiments and later reproduce them. These reproductions are possible thanks to the fact that it saves all the data in a file. Nevertheless, this is also a disadvantage, since the text file format is only understandable by the OpenBCI application, and it is not consistent between different versions of the application.
The OpenBCI application offers compatibility with the Lab Streaming Layer (LSL) communication functionality. LSL [46] is a library designed to allow the transmission of data between different devices. Several tools have been built on top of this library, such as data recording, file import, and applications that allow data from various acquisition hardware (a BCI, for example) to be available on the laboratory network. The OpenBCI GUI uses this protocol as a gateway to other applications. After the reception of the data, the next step is to store all the data in a portable file, which external applications can understand. The file format selected for this purpose is CSV.
Therefore, an external application is needed to receive the data via LSL and convert them into a CSV file. This functionality is provided by OpenViBE, a platform dedicated to the design, testing, and use of BCIs. OpenViBE is extremely intuitive, since it is based on so-called "boxes." Each box offers a specific functionality, such as file writing, data filtering, or graphical representation of the data. This offers the possibility of deploying a complete scenario, which receives the data, processes them by applying the selected filters, and exports them to a CSV file. The proposed solution uses OpenViBE to assemble a scenario that receives the data through LSL and exports them to a CSV file.
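The paper's pipeline does this conversion with OpenViBE, but the core idea — receive timestamped multichannel samples and persist them as CSV — can be sketched in a few lines of Python. The sample source is stubbed here (in practice it would wrap an LSL inlet, e.g. via the pylsl library); all names are illustrative:

```python
import csv

def record_to_csv(sample_source, path, n_channels=8):
    """Write (timestamp, ch1..chN) samples to a CSV file.

    `sample_source` yields (timestamp, [ch1, ..., chN]) tuples; in a real
    setup it would wrap an LSL inlet, but any iterable works here.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + [f"ch{i + 1}" for i in range(n_channels)])
        rows = 0
        for ts, channels in sample_source:
            writer.writerow([ts] + list(channels))
            rows += 1
    return rows
```

The resulting file is a portable, application-independent record of the session, which is exactly the property the authors wanted from the CSV export.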
EEG Processing.
The EEG processing phase intervenes after storing the recorded EEG. Our framework uses MATLAB for EEG processing since it is one of the most widely used software tools for this purpose. There are a large number of plugins that extend the MATLAB functionality, with Letswave [47] being one of the most used and well known for BCI. Letswave is specially optimized for EEG data and provides multiple functions for processing neurophysiological signals, including data processing and analysis of time- and frequency-domain signals, also presenting a GUI to ease the process. Additionally, it allows the comparison of the data between each of the steps performed, such as filtered and unfiltered data.
After configuring Letswave, the framework imports the data previously stored in the CSV file. Letswave version 7 offers the possibility of importing data from MATLAB. Once the data are loaded into the framework, it applies a notch filter at 50 Hz and a band-pass filter between the frequencies of 5 Hz and 30 Hz. Finally, the signal is downsampled by a factor of 5, leaving it with 50 samples per second (originally 250). At this point, algorithms for feature extraction such as ICA [48] or bilinear analysis [49] could be applied.
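The downsampling step (250 Hz to 50 Hz, i.e., decimation by a factor of 5) can be illustrated with a minimal sketch. The actual processing is done in Letswave; note that a proper pipeline low-pass filters before decimating to avoid aliasing, a role played here by the prior 5-30 Hz band-pass:

```python
def downsample(signal, factor=5):
    """Reduce the sample rate by keeping every `factor`-th sample.

    Assumes the signal has already been low-pass filtered (here by the
    5-30 Hz band-pass), so decimation does not introduce aliasing.
    """
    return signal[::factor]

# One second of a 250 Hz recording becomes 50 samples per second.
raw = list(range(250))
assert len(downsample(raw)) == 50
```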
After processing, the next step is to label the data. For this, one more column, called the "control column," is added to the data (Figure 4). This column has all values set to zero, except when a stimulus is displayed, where it takes a higher value than the rest of the data. In this way, it is easier to discriminate the points of interest in the graphs.
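A minimal sketch of this labeling step, assuming the EEG is held as a list of rows and stimulus onsets are known by sample index (the names and the marker value are illustrative):

```python
def add_control_column(samples, stimulus_indices, mark=100.0):
    """Append a control column: 0 everywhere, `mark` at stimulus onsets.

    `samples` is a list of rows (one per EEG sample); `stimulus_indices`
    is a set of row indices at which a stimulus was displayed.
    """
    labeled = []
    for i, row in enumerate(samples):
        control = mark if i in stimulus_indices else 0.0
        labeled.append(list(row) + [control])
    return labeled
```

Plotting the control column alongside the EEG then makes the stimulus onsets stand out visually, which is the purpose described above.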
P300 Display.
With the EEG data ready for study, the last process of the framework is to plot the EEG and detect possible P300 potentials. Several tools are available for this purpose. Letswave 7 offers a module with a series of functionalities for creating figures and the graphic representation of the EEG. The graphing data module of this tool helps make the first contact and check that the data are correct. However, the generation of figures exported for external use is complex, so this option has been discarded. Thus, the figures have been created with Excel, as it is simpler and meets the requirements for the purposes we are pursuing.
For the visual identification of the P300 potentials, the literature has visually studied the behavior of the signal in terms of voltage and amplitude. The shape of a P300 potential starts with a decrease of the signal, which can reach negative voltage values, followed by an increase of the voltage up to a peak and then a slight decrease. After this, the maximum voltage peak occurs; in our particular case, usually up to 40 µV.
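This visual criterion can be approximated programmatically. The sketch below is a simplified heuristic, not the paper's method: it only checks for a peak of plausible amplitude in the 200-1000 ms post-stimulus window, and the thresholds are assumptions for illustration:

```python
def p300_candidate(eeg_uv, stim_idx, fs=50, min_peak_uv=5.0, max_peak_uv=40.0):
    """Heuristic check for a P300-like peak 200-1000 ms after a stimulus.

    `eeg_uv` is a list of voltages in microvolts sampled at `fs` Hz
    (50 Hz after the downsampling step). The amplitude thresholds are
    illustrative assumptions, not values taken from the paper.
    """
    lo = stim_idx + int(0.2 * fs)   # 200 ms after stimulus onset
    hi = stim_idx + int(1.0 * fs)   # 1000 ms after stimulus onset
    window = eeg_uv[lo:hi]
    if not window:
        return False
    peak = max(window)
    return min_peak_uv <= peak <= max_peak_uv
```

A full detector would also check the latency and the preceding negative dip described above; this sketch captures only the peak-in-window idea.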
Experiments
This section details the design and setup of the experiments performed and the protocol used to conduct them. In this sense, this section offers information about the subjects intervening in the experiments, including relevant personal and medical information. Finally, we compare the results obtained and analyze whether there are similarities between them. Figure 5 presents the scenario used to validate the framework, whose functionality has been tested with ten subjects. In particular, both target and nontarget images were presented to the subjects wearing a BCI headset. The cerebral activity captured due to these stimuli was used as input to the implemented BCI framework. Moreover, the personal information of these subjects can be found in Table 2. In general, the subjects participating in the experiments had an average age of 23.7 years and were mainly men. Most subjects had no diagnosed neurological diseases, except for one subject who presented hyperactivity and attention deficit disorder.
Five individual experiments were performed on each subject, based on sampling target images differentiated from a more extensive set of nontarget images, with about an 8% probability of occurrence. These experiments aimed to detect whether these target images produced a P300 potential in the subject. The target images have been displayed at the same size as the nontarget images. Likewise, no attempt has been made to hide them; they have been displayed full screen. The videos shown in the five experiments were a series of images with a total average duration of 50 seconds (depending on the sample time of the target image). The experiments differed in the sampling time of target and nontarget images, decreasing the target image sampling time in each successive experiment (500 ms, 250 ms, 100 ms, 50 ms, and 10 ms). The sample times of the nontarget images remained at 500 ms regardless of the experiment. The target images were personalities known by the subjects (Figure 6(a)), shown to them before starting the experiment to ensure that they were recognized. Nontarget images were neutral images with which the subjects were not familiar, usually of natural landscapes (Figure 6(b)). These images varied from experiment to experiment, thus preventing them from being recognized by the subjects. Similarly, the number of times the target image appeared was randomized. This follows the state of the art, where it is confirmed that the time between target images influences the generation of the P300.
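The oddball-style sequence described above (roughly 8% targets, randomized positions) can be generated with a short sketch; the function name and defaults are illustrative:

```python
import random

def oddball_sequence(n_images=100, target_prob=0.08, seed=None):
    """Generate a randomized target/nontarget sequence (oddball paradigm).

    Roughly `target_prob` of the images are targets; positions are
    shuffled so the inter-target interval varies, as in the experiments.
    """
    rng = random.Random(seed)
    n_targets = max(1, round(n_images * target_prob))
    sequence = ["target"] * n_targets + ["nontarget"] * (n_images - n_targets)
    rng.shuffle(sequence)
    return sequence
```

Shuffling rather than drawing each image independently fixes the exact number of targets per run while still randomizing when they appear.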
Concerning the protocol followed for the execution of the experiments, we begin by detailing the environment in which the subjects were at the time of the experiment. In this sense, a room with as little noise as possible was selected to avoid the subject receiving external stimuli that could generate noise in the EEG or alter the generation of the P300. Subjects were asked to try to avoid all possible movements, especially facial movements. These movements add a significant amount of noise to the acquired data and may cause a P300 wave to be lost. Similarly, they were asked to mentally count the number of occurrences of the target image. This is not necessary to generate a P300, but it ensures that the subject is concentrating on the experiment. Between each experiment, approximately 30 seconds was allowed for the subject to settle back in and to check that all electrodes were still in contact with the scalp. It also allowed the subject to pay more attention to the new experiment, avoiding fatigue. Once the details of the experiments have been presented, we analyze the results obtained from each experiment. To validate the EEG signals acquired, a figure from the literature is taken as a reference, including both P3a (target) and P3b (nontarget) (Figure 7). The results obtained for each subject and experiment are shown in Table 3. In this table, three possible types of responses to a target image are identified. The first consists of the identification of a P300 (✓), the second is that no P300 has been detected (✗), and, finally, a pattern resembles the P300, but it is not possible to confirm it (?). The values for each cell represent a value over the total of three or four tests, depending on the experiment. For the first experiment, in which the images were shown for 500 ms to the subjects, at least one P300 wave was obtained in all subjects, indicating that the subjects' brains commonly recognized this duration of the image.
In total, for all subjects, 17 P300 waves have been elicited, exceeding 50% of the target images presented.
For experiment 2, the target image sampling time was reduced to 250 ms, and the number of times the target image was displayed was increased. In this case, 16 P300 waves have been detected, representing a 40% rate of occurrence over the total number of target images presented. This is 10% lower than in experiment 1 and may be due to the reduction of the target image sampling time. However, many factors can influence this reduction, such as the subjects' concentration in performing the task. Experiments 3 and 4, whose sampling times are 100 ms and 50 ms, respectively, present similar results. For these experiments, P300 waves have been obtained for 50% of the displayed stimuli.
The results between experiments are quite similar, because the stimulus that the subject receives with these sampling times is quite similar (the image cannot be observed too clearly). However, there is an improvement between these two experiments with respect to experiment 2, although the sampling time has been reduced by more than half. We highlight that this situation could be caused by the perception that subjects have of the sampling time, where the transition between target and nontarget images is more clearly appreciated than the target image itself. Finally, experiment 5 reduces the sampling time to 10 ms, representing the situation in which the images are subliminal. The aim is to study whether this stimulus can be processed by the brain or not. In this experiment, some results have been classified as doubtful. This is motivated by certain variations existing in the EEG during the intervals where the P300 should appear, between 200 ms and 1000 ms after the stimulus, that could be interpreted as P300 potentials. However, this cannot be confirmed, because they differ from the usual latency and peak voltage. In the same way, according to the literature, where it is stated that these stimuli do not impact brain activity, it is most likely noise from other external factors.

Figure 6: Example of (a) target image and (b) nontarget image.
The time required to perform the experiments can be divided into two phases: on the one hand, the time needed to generate the EEG and, on the other hand, the time required to process it and display the P300. For the calculation of the total time, the following aspects have been considered: the preparation of the stage and the check of the correct functioning of the material (approximately ten minutes), each sample of images, which lasts about five minutes, and the two minutes left between each video for rest to avoid visual fatigue. This makes a total of 35 minutes for the conduct of all the experiments for each subject. After this, preprocessing is applied, which takes about 20 minutes, while image generation takes 20 minutes per experiment, 100 in total for the five experiments. In summary, for each subject, the total amount of time, starting from the moment when the EEG is captured until it is processed and converted into images, can be calculated as follows: preparation time + (image sampling time × 5 + rest time × 5) + preprocessing time + (image creation time × 5) = 10 + (5 × 5 + 2 × 5) + 20 + (20 × 5) = 165 minutes per subject.
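The arithmetic above can be reproduced directly; the parameter names are illustrative, and the defaults are the paper's figures in minutes:

```python
def total_time_per_subject(prep=10, sampling=5, rest=2, preprocess=20,
                           figures=20, n_experiments=5):
    """Total minutes per subject, reproducing the paper's breakdown."""
    return (prep
            + (sampling + rest) * n_experiments
            + preprocess
            + figures * n_experiments)

assert total_time_per_subject() == 165  # 10 + 35 + 20 + 100
```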
Analyzing the results in terms of the subjects, rather than focusing on each individual experiment, we have observed that older subjects have a slightly longer latency to the onset of the P300. This may be why the distribution is reasonably uniform, since the ages are not too far apart (16-31 years). On the other hand, the gender difference does not seem to have affected the P300 potential. However, it is not possible to delve too deeply into the gender difference due to the imbalance between males and females in the experiments.
Conclusions
This work presents a study of the feasibility of visual attacks aiming to compromise the privacy of the user's data employing the P300 potential. For this purpose, a scenario based on EEG-based BCI technologies has been designed. Therefore, as a contribution of this work, a BCI framework has been developed to supply all the BCI cycle phases. The experiments performed first display known (target) and unknown (nontarget) images to the subject. Simultaneously, the BCI framework receives and stores the EEG sent by the BCI device. Once the data have been stored, they are processed and displayed to visually detect the existence of a P300 event. To test its functionality, five experiments have been designed and tested on ten subjects. In each of these experiments, the sampling time of the target image has been reduced (500 ms, 250 ms, 100 ms, 50 ms, and 10 ms), moving from supraliminal stimuli to subliminal stimuli. It is worth mentioning that a comprehensive and exhaustive configuration of experiments has been performed, addressing an important gap existing in the literature and helping to understand whether subliminal stimuli can generate P300 waves or not. After the experiments, we have observed that supraliminal stimuli produced P300 over 50% of the time in the subjects. On the other hand, no evidence has been found that subliminal stimuli produce any impact on the users' brains. Some additional conclusions have been obtained; for example, a more significant number of P300 were obtained in younger subjects, and with a shorter latency.
As future work, we will focus on presenting subliminal stimuli differently, such as hiding them in the video, e.g., following the approach of Frank et al. [15]. On the other hand, we plan to improve the conditions in which the experiments have been performed, using an environment more isolated from external stimuli. Regarding the BCI used, we also plan to increase the number of electrodes used to 16 and use muscle electrodes. In the same way, it is planned to increase the number of subjects participating in the experiment, trying to provide a greater variety of ages or diagnosed diseases. In this way, it will be possible to conclude more clearly whether these parameters affect the P300 potential.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this study.
Nutritive cost of intraguild predation on eggs of Coccinella septempunctata and Adalia bipunctata (Coleoptera: Coccinellidae)
Coccinella septempunctata was approximately 20% more reluctant to eat the eggs of Adalia bipunctata than the reverse. In addition, fourth instar larvae of C. septempunctata failed to complete their development on a diet of A. bipunctata eggs and only 30% of those of A. bipunctata completed their development on a diet of C. septempunctata eggs, and the survivors took nearly 2 times as long as those fed aphids. This is an indication that the costs of intraguild predation might outweigh the benefits.
INTRODUCTION
Associated with large aggregations of prey are usually large numbers of insect predators belonging to several taxa (Rosenheim et al., 1993). For example, two or more species of anthocorid bugs, hoverflies, ladybirds, mirid bugs and parasitoids may all attack a population of sycamore aphids, Drepanosiphum platanoidis (Schrank), at the same time (Dixon & Russel, 1972; Dixon, 1998). In summer, when immature individuals of the sycamore aphid become scarce, the nymphs of Anthocoris confusus Reuter and A. nemorum (L.) resort to feeding on sycamore aphid mummies containing parasitoids. The proportion of anthocorid nymphs that reach maturity each year is dependent on the abundance of parasitized aphids (Dixon & Russel, 1972): when the main food resource becomes scarce, the various members of this aphidophagous guild may resort to eating one another (Lucas et al., 1998; Obrycki et al., 1998a,b). Similarly, the frequency of intraguild predation in the field between the ladybirds Adalia bipunctata (L.), Coleomegilla maculata (De Geer) and Hippodamia convergens (Guerin) increases when the aphid population on maize crashes (Schellhorn & Andow, 2000). In addition, many ladybirds like A. bipunctata choose to oviposit near large aggregations of aphids, and as a consequence their eggs suffer a higher mortality from predation by other insect predators attracted by the aphids than species like C. maculata, which oviposits far from large aggregations of aphids (Schellhorn & Andow, 1999). Intraguild predation is seen as adaptive, as it supplies a source of food as well as removing potential competitors (Polis et al., 1989). As it is often asymmetrical (but see Rosenheim et al., 1995), with the sedentary stages of the various natural enemies more at risk of being eaten than the active stages, one would expect the more vulnerable species and stages to evolve means of avoiding being eaten. For example, parasitoids tend to cease ovipositing and leave areas where ladybirds are present and represent a threat to the survival of their offspring (Taylor et al., 1998). However, with a few notable exceptions, little attention has been given to how species might defend themselves against intraguild predators (Canard & Duelli, 1984; Eisner et al., 1996).
In this paper the effect of feeding fourth instar larvae of A. bipunctata and C. septempunctata on their own and the other species' eggs was studied under laboratory conditions. In order to (a) exclude physical defence as a factor, eggs rather than larvae were used as prey, so that differences in performance of the predator could be attributed to nutritive effects, and (b) facilitate the measurement of performance, fourth instar larvae were used as predators as they have the greatest growth potential of all the instars.
Ladybird culture
The ladybirds were reared as previously described in Hemptinne et al. (1998), except that the adults of A. bipunctata and C. septempunctata were kept at 15°C and 25°C, respectively, in order to encourage egg production. The egg clusters were removed from the paper on which they were usually laid by cutting round them with fine scissors. From the beginning of the third instar, the larvae were isolated in 5-cm diameter Petri dishes lined with filter paper.
Nutritive value of hetero- and conspecific eggs for fourth instar larvae of A. bipunctata and C. septempunctata
A third instar larva of A. bipunctata was isolated in a 5-cm diameter Petri dish and fed daily an excess of pea aphids (Acyrthosiphon pisum Harris). As soon as it moulted to the fourth instar, it was weighed. It was then fed daily an excess of pea aphids until it pupated, when it was weighed again. This was repeated 19 times (control). The experimental procedure involved feeding daily either 15 larvae kept in isolation with batches of freshly laid conspecific eggs or 10 larvae with batches of eggs of C. septempunctata. Their survival, time spent in the fourth instar and consumption of eggs were recorded, and their relative growth rates were calculated as follows: RGR = (ln Wf − ln Wi) / D, where Wf is the weight of a pupa less than 24 h old, Wi the weight of the larva at the beginning of the fourth instar and D the duration of the instar in days.
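The relative growth rate formula can be made concrete with a small sketch; the weights in the usage comment are hypothetical, not measured values from the study:

```python
import math

def relative_growth_rate(pupal_weight_mg, initial_weight_mg, duration_days):
    """RGR = (ln Wf - ln Wi) / D, as defined for the fourth instar.

    Wf is the weight of a pupa less than 24 h old, Wi the weight at the
    beginning of the fourth instar, and D the instar duration in days.
    """
    return (math.log(pupal_weight_mg) - math.log(initial_weight_mg)) / duration_days

# Hypothetical example: a larva growing from 4 mg to 16 mg over 5 days.
rgr = relative_growth_rate(16.0, 4.0, 5.0)
```

Because the formula uses log weights, the units of weight cancel; only the ratio Wf/Wi and the duration matter.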
The mortalities were compared by means of χ² tests, and the duration of development and relative growth rates by means of Mann-Whitney and Kruskal-Wallis tests and non-parametric multiple comparisons (Zar, 1996).
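For illustration, the Pearson χ² statistic underlying the mortality comparisons can be computed from a contingency table of observed counts as follows. This is a sketch, not the authors' code; a full test would additionally compare the statistic against the χ² distribution with the appropriate degrees of freedom:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table (list of rows).

    Rows might be diets, columns survived/died counts; the statistic sums
    (observed - expected)^2 / expected over all cells.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat
```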
The same experiment was repeated using fourth instar larvae of C. septempunctata. In this case 17 larvae were fed aphids, 17 conspecific eggs and 20 the eggs of A. bipunctata.
All these experiments were done at a temperature of 20°C ± 1°C, under artificial lighting of 2,000 lux and a photoperiod of 16 h light and 8 h darkness.
Nutritive value of hetero- or conspecific eggs for fourth instar larvae of: (a) A. bipunctata
Fourth instar larvae of A. bipunctata developed and survived as well on a diet of conspecific eggs as on a diet of pea aphids (Table 1). In contrast, a diet of C. septempunctata eggs resulted in 70% of the larvae dying, a mortality rate significantly greater than when fed either conspecific eggs or aphids (χ² = 16.57; 2 d.f.; P < 0.001). Those that survived took significantly longer to complete the fourth instar (Kruskal-Wallis statistic = 22.861; 2 d.f.; P < 0.001), and had a tendency to grow more slowly than larvae fed aphids or conspecific eggs (Kruskal-Wallis statistic = 6.536; 2 d.f.; P = 0.038). One of the three larvae that survived was lighter at the end than at the beginning of the fourth instar. Frequently, after ingesting yolk from seven-spot eggs, two-spot larvae were observed to vomit a black liquid.
(b) C. septempunctata
Larvae of C. septempunctata fed conspecific eggs grew significantly more slowly than those fed aphids (Mann-Whitney's U = 2.000; P < 0.001). Similarly, there was also a significant difference in the length of the fourth instar (Mann-Whitney's U = 282.000; P < 0.001; Table 1). However, as the larvae often ate all the eggs provided, they were probably never satiated. It was not possible to give them more than 62 eggs per day on average because the supply of eggs was limited by the size of the culture of C. septempunctata. The observed differences in growth rate and development time could therefore result from a relative shortage of food.
A diet of eggs of A. bipunctata resulted in the death of all the larvae of C. septempunctata, after an average of 8.5 days. The mortality was significantly greater than that observed when larvae were fed pea aphids or conspecific eggs (χ² = 74.01; 2 d.f.; P < 0.001; Table 1). That is, the eggs of A. bipunctata appeared to be more toxic to C. septempunctata than the reverse.
DISCUSSION
The study of Agarwala & Dixon (1992) on A. bipunctata and C. septempunctata, that of Agarwala et al. (1998) on Menochilus sexmaculatus (F.) and C. transversalis, and that of Cottrell & Yeargan (1998) on Harmonia axyridis Pallas and C. maculata indicate that these species are more reluctant to eat each other's eggs than their own eggs. In the first two cases, interestingly, it is the eggs of the smaller species that are less likely to be eaten. This is also supported by our results. As all stages of ladybirds contain species-specific alkaloids (Pasteels et al., 1973), which are known to be toxic to vertebrates (Frazer & Rothschild, 1960; Marples et al., 1989), it is not unreasonable to assume that these alkaloids also afford the ladybirds protection from invertebrate predators. A. bipunctata adults contain more alkaloid per unit weight than C. septempunctata (de Jong et al., 1991; Holloway et al., 1991).
The results presented here on the incidence of predation by A. bipunctata on C. septempunctata eggs (50%) and that of C. septempunctata on A. bipunctata eggs (30%) are very similar to the 62% and 23%, respectively, reported by Agarwala & Dixon (1992). Therefore, A. bipunctata eggs appear to be more strongly protected than those of C. septempunctata. In addition, our results show that only 30% of the A. bipunctata larvae completed their development on a diet of C. septempunctata eggs and took nearly two times longer to do so, whereas none of the C. septempunctata larvae completed their development on a diet of A. bipunctata eggs. This indicates that the eggs of both species are clearly toxic to the other species. As all stages of ladybirds contain similar concentrations of the species-specific alkaloids (Pasteels et al., 1973), it could be assumed that all stages of both species are toxic to the other species. Although the prey in this study were eggs, it is likely that similar results would have been obtained with larvae as prey.

In the field it is very unlikely that ladybird larvae would feed solely on ladybird eggs. It is more likely that they eat ladybird eggs and larvae along with aphids, i.e., a mixed diet. However, when an aphid population on which ladybirds are feeding crashes, the proportion of conspecific and heterospecific larvae in the diet may be very high. The eating of eggs and larvae of other species of ladybird is likely to reduce the quality of the diet of the ladybird. Poor quality food generally prolongs larval development in ladybirds (Blackman, 1965, 1967; Radwan & Lovei, 1983a; Obrycki et al., 1998a,b), which as well as delaying maturity is also likely to increase the risk of the larvae being killed. Consequently, there are potential costs associated with intraguild predation.

In conclusion, well-fed larvae should avoid eating the immature stages of other ladybirds because the costs in terms of prolonged development and decreased survival are potentially large. However, when starving, eating the immature stages of other ladybirds, even though they are toxic, could be advantageous because it may prolong their survival, especially if combined with other more acceptable prey. That is, for intraguild predation to be advantageous the benefits should outweigh the costs.

Table 1.
Number dying, egg consumption, duration of development and relative growth rate of fourth instar larvae of Adalia bipunctata and Coccinella septempunctata fed either pea aphids, conspecific or heterospecific eggs. (1) Figures followed by the same letter in the same column and for each species do not differ significantly at α = 0.05.
An investigation into the role of listed property shares in a retirement fund portfolio in South Africa
The main aim of this research paper was to investigate the role of listed property shares in a retirement fund portfolio in South Africa, one objective being to determine the appropriate weightings to be allocated to listed property shares. This research paper uses data collected from January 1995 to December 2004. The Elton and Gruber computer programme is used to test the data to give optimal weightings to the listed property sector and to produce an efficient frontier. The results of this research paper demonstrated the benefits offered by listed property shares and revealed that the sector should be treated as a separate asset class from equity owing to low correlation of returns between these two classes of assets. Results also demonstrated that an increase in the allocation to the listed property sector results in better investment performance over the study period.
Introduction
Real estate is an investment tool available to everyone, whether in the form of home ownership or commercial use. Real estate is believed to be an alternative to other forms of investment assets. "These alternatives range from passive investments in companies that own and manage real properties to active investments in which the individual owns properties and rents the space" (Mayo, 2002: 767). Most scholars in favour of real estate investments anticipate two main benefits of investment in real estate, namely diversification and inflation-hedging benefits. As an alternative to owning real estate, investors invest in shares of real estate investment trusts. These shares are "bought and sold like the stocks of other companies" (Mayo, 2002: 784). It could therefore be inferred that portfolio/investment managers must invest either directly in real estate (property) or indirectly in real estate shares (listed property) in order to maximise the benefits (in the form of diversification and inflation-hedging) of their investment portfolios.
However, the Alexander Forbes Large Manager Watch Survey (2004) shows that investment managers in South Africa have not recognised the full benefits offered by listed property, owing to the low allocation of funds to this sector. The I-Net Bridge (2004) database shows that listed property shares pay out a major portion of their income during periods of downward trends in capital markets. According to Chen and Mills (2004), any portion of total return that is achievable with greater certainty limits the potential downside of an investment and lowers the vulnerability of the investment returns to negative surprises. The implications are that listed property shares possess unique attributes that contribute directly to low riskiness of any portfolio.
Portfolio managers are constantly evaluated on how they perform, that is, the total return they achieve for the investor. An overriding objective on the part of a portfolio manager is therefore to maximise returns while minimising risk. Unfortunately, even though the portfolio manager might be aware of the risks, the returns from listed property have been more difficult to quantify because listed property in South Africa is poorly researched (Maritz & Miller, 2004). These authors point out that academic research on the role of listed property in investment portfolios has received little attention, both internationally and locally. This implies that listed property will continue to be regarded as a neglected asset class as long as the latter is not researched with far more diligence. To achieve such an outcome, this paper attempts to investigate the role of listed property shares in a retirement fund portfolio in the South African context. The specific intention is to investigate some of the most important facets of the listed property sector, such as its relationship with equities, its portfolio risk-reduction ability and its general performance relative to equity.
Literature review
According to a report by Datamonitor (2004), Australia was one of the first countries to follow the United States' lead with the introduction in 1960 of real estate investment trusts (REITs). Canada followed suit, but only in the early 1990s. Reilly and Brown (2003) describe REITs as investment funds that hold portfolios of real estate investments. The Employee Retirement Income Security Act (ERISA) was passed in the mid-1970s to limit abuse in the pension fund world by creating a series of regulations to govern plan sponsor behaviour (Winograd & McIntosh, 2002). According to Winograd and McIntosh (2002), ERISA was inspired by modern portfolio theory to promote prudent portfolio diversification in order to reduce overall risk and stimulate pension fund investment in property.
Hudson-Wilson, Fabozzi and Gordon (1990) identified four main reasons why a portfolio should be exposed to property. Firstly, overall risk-reduction of the portfolio would be accomplished by combining asset classes that respond differently to expected and unexpected market conditions. Secondly, it would achieve returns above the risk-free rate and deliver strong cash flows to the portfolio. In other words, this would improve the risk-adjusted performance of a portfolio as measured by the Sharpe index. Thirdly, it would hedge against unexpected inflation, ensuring that the portfolio produces positive real returns. Finally, it would constitute part of a portfolio that is a reasonable reflection of the overall investment universe and the economy.
According to a number of authors, investment management attention has shifted from an emphasis on asset allocation to a more balanced emphasis on diversification and the interrelationship of individual asset class characteristics within the portfolio. Karlberg, Crocker and Greig (1996) suggest that a 9 per cent allocation to real estate is optimal, rather than the 20 per cent figure suggested in other studies. Hoesli, Lekander and Witkiewicz (2003b) found that investing directly in offshore real estate reduced portfolio risk. In a study by Hoesli, Lekander and Witkiewicz (2004) and in support of Hoesli, Lekander & Witkiewicz (2003a), property was found to be a very effective portfolio diversifier in seven countries on three continents over the 1987-2001 study period.
Other authors doubt the ability of property to reduce portfolio risk. For example, Glascock, Lu and So (2000) show that, from 1987 to 1991, listed property was segmented from equity, but was co-integrated from 1992 to 1996. These authors argue that the benefits of including listed property in a multi-asset portfolio have diminished since 1992. Glascock et al. (2000) also show that over their study period listed property was co-integrated with unsecuritised property. Their results suggest that the ability of listed property to reduce portfolio risk was reduced. On the other hand, Liang and McIntosh (1998) argue that the benefits of diversification from including listed property in a multi-asset portfolio increased after 1992. They conclude that the uniqueness of listed property as a risk diversifier is enhanced and that listed property should form a significant part of any portfolio.

Tarbert (1966: 77) defines a perfect hedge against inflation as "an asset where the nominal returns perfectly co-vary with inflation". In general, property has been perceived as providing a hedge against inflation. Most research into the ability of property to hedge against inflation shows that, in the long run, property seems to provide a better hedge against inflation than does equity (Hoesli, 1994). Fraser, Leishman and Tarbert (2002: 354) suggest that there is a "low correlation between conventional gilts and property, as the former is inflation prone and the latter is generally viewed as an inflation hedge".
Given the four anticipated benefits of investing in listed property as reported by Hudson-Wilson et al. (1990), this research paper attempts to carry out an investigation into the role of listed property shares in a retirement fund portfolio in South Africa. Specifically, it establishes the optimal allocation to listed property that would increase the returns of the portfolio.
Research design

Data
Data were collected from two electronic feed sources, namely I-Net Bridge and the JSE Securities Exchange, on each of the different asset classes. These sources form part of reliable databases in South Africa. Proxies for asset classes were used to create a portfolio of mixed asset classes. Equities are represented by the JSE All Share Index (ALSI). Bonds are represented by the 7-12 year (medium-term) bond index, the All Bond Index (ALBI). The reason for the use of the ALBI is that the modified duration 1 of property is similar to that of medium-term bonds. Cash was used to represent a risk-free asset. Listed property is represented by the Property Unit Trust Index, the J255 index. Data used in this research paper consist of the weekly closing prices of well-established benchmarks, namely the market indices of the ALSI, the J255 and the ALBI for the ten-year period from January 1, 1995 to December 31, 2004. These data were downloaded from the I-Net Bridge research database.
Data manipulation
Total returns are calculated for each asset class. As reported by Msweli-Mbanga and Mkhize (2007), Affleck-Graves, Burt and Cleasby (1988) and Van den Honert, Barr and Smale (1988), share/index price returns (R_index) are computed using the following formula:

R_index = (P1 - P0) / P0

where P1 is the price of an index at the close of the last trading day in a week and P0 is the price of the index at the opening of trading in that week. Investment income is assumed to be incorporated in the market share price; that is, semi-strong market efficiency. The formula is adopted here to compute weekly index returns. Weekly returns of these indices were thereafter compounded to calculate an annualised return. All annualised returns were, in turn, averaged in order to obtain a single average return over the study period. Weekly returns were analysed each year in order to obtain standard deviations for that particular year. In addition, weekly returns of indices were regressed against each other to obtain their correlation coefficients over the study period.
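As a sketch of this pipeline, the weekly-return and annualisation steps can be written as follows (the prices below are illustrative, not the I-Net Bridge series used in the study):

```python
import numpy as np

def weekly_returns(prices):
    """Simple returns R_index = (P1 - P0) / P0 from a sequence of weekly prices."""
    p = np.asarray(prices, dtype=float)
    return (p[1:] - p[:-1]) / p[:-1]

def annualised_return(weekly_r):
    """Compound weekly returns and express the result as an annual rate (52 weeks/year)."""
    r = np.asarray(weekly_r, dtype=float)
    growth = np.prod(1.0 + r)
    years = len(r) / 52.0
    return growth ** (1.0 / years) - 1.0

# Hypothetical weekly index levels (not JSE data)
prices = [100.0, 102.0, 101.0, 103.0, 105.0]
r = weekly_returns(prices)
annual = annualised_return(r)
vol = np.std(r, ddof=1)  # weekly standard deviation (risk) for that period
# Correlation between two indices' return series: np.corrcoef(r_a, r_b)[0, 1]
```

The same three quantities (average annualised return, per-year standard deviation, pairwise correlations) are exactly the inputs the efficient-frontier step below consumes.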
Research methodology
After data manipulation, resultant data were fed into an investment management programme called the Elton and Gruber's Markowitz Module (the Investment portfolio, Version 1). This programme is used to produce the Markowitz Efficient Frontier. This module calculated covariances between indices (asset classes). After prompting for portfolio weightings (if any), the Elton and Gruber Module is ready to produce the Markowitz Efficient Frontier, which assists in determining optimal allocation weightings of listed property, as measured by the J255 index in this study. Prompting for weightings is a crucial step because it affords portfolio managers the opportunity of adhering to prudential investment guidelines, as different investors stipulate different policies and guidelines.
In order to construct a traditional portfolio, initial ALSI and ALBI weightings of 75 per cent and 25 per cent respectively are assumed. These maximum weightings are suggested by legislated prudential investment guidelines, which are intended to ensure a conservative investment spread for retirement funding products in order to protect the investor from loss of value due to risky investment selection. Furthermore, prudential investment guidelines state that the weightings to listed property should be limited to 25 per cent. This weighting of 25 per cent to listed property is fed into Elton and Gruber's Markowitz Module in order to determine the role listed property may play in a retirement fund portfolio.
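The Elton and Gruber module itself is proprietary, but the constrained optimisation it performs can be illustrated with a simple grid search honouring the prudential caps described above (75 per cent equity, 25 per cent listed property). All return and covariance figures here are hypothetical placeholders, not the study's data:

```python
import numpy as np

# Hypothetical annualised mean returns and covariance matrix for
# (equity, bonds, listed property) proxies - illustrative numbers only.
mu = np.array([0.16, 0.12, 0.14])            # ALSI, ALBI, J255 stand-ins
cov = np.array([[0.040, 0.004, 0.012],
                [0.004, 0.010, 0.005],
                [0.012, 0.005, 0.025]])
rf = 0.10                                    # cash proxy (risk-free rate)

best = None
step = 0.01
for w_eq in np.arange(0.0, 0.75 + 1e-9, step):        # prudential cap: equity <= 75%
    for w_prop in np.arange(0.0, 0.25 + 1e-9, step):  # prudential cap: property <= 25%
        w_bond = 1.0 - w_eq - w_prop
        if w_bond < -1e-9:
            continue  # weights must sum to 1 with no shorting
        w = np.array([w_eq, w_bond, w_prop])
        ret = float(w @ mu)                   # portfolio expected return
        sd = float(np.sqrt(w @ cov @ w))      # portfolio risk
        sharpe = (ret - rf) / sd              # reward-to-risk ratio
        if best is None or sharpe > best[0]:
            best = (sharpe, w, ret, sd)

sharpe, weights, ret, sd = best
```

Sweeping a target return instead of maximising the Sharpe ratio would trace out the efficient frontier the module produces.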
However, the Elton and Gruber Markowitz Module will recommend weightings of an optimised portfolio without stating the anticipated performance of such a portfolio. In order to do that, the risk-adjusted performance of both the traditional and optimised portfolios is computed using the Sharpe index. Fabozzi (1999) argues that the Sharpe index is a measure of the reward-to-risk ratio, where the risk of the asset is measured by the standard deviation (SD) of that asset. As reported by Msweli-Mbanga and Mkhize (2007), Mayo (2005) states that the Sharpe index (SI) is given as:

SI = (R_i - R_f) / SD_i

where R_i is the average return of the J255, ALSI or ALBI index, R_f is the return received from investing in a risk-free asset (cash is the proxy for a risk-free asset in this research paper), and SD_i is the standard deviation of the J255, ALSI or ALBI index. The higher the Sharpe index, the better the risk-adjusted performance of the variable under consideration.
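The Sharpe computation itself is a one-liner; with hypothetical figures (not the study's results):

```python
def sharpe_index(avg_return, risk_free_rate, std_dev):
    """Sharpe index SI = (R_i - R_f) / SD_i: excess return per unit of risk."""
    return (avg_return - risk_free_rate) / std_dev

# Hypothetical: an index returning 16% p.a. with 12% volatility against a 10% cash rate
si = sharpe_index(0.16, 0.10, 0.12)  # ~0.5
```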
Research results
Annualised returns of indices over the study period are presented in Table 2 below. Annual standard deviations of indices over the study period are depicted in graph 1 below. Table 4 (below) presents the risk-adjusted performance of the market indices used over the study period.
Discussion and conclusions
According to Table 2, the yearly property returns ranged from -11.34 per cent in 1996 to 55.99 per cent in 1999. Equity returns ranged from -6.62 per cent in 2002 to 70.56 per cent in 1999. In the case of bonds, yearly returns were at the highest rate of 36.39 per cent in 1995, slumping to a low of 3.38 per cent in 1998. The risk-free rate was at its highest point of 17.79 per cent in 1998 and reached the lowest rate of 8.22 per cent in 2004. Over the study period, the highest rate achieved by any asset class was 70.56 per cent, realised in the equity market in 1999. The lowest rate (highest loss) of -11.34 per cent was experienced by the property market in 1996.
Graph 1 shows that listed property experienced the highest risk (standard deviation) of 3.18 per cent in 1998 and the lowest risk of 1.15 per cent in 1995. In the equity market, the highest level of volatility was at 4.63 per cent in 1998 and the lowest level of risk (1.61 per cent) was achieved in 1996. In the bond market, risk reached its lowest level of 0.76 per cent in 2003 and was at its highest level of 3.07 per cent in 1998. Graph 1 also shows that the highest risk recorded for cash is 0.42 per cent in 2004. This risk level may be immaterial to investors who regard cash as a risk-free asset. As expected, the higher level of nominal returns realised in the equity market is accompanied by a high level of risk. For example, the period from 1995 to 1996 shows a low level of risk, and from 1998 to 1999 there is a high level of risk. Nominal returns are at their highest during the 1998 to 1999 period and, on average, at their lowest over the 1995 to 1996 period. Because the risk-free rate is high over the 1998 to 1999 period, the equity market risk premium is not increased over the 1998 to 1999 period. Consequently, the performance of the equity market is not superior over the 1998 to 1999 period. In conclusion, the result of this study suggests that the 1995 to 1996 period was the low-risk period on the South African capital market. On the other hand, highest levels of risk were experienced across the combined South African capital market from 1998 to 1999.
Generally, bonds are less risky than equities, which is why the initial weighting to equity is limited to 75 per cent. This is in accordance with legislated investment guidelines intended to protect the investor from loss of value due to risky investment selection. When constructing a traditional portfolio comprising only bonds and equity, and limiting the weighting to equity to 75 per cent as mandated by the investment guidelines for the retirement fund industry in South Africa, a total return of 15.46 per cent is realised. However, when listed property is taken into account and the investment guidelines for a retirement fund industry that includes this asset class in a portfolio are adhered to, a different set of results is realised. Elton and Gruber's Markowitz Module is used to compute optimal portfolio weightings.
Results of this programme show that the exposure to listed property should be 12.41 per cent (approximately 12 per cent), while the ALSI exposure should be 12.589 per cent (approximately 13 per cent). According to the programme, the highest exposure, to bonds, should be 75 per cent. This research paper uses these recommendations to calculate the new total return of a portfolio optimised according to the Elton and Gruber Markowitz Module. Results displayed in Table 4 (above) indicate that the total return of the portfolio increases from 15.46 per cent to 19.63 per cent. Elton and Gruber's Markowitz Module's results also show that the security market line of the efficient frontier, given the above-mentioned weightings, starts at 13.45 per cent. This risk-free rate of 13.45 per cent represents the compounded cash returns over the study period of this research paper.
However, the rates of return discussed above are simply market-risk premiums. In order to make economic sense and to indicate the role of listed property in a retirement fund portfolio, Sharpe indices of both the traditional and optimised portfolios are computed. Table 4 presents the Sharpe indices of the listed property sector, equity market and bond market. Results shown in this table indicate that the bond market yields the highest returns on a risk-adjusted basis, followed by listed property and then the equity market. In fact, the equity market realised a loss of 6.1 per cent on a risk-adjusted basis over the study period. Table 3 shows the correlation coefficients between indices. These results show that the correlation coefficient between the equity market and the listed property sector, at 0.582, is larger than the correlation coefficient between the listed property sector and the bond market. As a result, it would be more profitable to combine listed property assets with debt, owing to the low correlation, than it would be to combine listed property assets with equity. Results also show that most diversification benefits (illustrated by a lower correlation coefficient) will be realised when combining equity with debt in a portfolio. This is also supported by the low risk associated with bonds, or the higher Sharpe index of bonds. These correlation coefficient results (in Table 3) support the risk-adjusted performance (refer to Table 4) of a portfolio consisting mostly of bonds and listed property, as suggested by the Markowitz Module.
In conclusion, the Sharpe index of the listed property sector is higher than that of the equity market, and the correlation coefficient between the listed property sector and the equity market is approximately 0.5. These results support the hypothesis that listed property presents unique attributes, different from those presented by the equity market. These results also suggest that listed property contributes significantly and positively to the performance of the retirement fund industry in South Africa. In other words, listed property plays a positive role in the risk-adjusted performance of a retirement fund portfolio in South Africa.
Future research may focus on the qualitative factors that fund managers consider when investing or not in listed property.
Endnote
1 "Duration is a measure of the average (cash-weighted) term-to-maturity of a bond. There are two types of duration, Macaulay duration and modified duration. Macaulay duration is useful in immunisation, where a portfolio of bonds is constructed to fund a known liability. Modified duration is an extension of Macaulay duration and is a useful measure of the sensitivity of a bond's price to interest rate movements", author unknown [online] (http://www.finpipe.com/duration.htm) [accessed 16 February 2008].
Sub-nanotesla magnetometry with a fibre-coupled diamond sensor
Sensing small magnetic fields is relevant for many applications ranging from geology to medical diagnosis. We present a fiber-coupled diamond magnetometer with a sensitivity of (310 $\pm$ 20) pT$/\sqrt{\text{Hz}}$ in the frequency range of 10-150 Hz. This is based on optically detected magnetic resonance of an ensemble of nitrogen vacancy centers in diamond at room temperature. Fiber coupling means the sensor can be conveniently brought within 2 mm of the object under study.
I. INTRODUCTION
The sensing of magnetic fields using the nitrogen vacancy center (NVC) in diamond has seen rapid growth over the last decade due to the promise of high sensitivity magnetometry with exceptional spatial resolution [1,2] along with a high dynamic range [3]. The use of NVC ensembles rather than single centres improves sensitivity while degrading the spatial resolution [3][4][5][6][7][8][9][10][11][12][13]. Recent advancements have demonstrated ensemble sensitivities of 0.9 pT/√Hz for d.c. fields [14] and 0.9 pT/√Hz for a.c. fields [15]. However, these results have been limited to systems that are bulky and are typically fixed to optical tables. In contrast, fibre-coupling provides a small sensor head that may be moved independently from the rest of the control instrumentation and thus offers the possibility of application in medical diagnostic techniques such as magnetocardiography (MCG) [16,17]. Most fiber-coupled diamond magnetometers have relied on nanodiamonds/microdiamonds attached to the end of a fiber, achieving sensitivities in the range of 56000-180 nT/√Hz [18][19][20]. A set-up utilising a two-wire microwave transmission line in addition to fiber-diamond coupling achieved a sensitivity of ∼300 nT/√Hz [21]. A fiber-based gradiometer approach provided a sensitivity of ∼35 nT/√Hz, with projected shot-noise sensitivities potentially allowing for MCG [22,23]. Using a hollow-core fiber with many nanodiamond sensors in a fluidic environment provided a sensitivity of 63 nT/√Hz per sensor and a spatial resolution of 17 cm [24]. Other compact magnetometers that use a fiber have demonstrated sensitivities in the range of 67-1.5 nT/√Hz [25][26][27]. The best sensitivity reported for a fibre-coupled diamond magnetometer so far is 35 nT/√Hz when sensing a real test field [22], and 1.5 nT/√Hz when estimating the sensitivity from the signal-to-noise ratio and linewidth using the slope of a resonance in the magnetic resonance spectrum [27].
Other diamond magnetometers offering high portability whilst maintaining a compact structure have also been demonstrated, such as a compact LED-based design achieving a minimum detectable field of 1 µT with minimal power consumption [28]. Here, a diamond-based fiber-coupled magnetometer with sub-nT sensitivity is presented. The key feature is the use of lenses to reduce optical losses from the fiber to the diamond and back, as shown in figure 1b).
The NVC, when in its negative charge state, is a spin S = 1 defect that can be optically initialised into the m_s = 0 ground state and possesses spin-dependent fluorescence, giving rise to optically detected magnetic resonance (ODMR) [29]. The energy level diagram is shown in figure 1a). The Zeeman-induced splitting of the NVC enables the detection of magnetic fields with high sensitivity, where the sensitivity of the magnetometer scales with 1/√N, with N the number of centers probed [30,31]. The zero-field splitting at room temperature is ∼2.87 GHz. Upon application of an external magnetic field, Zeeman-induced splitting leads to the m_s = ±1 sub-levels being separated by

∆f = 2 g_e µ_B B_∥ / h,

where g_e = 2.0028 is the NVC g-factor, µ_B is the Bohr magneton, B_∥ is the projection of the external magnetic field onto the NVC symmetry axis (the 111 crystallographic direction) and h is Planck's constant. The energy levels are further split by the hyperfine interaction between the electron spin and the 14N nuclear spin (I = 1), with A ≈ 2.16 MHz. Under the continuous wave excitation scheme employed in this paper, the photon-shot-noise-limited sensitivity of a diamond-based magnetometer is given by

η ≈ (h / (g_e µ_B)) · ∆ν / (C √I_0),

where ∆ν is the linewidth, C is the measurement contrast (the reduction in fluorescence when on resonance compared to off resonance) and I_0 is the number of collected photons off resonance [5,32].
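These two relations can be checked numerically. The sketch below uses the standard CW-ODMR expressions with the constants quoted above (the 1 mT field value is an illustrative choice, not a parameter from the experiment):

```python
import math

H = 6.62607015e-34        # Planck constant (J s)
MU_B = 9.2740100783e-24   # Bohr magneton (J/T)
G_E = 2.0028              # NVC g-factor
D = 2.87e9                # zero-field splitting (Hz)

def resonance_frequencies(b_parallel):
    """m_s = 0 -> m_s = -1/+1 transition frequencies (Hz) for a field b_parallel (T)
    projected onto the NV axis; hyperfine structure neglected."""
    zeeman = G_E * MU_B * b_parallel / H
    return D - zeeman, D + zeeman

def shot_noise_sensitivity(linewidth, contrast, photon_rate):
    """Photon-shot-noise-limited sensitivity in T/sqrt(Hz):
    eta ~ (h / (g_e mu_B)) * dnu / (C * sqrt(I0))."""
    return (H / (G_E * MU_B)) * linewidth / (contrast * math.sqrt(photon_rate))

# The gyromagnetic ratio g_e * mu_B / h is about 28 GHz/T, so a 1 mT axial
# field separates the two resonances by roughly 56 MHz.
f_minus, f_plus = resonance_frequencies(1e-3)
```

Note the 1/√I_0 dependence: collecting more photons directly improves the sensitivity, which is why the collection-efficiency discussion later in the paper matters so much.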
Magnetometry is performed with the set-up shown in figure 1b). A Laser Quantum Gem-532 with a maximum power output of 2 W is used to excite the NVC ensemble; for our experiments 1 W was used to reduce laser noise. The laser beam is passed through a Thorlabs BSF10-A beam sampler, whereby approximately 1% is picked off and supplied to the reference arm of a Thorlabs PDB450A balanced detector to cancel out laser intensity noise; the illumination levels incident upon each photodiode are equal in the absence of microwaves.
The remaining (high-intensity) portion of the laser beam is focused into a custom-ordered For the second method known test fields were applied using a Helmholtz coil which was calibrated using a Hirst Magnetics GM07 Hall probe. The test fields were applied along (100) and the sensitivity was found to be (310 ± 20) pT/ √ Hz, as shown in figure 3. The worse sensitivity is due to this non-optimal test field orientation. For the targeted application the fields of interest will be applied along the (100) direction and thus the sensitivity using the second method is considered to be the true sensitivity. Our sensitivity improves on the value of 35 nT/ √ Hz previously obtained with a fiber coupled NVC magnetometer using applied test fields. The photon shot noise limit is calculated using equation 2 from the fluorescence which was measured to be 1.
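The quoted sensitivity over 10-150 Hz is read off from the noise floor of the magnetometer output's amplitude spectral density. A minimal sketch of that estimation, using simulated data (a hypothetical 300 pT/√Hz white floor plus a 10 nT, 50 Hz test tone, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 5000.0                   # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)  # 10 s record
floor = 300e-12               # assumed white-noise floor, T/sqrt(Hz)
# Simulated magnetometer output (tesla): tone + white noise with the chosen floor
x = 10e-9 * np.sin(2 * np.pi * 50.0 * t) + rng.normal(0.0, floor * np.sqrt(fs / 2), t.size)

def asd(x, fs, nseg=10):
    """Averaged single-sided amplitude spectral density (units of x per sqrt(Hz)),
    via a Welch-style averaged, Hann-windowed periodogram."""
    n = len(x) // nseg
    segs = x[:n * nseg].reshape(nseg, n)
    win = np.hanning(n)
    scale = fs * np.sum(win ** 2)  # window power normalisation
    psd = np.mean([2.0 * np.abs(np.fft.rfft(s * win)) ** 2 / scale for s in segs], axis=0)
    return np.fft.rfftfreq(n, 1 / fs), np.sqrt(psd)

freqs, amp = asd(x, fs)
band = (freqs > 100) & (freqs < 200)
noise_floor = np.median(amp[band])  # recovers roughly the 300 pT/sqrt(Hz) put in
```

The median over a quiet band gives a robust floor estimate even in the presence of discrete tones such as mains pickup.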
IV. DISCUSSION
Our sensitivity is 310 pT/√Hz, and thus we are a factor of ∼6 away from the shot-noise limit. This may be due to uncancelled laser and microwave noise, some of which could be removed through the implementation of a gradiometer, which would also alleviate ambient magnetic noise from the environment [34,35]. To detect signals for MCG, it is estimated that the required sensitivity would need to be over an order of magnitude beyond what we currently achieve [12,36].
The biggest limitation of our system is the collection efficiency, in which significant improvements are expected, as the conversion efficiency of green to red photons is calculated to be 0.03%. Improving this would also improve the excitation efficiency. Due to the high refractive index of diamond, n_d = 2.42, the majority of light emitted by the defects will undergo total internal reflection and thus escape through the sides of the diamond [7]. A possible option for improvement would be an adaptation of the fluorescence waveguide excitation and collection [37], which reported a 96-fold improvement in the light collected. Another approach would be to surround the diamond with a total internal reflection lens to collect light from the diamond sides and focus it toward a small area [38], which would be easier to integrate with our system, leading to an enhancement of 56 in the photon collection when compared to a lossless air objective of 0.55 N.A. This would represent a photon enhancement of ∼30 for our system and, assuming shot-noise-limited scaling, the measured sensitivity would become ∼60 pT/√Hz.
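The closing projection follows directly from shot-noise scaling: with N collected photons the sensitivity improves as 1/√N, so a ×30 photon enhancement maps the measured 310 pT/√Hz to roughly 57 pT/√Hz:

```python
import math

def projected_sensitivity(current, photon_enhancement):
    """Shot-noise-limited scaling: sensitivity goes as 1/sqrt(collected photons)."""
    return current / math.sqrt(photon_enhancement)

projected = projected_sensitivity(310e-12, 30)  # ~5.7e-11 T/sqrt(Hz), i.e. ~57 pT/sqrt(Hz)
```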
Ferrite flux concentrators have demonstrated a ×254 improvement in the sensitivity for a diamond magnetometer [14] at a cost of degrading the spatial resolution due to concentrating the flux from a large area and directing it toward a diamond. Due to the constraints of our system integrating the design discussed in [14] is not straightforward and thus the enhancement to sensitivity will be smaller. A further improvement would be to use the dual-resonance technique [14] which would allow our system to be invariant to temperature fluctuations [39] which is essential for practical applications of our magnetometer. Another way to introduce temperature invariance into our system would be the use of double-quantum magnetometry [40,41]. This would also be compatible with the use of pulsed schemes such as Ramsey magnetometry which would offer significant improvements to the sensitivity of a magnetometer compared to continuous wave excitation schemes [5,9]. However, it should be noted that significantly more laser excitation power and more homogeneous microwave driving fields will be required to realize the potential benefits of Ramsey magnetometry [42][43][44].
V. CONCLUSION
In this work a fiber-coupled magnetometer that reaches a sensitivity of (310 ± 20) pT/√Hz over the frequency range of 10-150 Hz has been presented. The mobility of the system and the compact nature of the sensor head are designed to target the application of magnetocardiography, with further improvements discussed to be able to reach higher sensitivities.

Appendix A: Diamond characterisation

Absorption spectra were measured using an Oxford Instruments Optistat cryostat. The concentration was determined to be 4.6 ppm for negatively charged NVC and 0.8 ppm for neutral NVC, found from the intensities of the 637 nm and 575 nm zero-phonon lines respectively [45]. FTIR data, figure 4b), were taken at room temperature using a Perkin Elmer Spectrum GX FT-IR spectrometer. The concentrations from FTIR were established to be 5.6 ppm for neutral substitutional nitrogen (N_s^0) and 3 ppm for positively charged substitutional nitrogen (N_s^+) [46,47].
Appendix B: Zero-crossing Slope vs. Modulation frequency

The variation of the zero-crossing slope as a function of the modulation frequency is shown in figure 5. It follows the expected trend of a decrease in the zero-crossing slope at higher modulation frequencies due to the finite repolarisation time of the NVC [5,33,48].
Despite the continued increase of the zero-crossing slope at progressively lower modulation frequencies, the best sensitivity was achieved at a modulation frequency of 3.0307 kHz (data not shown); we attribute this to an increased susceptibility to noise at particularly low modulation frequencies nearer to DC. The maximum value of the zero-crossing slope at a modulation frequency of 3.0307 kHz was 17.9 V/MHz.
Distinct acute effects of LSD, MDMA, and d-amphetamine in healthy subjects
Lysergic acid diethylamide (LSD) is a classic psychedelic, 3,4-methylenedioxymethamphetamine (MDMA) is an empathogen, and d-amphetamine is a classic stimulant. All three substances are used recreationally. LSD and MDMA are being investigated as medications to assist psychotherapy, and d-amphetamine is used for the treatment of attention-deficit/hyperactivity disorder. All three substances induce distinct acute subjective effects. However, differences in acute responses to these prototypical psychoactive substances have not been characterized in a controlled study. We investigated the acute autonomic, subjective, and endocrine effects of single doses of LSD (0.1 mg), MDMA (125 mg), d-amphetamine (40 mg), and placebo in a randomized, double-blind, cross-over study in 28 healthy subjects. All of the substances produced comparable increases in hemodynamic effects, body temperature, and pupil size, indicating equivalent autonomic responses at the doses used. LSD and MDMA increased heart rate more than d-amphetamine, and d-amphetamine increased blood pressure more than LSD and MDMA. LSD induced significantly higher ratings on the 5 Dimensions of Altered States of Consciousness scale and Mystical Experience Questionnaire than MDMA and d-amphetamine. LSD also produced greater subjective drug effects, ego dissolution, introversion, emotional excitation, anxiety, and inactivity than MDMA and d-amphetamine. LSD also induced greater impairments in subjective ratings of concentration, sense of time, and speed of thinking compared with MDMA and d-amphetamine. MDMA produced greater ratings of good drug effects, liking, high, and ego dissolution compared with d-amphetamine. d-Amphetamine increased ratings of activity and concentration compared with LSD. MDMA but not LSD or d-amphetamine increased plasma concentrations of oxytocin. None of the substances altered plasma concentrations of brain-derived neurotrophic factor.
These results indicate clearly distinct acute effects of LSD, MDMA, and d-amphetamine and may assist the dose-finding in substance-assisted psychotherapy research.
INTRODUCTION
Lysergic acid diethylamide (LSD) is a classic serotonergic hallucinogen that has been widely used recreationally [1] and to a limited extent in psychiatric research [2]. LSD acutely induces marked alterations of waking consciousness [3] that have been shown to primarily depend on an interaction with the serotonin 5-hydroxytryptamine-2A (5-HT2A) receptor [4], although LSD also acts on 5-HT1 and dopamine receptors [5]. Recent clinical trials indicate that the quality of the acute psychedelic experience in response to psilocybin or LSD predicts long-term changes in mental health and well-being in patients and healthy persons [6][7][8][9][10][11]. For example, greater psilocybin-induced mystical-type experiences and more pronounced and more positive acute alterations of consciousness were associated with lasting antidepressant responses in patients with depression [6,7]. 3,4-Methylenedioxymethamphetamine (MDMA) is the active compound in the recreational substance ecstasy and is currently investigated as an adjunct to psychotherapy to treat posttraumatic stress disorder (PTSD) [12,13]. MDMA not only exhibits some amphetamine-like properties but also shows hallucinogenic-like effects and can be considered an intermediate substance between a pure stimulant like D-amphetamine and a pure hallucinogenic drug like LSD. MDMA acutely induces feelings of well-being, love, empathy, and prosociality [14,15], and produces mild perceptual alterations that are thought to be primarily mediated by the release of serotonin (5-HT) [16,17] and norepinephrine [18], and the direct activation of 5-HT2A receptors [19]. Additionally, MDMA releases oxytocin [14,20,21], which may contribute to the mediation of its prosocial effects [22,23]. The unique emotional effects of MDMA have led to its classification as an empathogen or entactogen [24], referring to assumedly distinct effects from psychostimulants [25][26][27][28].
Psychostimulants such as D-amphetamine and methamphetamine primarily activate dopamine and norepinephrine systems, with only minimal effects on 5-HT [29,30], and promote stimulation, wakefulness, and concentration without the MDMA-typical emotional effects [25,27,28,31-35]. Although MDMA produces less profound changes in perception compared with classic hallucinogens, it is often also classified as a psychedelic substance. On the other hand, LSD was found to exhibit MDMA-like empathogenic mood effects such as increased closeness, openness, and trust [3], indicating overlapping properties with MDMA [14,27] that are potentially useful to assist psychotherapy. Whether and how the effects of MDMA are similar to or differ from those of the classic stimulant D-amphetamine and the classic hallucinogen LSD has not been studied under double-blind conditions in the same study. Comparative studies, particularly within-subjects comparisons of the acute effects of these prototypical substances, are lacking. Therefore, we compared for the first time the acute subjective, autonomic, and endocrine effects of doses with similar cardiovascular activity ("equivalent" doses) of LSD (0.1 mg), MDMA (125 mg), D-amphetamine (40 mg), and placebo in a cross-over study in healthy subjects. By comparing all three substances using a within-subject design, it is possible to directly assess differences and commonalities of these substances. Moreover, by including different substances with partially overlapping effects, it is also possible to considerably improve blinding. This latter point has been a serious shortcoming of almost all previous studies, which compared the effects of MDMA and LSD, respectively, with non-active placebo, which almost inevitably results in unblinding. Dose selection was critical because we could only compare single doses of each substance in this within-subjects study.
LSD was used at an intermediate dose of 0.1 mg that is representative of doses that are used recreationally [36] and in research [2]. A higher dose of 0.2 mg LSD has previously been shown to produce greater subjective effects than the 0.1 mg dose [37,38], but was not used in the present study because it was expected to produce greater alterations of waking consciousness than any of the other substances and would not have allowed brain imaging due to expected anxiety and movement artifacts in the scanner. MDMA was used at a high dose (125 mg) that produces the full range of empathogenic MDMA-typical effects [27] and is considered safe [39], and at the upper range of doses used in research investigating the safety and efficacy of MDMA-assisted psychotherapy in the treatment of PTSD [12] and in experimental studies in healthy participants [27,39,40]. Preferred recreational doses are slightly lower and in the range of 80-120 mg [41]. Higher doses are expected to produce largely similar subjective positive responses, but considerably more adverse effects [39,41]. D-Amphetamine was also used at a rather high dose (40 mg) that is in the upper range of doses that are used in patients and in research [31,32,34,[42][43][44].
The main goal of the present study was to describe and compare the subjective and autonomic effects of all three substances over time and determine plasma concentration-time profiles (pharmacokinetics). We hypothesized that LSD would induce more pronounced and qualitatively different alterations of waking consciousness, assessed by the 5 Dimensions of Altered States of Consciousness (5D-ASC) scale and the Mystical Experience Questionnaire (MEQ), compared with MDMA and D-amphetamine [37]. We predicted that MDMA would produce distinct subjective emotional effects compared with D-amphetamine [25,27,28] and induce greater increases in plasma concentrations of oxytocin than LSD and D-amphetamine [3,14]. Finally, we explored effects on plasma concentrations of brain-derived neurotrophic factor (BDNF), a biomarker linked to neurogenesis, because psychedelics have been shown to have neuroregenerative potential and may alter BDNF [45,46]. Altogether, we tested whether prototypical hallucinogens, empathogens, and psychostimulants are indeed substances with distinct acute-effect profiles in humans, for the first time using a head-to-head comparison within the same study and participants.
Study design
We used a double-blind, placebo-controlled, cross-over design with four experimental test sessions to investigate the responses to 0.1 mg LSD, 125 mg MDMA, 40 mg D-amphetamine, and placebo in 28 healthy participants (14 females, 14 males). The washout period between sessions was at least 10 days. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee Northwest Switzerland (EKNZ). The administration of LSD, MDMA, and D-amphetamine in healthy subjects was authorized by the Swiss Federal Office of Public Health, Bern, Switzerland. All of the participants provided written consent before participating in the study, and they were paid for their participation. The study was registered at ClinicalTrials.gov (NCT03019822).
Participants
Twenty-eight healthy subjects (14 men, 14 women; 28 ± 4 years old [mean ± SD]; range, 25-45 years; body weight, 71.5 ± 12.0 kg) were recruited from the University of Basel. Participants who were younger than 25 years old were excluded from participating in the study because of the higher incidence of psychotic disorders and because low age has been associated with more anxious reactions to hallucinogens [47]. Additional exclusion criteria were age >50 years, pregnancy (urine pregnancy test at screening and before each test session), personal or family (first-degree relative) history of major psychiatric disorders (assessed by the Semi-structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, 4th edition, Axis I disorders by a trained psychiatrist), the use of medications that may interfere with the study medications (e.g., antidepressants, antipsychotics, sedatives), chronic or acute physical illness (abnormal physical exam, electrocardiogram, or hematological and chemical blood analyses), tobacco smoking (>10 cigarettes/day), lifetime prevalence of illicit drug use >10 times (except for Δ9-tetrahydrocannabinol), illicit drug use within the last 2 months, and illicit drug use during the study (determined by urine drug tests). A previous study found no difference in the response to LSD between hallucinogen-naive and moderately experienced subjects (<10 times) [3]. However, we wanted to exclude frequent substance users because extensive previous uncontrolled experiences may influence/condition new substance experiences [47]. The participants were asked to abstain from excessive alcohol consumption between test sessions (no more than 10 standard drinks/week) and particularly to limit their use to one drink on the day before the test sessions. Additionally, the participants were not allowed to drink xanthine-containing liquids after midnight before the study day.
We performed urine drug tests at screening and before each test session, and no substances were detected during the study. We did not screen for alcohol use.
Study procedures
The study included a screening visit, a psychiatric interview, four 12-h experimental sessions, and an end-of-study visit. The experimental sessions were conducted in a quiet standard hospital patient room. Only one research subject and one investigator were present during the experimental sessions. The participants could interact with the investigator, rest quietly, or listen to music via headphones, but no other entertainment was provided. LSD, D-amphetamine, or placebo was administered at 9:00 a.m. MDMA or placebo was administered at 9:30 a.m. This was because of the different times to peak effects for each substance, so that the functional magnetic resonance imaging (fMRI) scan and other assessments could be performed during the expected time-matched peak drug effects [26,27,32,48,49]. The fMRI scan was performed at 11:00 a.m.-12:00 p.m., and the fMRI findings will be published elsewhere. Autonomic and subjective effects were assessed repeatedly throughout the session. Blood was collected to determine endocrine effects and substance concentrations.
Study drugs
LSD (D-lysergic acid diethylamide base, high-performance liquid chromatography purity >99%; Lipomed AG, Arlesheim, Switzerland) was administered in a single intermediate oral dose of 100 µg [50]. D-Amphetamine sulfate (40 mg salt; Hänseler, Herisau, Switzerland) was administered in a relatively high dose in the form of gelatin capsules as a single oral dose that corresponded to 30 mg D-amphetamine base [32]. MDMA hydrochloride (Lipomed AG, Arlesheim, Switzerland) was prepared as gelatin capsules and administered as a single oral dose of 125 mg, which is considered a relatively high dose [28,40,51,52]. Blinding to treatment was guaranteed by using a double-dummy method, with identical capsules and vials that were filled with mannitol and ethanol, respectively, as placebo. At the end of each session and at the end of the study, the participants were asked to retrospectively guess their treatment assignment.
Autonomic effects and adverse effects. Blood pressure, heart rate, and tympanic body temperature were repeatedly measured 1 and 0.5 h before and 0, 0.5, 1, 1.5, 2.5, 3, 4, 5, 6, 7, 8, 9, 10, and 11 h after drug administration (time specifications correspond to MDMA administration) as previously described in detail [60]. Pupil function was measured under standardized dark-light conditions and assessed using a Voltcraft MS-1300 luxmeter (Voltcraft, Hirschau, Germany) after a dark adaptation time of 1 min as previously described [61]. Adverse effects were assessed 1 h before and 11 h after drug administration using the 66-item List of Complaints [62]. This scale yields a total adverse effects score and reliably measures physical and general discomfort.
Endocrine effects. Plasma levels of oxytocin were measured at baseline and 1.5, 2.5, 3, and 5 h after MDMA administration.
Oxytocin concentrations were measured using the oxytocin enzyme-linked immunosorbent assay (ELISA) kit (ENZO Life Sciences, Ann Arbor, MI) according to the manufacturer's protocol as previously described [63]. The plasma levels of BDNF were measured at baseline and 3 and 5 h after drug administration. Plasma BDNF levels were measured using an ELISA kit (Biosensis Mature BDNF Rapid ELISA kit: human, mouse, rat; Thebarton, Australia) as previously described [64]. Analyses were performed at the end of the study in one batch.
Statistical analyses
For measures repeatedly taken over time during each session, we first determined the peak effects (Emax and/or Emin) or peak changes from baseline (Table 1). The values were then analyzed using repeated-measures analysis of variance, with drug as the sole within-subjects factor, followed by Tukey's post hoc comparisons based on significant main effects. The criterion for significance was p < 0.05.
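As described above, repeated measurements within each session are first reduced to peak changes from baseline (Emax/Emin) before the analysis of variance. That reduction step can be sketched as follows; the function name and sample values are ours, purely illustrative, and not taken from the study:

```python
def peak_changes(baseline, series):
    """Return (Emax, Emin): the maximal increase and maximal decrease
    from a single pre-drug baseline over a series of post-drug values."""
    deltas = [value - baseline for value in series]
    return max(deltas), min(deltas)

# Example: heart rate (beats/min) for one hypothetical subject in one session
baseline_hr = 70
post_drug_hr = [72, 85, 95, 90, 80, 74]
e_max, e_min = peak_changes(baseline_hr, post_drug_hr)
print(e_max, e_min)  # 25 2
```

One such Emax (or Emin) value per subject and condition would then enter the repeated-measures ANOVA with drug as the within-subjects factor.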
RESULTS
All 28 participants completed the MDMA, D-amphetamine, and placebo sessions. One participant quit before the final LSD session, and only the data from the other sessions were included in the analysis.
Subjective mood effects
Subjective effects were measured over time using VASs (Fig. 1). The corresponding peak responses are presented in Table 1. LSD produced an overall greater response than both MDMA and D-amphetamine, reflected by significantly higher increases in ratings of "any drug effect," "good drug effect," "bad drug effect," and "ego dissolution" compared with MDMA and D-amphetamine. LSD also produced greater "drug liking," "drug high," and "stimulation" than D-amphetamine, whereas the effects of LSD on these scales did not significantly differ from MDMA. MDMA and D-amphetamine but not LSD increased peak ratings of "concentration" compared with placebo and LSD (Table 1). In contrast, LSD induced greater mean reductions over time (Fig. 1) and greater maximal reductions in ratings of "talkative," "concentration," "sense of time," and "speed of thinking" compared with MDMA and D-amphetamine (Table 1). Only LSD, and not MDMA or D-amphetamine, induced significant "bad drug effects" compared with placebo. The overall effects ("any drug effect") of LSD, MDMA, and D-amphetamine lasted (mean ± SD) 8.5 ± 2.0 h, 4.4 ± 1.7 h, and 6.2 ± 2.0 h, respectively.
All three drugs similarly increased ratings of feeling "talkative" and "open." MDMA produced higher ratings of "any drug effect," "good drug effect," "drug liking," and "drug high" compared with D-amphetamine.
LSD was the only drug that induced marked alterations of mind, reflected by large increases on all subscales of the 5D-ASC (Fig. 3a, Supplementary Table S1) compared with placebo, MDMA (Tukey's post hoc tests: p < 0.001 for all comparisons), and D-amphetamine (p < 0.001 for all comparisons). MDMA only significantly increased ratings of "blissful state" compared with placebo, whereas D-amphetamine had no significant effects on any of the 5D-ASC subscales.
LSD increased ratings on all scales of the MEQ43 and MEQ30 compared with MDMA, D-amphetamine, and placebo (p < 0.001 for all comparisons), with the exception of nonsignificant differences in ratings of "deeply felt positive mood" for LSD and MDMA on the MEQ43 (Fig. 3b, Supplementary Table S1). MDMA significantly increased ratings of positive mood and ineffability (difficulty describing the experience in words) on the MEQ43 and MEQ30 compared with placebo (p < 0.01). D-Amphetamine moderately increased positive mood ratings on the MEQ43 and MEQ30.
On the Addiction Research Center Inventory (ARCI), LSD increased ratings on all subscales, indicating broad (mixed) hallucinogenic, sedative, and euphoriant effects (Supplementary Fig. S1), with the exception of a decrease on the benzedrine group scale, indicating lower stimulation. In contrast, D-amphetamine was the only drug that increased ratings on the benzedrine group scale.
Vital signs and adverse effects
The effects of the drugs on vital signs over time are shown in Fig. 4, and peak effects are shown in Table 1. All active substances significantly increased blood pressure, heart rate, and body temperature compared with placebo. Systolic hypertension (>140 mmHg) was seen in 23, 18, 14, and 3 participants after D-amphetamine, MDMA, LSD, and placebo, respectively. Tachycardia (>100 beats/min) was seen in 5, 5, 7, and 0 participants after D-amphetamine, MDMA, LSD, and placebo, respectively. D-Amphetamine produced a significantly higher increase in blood pressure compared with LSD and MDMA, whereas LSD and MDMA produced greater heart rate increases than D-amphetamine over the first 4 h, but all three drugs produced overall similar hemodynamic stimulation, considering the similar increases in the rate-pressure product. All three substances increased pupil size (Fig. 4, Table 1). However, only MDMA markedly and significantly impaired normal light-induced pupil constriction compared with placebo (Table 1, Supplementary Fig. S2). Only LSD increased the total acute (0-11 h) adverse effects score on the List of Complaints compared with placebo. Frequently reported adverse effects are presented in Supplementary Table S2. No severe adverse events were observed.
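The rate-pressure product used here as an overall index of hemodynamic stimulation is simply systolic blood pressure multiplied by heart rate. A minimal sketch of why drugs with different blood-pressure and heart-rate profiles can still show similar overall cardiac workload (the numeric values are illustrative, not study data):

```python
def rate_pressure_product(systolic_bp_mmhg, heart_rate_bpm):
    """Rate-pressure product: a simple index of cardiac workload."""
    return systolic_bp_mmhg * heart_rate_bpm

# A drug that raises blood pressure more (left) and one that raises
# heart rate more (right) can yield near-identical overall loads:
print(rate_pressure_product(150, 80))  # 12000
print(rate_pressure_product(135, 89))  # 12015
```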
Endocrine effects
MDMA but not LSD or D-amphetamine increased plasma concentrations of oxytocin (Fig. S4, Table 1). None of the substances altered plasma concentrations of BDNF (Fig. S4, Table 1).

Fig. 1 Subjective effects of LSD, MDMA, and D-amphetamine over time on the VASs. The data are expressed as mean ± SEM. LSD produced significantly greater ratings of "any drug effect," "good drug effect," "bad drug effect," and "ego dissolution" compared with MDMA and D-amphetamine. In contrast, LSD reduced ratings of "talkative," "concentration," "sense of time," and "speed of thinking" compared with MDMA and D-amphetamine. MDMA produced greater ratings of "any drug effect," "good drug effect," "liking," "high," and "ego dissolution" compared with D-amphetamine. The corresponding maximal responses and statistics are shown in Table 1.
Plasma drug concentrations
Distinct acute effects of LSD, MDMA, and D-amphetamine F Holze et al.
Blinding
Data on the participants' retrospective identification of the study substances are shown in Supplementary Table S3. All of the participants correctly identified placebo, 96% correctly identified LSD, 75% correctly identified MDMA, and 75% correctly identified D-amphetamine. MDMA was misclassified as D-amphetamine and vice versa (21%). One participant (4%) misidentified LSD as MDMA and vice versa. One participant (4%) identified D-amphetamine as placebo. Thus, LSD was well distinguished from MDMA and D-amphetamine.
DISCUSSION
As hypothesized, LSD produced stronger and more distinct subjective effects compared with MDMA and D-amphetamine. Specifically, only LSD induced significant and marked alterations of consciousness on all 5D-ASC and MEQ subscales compared with placebo, and responses were also significantly greater compared with MDMA and D-amphetamine. In contrast, MDMA only moderately increased "blissful state" on the 5D-ASC scale and "positive mood" and "ineffability" on the MEQ. D-Amphetamine only weakly increased "positive mood" on the MEQ compared with placebo. Additionally, LSD produced greater overall subjective effects, including both "good drug effects" and "bad drug effects," on the VAS compared with both MDMA and D-amphetamine. Only LSD produced significant "bad drug effects" on the VAS, "anxiety" on the 5D-ASC scale, and "LSD group" effects and "pentobarbital-chlorpromazine-alcohol group" effects on the ARCI compared with placebo. Finally, LSD was correctly identified by 96% and 100% of the participants on the day of administration and at the end of the study, respectively. However, similarities were also observed in the effects of all compounds on scales that measured positive drug effects. All of the drugs produced comparable ratings of "open" and "talkative" on the VAS, and ratings of "drug high," "drug liking," and "stimulated" on the VAS did not differ between LSD and MDMA. The present findings are overall consistent with previous reports on the effects of LSD [3,4,38,50,65], MDMA [18,25,28], and D-amphetamine [32]. In contrast to these previous studies, however, the present study compared the subjective responses to LSD, MDMA, and D-amphetamine using a within-subjects design. Subjective effects of various substances can differ, depending on the comparator that is used. For example, marked effects of MDMA on the 5D-ASC scale compared with inactive placebo have been previously reported [18]. 
However, when MDMA was compared with LSD in the present study, it induced only minimal and comparatively weak alterations of consciousness. The present findings have clinical implications. First, acute effects of the LSD-like hallucinogen psilocybin on both the 5D-ASC scale and MEQ also used in the present study have been shown to predict long-term therapeutic outcomes in patients with anxiety and depression in previous studies [6][7][8]. Similarly, 5D-ASC scale and MEQ ratings correlated with changes in well-being and life satisfaction 1 year after LSD administration in healthy subjects in a previous study [10]. Thus, stronger acute responses to LSD on the 5D-ASC scale and MEQ, as documented in the present study in healthy participants and previously in patients [37], may also predict better therapeutic outcomes in studies that evaluate the benefits of LSD-assisted psychotherapy in patients with anxiety and depression [66,67]. However, this assumption needs to be verified in patients. Second, the present study found that MDMA produced some qualitatively similar (although less pronounced) positive effects compared with LSD, but with lower associated "bad drug effects" and anxiety. Thus, MDMA may produce fewer untoward effects than LSD, which may favor its use in patients who are afraid to take LSD or at risk of adverse reactions (i.e., high neuroticism, high emotional lability, and young age [47]). For example, MDMA could be used prior to LSD or psilocybin in substance-assisted psychotherapy so that patients can familiarize themselves with substance-induced states; in fact, MDMA has often been used in the first 1-3 sessions before the use of LSD in substance-assisted psychotherapy in Switzerland [66,68,69].

Fig. 2 Subjective effects of LSD, MDMA, and D-amphetamine over time on the AMRS. The data are expressed as mean ± SEM changes from baseline. D-Amphetamine increased ratings of activity and concentration compared with LSD. LSD increased ratings of inactivity compared with MDMA and D-amphetamine. LSD increased introversion and reduced extraversion compared with MDMA and D-amphetamine. MDMA and D-amphetamine increased ratings of well-being compared with placebo, whereas LSD produced no significant effect compared with placebo, and its effects did not differ from MDMA or D-amphetamine. LSD significantly increased emotional excitation and anxiety compared with MDMA and D-amphetamine. The corresponding maximal effects and statistics are shown in Table 1.
In the present study, we also directly compared the acute effects of MDMA and D-amphetamine, and we hypothesized that MDMA would produce distinct subjective emotional effects compared with D-amphetamine. Previous studies have discussed the extent to which the effects of these amphetamines differ [25,27,28,70]. The present study supports the view that the empathogen MDMA produces at least some clearly distinct effects compared with a pure stimulant, such as D-amphetamine. In the present study, MDMA produced greater ratings of "any drug effect," "good drug effect," "drug high," and "drug liking" on the VAS, greater ratings of "positive mood" on the MEQ, and smaller "benzedrine group" effects on the ARCI than D-amphetamine. MDMA also induced greater impairments in "concentration" and "speed of thinking" compared with D-amphetamine. In contrast and as predicted, MDMA but not D-amphetamine increased plasma oxytocin concentrations, which is thought to be attributable to the MDMA-induced release of 5-HT and 5-HT1A receptor stimulation [23]. Interestingly, the potent 5-HT1A and 5-HT2A receptor agonist LSD [5] did not significantly increase plasma oxytocin levels in the present study, in contrast to a previous study that used a higher dose of LSD and inactive placebo as the comparator [3]. Supporting the view of distinct effects of MDMA and D-amphetamine, 75% and 89% of the participants in the present study correctly identified MDMA and D-amphetamine on the day of administration and at the end of the study, respectively. However, MDMA and D-amphetamine also produced overlapping effects, including comparable increases in "open" and "talkative" on the VAS and "well-being" and "extraversion" on the AMRS, and a lack of significant "bad drug effects" or "anxiety" compared with placebo, in contrast to LSD. Similar partly overlapping effects of MDMA and lower doses of D-amphetamine (10-20 mg) have been previously reported [33,71]. Interestingly, both MDMA and D-amphetamine seemed to produce relatively comparable "empathogenic" effects in the present study, whereas such effects were somewhat more unique to MDMA compared with the stimulant methylphenidate [27,28]. Thus, MDMA and D-amphetamine are more alike than MDMA and methylphenidate, but this remains to be clarified in future studies.

Fig. 3 Subjective effects of LSD, MDMA, and D-amphetamine on the 5D-ASC scale and MEQ. The data are expressed as mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001, vs. placebo. a LSD produced significantly greater ratings on all dimensions and subscales of the 5D-ASC scale compared with MDMA, D-amphetamine, and placebo. The effects of MDMA tended to be greater than D-amphetamine, but these differences were not statistically significant. MDMA produced significant increases only on the blissful state subscale compared with placebo. The effects of D-amphetamine did not differ significantly from placebo on any of the scales. The corresponding maximal effects and statistics are shown in Table S1. b LSD produced significantly higher ratings on all scales of the MEQ43 and MEQ30 compared with MDMA, D-amphetamine, and placebo, with the exception of nonsignificantly different positive mood ratings for LSD and MDMA on the MEQ43. MDMA significantly increased positive mood and ineffability ratings on the MEQ43 and MEQ30 compared with placebo. D-Amphetamine significantly increased positive mood ratings on the MEQ43 and MEQ30, but these effects were significantly lower than MDMA. The corresponding maximal effects and statistics are shown in Table S1.
Pharmacologically, D-amphetamine and methylphenidate both activate the dopamine and norepinephrine systems without having relevant effects on 5-HT. However, D-amphetamine also releases monoamines similarly to MDMA, in contrast to the pure uptake inhibitor methylphenidate [29,72].
In the present study, LSD, MDMA, and D-amphetamine produced comparable sympathomimetic activation, reflected by similar increases in the rate-pressure product, body temperature, and pupil size. Additionally, LSD, MDMA, and D-amphetamine produced comparable amounts of total adverse effects, as evidenced by similar scores on the List of Complaints (Table 1), although there were some differences between the substances regarding the specific complaints (Table S2). These findings indicate that the doses of the drugs were similar with regard to sympathomimetic effects, including cardiovascular system stimulation and somatic complaints. The finding that LSD produced relatively pronounced sympathomimetic effects confirmed our previous studies [3,38] and contradicted the assumption that LSD does not increase blood pressure [67]. On the other hand, the study findings suggest that LSD is capable of inducing greater acute psychological effects (positive and negative) than MDMA and D-amphetamine at doses that produce comparable somatic adverse responses.
In the present study, we also determined plasma drug concentrations. Peak concentrations of MDMA and D-amphetamine were similar to previous studies that tested identical doses [32,39,73]. The full pharmacokinetic data for LSD derived from the present study have been published elsewhere [50]. Importantly, slightly higher plasma concentrations of LSD were documented in the present study compared with a previous study that reportedly used the same dose (0.1 mg) [49]. The higher plasma concentrations in the present study can be explained by the use of a higher dose (0.096 mg) of LSD base (analytically confirmed content and stability) compared with a lower estimated dose of 0.070 mg in previous studies [38,49], as discussed previously [50].
The main strength and novelty of the present study was that we employed a double-blind, placebo-controlled, within-subjects design that included different active substances and validated pharmacodynamic and substance concentration measurements. The present study also has limitations. We only used one dose level of each substance. Full dose-response curves would need to be generated for each substance to achieve valid comparisons. However, we used a relatively low dose of LSD compared with the doses of MDMA and D-amphetamine and nevertheless found stronger effects of LSD compared with MDMA and D-amphetamine. Additionally, a previous study that used a higher dose of LSD (0.2 mg) showed significantly greater acute subjective effects than 0.1 mg LSD (the dose used in the present study), but autonomic stimulation was similar between doses [38]. Specifically, the higher dose produced both greater "good drug effect" and "bad drug effect" ratings on the VASs [38] and higher ratings of "blissful state," "insightfulness," and "changed meaning of percepts," but no increase in "anxiety" on the 5D-ASC [37] compared with the lower dose of LSD. Thus, both desired and untoward drug effects were dose-dependent, and future multiple dose-level studies will be needed to further define ideal dose ranges. Higher doses of LSD up to 0.2 mg that are already clinically used [2,67] can be expected to produce even greater subjective effects than the dose (0.1 mg) that was used in the present study. The dose of MDMA that was used in the present study is in the upper range of doses that are used clinically; higher doses would not likely produce stronger positive subjective effects, but would likely result in more adverse somatic responses [39]. Finally, we found that the doses of all of the active substances were equivalent with regard to autonomic stimulation. Nevertheless, there is a need for additional studies including multiple dose levels and additional outcomes such as imaging.

Fig. 4 Autonomic responses to LSD, MDMA, D-amphetamine, and placebo. The data are expressed as mean ± SEM. All of the active substances produced significant sympathomimetic stimulation, reflected by increases in systolic and diastolic blood pressure, heart rate, body temperature, and pupil size. Importantly, the overall hemodynamic response, expressed as the rate-pressure product, was similarly increased by all of the active substances compared with placebo. However, D-amphetamine produced significantly higher increases in blood pressure than LSD and MDMA. Conversely, LSD and MDMA produced greater increases in heart rate than D-amphetamine during the first 4 h. The corresponding maximal effects and statistics are shown in Table 1.
In conclusion, the present study found that LSD induced different and more pronounced alterations of waking consciousness compared with MDMA and D-amphetamine in the same subjects. MDMA also showed partly distinct effects compared with D-amphetamine. The acute-effect profiles of LSD and MDMA will be useful to assist the dose selection for substance-assisted psychotherapy research and to inform patients and researchers on what to expect in terms of positive and negative acute responses to these substances.
FUNDING AND DISCLOSURE
The authors declare no competing financial interests. This work was supported by the Swiss National Science Foundation (grant no. 320030_170249).
Semi-automated Pipeline to Produce Customizable Tactile Maps of Street Intersections for People with Visual Impairments
Street intersections are very challenging for people with visual impairments. Manually produced tactile maps are an important support in teaching and assisting independent journeys, as they can be customized to serve the visually impaired audience with diverse tactile reading and mobility skills in different use scenarios. But manual map production involves a huge workload that makes the maps less accessible. This paper explores the possibility of semi-automatically producing customizable tactile maps of street intersections. It presents a parameterized, semi-automated pipeline based on OSM data that allows the maps to be customized in size, map features, geometry processing choices, and symbolizations. It produces street intersection maps at two scales and in three sizes, with different levels of detail and styles.
Introduction
Crossing the street is an inevitable part of urban journeys, yet it can be very challenging for people with visual impairments (PVIs). Without vision, it can be difficult to acquire the necessary information about a street intersection, such as the layout of the streets and the location of the pedestrian crossing, in order to execute the crossing at a safe location and time. During orientation and mobility (O&M) training and independent journeys, tactile maps have been an important tool for conveying the geometric layout and surrounding environment of street intersections, as well as teaching concepts about traffic flows (Wiener et al., 2010). These tactile maps are still made manually by tactile transcribers or O&M instructors, as currently there is no automated tactile mapping service available for this inevitable but challenging part of the journey.
Although tedious, one advantage of manual mapping is that it can cater to the diverse needs of PVIs: maps can be produced in different sizes with different levels of detail for use in different scenarios, such as classroom teaching, onsite sessions, or independent journeys, and the complexity and style of the geometries can be adjusted for PVIs with various levels of tactile reading and mobility skills. But the workload of manual mapping restricts the availability of the maps (Baldwin and Higgins, 2022). PVIs, as well as O&M instructors, would benefit from a customizable service that produces tactile street intersection maps on demand (Miele, 2004).
One major challenge for tactile maps is that, with limited space, they have to balance the large amount of information needed to understand the mapped area against the requirement that tactile graphics be simple and clutter-free to remain readable (Braille Authority of North America [BANA], 2010). The automated production of such maps, apart from satisfying those requirements, would also need to offer options that cater to the diverse needs of the visually impaired audience.
Our objective is to design and develop a modular and parameterized pipeline to semi-automatically produce such street intersection maps to assist O&M instructors and/or PVIs. The maps are designed based on O&M instructions and various tactile graphic guidelines, and the transformation from open data to ready-to-print maps is further implemented as modules in the pipeline. The pipeline will serve as an experimental starting point for a future accessible application to serve on-demand, tactile maps for street intersections that can be customized according to the individual needs of the PVIs.
Related Work: On-demand Tactile Mobility Maps for a Diverse Audience
Some automated services or workflows have been developed to provide on-demand neighborhood-level tactile mobility maps. There are a few services that provide maps of the street network (as lines) along with other features such as buildings or obstacles: the TMAP (Tactile Map Automated Production, Miele, 2004), the TMACS (Tactile Map Automated Creation System, Watanabe et al., 2010), and the Blindweb (Götzelmann and Eichler, 2016, currently out of service). The maps from Mapy (Červenka et al., 2016) and Štampach and Mulíčková (2016) are more detailed, where the streets are shown in different widths and buildings are represented as detailed footprints. There are also documented workflows (Barvir et al., 2021; Touya et al., 2019) that create 3D-printed neighborhood maps with streets, buildings, and vegetation areas. These services are already assisting manual mapping to some extent and are appreciated by PVIs and O&M instructors (Biggs et al., 2022). But for street intersections, there is not yet a service to automatically produce the maps.
However, as automated services reduce workload, one major advantage of manual mapping is customization: to produce the maps based on the guidelines on tactile graphics and human haptic perception, but also make specific changes according to a specific user or scenario. Tactile mobility maps have to target the users according to their competence in tactile reading and mobility, as well as their specific mobility tasks. The particular mobility task (e.g. to learn the street and traffic concept of a typical intersection layout, or to learn a particular intersection and its surrounding area to prepare for a crossing) and the occasion where the map is used (e.g. indoor with a table or outdoor on the street) will impact the size and the objects included in the map. The readers' tactile reading ability will impact how these objects are represented. For people less skilled in tactile reading, e.g. children, the maps need to be more simplified, with fewer elements and simpler geometries and spatial relationships, accompanied by strong and clear symbols (BANA, 2010). Catering to such diversity in needs would require the automation to allow customizations in terms of size and scale, object choice, geometry processing, and styling, involving all the major steps in the map production process.
The current automated services already allow some customizations. TMAP (Miele, 2004) supports a series of paper sizes and scales, with options to include different levels of street features (e.g. streets, paths, service roads). BlindWeb (Götzelmann and Eichler, 2016) allows the user to choose the zoom level and map features (e.g. hospitals) and has different formats for different printing technologies. TMACS (Watanabe et al., 2014) further supports some editing of the map features (e.g. adding a line or polygon). These customizations are appreciated by the users (PVIs and/or professionals involved in tactile document transcription; Biggs et al., 2022; Götzelmann and Eichler, 2016). At the same time, they have also expressed their wish for further customization possibilities. A study with TMAP users (Biggs et al., 2022, blind users and O&M instructors) reports that they want, for example, options to customize the features for students with very basic tactile reading skills, change the style (the "look and feel") of the map, and the possibility to include more features.
Therefore, for the street intersections, we aim to explore the possibility of incorporating customization into the automated process to produce the tactile maps with a parameterized and modular pipeline. This paper focuses on the design of the semi-automated pipeline for map production, and the graphical and technical choices we made about the underlying (geometry) process.
A Semi-automated Pipeline
Overall, the pipeline aims to incorporate the objects typically found near the intersections and process them into meaningful representations for the PVIs, considering their unique cognitive and mobility process during a street-crossing task (Arsal et al., 2022). It should also respect the rules regarding tactile graphics, as well as the practicality of automation.
To adapt a classical mapping pipeline for tactile street intersection maps, we identify the decisions and options unique to this tactile mapping context. This includes: objects to include on the map and their meaningful representations (section 3.1), and the potential parameters that ensure the tactile readability of the maps and also enable the maps to be customized (section 3.2). The involvement of these parameters throughout the pipeline is further illustrated in the implementation (section 3.3).
Objects and Representations
The pipeline aims to represent the common objects found at intersections in a way that suits the unique needs of the PVIs. And with very limited space available on the map, the number of objects to be included is very limited to avoid clutter and ensure readability. Based on the O&M instructions and practices about street-crossing (Wiener et al., 2010), the pipeline processes the following objects:
• Objects on the street (roadway): the street representation needs to indicate the boundary of the street (the curb line) and the width of the street. Pedestrian crossings should be explicitly marked. Traffic islands should be explicitly represented on the map (Fazzi and Barlow, 2017).
• Objects on the sidewalk (roadside): apart from the sidewalk, buildings and grass/green areas near the sidewalk should also be included, but in simplified geometries to reduce clutter (BANA, 2010). Public transportation (bus) stops should be included as they are an important part of PVIs' mobility (Fazzi and Barlow, 2017).
As the pipeline needs to balance the needs of PVIs and data availability, some objects (e.g. push-button traffic lights) that could also be utilized by PVIs but are often missing in the data are currently not included.
The Parameters and Their Defaults
From choosing an area to eventually producing a printable map with processed geometries and styles, the parameters involved can be grouped into five categories: map basics, object choice, tactile graphic parameters, specific geometry processing choice, and styling choice. Although in a real production environment, the users shouldn't have access to modify all the parameters freely, all the parameters are currently modifiable here for experimental purposes.
Basic map parameters include the size option (A3/A4/A5) and the coordinates of the street intersection. The map scale is then fixed for each map size: 1:1000 for A4/A5 maps and 1:500 for A3.
For object choice, under the intersection context, all roadway objects are essential and mandatory by default, and the choice of roadside objects is open.
The tactile graphic parameters ensure the map can be properly perceived by touch, and they are heavily involved in the geometry processing. We set their default values according to the tactile graphic guidelines. Still, they could be modified based on the tactile reading ability of the reader (e.g. children in early grades might need bigger gaps):
• Gap around points: 3 mm (TABMAP, 2006)
• Gap between parallel lines: 3 mm (Bris, 2001)
• Gap between line and areas: 4 mm (Bris, 2001)
• Gap between areas: 5 mm (Bris, 2001), but no gap is needed when two area features have contrasting textures (BANA, 2010).
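The gap defaults above could be captured in a small parameter file such as the param.json used later in the pipeline; a minimal sketch with hypothetical key names (the actual schema in the repository may differ):

```python
import json

# Hypothetical default parameters (distances in mm), following the
# guideline values quoted above; key names are illustrative only.
default_params = {
    "map": {"size": "A4", "scale": 1000},  # 1:1000 for A4/A5, 1:500 for A3
    "tactile_gaps_mm": {
        "around_points": 3,           # TABMAP, 2006
        "between_parallel_lines": 3,  # Bris, 2001
        "line_to_area": 4,            # Bris, 2001
        "area_to_area": 5,            # Bris, 2001; 0 if textures contrast
    },
    "geometry": {"point_line_mode": "direct", "building_detail": "rough"},
}

print(json.dumps(default_params, indent=2))
```

Keeping all defaults in one file lets every pipeline module read the same settings while remaining overridable per map.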
While respecting the guidelines, some geometries can still be processed with multiple possibilities. These specific geometry processing choices mainly affect one specific type of feature but could have a chain effect on others in the subsequent steps.
• Point-line overlay preference: two options are provided when a point comes close to a line. The default is the "direct" overlay mode, where the location of the point is preserved (Fig. 1, bus stop to the left). In the alternative "displace" mode, the point is displaced away from the line to prioritize the line continuity (Fig. 1, bus stop to the right).
• Level of detail in buildings: two levels are provided. The default is the "rough" level (Fig. 2-c), where all the buildings are merged into a block and only the simplified outline is retained. On the alternative "detailed" level, building footprints are generalized while individual buildings are mostly kept separate from each other (Fig. 2-b).
The styling choices are provided as options from a set of pre-defined ones (provided as .qml files). Although symbols and textures can be defined with very basic parameters, such as defining a texture with the pitch and thickness of its line patterns (TABMAP, 2006), such granularity is not necessary for this pipeline. A default set of symbols is also suggested based on guidelines, existing research, and manual practice (BANA, 2010; Martin, 2018; Prescher et al., 2017).
Pipeline Implementation
The semi-automated pipeline is implemented in Python based on pyQGIS (QGIS Development Team, 2023), geopandas (Jordahl et al., 2020), and shapely (Gillies and others, 2019). It processes data from OpenStreetMap (OpenStreetMap contributors, 2015) into tactile maps ready to be printed on microcapsule paper (swell paper).
The steps in the pipeline, consisting of preparing the data from OSM, processing the geometries, and exporting the map, are shown in Fig. 3. The parameters mentioned before are prepared in a .json file to be used throughout the pipeline. Currently, human inspection and curation are needed after the major steps.
Data Preparation
This step extracts basic features from the OSM dataset and prepares them for the subsequent steps. It is largely independent of the parameters. The two major sub-processes involved in this step are feature extraction from OSM and street lane count estimation.
Based on the coordinates of the intersection, a processing extent of 400x400 m (large enough to cover an intersection and its surrounding areas) is generated. This processing extent will be used later in the geometry processing steps. Features extracted from the OSM data include:
• roadway objects: streets (lines), pedestrian crossings (points);
• roadside objects: sidewalks (lines or areas), bus stops (points), buildings (areas), and green areas (areas).
For the streets, the number of lanes indicates their width and is important for the upcoming steps. Ideally, the lane count is tagged for street features, in which case it can be directly extracted. But often, such tags don't exist for every street segment, and the lane counts need to be estimated. Untagged street segments can "inherit" the tag from connecting segments, provided they have the same name and hierarchy ("highway" tag). When such information is not available, lane counts are assigned based on the hierarchy of the street (e.g. streets tagged as "primary" or "secondary" in OSM often have more than 2 lanes in each direction; OpenStreetMap contributors, 2015).
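The fallback logic described above can be sketched in plain Python (the segment structure and the per-hierarchy defaults are illustrative assumptions, not the pipeline's actual data model):

```python
# Hypothetical per-hierarchy lane-count defaults (lanes in one direction).
HIERARCHY_DEFAULT_LANES = {"primary": 2, "secondary": 2, "tertiary": 1, "residential": 1}

def estimate_lanes(segment, connected_segments):
    """Return a lane count for one street segment.

    1. Use the explicit 'lanes' tag if present.
    2. Otherwise inherit from a connecting segment with the same
       name and 'highway' hierarchy tag.
    3. Otherwise fall back to a default based on the hierarchy.
    """
    if segment.get("lanes") is not None:
        return segment["lanes"]
    for other in connected_segments:
        if (other.get("lanes") is not None
                and other.get("name") == segment.get("name")
                and other.get("highway") == segment.get("highway")):
            return other["lanes"]
    return HIERARCHY_DEFAULT_LANES.get(segment.get("highway"), 1)

# Example: an untagged segment inherits from its tagged neighbor.
tagged = {"name": "Main St", "highway": "secondary", "lanes": 4}
untagged = {"name": "Main St", "highway": "secondary", "lanes": None}
print(estimate_lanes(untagged, [tagged]))  # → 4
```

The same three-step order (explicit tag, inheritance, hierarchy default) keeps the estimate deterministic for repeated runs over the same extract.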
Geometry Processing
The geometry processing is centered around the transformation of the roadway objects, especially the streets. It is heavily impacted by the tactile graphic parameters.
Figure 3.
The pipeline: starting from preparing the data from OSM, to the geometry processing of various roadway and roadside objects, to assembling and exporting the map with the chosen template and styles. The parameters set in param.json are involved throughout the process.
For roadway objects, the processing is centered around the street features, including:
• street transformation;
• pedestrian crossing integration;
• traffic island estimation.
The street lines are first transformed into an area feature according to their lane counts; the area geometry is then smoothed, and its boundary is extracted as the "curb" line.
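This line-to-area step can be approximated with shapely, which the pipeline already builds on; the per-lane width and the smoothing distance below are illustrative assumptions:

```python
from shapely.geometry import LineString

LANE_WIDTH_M = 3.0  # assumed width per lane, in meters

def street_to_area(line, lane_count):
    """Buffer a street centerline into an area feature, lightly smooth
    it, and return the area together with its boundary (the curb line)."""
    half_width = lane_count * LANE_WIDTH_M / 2.0
    area = line.buffer(half_width, cap_style=2)  # 2 = flat end caps
    # Light smoothing: dilate then erode to round sharp corners.
    area = area.buffer(1.0).buffer(-1.0)
    return area, area.boundary

street = LineString([(0, 0), (100, 0)])
area, curb = street_to_area(street, lane_count=4)
print(area.geom_type, round(area.area, 1))
```

A 100 m straight segment with 4 assumed 3 m lanes yields an area close to 100 x 12 m; the curb line is then available for placing roadside features at the required gaps.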
For each "pedestrian crossing" point, a line is generated perpendicular to the street line it belongs to and adjusted according to the curb lines.
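Generating the perpendicular at a crossing point reduces to rotating the street's local direction vector by 90 degrees; a stdlib-only sketch (the half-length parameter is a placeholder for the later adjustment against the curb lines):

```python
import math

def perpendicular_segment(px, py, dx, dy, half_length):
    """Given a crossing point (px, py) and the local street direction
    (dx, dy), return the endpoints of a segment perpendicular to the
    street, centered on the point."""
    norm = math.hypot(dx, dy)
    # Unit normal: rotate the direction vector by 90 degrees.
    nx, ny = -dy / norm, dx / norm
    return ((px - nx * half_length, py - ny * half_length),
            (px + nx * half_length, py + ny * half_length))

# Crossing on an east-west street: the crossing line runs north-south.
p1, p2 = perpendicular_segment(50, 0, 1, 0, half_length=6)
print(p1, p2)  # endpoints at (50, -6) and (50, 6)
```

In the pipeline, the endpoints would then be trimmed or extended so the crossing line spans exactly between the two curb lines.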
Traffic islands are estimated from the processed street area feature (islands are extracted as areas that touch the street area feature from the previous step and are disjoint from the roadside features). Non-reachable islands that are not connected by pedestrian crossings are removed, as they are less important for the pedestrian. The configuration of the traffic islands (slope or cut-through) needs to be manually set for each island feature. If an island is cut-through, it will be split using the pedestrian crossing lines generated in the previous step.
For roadside features, the processing starts from the ones closest to the curb line and extends outwards, to maintain the required gaps between the features. It includes:
• bus stop placement;
• sidewalk placement;
• generalization of buildings and green areas.
When placing the bus stops along the street, in "direct" mode, the points retain their location; in "displace" mode, the bus stops are displaced away in the direction perpendicular to the curb using the "gap around points" parameter to maintain the continuity of the curb line.
Sidewalk lines from both the line features in OSM and the "sidewalk" tags on the street features are combined and placed beside the street based on the "gap between lines" parameter. On 1:500 maps, if the data is available, area data of the sidewalk is simplified, smoothed, and placed directly next to the curb line without a gap (as the texture for the sidewalk will have enough contrast with the street). The buildings are generalized with an iterative dilation-erosion procedure followed by simplification (Douglas-Peucker) to simplify shapes and merge nearby features. The option of having "detailed" or "rough" buildings determines the exact parameter settings for the dilation-erosion iterations (i.e., the dilation distance and the iteration count; "rough" buildings are treated with bigger dilation distances and more iterations). Green area patches are generalized through a similar dilation-erosion-simplification procedure, without the option of a detailed or rough generalization. The generalized buildings and green areas are then clipped to keep a distance from the streets and the sidewalks according to the gap parameter.
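The dilation-erosion generalization maps directly onto repeated positive and negative buffers in shapely; the distances, iteration count, and tolerance here are illustrative assumptions for a "rough" setting:

```python
from shapely.geometry import box
from shapely.ops import unary_union

def generalize(geoms, dilation=8.0, iterations=2, tolerance=2.0):
    """Merge nearby features and smooth their outlines by repeated
    dilation-erosion, then simplify with Douglas-Peucker."""
    merged = unary_union(geoms)
    for _ in range(iterations):
        merged = merged.buffer(dilation).buffer(-dilation)
    return merged.simplify(tolerance)

# Two building footprints 4 m apart merge into one block because the
# dilation distance exceeds half the gap between them.
buildings = [box(0, 0, 10, 10), box(14, 0, 24, 10)]
block = generalize(buildings)
print(block.geom_type)
```

A "detailed" setting would use a smaller dilation distance and fewer iterations, so the two footprints would remain separate polygons.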
Example Maps
Some example maps from the pipeline are shown in Fig. 4 and Fig. 5. Fig. 4 shows an A3 map of a complex intersection in the default styles with rough buildings, green areas, and some sidewalk lines, and an A4 map with detailed building footprints in different textures. Fig. 5 shows an A3 map of a simpler intersection with area sidewalks, buildings, and a directly overlaid bus stop point.
Data and Software Availability
The code and the data (OSM, param settings, style, and template files) used to generate the example maps are available on GitHub under the GPL-2.0 license (https://github.com/myhjiang/human_crossing). The pipeline is executed as a set of numbered scripts.
Summary and Future Work
We explored a modular and parameterized pipeline to semi-automatically produce tactile maps for street intersections and allow customizations. The parameters are identified in five categories and allow the tactile map to be customized in size, map features, geometry processing choices, and symbolizations. This is an experimental pipeline, and to better represent the urban street intersections, it will need to be extended to handle more objects and more complex street configurations. Some other common objects in urban environments, such as tramways, are not yet included because of their complex relationship with the streets. The pipeline is currently not able to deal with bridges or multilevel junctions either. These issues could be explored in further implementations.
To evaluate the pipeline, automated constraint-based evaluation tools can be developed to verify that the map geometries respect the tactile graphic guidelines. And techniques to measure clutter and information value for visual map images could also be used to evaluate the clutter for tactile maps (Rosenholtz et al., 2007; Touya et al., 2019; Wabiński et al., 2021). These measures could potentially enable the automated comparison between the pipeline maps and manually made ones. But more importantly, because the production process (printing and producing relief) can introduce new uncertainties (e.g. the same textures might eventually "feel" different on different swell paper; TABMAP, 2006), a tactile map needs to be evaluated in print to verify that the geometries and symbols can be perceived as intended and to investigate the flaws introduced by the printing process (Touya et al., 2019). For now, this evaluation can only be conducted with human experts in low vision (O&M instructors, tactile transcribers) and/or PVIs.
Eventually, the pipeline and the maps it produces will be evaluated by the users, both the low-vision professionals who produce and use these maps in their work and the visually impaired end users. In the follow-up work, the evaluation would gather feedback from both groups. But these evaluations are not in the scope of this paper. The ultimate goal of the pipeline is to support a service where O&M instructors, families, or visually impaired people themselves can access and make maps on demand. This would require both an accessible interface and instructions and controls in terms of parameter setting to facilitate an informed customization process.
An optimized MLP model to diagnose bipolar disorder
The use of artificial neural networks in different areas of engineering science is growing by the day, with a significant proportion of the research in medical engineering. In this paper, we therefore implement an MLP model with 47 input parameters for the diagnosis of bipolar disorder, parameters such as lack of pleasure, feelings of guilt, worthlessness, lack of success, mental anxiety, somatic anxiety disorder, disorder of interest, etc. We then manipulate the structure of the MLP model by switching the transfer functions in its layers, and compare the error of the manipulated structure with the original one. We conclude that using the purelin function in all layers reduces the diagnosis error to 4%, which is an acceptable value.
Introduction
Diagnosis of psychiatric illness has always been one of the most important steps in the treatment of disease [1]. With neural networks now being applied to disease diagnosis [2], this paper attempts to optimize the diagnosis of bipolar disorder using the MLP neural network model. The analysis of scientific investigations shows that very limited work has been reported on the diagnosis of depression using neural network structures. For example, in [3], the authors conducted a resting-state functional connectivity fMRI study of 35 bipolar disorder and 25 schizophrenia patients to investigate the relationship between bipolar disorder and schizophrenia, computing the mean connectivity within and between five neural networks: default mode (DM), fronto-parietal (FP), cingulo-opercular (CO), cerebellar (CER), and salience (SAL). They found that across groups, connectivity was decreased between CO-CER, to a larger degree in schizophrenia than in bipolar disorder. In schizophrenia, connectivity was also decreased in CO-SAL, FP-CO, and FP-CER, while bipolar disorder showed decreased CER-SAL connectivity. Disorganization symptoms were predicted by connectivity between CO-CER and CER-SAL. In [4], ten different types of classification algorithms are applied to depression diagnosis and their performance is compared through a set of experiments on sMRI brain scans. In the experiments, a procedure is developed to measure the performance of these algorithms, and an evaluation method is employed to compare the classifiers; the authors concluded that the SVM model yields the best classification, with 85.29% accuracy. In this paper, we implement the proposed method in two parts using the MLP model: the first part is run with different training percentages to obtain the minimum error and the best training percentage, and the second part optimizes the structure of the MLP model.
Finally, the results of both parts are compared.
MLP model
The Multilayer Perceptron (MLP) network is a feed-forward neural network and one of the most widely used artificial neural network models in modeling. In a Multilayer Perceptron network, each neuron in each layer is connected to all neurons in the previous layer [5]; such networks are fully connected [6]. A view of this model is shown in "Fig. 1".
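The fully connected structure can be sketched as a plain forward pass; the weights below are placeholders (a real network would learn them through training), and the layer sizes are a toy example, not the 10-5-4 configuration used later:

```python
import math

def layer_forward(inputs, weights, biases, activation):
    """One fully connected layer: every neuron sees every input."""
    return [activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

tansig = math.tanh        # a common hidden-layer transfer function
purelin = lambda x: x     # linear (identity) transfer function

# Toy 2-input, 2-hidden-neuron, 1-output network with placeholder weights.
hidden = layer_forward([0.5, -1.0],
                       [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1], tansig)
output = layer_forward(hidden, [[0.7, -0.5]], [0.0], purelin)
print(output)
```

Stacking such layers, each consuming the previous layer's outputs, gives the feed-forward computation of the MLP.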
Depression & bipolar disorder
"Depression" is a psychological disorder in which the patient's activities are severely reduced; in fact, the patient is not motivated to do many things. A depressed person's energy and life skills decline, and their concentration is greatly reduced. They are sometimes aggressive and sometimes frustrated, and their feelings of guilt are very strong. In addition, the patient withdraws from their goals in life and reduces social and productive activities [7]. Bipolar disorder (BD) is an unstable emotional condition characterized by cycles of abnormal, persistent high mood (mania) and low mood (depression), which was formerly known as "manic depression" (and in some cases involves rapid cycling, mixed states, and psychotic symptoms) [8, 9]. Subtypes include bipolar I, distinguished by the presence or history of one or more manic episodes or mixed episodes, with or without major depressive episodes [10].
Proposed method
We now describe the proposed method for the diagnosis of bipolar disorder. The proposed method has two parts, both using the structure of the MLP model. The schema of the proposed method is shown in "Fig. 2", and the details are as follows. The data is divided into two parts: the first part, "train", is used to train the network, and the second part, "testing", is used to test it. The MLP model is trained on the "train" part with different training percentages. The output of the first part is stored in "OP1" and its error in "E1". The best training percentage, i.e. the one with minimum error, is saved in "T". In the second part of the proposed method, the structure of the MLP model is optimized using "T" by changing the transfer functions in the three layers (input, middle, and output). The output of the second part is saved in "OP2" and its error in "E2". To choose the best output, the errors of both parts are compared, as shown in "Eq. 1":

OP = OP1 if E1 < E2, otherwise OP = OP2    (1)
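The two-part selection logic can be sketched in plain Python (the error values below are placeholders standing in for the trained networks' test errors):

```python
def choose_best(op1, e1, op2, e2):
    """Return the output of the part with the smaller test error (Eq. 1)."""
    return op1 if e1 < e2 else op2

# Placeholder errors matching the values reported later in the paper:
# E1 = 0.16 (best training split) and E2 = 0.04 (optimized structure).
best = choose_best("OP1", 0.16, "OP2", 0.04)
print(best)  # → OP2
```

With the reported errors, the comparison selects the output of the second, structurally optimized part.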
Experiment
The data used in this paper has 47 parameters, including: lack of pleasure, feelings of guilt, worthlessness, lack of success, mental anxiety, somatic anxiety disorder, disorder of interest, impaired appetite, suicidal thoughts, insomnia, excessive happiness, distraction, jabbering or talking too fast, increased social activity, jumping thoughts, poor judgment, educational activity, dissatisfaction, lavish use of stimulants, slow motion, decreased libido, increased energy, lack of concentration, sexual risk-taking, investment risk, and increased study time [11], [12]. The experiments of both parts were implemented using MATLAB 8.1. The two parts are described below.
The first part of the experiments
The first part is implemented with 10, 5, and 4 neurons in the input, middle, and output layers, respectively. This part is simulated with different training percentages; the results are shown in "Table 1". As can be seen, using 70% of the data for training yields the minimum error of 0.16, so T = 70% and E1 = 0.16. Using 60% of the data for training yields the maximum error (0.22).
The regression graph can be seen in "Fig. 3". It shows, when using all data (train, test, and validation) for testing the network, how consistent the output of the network is with its target. The closer the regression line is to 1 the better, because the regression value lies between 0 and 1. The value of the regression in this part is 0.90.
The second part of the experiments
In this part, for optimization, we manipulate the structure of the MLP model by changing the transfer functions used in its layers.
The functions used in the layers are Tansig, Purelin, or Logsig. The training percentage is the one from the previous part, stored in T. "Table 2" shows the function combinations and their results. As can be seen, using the Purelin function in all three layers gives the minimum error, 0.04 (E2 = 0.04). Another tested combination uses Purelin in the input layer, Logsig in the middle layer, and Tansig in the output layer. The regression graph can be seen in "Fig. 4". It shows, when using all data (train, test, and validation) for testing the network, how consistent the output of the network is with its target. The closer the regression line is to 1 the better, because the regression value lies between 0 and 1. The value of the regression in this part is 0.96.
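The three MATLAB transfer functions mentioned above have simple closed forms; a stdlib-only sketch (purelin is the identity, logsig the logistic sigmoid, and tansig MATLAB's fast tanh variant):

```python
import math

def purelin(x):
    """Linear transfer function: purelin(x) = x."""
    return x

def logsig(x):
    """Logistic sigmoid: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """MATLAB's tansig: 2 / (1 + exp(-2x)) - 1, mathematically tanh(x)."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

print(purelin(0.5), logsig(0.0), tansig(1.0))
```

Swapping these functions per layer, as done in Table 2, changes how strongly each layer compresses its inputs, which in turn affects the final diagnosis error.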
Choose the best part
In this section, we implement the final part of the proposed method: choosing the model with the minimum error for diagnosing bipolar disorder. The comparison is shown in "Fig. 5". As can be seen, the second part has the minimum error (0.04, lower than the first part), so "OP2" is used for diagnosis.
Conclusion
In this paper, a method is proposed for the diagnosis of bipolar disorder. The method includes two parts. The first part implements the MLP model with different training percentages to obtain the best training percentage for use in the second part. The second part manipulates the structure of the MLP model to reduce its error. The errors of both parts are then compared and the output with the smaller error is selected. The error of the second part equals 4%, which is very good for the primary diagnosis of bipolar disorder.
Analysis of External Water Pressure for a Tunnel in Fractured Rocks
External water pressure around tunnels is a main factor influencing the seepage safety of underground chambers and powerhouses, which makes managing external water pressure crucial to water conservation and hydropower projects. The equivalent continuous medium model and the discrete fracture network (DFN) model were, respectively, applied to calculate the seepage field of the study domain. Calculations were based on the integrity and permeability of the rocks, the extent of fracture development, and the combination of geological and hydrogeological conditions at the Huizhou pumped-storage hydropower station. The station generates electricity from the upper reservoir and stores power by pumping water from the lower to the upper reservoir. In this paper, the external water pressure around the cavern and variations in pressure with only one operational and one venting powerhouse were analyzed to build a predictive model. The results showed that the external water pressure is small with the current anti-seepage and drainage system, so normal operation of the reservoir can be guaranteed. The results for external water pressure around the tunnels provide sound scientific evidence for the future design of anti-seepage systems.
Introduction
External water pressure is defined as a boundary load where the groundwater pressure acts on the outer edge of a tunnel lining [1, 2]. The value of the external water pressure can be obtained by analyzing the seepage field of the tunnel. Assuming that the hydraulic head at the contact surface is h and the elevation of the lining point is z, the external water pressure can be defined as P = γ_w(h − z), where γ_w is the unit weight of water. There are some common principles that can ensure the safety of groundwater resources and the environment surrounding the tunnel lining of large hydraulic mountain tunnels [3-5]. A method that combines blocking and discharging can be adopted when a method that uses pure discharging cannot be used due to special conditions, such as a decrease of the water table around the tunnel. The water pressure at the site was studied based on previous water conservation and waterpower projects in China, and an appropriate design code was formulated for the project. The main calculation methods of external water pressure on tunnel linings are described in the following sections.
The Reduction Coefficient Method. According to the "Hydraulic Tunnel Design Code" (SL279 2002), groundwater pressure is defined as the volume force of groundwater on the surrounding rocks and linings during the seepage process. For tunnels under simple hydrogeological conditions, the groundwater pressure on the outer edge of the lining can be calculated by measuring the water column height below the groundwater table along the length of the tunnel and applying a corresponding reduction coefficient. The reduction coefficient is defined as the ratio of the actual water pressure to the maximum water pressure and takes a value ranging from 0 to 1. The external water pressure is determined by multiplying the hydrostatic pressure from the groundwater table to the tunnel axis by the reduction coefficient.
Zhang [6, 7] developed a new approach to obtain the correct coefficient on the basis of the standardized reduction coefficient. Some disadvantages of the external water pressure reduction coefficient include the following: (1) it is difficult for designers to choose the parameter because of its large range; (2) it cannot be used in projects with extremely small permeability, as the reduction coefficient is established based on conditions of a normal concrete lining in which cracks are present; and (3) the water pressure at a point in the initial seepage field does not equal the hydrostatic water pressure from the groundwater table because of different terrain and geological conditions in actual cases. It is difficult to determine external water pressure using the reduction coefficient from the former code, especially when the groundwater table in question lies along the tunnel or the state of the groundwater is unknown, for example, in unconfined water, water between layers, or perched water. Furthermore, the impact of anti-seepage or drainage galleries and drainage holes on the water level is not considered in this method.
The Analytical Method. The initial stress on the lining can be regarded as the hydrostatic state if the surrounding rock is assumed to be comprised of homogeneous and isotropic elastoplastic media. The pore water pressure on the lining and grouting area can be derived using Darcy's Law based on the surrounding rock model [8-13]. The external water pressure on the tunnel lining can be calculated using a theoretical analysis method that assumes the grouting area of the surrounding rock is homogeneous; however, this method is not suitable for real projects under complex geological conditions.
The Semianalytical Method. The details of the semianalytical method are as follows. Firstly, a hydrogeological conceptual model of a discharging tunnel is established and the water inflow is predicted using an empirical method [14-16]. The water inflow is substituted into a two-dimensional section seepage model of the surrounding tunnel rock to simulate the distribution of the seepage field in the surrounding rocks during draining. Finally, the external water pressure around the tunnel is calculated using the reduction coefficient method. A numerical method is used to simulate the seepage field of the surrounding rocks during tunnel drainage, and throughout this process the external water pressure on the lining is obtained.
The water inflow of a tunnel was determined using the analytical numerical method, and the tunnel was set as a second-type boundary condition for the numerical calculations, where the flux is known at the boundary. Some errors exist when using the analytical method to determine tunnel water inflow, as it is particularly difficult to define the water level when a tunnel has already been built. In addition, the effect of the drainage gallery is not considered in the model. In this case, the tunnel was regarded as a seepage surface instead of a constant-head boundary condition, where the hydraulic head was assumed equal to its position height even when it was higher than the actual position height. The seepage surface was determined using an iterative method.
The Hydrogeochemical Method. The hydrogeochemical method considers the relationship between the external water pressure and the CO2 partial pressure in the groundwater chemical field. Field tests reveal that the CO2 partial pressure and the groundwater pressure obey a linear law within the same water body or geological element. Therefore, the CO2 partial pressure in boreholes under the same hydrogeological conditions can be measured, and equations relating the CO2 partial pressure to the hydraulic head can be derived. The water pressure in a seepage area can then be obtained indirectly by measuring the CO2 partial pressure at the tunnel seepage point [17]. This is a new method for calculating external water pressure.
The Seepage and Stress Coupling Methods. This method considers the combined effects of the surrounding rocks and linings on groundwater. The effect of groundwater on the surrounding rocks and linings is calculated using seepage theory, and the coupling effect can be examined through an analysis of in situ stress and groundwater penetration during tunnel excavation [18][19][20][21]. Since the coupling effect is more important than the external water pressure on the linings alone, the method can be applied to analyze the stress on the tunnel lining structure and the stability of the surrounding rocks.
Investigators have made considerable progress through extensive research on coupled seepage and stress field models. Noorishad et al. [22,23] presented a seepage and stress field coupling model for continuous porous media. Oda [24] established a model coupling the seepage and stress fields in rock masses using a hydraulic conductivity tensor based on joint statistics. Ohnishi and Ohtsu [25] studied a seepage and stress coupling method for discontinuous jointed rock masses. Wu and Zhang [26] proposed a lumped-parameter fracture network model of rock mass seepage and stress field coupling. These models have pushed the development of rock hydraulics in a quantitative direction, although the coupled seepage and stress fields of linings after tunnel excavation and structural support remain insufficiently described [27][28][29].
The seepage and stress coupling analysis method mainly investigates the effect of groundwater on the surrounding rocks and linings. Once the effect of groundwater on the surrounding rocks and linings has been calculated using seepage theory, the coupling effect of the groundwater penetration force can be analyzed through the in situ stress caused by tunnel excavation. In this method, the surrounding rocks and linings are regarded as a whole that shares the external water pressure, except in regions under high water pressure, where many fractures or fissures develop between the surrounding rocks and the linings. Under such conditions, the external water pressure acts mainly on the linings rather than on the rocks and linings together.
The purpose of this paper is to develop a coupled model for simulating groundwater flow in fractured rocks based on geological and hydrogeological conditions. The external water pressure of a tunnel is also calculated to evaluate the seepage stability of a diversion pipeline.
Location of the Study Area.
The Huizhou pumped storage power station is located in Boluo county, Huizhou city, Guangdong province. It is 112 km from Guangzhou, 20 km from Huizhou, and 77 km from Shenzhen, and has a total installed capacity of 2,400 MW. The main dams of the upper and lower reservoirs are roller-compacted concrete gravity dams. There are two plants, A and B, located behind the middle of the pipeline in the plant system, with a minimum spacing of 150 m for recharging water. The two powerhouses are each 152 m long, 21.5 m wide, and 49.4 m high, and the elevation of the powerhouse vault is 169.9 m. The high pressure tunnel has a diameter of 8.0 m; its middle elevation ranges from 137.87 to 135.62 m, with a maximum hydrostatic head of 630 m. During normal operation, water from the upper reservoir flows to the powerhouse to generate electricity, and the tail water flows to the lower reservoir; this water is then pumped back to the upper reservoir through the waterlines. The diversion tunnel, high pressure tunnel, high pressure branch pipes, and tailrace tunnel all have reinforced concrete linings, except for the steel-lined high pressure branch pipes and the tailrace pipes for water diversion in the water power tunnel.
Geology Information.
The lithology disclosed in the powerhouse region comprises granite, mixed rock, and veins. The medium-fine and medium-coarse grained granite is from the fourth Himalayan period; it is fleshy red in color and strongly permeable owing to its high porosity. The minerals in this region consist of potassium feldspar (30-50%), plagioclase (20-35%), quartz (20-35%), and black mica (1-8%). The geological age of the mixed rock is between the Caledonian and Himalayan periods. According to data from different boreholes, the thickness of this rock is between 35.3 and 46.5 m, and its permeability is lower than that of the granite. Some fractures and faults were intruded by diorite veins, granitic diorite veins, quartz, fluorite, and calcite veins, which results in the weak permeability of these fractures and faults.
There are 67 faults disclosed by the exploratory cave in the powerhouse region, striking north-east (NE), north-west (NW), and north north-west (NNW) (Figure 1). The directions of the joints and fractures are the same as those of the faults in the powerhouse region, mainly NNE and NW. The faults and fractures are classified into four groups according to their strikes. (1) The NNE group appears mainly in exploration caves such as PD01-1, PD01-2, PD01-3, and PD01-6, which run in an east-west or nearly east-west direction (Figure 1). The strike is N5-20°E, dipping SE more often than NW, with dip angles of roughly 60° to 80°. (2) The NW group appears mainly in exploration caves PD01, PD01-2, PD01-3, PD01-4, and PD01-6; the fissures strike N35-60°W, dipping SE more often than NW, with dip angles of 60° to 85°.
Permeability of the Rock Mass.
Field test data comprising 392 pump-in tests from 23 boreholes in the underground powerhouse region showed very low permeability, partially low permeability, and a small portion of medium permeability in the mixed rock and granite (Figure 2). Very low permeability, low permeability, and medium permeability rock masses accounted for 69.9%, 28.5%, and 1.6%, respectively. No highly permeable rocks were found in the powerhouse region. The permeability rate was greatest where boreholes were close to the ground surface and decreased with increasing depth; it increased in some test areas, however, because of fault zones and dykes. The rock permeability of boreholes ZK2002 and ZK2085 changed with depth and time in the powerhouse area, as shown in Figure 3. The permeability of the rock can be classified into three categories: (1) highly permeable zones, which include rock in completely weathered and strongly weathered zones, the fault f304 fracture zone, water-permeable faults in the NE direction, and rock cut by NW-direction faults and f304; (2) low permeability zones, which comprise weakly weathered rock mass; and (3) very low permeability zones, which are slightly weathered zones with fresh rock mass and dykes in good condition.
Groundwater Recharge, Runoff, and Discharge Conditions.
The recharge sources of the underground powerhouses are precipitation and water from the upper reservoir through fault f304. Water discharged from the upper reservoir flows into fault f304 and on into the underground powerhouses through NW-direction faults such as f31 and f69, because f304 is closer to the underground powerhouses (Figure 4). All things considered, f304 may be the only pathway by which water from the upper reservoir can flow into the powerhouses. By analyzing the permeability, the hydrogeological structure of the rock mass, and the hydrogeological conditions, the plant area can be divided into two separate hydrogeological units on either side of a gully.
Under natural conditions, the main recharge sources of the upper reservoir are precipitation and water from higher-level reservoirs, which discharge to the gully, the powerhouses, and the lower storage reservoir. Precipitation is also the main recharge source of the powerhouses and the lower storage reservoir. Part of the water in the powerhouses and the lower storage reservoir flows into the gully, into shallow surface water bodies, and deeper underground for discharge, as shown in Figure 4.
Calculation Methods of External Water Pressure

The Equivalent Continuum Model. Research results show that groundwater movement in fractured rocks, which changes with time and space, complies with the relevant laws of seepage.
In such cases, the hydrogeological model can be treated as a continuum of heterogeneous anisotropic media, and the three-dimensional movement of groundwater in the rocks is governed by

∇ · (K ∇H) = S_s ∂H/∂t,  (1)

where ∇ is the Hamilton operator, K is the permeability tensor, H is the hydraulic head at any point in the seepage field, S_s is the storage rate, and t is time.
The study area Ω was discretized with eight-node hexahedral elements. Setting the weighted residual of the governing equation (1) to zero over the whole study area yields the algebraic equations of the seepage field,

[K]{H} + [S] ∂{H}/∂t = {F},  (2)

where [K] is the total seepage (conduction) matrix, {H} is the array of unknown nodal hydraulic heads, [S] is the storage matrix, and {F} denotes the known flux boundary term. The time derivative is replaced by the finite difference

∂{H}/∂t ≈ ({H}^(t+Δt) − {H}^t)/Δt,  (3)

where Δt is the time step. Applying an implicit difference scheme to (3) gives

[K]{H}^(t+Δt) + [S]({H}^(t+Δt) − {H}^t)/Δt = {F},  (4)

which can be rewritten as

([K] + [S]/Δt){H}^(t+Δt) = {F} + ([S]/Δt){H}^t.  (5)

The distribution of the seepage field in the study area can then be solved from (5).
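Not part of the original paper: the implicit update in (5) can be sketched numerically, here for a small dense system with a hypothetical 3-node toy problem (all matrices and values are illustrative, not the mesh of the study area).

```python
import numpy as np

def implicit_seepage_step(K, S, F, H_t, dt):
    """One implicit time step of [K]{H} + [S] d{H}/dt = {F}.

    Solves ([K] + [S]/dt) H_next = F + ([S]/dt) H_t, i.e. equation (5).
    """
    A = K + S / dt
    b = F + (S / dt) @ H_t
    return np.linalg.solve(A, b)

# Toy 3-node system (illustrative conduction and storage matrices).
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
S = np.eye(3) * 0.1
F = np.array([1.0, 0.0, 0.0])
H = np.zeros(3)            # initial heads
for _ in range(100):       # march until the transient dies out
    H = implicit_seepage_step(K, S, F, H, dt=0.5)
# H approaches the steady-state solution of K H = F, i.e. [0.75, 0.5, 0.25]
```

The implicit form is unconditionally stable, which is why it is preferred over an explicit update for stiff seepage problems.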
The DFN Model.
In the fracture network seepage model, the intersections of fractures are regarded as nodes, and the fracture segments between nodes are regarded as elements.
The seepage equations were built so that, at each shared node, either the total flow rate from the connected line elements is zero (steady flow) or the net flow change equals the change in storage (unsteady flow). The mathematical model of groundwater flow in a fracture network was built by combining these equations with the initial and boundary conditions [30][31][32]. A schematic diagram of a rock fracture network is shown in Figure 5, where a dotted-line circle indicates the balanced domain of a fracture element.
A characteristic element domain is formed around a central node j, bounded by a closed curve passing through the midpoint of each line element connected to that node. The inflow or outflow along each connected line element is denoted Q_i (i = 1, 2, ..., n), the vertical recharge on each line element within the domain is W_i (i = 1, 2, ..., n), and the source-sink term at node j is Q_j. The net flow into the characteristic element domain per unit time equals the change in its water storage.
The equilibrium equation of the characteristic element domain can be written as

Σ_{i=1..n} Q_i + Σ_{i=1..n} W_i + Q_j = μ_j ∂H_j/∂t,  (6)

and conservation of the head difference around a closed loop gives

Σ_{i=1..m} ΔH_i = 0,  (7)

where H_j is the hydraulic head at node j, n is the number of line elements (pipelines) centered at node j, and m is the number of fracture sections forming the loop, that is, the dimension of the loop.
Here μ_j = (S/2) Σ_{i=1..n} b_i L_i, where S is the elastic storage (water-release) coefficient of the fractures in the characteristic element domain centered at node j, b_i and L_i are the width and length of fracture line element i, and the hydraulic gradient of each fracture element is denoted J_i.
The matrix form of the seepage equation for the whole seepage field can be written as

[A]{Q} + {W} = [S] ∂{H}/∂t,  (8)

where {W} is the vector of vertical recharge on the fracture line elements, [S] is the water storage matrix of the fractures, and A = {a_ij} is the connection (incidence) matrix of the fracture network, which describes the connection relationships between line elements and nodes. The entry a_ij takes the value 0, −1, or 1 under three conditions: there is no connection between line element i and node j; the connection between line element i and node j points away from node j; or the connection points toward node j, respectively. Replacing ∂{H}/∂t by the finite difference ({H}^(t+Δt) − {H}^t)/Δt, (8) can be rewritten in implicit form as (9).
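As an illustration (not from the paper), the incidence matrix a_ij and the nodal flow balance can be assembled for a toy fracture network; the element list, conductances, and boundary heads below are invented for the sketch, and a steady state is solved instead of the transient form (8).

```python
import numpy as np

# Elements as (node_a, node_b, conductance); incidence entries a_ij in {0, -1, +1}.
elements = [(0, 1, 1.0), (1, 2, 1.0), (1, 3, 2.0)]
n_nodes = 4
A = np.zeros((len(elements), n_nodes))    # incidence matrix of the network
C = np.diag([c for _, _, c in elements])  # element conductances
for e, (a, b, _) in enumerate(elements):
    A[e, a], A[e, b] = 1.0, -1.0          # +1 toward node a, -1 toward node b

L = A.T @ C @ A                           # network seepage matrix (graph Laplacian)

# Fix heads at boundary nodes 0 and 3, solve flow balance at interior nodes 1, 2.
fixed = {0: 10.0, 3: 0.0}
free = [1, 2]
H = np.zeros(n_nodes)
for n, h in fixed.items():
    H[n] = h
rhs = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
H[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
# Node 2 is a dead end, so its head equals that of node 1 (10/3 each here).
```

The same Laplacian assembly carries over to the transient case by adding the fracture storage matrix on the right-hand side of (8).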
Calculation of the Coupling Model.
The coupling conditions for the equivalent continuum model and the random discrete fracture network model are as follows: (1) the water volumes flowing into and out of a fracture surface balance in unit time, that is, flow is conserved at fracture intersections, and (2) the hydraulic head is continuous at fracture intersections. The coupled model is solved with the following iterative procedure: (1) the initial and boundary conditions in the discrete media domain are substituted into (5) to obtain the hydraulic head in the continuum media domain; (2) the flow contribution of the coupling terms is computed from this head and substituted into (9) to solve for the fracture network heads; (3) the fracture network heads are substituted back into (5) to update the continuum heads; and (4) the above steps are repeated until the accuracy requirement is satisfied.
Calculation Region and Boundary Conditions.
Point O at the cross intersection, located to the south-east of powerhouse B as shown in Figure 6, is set as the origin of coordinates for the calculation domain. The positive directions of the three coordinate axes point north, east, and vertically upward, respectively. The calculation domain extends along the north-south axis in the positive direction to the upper reservoir and in the negative direction to the lower reservoir, while the east-west axis extends about 1000 m from the center of the powerhouse to the watershed of the station region.
Boundary conditions in the upper reservoir were set as first-type boundary conditions with a normal storage water level of 762 m and a dead water level of 740 m. Boundary conditions in the lower reservoir were likewise set as first-type boundary conditions with a normal storage water level of 231 m and a dead water level of 205 m. The other boundaries, located at the watershed, can be regarded as streamline boundaries. The study domain was meshed with two types of grids because both an equivalent continuum media model and a coupled discrete media model were used to simulate the distribution of the seepage field in the power station region. With the equivalent continuum media model, the study domain was meshed into 42,765 elements with 47,216 nodes, as shown in Figure 7(a). With the coupled discrete media model, 40 fracture surfaces and 2,362 fracture network intersections were formed, as shown in Figure 7(b).
Determination of Hydrogeological Parameters.
A series of water pressure (pump-in) tests was conducted in the study area, and the hydraulic conductivity was calculated by analyzing the test results using equation (10), where K is the hydraulic conductivity of the fractured rock mass (cm/s) and ω is the unit water-absorption rate (l/(min·m·m)). The rock mass can be classified vertically into three zones, a strongly weathered zone, a weakly weathered zone, and a slightly weathered zone, based on the rock mass structure and the characteristics of the rock properties. Using the unit water-absorption rates measured in each zone, the hydraulic conductivity of each calculation zone was obtained from (10) with a test segment length of L = 6 m and a borehole radius of r0 = 0.0375 m. The resulting hydraulic conductivities are listed in Table 1.
Two faults with NW and NE strikes are developed in the plant area; they are the main water-controlling structures in the powerhouse domain. These two faults were included when establishing the three-dimensional fracture network models. Geometric parameters such as strike angle, dip direction angle, dip angle, spacing, and broken-zone width were determined by statistical analysis, and the probability distributions of these parameters were defined from the statistical results as the basis for a Monte-Carlo simulation. As listed in Table 2, fracture surfaces were randomly generated in the powerhouse domain using the mean value, variance, and distribution of the fracture geometry parameters. These fracture surfaces formed a three-dimensional fracture network by crossing each other and reaching the boundary.
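The Monte-Carlo generation step can be sketched as follows; the distribution choices (normal angles, uniform centers) and every numeric value are illustrative assumptions for the sketch, not the fitted parameters of Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_fractures(n, dip_mean, dip_std, strike_mean, strike_std,
                       domain=1000.0):
    """Monte-Carlo sampling of fracture geometry.

    Dip and strike angles are drawn from normal distributions fitted to
    field statistics; fracture centers are placed uniformly in a cubic domain.
    """
    dips = np.clip(rng.normal(dip_mean, dip_std, n), 0.0, 90.0)  # degrees
    strikes = rng.normal(strike_mean, strike_std, n) % 360.0     # degrees
    centers = rng.uniform(0.0, domain, size=(n, 3))              # metres
    return dips, strikes, centers

# 40 fracture surfaces, matching the count reported for the coupled model.
dips, strikes, centers = generate_fractures(
    n=40, dip_mean=70.0, dip_std=8.0, strike_mean=45.0, strike_std=10.0)
```

Intersecting the sampled planes with each other and with the domain boundary then yields the three-dimensional fracture network used in the coupled model.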
The Change of Internal Water Leakage Quantity after Water Filling in a Waterway System.

During normal operation periods, after the waterway system is filled, water leaks through the steel and concrete linings of the waterway, seeping from the inside of the waterway outward. The quantity of inner water leakage depends on the location along the waterway and on time. Several key feature points along the waterway were selected to study how the inner water leakage changes (Figure 6). The water levels of the upper and lower reservoirs were 762 m and 205 m, respectively, under normal conditions. The upper, middle, and lower stories of the galleries in powerhouses A and B discharge water. Waterproof curtains were set at both intersections, and branches of the high water pressure and water-discharging galleries were built in a pilot tunnel at an elevation of 246 m. Four feature points, A1, A2, A3, and A4, were selected from the water diversion tunnel to the high water pressure tunnel in the underground waterway system of powerhouse A; they are located at the upper tunnel, middle inclined shaft, lower inclined shaft, and lower tunnel, at elevations of 590 m, 458 m, 146 m, and 135 m, respectively. Three feature points, B1, B2, and B3, were selected in the underground waterway system of powerhouse B from the water diversion tunnel to the high water pressure tunnel; they are located in the upper tunnel, middle inclined shaft, and lower tunnel, at elevations of 590 m, 458 m, and 135 m, respectively.
Figure 8 shows how the internal water leakage in the waterway system changes over time. The flow quantity at time 0 represents the leakage at the beginning, calculated with a water level of 762 m in the waterway and the natural water level near the waterway.
The water table is predicted to rise because of inner water leakage, and the leakage quantity decreases after the waterway system has been filled for some time. The amount of leakage is higher in this situation than under natural conditions because of the water discharge in the high water pressure branch pipes and powerhouses, which keeps the water table below the natural water level even though inner water leakage exists in the waterway system. As the water level along the waterway rises gradually, the hydraulic head difference is reduced and the amount of leakage decreases; at some points it even decreases to 0 over time.
Different amounts of inner water leakage exist at different points in the waterway system. Under natural conditions, the groundwater table decreases from the water diversion tunnel toward the high water pressure tunnel, so the amount of water leakage progressively increases along this path. Because the groundwater level remained high at the upper tunnel and the middle inclined shaft, the hydraulic head differences there were small, while the groundwater level remained low at the lower tunnel and lower inclined shaft, where the hydraulic head difference became larger after the waterway system was filled. Consequently, the variation amplitude of the inner water leakage decreased at the upper tunnel and the middle inclined shaft and increased at the lower tunnel and lower inclined shaft.
Influence of Internal Water Leakage on External Water Pressure of the Steel Branch Pipes.

Inner water leakage not only increases the amount of water discharged in the powerhouse and high pressure branch pipes, but also raises the external water pressure on the steel branch pipes. The inner water pressure drops to 0 when the waterway is vented over a short time, leaving only the external water pressure: the hydraulic head in the waterway system then acts directly on the surface of the steel branch pipes. This may damage the steel branch pipes if the hydraulic head pressure exceeds their tolerable pressure, so the routine water drainage system is used to reduce the external water pressure on the steel branch pipes. A cross-section taken approximately perpendicular to the steel branch pipes, before the drainage hole and after the high water pressure tunnel, was used to calculate the external water pressure (Figure 9), with contour values given in MPa.
The water pressure was calculated after cracking of the reinforced concrete lining. The external water pressure on the steel branch pipes decreased to almost 0 owing to the anti-seepage and drainage measures in the high pressure branch pipes. The maximum external water pressure is 1.2 MPa, which satisfies the design value (i.e., it is below 1.8 MPa). Ensuring the normal operation of the reinforced concrete is important, as the external water pressure can reach 1.6 MPa when the reinforced concrete is destroyed, which exceeds the design value. In addition, the external water pressures above the elevation of 246 m are almost 0 because of the water discharge in the gallery at 246 m, which also reduces the external water pressure.
Change in the External Water Pressure of the Waterway System under Venting Conditions.

The water level inside the waterway decreased at a rate of 20 m per hour when the waterway system was vented, while the water level outside the waterway decreased very slowly because of the anti-seepage effect of the concrete lining. The difference between the inner and outer water pressures produced an external water pressure. Water in the waterway system was vented over 38 hours, as shown in Figures 10 and 11, which reveal how the external water pressure changed over time in powerhouses A and B during venting. Figure 10 shows the change in external water pressure when cracking occurred in the concrete lining; Figure 11 shows the outcome when the concrete lining remained effective. As illustrated in Figure 10, the external water pressures at A1 and B1 were 0.63-1.40 MPa and 0.51-1.28 MPa, respectively. Both the maximum value and the amplitude of variation were small, and the external water pressure was insufficient to damage the waterway system because the elevation at these points is 590 m. At A4 and B3, where the elevation is 135 m, the external water pressure was large near the steel branch pipes; however, as the waterway system was vented, the external water pressure gradually decreased. The tensile and compressive strengths of the surrounding rock in the waterway system are high, especially for intact granite. Since the minimum of these strengths exceeds 7 MPa, the external water pressure caused by venting of the waterway system will not damage it.
The external water pressures in Figure 11 are smaller than those in Figure 10. They increased slightly, owing to the anti-seepage effect of the concrete lining, when the waterway system was filled with water, and a small further increase occurred when the waterway system was vented. The maximum external water pressure was below 2.7 MPa, as shown in Figure 11.
Change in External Water Pressure When Only One Powerhouse Was Operating.

Powerhouse A was operated with its waterway system filled with water while the waterway system of powerhouse B was vented. Water in the waterway system of powerhouse A seeped from the inner tunnel outward. Under these conditions, the external water pressure caused by venting the waterway system of powerhouse B was calculated. The results showed that the external water pressure on the water diversion tunnel in powerhouse B was 0.3-1.34 MPa, and that on the high pressure tunnel in powerhouse B was 0.8-1.63 MPa. Through the rock material of the tunnel and the powerhouse, the rock mass along the two sides of the
Figure 1: Location of the main faults in the powerhouse area.
Figure 3: Change of rock permeability with depth changes in the powerhouse domain.
Figure 5: Schematic diagram of the balanced fracture elements.
Figure 8: Change of inner water leakage in the waterway system with time.
Figure 9: Contour map of external water pressures.
Figure 10: External water pressure changes with time when the waterway system was vented after the cracking of reinforced concrete.
Table 1: Hydraulic conductivity in each plant zone.
Table 2: Geometrical parameters of the fractures.
On double-layer and reverse discharge creation during long positive voltage pulses in a bipolar HiPIMS discharge
Time-resolved Langmuir probe diagnostics at the discharge centerline and at three distances from the target (35 mm, 60 mm, and 100 mm) were carried out during long positive voltage pulses (a duration of 500 μs and a preset positive voltage of 100 V) in bipolar high-power impulse magnetron sputtering of a Ti target (a diameter of 100 mm) using an unbalanced magnetron. Fast-camera spectroscopy imaging recorded light emission from Ar and Ti atoms and singly charged ions during the positive voltage pulses. It was found that during the long positive voltage pulse, the floating and plasma potentials suddenly decrease, accompanied by anode light located on the discharge centerline between the target center and the magnetic null of the magnetron's magnetic field. These light patterns are related to the ignition of a reverse discharge, which leads to a subsequent rise in the plasma and floating potentials. The reverse discharge burns until the end of the positive voltage pulse, but the plasma and floating potentials remain lower than their values in the initial part of the positive voltage pulse. Secondary electron emission induced by Ar+ ions impinging on the grounded surfaces in the vicinity of the discharge plasma, together with the mirror configuration of the magnetron magnetic field, are identified as the probable causes of the charge double-layer structure formation in front of the target and the ignition of the reverse discharge.
Introduction
High-power impulse magnetron sputtering (HiPIMS) is a development of the conventional DC magnetron sputtering technique in which high power is delivered to the magnetron target in the form of periodically repeating voltage and current pulses with a relatively low duty cycle (typically ≲ 20%) [1][2][3]. This brings several significant benefits for film deposition: (1) strong sputtering of atoms from the magnetron target, (2) large and highly ionized fluxes of atoms to the substrate, (3) increased energies of ions flowing onto the substrate, (4) enhanced dissociation of reactive gas molecules, and (5) reduced poisoning (coverage with reactive gas compounds) of the target surface during reactive deposition [1][2][3][4][5]. Unfortunately, the deposition rate of HiPIMS discharges is usually lower than, or at best comparable with, conventional DC magnetron sputtering at similar average power densities, mainly because of the return of ions to the target [2,3]. It is therefore advantageous for film deposition to release these trapped ions from the target vicinity toward the substrate [6].
One way to push ions toward the substrate after the end of the negative voltage pulse (NP), during which the target sputtering takes place, is to apply a positive voltage pulse (PP) to the target after the NP terminates [7]. This technique is usually called bipolar HiPIMS (BP-HiPIMS). It has been used successfully to improve the deposition of different types of films, e.g., to increase the deposition rate [8,9], improve hardness [10][11][12][13][14], strengthen film adhesion [8,15,16], densify films [14,17], smoothen the film surface [12,15], increase grain sizes [13,17], raise the content of sp3 bonds in diamond-like carbon films [9,12,18,19], and minimize porosity [17].
The effect of PP on the plasma behavior has been analyzed using many diagnostic techniques in recent years.
The results of time-averaged energy-resolved mass spectroscopy [11,15,18,[20][21][22][23][24][25] and of retarding field energy analyzers [26,27] are quite similar among different sputtering systems. They show that the ion energy distribution function (IEDF) is enhanced by a group of ions whose energies lie around a value corresponding to the voltage applied during the PP. Time- and energy-resolved mass spectroscopy confirmed that these ions originate from the PP [23,24].
Laser-induced fluorescence applied during BP-HiPIMS sputtering of a Ti target [28] showed that the ground-state density of Ti+ ions is significantly reduced during the PP compared with their density without the PP, as a result of the significant acceleration of Ti+ ions in the target-to-substrate direction. Time-resolved optical emission spectroscopy (OES) has also been carried out in BP-HiPIMS discharges [20,22,29], with the result that only argon atom emission lines resume during the PP if a sufficiently high positive voltage (U+ ≳ 25 V) is applied.
The results of Langmuir or emissive probe measurements from different sputtering systems do not agree. Generally speaking, the results may be divided into two scenarios [26], which also differ in explaining how and where the ions gain high energies. In the first, a charge double-layer (DL) potential structure forms in front of the target biased to positive voltages [15,21,30]. The ions then gain energy when they cross the DL potential structure from the DL side closer to the target, where the plasma potential is higher (H-side), to the DL side closer to the substrate, where the plasma potential is substantially lower (L-side). The DL structure is thus believed to make it possible to enhance ion bombardment even on insulating substrates. Moreover, electrons flowing from the L-side to the H-side gain enough energy to ionize atoms on the H-side of the DL structure. The DL structure usually forms gradually in time [15,30], and a distinct decrease in the plasma potential between the H-side and the L-side may be registered somewhere between the target and the substrate.
In the second scenario, the ions gain energy mainly in the potential drop across the substrate sheath. In this case, after the positive voltage is applied to the target, the plasma potential in the plasma volume rises very quickly (on time scales substantially below, or at most on the order of, microseconds) to positive values. This leads to the formation of a sheath close to the (grounded or negatively biased) substrate, in which the ions are accelerated onto the growing film [22,31,32].
Both scenarios are well supported by experimental data. Identifying which discharge parameters govern the selection between them is still under investigation. One possible candidate believed to control the plasma behavior during the PP is the magnetic field topology of the magnetron used (a balanced vs. an unbalanced magnetic field configuration) [31].
In recent years, particle-in-cell (PIC) simulations have been carried out to shed light on the plasma processes taking place during the PP [25,26]. The sputtering plasma must be simplified considerably to reduce the complexity of the actual discharge and to achieve reasonable computation times: only electrons, argon atoms, and argon ions are traced in these PIC simulations, and a reduced set of reactions among these particles is considered. Despite these substantial simplifications, the PIC simulations provide valuable information on plasma properties very close to the target, e.g., the plasma potential and the resulting electric field, the local breach of charge quasineutrality, and the mapping of particle fluxes, where diagnostic methods cannot easily be applied.
Kozák et al. [24] showed that for longer PP (a duration of 200 µs), a third broad peak appears in the IEDF, with its energy reduced by roughly 15 eV compared to the highest peak. This third peak coincides in time with a drop of the plasma potential (measured by a Langmuir probe) and a sudden increase of plasma emission from Ar atoms (detected by fast-camera imaging) in a form resembling a "light bulb" located at the discharge centerline near the target. This "reignition" of the discharge was named "reverse discharge" (RD). Similar drops in the plasma and/or floating potentials were also detected by other authors in different BP-HiPIMS systems [22,25,32,33]. Usually, the drop in the potentials is, after some time, followed by their increase to new, almost stationary values, which are lower than those measured after the PP initiation. This event was named drop and rise (D&R) [25]. Similar light-emission structures at the discharge centerline near the target were also confirmed by other authors in the whole discharge light [34] or in the emission of Ar atoms [29]. This suggests that the presence of RD in BP-HiPIMS may be common. Since RD may significantly influence the energy and composition of the ion fluxes onto the growing film, this phenomenon deserves a detailed study.
Experimental details and data processing
Our experimental system for diagnosing BP-HiPIMS is depicted schematically in figure 1. A more detailed description of this system and of the processing of the Langmuir probe data can be found in Ref. [31]. Here, we mention only the information needed to understand the presented results.
The vacuum vessel was made from a DN 200 ISO-K 6-way cross piping, and it was evacuated by a turbomolecular pump backed up by a scroll pump down to 5 × 10⁻⁵ Pa. A self-built BP-HiPIMS power unit operating in the constant voltage mode was used for powering a planar circular magnetron with a Ti target (a diameter of 100 mm and a thickness of 6 mm). The magnetron voltage and discharge currents were monitored by voltage and current probes connected directly to the magnetron. Only one regime with a NP duration of t− = 100 µs, a PP duration of t+ = 500 µs, a delay between the NP end and the PP initiation of t_D = 20 µs, a PP voltage amplitude of U+ = 100 V, and a process gas (argon) pressure of p = 1 Pa is examined here. The magnetic field of the magnetron (see figure 2) was configured as a type-2 unbalanced field with the magnetic null at the axis of symmetry at a distance of 50 mm from the target. The approximate position of the magnetic funnel, where electrons are not strongly confined near the target by the magnetic field during NP, is also depicted in the figure.

Figure 2. The magnetic field of the magnetron, where r is the horizontal distance from the discharge centerline and z is the vertical distance from the target surface. The positions of the target, protruding grounded anode, grounded substrate, magnetic funnel, and magnetic null are also depicted.

The tip of the Langmuir probe (a length of 10 mm and a diameter of 0.15 mm) made of tungsten was parallel with the target surface. It was positioned at the discharge centerline at the distances z = 35 mm, 60 mm, and 100 mm from the target (see figure 1). A PC-controlled voltage source biased the probe via a MOSFET switch, which allowed us to avoid overheating of the probe tip during NP owing to high electron currents at the high probe bias voltages (up to 200 V with respect to the ground). The switch was turned on 5 µs before the PP initiation and turned off 100 µs after the PP end. From the waveforms of the probe current and voltage (measured between the switch and the probe tip) recorded by an oscilloscope for 128 periods of the discharge pulses, the probe current-voltage (IV) characteristics were reconstructed every 0.1 µs during PP. These IV characteristics were averaged to achieve a time resolution of 0.5 µs and 1 µs during PP and RD, respectively. The effects of the magnetic field on the electron current collected by the probe tip may be neglected, as the magnetic field lines are almost perpendicular to the surface of the probe tip and the magnetic field is relatively weak even at the distance of 35 mm from the target. Figure 3 shows the measured probe electron currents at the selected times of measurement t_m = 130 µs (after the PP initiation), 205 µs (during the first floating potential drop), 290 µs (during the RD ignition), and 600 µs (before the PP end) after the NP initiation (t = 0 µs) at the distance of 100 mm from the target. All the characteristics had the usual shape and were smooth, except several of them measured around the RD ignition (see the IV characteristic at t_m = 290 µs in figure 3), which were deformed. Only the floating potential could be determined in these cases. Similarly deformed IV characteristics were also detected in Ref.
[25] around the RD ignition. The floating potential, V_f, was determined as the probe potential U_p at which the probe current was zero (I_p(V_f) = 0 A). As the IV characteristics are dominated by a group of almost Maxwellian electrons (except for the first few µs at the very beginning of PP, where hot electrons with an approximately 10× lower density are also detected, see t_m = 130 µs in figure 3), we determine the plasma potential, V_p, the electron temperature, T_e, and the electron density, n_e, by fitting the measured probe current in the transient region (U_p in the upper two-thirds of the interval between V_f and V_p) to the classical formula [35]

I_e = I*_e exp[e(U_p − V_p)/(kT_e)],   (1)

where I*_e = 0.25 A e n_e (8kT_e/πm_e)^0.5 is the electron saturation current, A is the probe tip area, e is the elementary charge, k is the Boltzmann constant, and m_e is the electron mass. For better results, the natural logarithm of equation (1) was fitted to the natural logarithm of the measured probe current with the ion current component removed. The initial value of V_p, around which the fitting procedure is performed, is determined as the U_p value where dI_p^s/dU_p is maximum. The quantity I_p^s is the smoothed probe current produced by a second-order Savitzky-Golay filter with an automatically determined number of points. This procedure of IV characteristics evaluation was automated by a Python script, which gave us consistent results. The average and maximum relative differences of the fitted I_e from the measured values were lower than 2 % and 5 % during the initial part of PP, 3 % and 7 % before the RD ignition, and 7 % and 12 % during RD, respectively.
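The evaluation procedure described above can be sketched in Python. This is a minimal illustration on synthetic data: the probe geometry and the fitted formula follow the text, but the discharge parameters, the noise level, and the use of the lowest scanned voltage as a stand-in for V_f are invented, and, unlike the full procedure, V_p is kept fixed at its initial estimate instead of being refined in the fit.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.optimize import curve_fit

e = 1.602e-19      # elementary charge [C]
k = 1.381e-23      # Boltzmann constant [J/K]
m_e = 9.109e-31    # electron mass [kg]
A = np.pi * 0.15e-3 * 10e-3  # probe tip area [m^2] (d = 0.15 mm, l = 10 mm)

def electron_current(U_p, V_p, T_e_eV, n_e):
    """Classical formula: I_e = I*_e exp(e(U_p - V_p)/(k T_e)) for U_p <= V_p."""
    T_e = T_e_eV * e / k  # temperature in K
    I_sat = 0.25 * A * e * n_e * np.sqrt(8 * k * T_e / (np.pi * m_e))
    return I_sat * np.exp(np.minimum(U_p - V_p, 0.0) / T_e_eV)

# Synthetic IV characteristic (invented: V_p = 40 V, T_e = 3 eV, n_e = 1e16 m^-3)
U = np.linspace(20.0, 60.0, 400)
noise = 1 + 0.02 * np.random.default_rng(0).normal(size=U.size)
I_meas = electron_current(U, 40.0, 3.0, 1e16) * noise

# Initial V_p estimate: maximum of dI_p^s/dU_p of the Savitzky-Golay-smoothed current
I_s = savgol_filter(I_meas, window_length=31, polyorder=2)
V_p0 = U[np.argmax(np.gradient(I_s, U))]

# Fit ln(I_e) in the upper two-thirds of the interval between V_f and V_p
V_f0 = U[0]  # stand-in for the floating potential (assumption of this sketch)
mask = (U >= V_f0 + (V_p0 - V_f0) / 3.0) & (U <= V_p0)
(T_e_fit, n_e_fit), _ = curve_fit(
    lambda u, T, n: np.log(electron_current(u, V_p0, T, n)),
    U[mask], np.log(I_meas[mask]), p0=[2.0, 5e15])
```

The log-space fit weights the exponentially decaying transient region evenly, which is why the text fits the natural logarithm of equation (1) rather than the raw current.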
The light emission of the plasma during PP was recorded by an emICCD camera (PI-MAX 4 with SR Intensifier, Princeton Instruments), which can amplify the light signal by a light intensifier and by multiplying electrons on the chip simultaneously. The camera was equipped with a UV lens (a focal length of 25 mm) with an appropriate band-pass filter in front of it: 520 nm with a FWHM of 10 nm for Ti atoms (dominant transmitted emission lines at 517.37 nm, 519.30 nm, and 521.04 nm), 334 nm with a FWHM of 10 nm for Ti⁺ ions (dominant transmitted emission lines at 332.29 nm, 334.19 nm, 334.94 nm, 336.12 nm, and 337.28 nm), 811 nm with a FWHM of 3 nm for Ar atoms (dominant transmitted emission lines at 810.37 nm and 811.53 nm), and 488 nm with a FWHM of 3 nm for Ar⁺ ions (dominant transmitted emission lines at 487.99 nm and 488.90 nm). During PP, both amplification methods of the camera were used (a theoretical amplification of 10000) to capture images every 5 µs with a gate width of 5 µs to cover the whole PP. Moreover, for Ti atoms, Ti⁺ ions, and Ar⁺ ions, charge accumulation on the chip had to be used, as the radiation of RD is very weak. For each measured atomic and ionic species, the light intensity during PP was normalized independently so that 1 corresponds to the most intense light emission from the given species during PP. This allows us to emphasize the structure of light patterns in the discharge for all monitored species.
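The independent per-species normalization amounts to dividing each species' image stack by its own global maximum. A short sketch (the `frames` dictionary of random camera frames is purely hypothetical; only the normalization rule comes from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stacks of emICCD frames per species: (time, height, width)
frames = {s: rng.random((100, 64, 64)) for s in ("Ar", "Ar+", "Ti", "Ti+")}

# Normalize each species independently so that 1 corresponds to its own
# most intense emission anywhere during PP
normalized = {s: f / f.max() for s, f in frames.items()}
```

Because each species is scaled to its own maximum, the normalized maps show the spatial structure of every species' emission, but intensities cannot be compared between species without the separate calibration factors quoted later in the text.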
Figure 4 shows an illustration image from the emICCD camera. The positions of the target, the grounded anode, and the substrate are shown. We want to point out the position of a screw holding the anode plate, which might misleadingly be taken for the target center. The approximate positions of the probe tip are also marked, together with the location of the magnetic null.
Discharge characteristics
The target power density is S = U_d J_d, where J_d = I_d/A_t is the discharge current density and A_t = 78.54 cm² is the total area of the target. The peak target power density in NP is S_peak = 1.16 kW cm⁻². Here, it should be noted that the decrease in U_d during NP is caused mainly by a protective resistor (around 2 Ω) connected in the output circuit of the pulsing unit (see figure 1).
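The peak power density follows directly from the monitored waveforms. In the sketch below, only A_t and the S = U_d J_d relation come from the text; the sampled voltage and current values are invented, chosen so that the result reproduces the quoted S_peak ≈ 1.16 kW cm⁻²:

```python
import numpy as np

A_t = 78.54  # total target area [cm^2]

# Hypothetical sampled waveforms during NP (invented illustrative values)
U_d = np.array([-650.0, -700.0, -680.0])  # magnetron voltage [V]
I_d = np.array([-80.0, -130.0, -120.0])   # discharge current [A]

J_d = I_d / A_t             # discharge current density [A/cm^2]
S = U_d * J_d               # instantaneous target power density [W/cm^2]
S_peak = S.max() / 1000.0   # peak value [kW/cm^2]
```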
After the PP initiation, a steep increase in U_d leads to a quick rise in the I_d magnitude and the creation of a short I_d peak. After that, U_d slightly decreases, and the I_d magnitude falls. This behavior is qualitatively similar to that observed in Ref. [31], but the values differ. We attribute these differences to a higher erosion of the target, as the other discharge parameters are as close as possible to those in Ref. [31]. This implies that the waveforms of U_d and I_d during PP may be susceptible to relatively small variations in the plasma state near the target (the density and spatial distribution of plasma species) before the PP initiation. These variations are natural consequences of different plasma evolutions during NP, as the magnetic field geometry changes with racetrack erosion. After t ≈ 150 µs, the decrease of the I_d magnitude slows down, and it continues to decrease up to the end of PP. After the initial overshoot, the value of U_d increases monotonically during the whole PP due to the decreased plasma conductivity and the reduced voltage drop across the protective resistor.
Local plasma parameters
Figure 6 shows the time evolution of the plasma potential, V_p, the floating potential, V_f, the electron temperature, T_e, and the electron density, n_e, measured at the discharge centerline at the distances of 35 mm, 60 mm, and 100 mm from the target. In this subsection, all numerical values of the quantities presented in the text are close approximations unless otherwise stated, and the values of both potentials (plasma and floating) are referenced to the ground (see the connection of the chamber and all equipment to the ground in figure 1).
Before the analysis, it should also be noted that even during the pause between the NP end and the PP initiation, which is relatively long in our case (t_D = 20 µs), the target has a gradually decreasing positive voltage, and a negative current of decreasing magnitude flows to the target. This may influence the initial plasma density [31], and it may also significantly change the composition of the ionic particles (Ar⁺, Ti⁺, and Ti²⁺) near the target before the PP initiation due to the outflow of these ions.
Evolution after the PP initiation
After the PP initiation, a high difference between V_p and V_f (up to several tens of volts) persisting for 1 µs is detected at all positions of measurement. This is accompanied by high values of T_e and temporary decreases in n_e. Similarly fast reactions of the plasma parameters to the PP initiation were also registered in Refs. [22,32,33]. After the initial part, the difference between V_p and V_f is relatively small. During this period, T_e is low, and n_e decreases almost monotonically. This evolution is similar to the results described in more detail in Ref. [31], where the plasma parameters in PP with a duration of 50 µs were studied for the same NP parameters. But there is one exception. In this case, the values of V_p are higher at z = 60 mm compared to those at z = 35 mm. Note that in Ref. [31], V_p was higher at z = 35 mm than at z = 60 mm. This again shows that relatively small changes may extensively influence the plasma parameters in the discharge (we have tried to keep the parameters of NP as close as possible to those in Ref. [31]). It may indicate that the exact reproduction of phenomena during PP may be problematic. Around t_m = 195 µs, a simultaneous decrease in V_f with a duration of 20 µs is detected at the distances of z = 35 mm and 100 mm (near the target and the substrate, respectively). At the distance of z = 60 mm, the decrease in V_f is also registered, but with a delay of 3 µs. These decreases are accompanied by an increase in T_e, a decrease in n_e, and a slight increase in V_p at the corresponding times. At t_m = 215 µs, V_f and V_p return to their gradual increase from the period before this event. Similarly, T_e and n_e also return to their evolution before the V_f decrease.
A simultaneous decrease in V_f and rise in T_e corresponds to the classical (equilibrium) theory of the plasma floating wall potential [36], considering the elevated values of V_p (owing to the positive potential applied to the target) leading to the positive V_f. The time evolution of U_d and I_d (figure 5) does not exhibit any sign of a change in their trends. This means that the origin of this V_f decrease comes from an internal change in the plasma. Moreover, the flux of electrons out of the plasma (see the reduction in n_e) does not flow through the target (no visible peak in I_d).
Drop & Rise event
Around t_m = 230 µs, V_f near the target (z = 35 mm) quickly falls. Other researchers also registered similar behavior of the potentials [23,25,32,33,37], sometimes described as a drop and rise (D&R) event. The D&R event moves to larger distances from the target with time. Around t_m = 240 µs, V_f drops at z = 60 mm, and at t_m = 265 µs, the drop of V_f is observed at z = 100 mm. This movement of the D&R event is not common in the literature because in Refs. [25,32] the authors see a movement in the opposite direction (from the substrate to the target), and in Ref. [23], the D&R event appears at all measured distances simultaneously. Taking the time differences between the points where the V_f fall stops and the measurement distances from the target, we obtain an average speed of roughly 1300 m s⁻¹ between the distances of 35 mm and 60 mm, and of around 2300 m s⁻¹ between 60 mm and 100 mm. This indicates that the D&R event may accelerate during its movement from the target to the substrate.
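The speed estimate reduces to Δz/Δt between adjacent probe positions. In this sketch, the probe distances come from the text, while the times at which the V_f fall stops are illustrative values (the actual ones are read from figure 6), chosen to be consistent with the quoted speeds:

```python
# Probe distances from the target [m] and illustrative times [s] at which
# the V_f fall stops at each position (invented, ~consistent with the text)
z = [0.035, 0.060, 0.100]
t = [250e-6, 269.2e-6, 286.6e-6]

# Average propagation speed of the D&R event between adjacent positions
speeds = [(z[i + 1] - z[i]) / (t[i + 1] - t[i]) for i in range(2)]
```

The increase from the first to the second value is what suggests that the D&R event accelerates on its way from the target to the substrate.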
The drop in V_f is always followed, after a time delay (between 17 µs and 21 µs), by a decrease in V_p. The duration of the D&R events increases with the distance from the target. It is roughly 45 µs, 105 µs, and 115 µs at the distances z = 35 mm, 60 mm, and 100 mm, respectively, as estimated from the duration of the elevated values of the potential differences V_p − V_f (not shown). After the end of the D&R events, both potentials return to their gradual increase, which persists up to the end of PP, but their values are lower than those before the D&R events by 17 V on average. This value corresponds well with the energy decrease of the third peak in the IEDF measured in the same system [24].
The D&R events are accompanied by peaks in T_e. At the distances z = 60 mm and 100 mm, second peaks in T_e with lower values are registered approximately 36 µs after the first peaks. Their occurrence corresponds well with the second decrease in V_f during the D&R events at these distances. The occurrence of the D&R events also decreases the n_e values. In contrast to the previously detected decreases around t_m = 205 µs, after which n_e rose back almost to its pre-drop values, n_e stays low after the D&R events. Only at z = 100 mm does n_e partially recover to higher values.
Here, it should be mentioned that an elevation of T_e and a decrease in n_e during PP were also proven by laser Thomson scattering [32,34] during BP-HiPIMS of a tungsten target when high PP voltages were used (U+ ≥ 200 V). Contrary to our case, the increase of T_e was seen to move in the opposite direction (from the substrate to the target) [34].
Stabilization of the reverse discharge
After the D&R events, the T_e values stay elevated. Around t_m = 490 µs, T_e forms a peak in the plasma bulk (z = 60 mm). This event also manifests itself in the evolution of V_p. From this time on, T_e at z = 60 mm is higher than T_e at z = 35 mm.
Here, it should be noted that the spatial structure of the potentials changes. Before the D&R events, the highest V_p and V_f values are measured at z = 60 mm and the lowest ones at z = 100 mm. After the D&R events, the highest values of the potentials are detected at z = 35 mm, and the lowest are again measured at z = 100 mm.
The D&R events slow the plasma decay down approximately four times. The decay times of n_e were determined from fits of the equation n_e(t) = n_e(t_0) exp[−(t − t_0)/τ] to the measured n_e values, where τ is the decay time and t_0 is the time from which the fit is performed. The decay times between the PP initiation and the D&R events (the decreases around t_m = 205 µs are not included) are τ_1 = 121 µs, 112 µs, and 113 µs for the distances z = 35 mm, 60 mm, and 100 mm, respectively, so the average value is τ̄_1 = 115 µs. After the D&R events (till the PP end), the decay times increase to τ_2 = 466 µs, 539 µs, and 423 µs for the distances z = 35 mm, 60 mm, and 100 mm, respectively, giving the average decay time τ̄_2 = 476 µs. The increase in the decay time is counterintuitive, as the magnitude of the discharge current decreases slightly faster after the D&R events compared with its evolution closely before their occurrence (see figure 5).
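The decay-time extraction above is a single-exponential fit. A minimal sketch on noise-free synthetic data (the density amplitude and time grid are invented; the model and the τ = 115 µs value follow the text):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic n_e decay with the pre-D&R average decay time from the text
t0, tau_true = 130e-6, 115e-6
t = np.linspace(t0, t0 + 80e-6, 40)
n_e = 1e16 * np.exp(-(t - t0) / tau_true)  # invented amplitude [m^-3]

def decay(t, n0, tau):
    """n_e(t) = n_e(t0) * exp(-(t - t0)/tau), with t0 fixed."""
    return n0 * np.exp(-(t - t0) / tau)

(n0_fit, tau_fit), _ = curve_fit(decay, t, n_e, p0=[5e15, 100e-6])
```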
Light emission of plasma species
Figure 7 shows the captured distributions of the light intensity originating from the emission lines transmitted by the band-pass filter used for the given plasma species (shortly, the light emission of a species) for the selected times t_m = 130 µs, 205 µs, 245 µs, 270 µs, 290 µs, and 600 µs, which correspond to the vertical broken lines a-f in figure 6. A video constructed from all the images captured by the emICCD camera during PP is embedded in figure 8 and referenced during the following analysis. It should be mentioned that the highest intensity during PP was registered from Ar atoms, which is consistent with the results in the literature [20,23,24,29,34]. The calibrated intensities of the other species were 31.7, 20, and 11.2 times lower for Ar⁺ ions, Ti atoms, and Ti⁺ ions, respectively, compared to the intensity of Ar atoms.
Evolution after the PP initiation
At the beginning of PP (from t_m = 120 µs to 150 µs), an increased light emission is visible (see figure 7a) in the plasma bulk between the magnetic null and the substrate, and close to the target surface as a light ring shifted from the racetrack (RT) position towards the anode. Around t_m = 145 µs, the intensity of the Ti atom light emission increases in the whole observed volume. It is accompanied by a flash in the intensity of Ti⁺ ions between t_m = 150 µs and 160 µs and a hard-to-see decrease in the intensity of Ar⁺. During this phase, the distinctly visible rings of ions change to a diffusive light above the target, which lasts up to the D&R event (see figure 7b). The temporary decrease in V_f around t_m = 205 µs does not produce any visible change in the light pattern except that blinking of the species light is evident in the whole plasma volume, which persists up to the D&R event.
Drop & Rise event
Between t_m = 235 µs and 250 µs, when V_f attains its minimum at z = 35 mm, no apparent change in the light emission of the species is visible (see figure 7c). But at t_m = 255 µs, the intensity of Ar⁺ increases in the whole volume. When the D&R event moves to the distance of z = 60 mm at t_m = 270 µs, a light pattern resembling a "jet" from the target center is visible in the light emission of Ti⁺ ions (see figure 7d). The beginning of the light pattern formation in the Ti⁺ ion emission can be seen at t_m = 265 µs (see the video in figure 8). Considering the next video frame (t_m = 275 µs), it looks like the light pattern expands in the direction from the target center to the magnetic null. According to similar figures acquired in a discharge with a magnetically constricted anode [38], this is anode light shaped into the "jet"-like form by the presence of the magnetic field. Even though the term anode light is usually used for the full light emitted near the anode, we will use the term anode light pattern (ALP) to describe this shaped light pattern emitted by a given plasma species near the target during PP. Similarly, the inception of an ALP is also visible at t_m = 270 µs in the light emission of Ti atoms. At t_m = 275 µs, the ALP of Ti⁺ ions elongates and becomes more intense. At the same time, the ALP of Ar⁺ ions appears. Around t_m = 280 µs, the ALPs of Ti atoms, Ti⁺ ions, and Ar⁺ ions are well developed, but the ALP of Ar atoms appears 5 µs later. The appearance of ALPs is also connected with the lowering of the light emission of the corresponding ionic species in the rest of the discharge. Moreover, when the ALP of Ar⁺ ions develops, the light emission from Ti⁺ ions in the discharge volume increases again. Between t_m = 280 µs and 290 µs, the ALPs start to change their shapes. The body of the ALPs becomes wider and flattened near the magnetic null. The light emission from "arcs" connecting the top of the "jets" near the magnetic null and the grounded anode becomes
visible. The "arcs" and the "jets" together form an ALP resembling an "umbrella". The full development of an ALP for a given species (more apparently visible for ions) leads to the diminishing (Ti atoms at t_m = 280 µs) or almost complete disappearance (Ti⁺ ions at t_m = 275 µs and Ar⁺ ions at t_m = 280 µs) of the diffusive light near the target. Similar light structures were also registered during PP in BP-HiPIMS in Refs. [24,29,34] in the full discharge light or in the light emission of Ar atoms.
Stabilization of the reverse discharge
Around t_m = 360 µs, when the potentials at all distances from the target return to their monotonic increase after the D&R events, the emission from the "arcs" between the top of the "jets" and the grounded anode disappears, and the ALPs gradually stabilize. Despite the gradual decrease in the discharge current, the Ar and Ar⁺ ALP intensity stays relatively unchanged. After the PP end, the ALPs of all species disappear within 10 µs, and the remaining light from all species gradually decreases (the fastest decrease is observed for Ar atoms and the remaining Ti⁺ ions, not shown).
Discussion
During the analysis of the emICCD images, it is important to keep in mind that a registered higher light emission of a species provides several pieces of information that must hold simultaneously: (1) the specific species is present in that volume, (2) in the same volume, there are also electrons, and (3) these electrons have enough energy to excite that species. The interpretation of a lowering of a species' light emission is more complicated (it may be a result of a ground state density decrease, a decrease of T_e or n_e, or a combination of the causes mentioned above), but it may be unraveled if the light emission decrease of one species is compared with the light emission from the other species. For instance, the decrease in the light emission of Ti⁺ ions and the almost constant light emission of Ar⁺ ions in the same area indicate that the ground state density of Ti⁺ ions decreases, as the electrons populating the monitored excited levels of Ar⁺ ions (upper-level energies of 19.68 eV and 19.80 eV) will also populate the monitored excited levels of Ti⁺ with much lower excitation energies (upper-level energies from 3.69 eV to 4.28 eV for the lines with dominant emission) when the electron density and temperature do not exhibit substantial changes (they are relatively stable after the stabilization of RD, see figure 6).
Let us note that a typical lifetime (considering all transitions leading to the depopulation of the energy level) of the excited radiative states (i.e., excited levels from which spontaneous light emission is allowed) producing the high intensities that are predominantly detected by the emICCD camera is usually under several hundred ns. Typical transition times (connected with only one transition) for the observed emission lines can be calculated from the equation τ_ki = 1/A_ki, where A_ki is the atomic transition probability from the higher energy level k to the lower energy level i. For the Ti atom emission lines around 520 nm, these τ_ki are roughly 30-270 ns; for the Ti⁺ ion lines around 334 nm, they are 6-30 ns; for the Ar atom lines around 811 nm, they are 30-40 ns; and for the Ar⁺ ion lines around 488 nm, the transition times are 12-70 ns [39]. So, generally speaking, the transition times of the emission lines observed in our measurements are sufficiently short to imply that the registered light emission originates almost from the same place where the species were excited, even for ions that are subject to acceleration by the present electric field.
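The τ_ki = 1/A_ki conversion is a one-liner. In the sketch below, the A_ki values are illustrative round numbers chosen to fall within the ranges of transition times quoted above, not the tabulated probabilities of the listed lines:

```python
# Illustrative transition probabilities A_ki [s^-1] (assumed round numbers,
# consistent with the 6-270 ns range of transition times quoted in the text)
A_ki = {
    "Ti I 519.30 nm": 3.3e7,   # -> tau_ki ~ 30 ns
    "Ar I 811.53 nm": 3.3e7,   # -> tau_ki ~ 30 ns
    "Ar II 487.99 nm": 8.0e7,  # -> tau_ki ~ 12.5 ns
}

# Single-transition times tau_ki = 1/A_ki [s]
tau_ki = {line: 1.0 / a for line, a in A_ki.items()}
```

Even at the highest ion speeds in the discharge, an excited ion travels only a negligible distance during tens of ns, which is the basis of the locality argument in the text.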
Figure 9 schematically shows the dominant area of the plasma emission during the different phases of PP, together with the proposed dominant fluxes of charges. The red full arrows (denoted by i⁺) represent the fluxes of ions, the blue broken arrows (marked by e⁻) represent the fluxes of electrons, and the thinner green broken arrows (denoted by sec. e⁻) illustrate the places where electrons are created by secondary electron emission (SEE) after ion impacts. In the plasma volume, the flux of one charge is connected with the flux of the charge of the opposite sign owing to the ambipolar diffusion. So, these arrows must be regarded as illustrations of the origin of the charge fluxes and of the directions in which they flow. Also depicted are the areas of the discharge where the ionization of atoms is possible during the RD ignition and its sustaining.
Evolution after the PP initiation
The presence of the light ring near the target edge in the light emission of ions (see figure 9a) is consistent with the results of ion density measurements near the target after the NP end in HiPIMS discharges [40,41]. A high number of ions and weakly magnetically confined electrons near the target outflow ambipolarly along the discharge centerline, where the magnetic field lines are almost perpendicular to the target surface, to distances beyond the magnetic null already during the pause between the NP end and the PP initiation (as shown in figure 9a). Only the ions held by the magnetically confined electrons stay near the target and create the ring shifted from the position of the deepest RT to the target edges. As the electrons flow to the target, these ions flow to the anode. This was observed for similar pressures in Ref. [33] and predicted by PIC simulations [26]. Ar⁺ ions located near the substrate (the density of Ti²⁺, which may also induce SEE, is much lower than the density of Ti⁺ ions [42]) induce SEE from the substrate surface. Owing to the relatively long delay between NP and PP, the density of ions able to produce secondary electrons (Ar⁺ and Ti²⁺) is low near the target. Moreover, the density of Ar⁺ ions near the target was reduced by argon rarefaction (the reduction of the Ar atom density owing to the momentum transfer from the sputtered metal atoms and their ionization [2]) during the high-power NP. Thus, the production of secondary electrons near the target is low.
The video (see figure 8) shows that the Ti⁺ ion ring near the target edge increases its intensity after the PP initiation, which is connected with the increased density of electrons at this location. The ion ring is the most probable location through which the electron current flows to the target (see figure 9a), as the absence of emission from all species near the target center supports this scenario. The electrons flowing to the target also drag the ambipolar flow of ions with them, and the plasma region starts to extend back to the target (figure 9b). The ion ring near the target edge starts to disappear, which is also observed in the PIC simulation [26]. This is also accompanied by a widening of the volume of positive plasma potential near the target. Despite the positive plasma potential close to the target, Ti⁺ ions are held in the magnetic trap by the magnetically confined electrons, but the ions are repelled to larger distances from the target, and they may also, to some extent, escape in the direction toward the substrate (more visible for Ar⁺ ions, see figure 7b). The widening of the positive V_p volume also increases the volume from which electrons are drawn to the target vicinity. This also leads to the enlargement of the target area where its surface absorbs electrons (see figure 9b). It should be noted that these changes in the light emission coincide with the change in the decrease rate of I_d, but these changes are not visible in the Langmuir probe measurements.
As the temporary decrease in V_f around t_m = 205 µs does not produce any changes in the registered light emission, we have to state here that the origin of this decrease is unknown.
Drop & Rise event
As the target potential is still close to U+ = 100 V (see figure 5), but the plasma potential at z = 35 mm drops down to 59 V, the DL structure must be formed at distances closer to the target. The PIC simulations [25,26] show that the volume of more positive V_p concentrates around the discharge centerline near the target in the form of a "hat" with the crown oriented in the direction toward the substrate and with the brim settling on the target. This volume of positive V_p extends roughly 10 mm from the target surface [25], and beyond this distance, V_p quickly decreases.
The ALP is formed around the discharge centerline (see figures 7d-e and 9c) first in the light emission of Ti⁺ ions; later, it is also found in the light emission of the other species. As mentioned above, similar light patterns (in the full discharge light) were found in discharges with a magnetically constricted anode [38], where a positive voltage is applied to the magnetron (the anode) and the chamber walls are grounded (the cathode). The authors explain this light pattern as anode light (sometimes also called a fireball) deformed by the presence of the magnetron's magnetic field that constricts the anode area. As the area at the anode where the electron current is drawn is substantially smaller than the area of the cathode, an electron sheath [43,44] is formed above this anode spot. The electron sheath then allows the electron current to reach values greater than the thermal electron current (given by the electron saturation current in equation (1)).
It should be noted here that the magnetic field geometry forms a magnetic mirror for the electrons flowing to the target through the magnetic funnel. It is complicated to describe its effect in BP-HiPIMS correctly in all detail, as magnetic mirrors are usually studied without the presence of an electric field and particle collisions. We can speculate that the electric field accelerates the electrons in the direction toward the target, so the velocity component parallel with the magnetic field lines (which are here almost perpendicular to the target) increases. This pushes the electron velocity inside the loss cone [45] (defined for a given charged particle as the maximum angle between its velocity and the magnetic field line for which the particle can pass through the magnetic mirror), and the electrons reach the target through the magnetic funnel more easily. On the other hand, the collisions of electrons with atoms and ions scatter the orientation of the electron velocity, which results in a higher number of electron reflections. Generally speaking, some electrons pass through the magnetic mirror and form the electron current through the center of the target. Others bounce back, increasing the local electron density in the magnetic funnel. This increase may be so high that the plasma quasineutrality may be violated [25]. It should be noted that a small dimple in the central part of the positive potential "hat" is visible in the simulation results [26], which supports our explanation. The presence of the magnetic mirror can also explain why the electron current flows through the target edge after the PP initiation (see figure 9a), as the flow through the target center is suppressed by the magnetic mirror. For a full description, a theoretical treatment or a detailed simulation of the electron sheath in the magnetic mirror configuration would be needed, but this is out of the scope of this paper.
Although the electron sheath can collapse into a DL structure by itself even without a magnetic field present, as numerically proven [46], the magnetic mirror configuration may significantly accelerate the creation of the DL structure. The formation of the DL structure originates from the plasma's tendency to shield the target's positive potential. Usually, the electrons are the species that start the shielding owing to their very low mass, but when a magnetic field is present, their mobility across magnetic field lines is limited much more than that of ions, and the magnetic mirror effect limits their mobility through the magnetic funnel. Thus, the electrons surround the volume where the magnetic trap is located. Ions may, to some extent, freely flow out in the direction of the substrate, which strengthens the electron density behind the magnetic trap, and the plasma potential is lowered here. This leads to the formation of the DL structure.
Since the electrons gain energy when they cross the DL boundary from its L-side to its H-side, and the magnetic mirror increases their density close to the target central position, the probability of atom ionization in the magnetic funnel increases. Because the ionization energy of Ti atoms is much lower than that of Ar, the ALP is first seen in the light emission of Ti+ ions. Let us note that the density of Ti atoms is relatively high near the target. The ionization of Ti atoms increases the density of electrons in the magnetic funnel even more. The increased electron density makes it possible to start the ionization of Ar atoms, which have a much higher ionization energy than Ti atoms. Moreover, the density of Ar atoms had enough time to recover from the decrease caused by rarefaction during NP, as the times for density replenishment near the target are roughly 100-150 µs [47,48]. Thus, the ALP of Ar atoms and ions is visible later. The growth of the ALP from the target to the magnetic null is caused partly by the ambipolar flow of the newly created ions, partly by the reflection of the new electrons by the magnetic mirror, and partly by the inflow of new electrons from the plasma bulk. Here it should be noted that the narrower and sharper ALPs registered in the case of ionic species (see figure 7d-f) cast doubt on the proposed radial outflow of ions in the electron sheath of the discharge with the magnetically constricted anode [38]. All these phenomena lead to an increased density of electrons from the target surface up to the position of the magnetic null. This was also observed in PIC simulations [25]. These electrons excite species located around the discharge axis, which then emit light. Moreover, these electrons follow the magnetic field lines and create "arcs" in the "umbrella"-shaped light pattern. Some electrons in the "arcs" also originate from the secondary emission after the newly created Ar+ ions hit the anode. The inflow of new ions to the plasma gradually increases the plasma potential from inside the magnetic funnel up to the substrate. That explains the rise of V_p after its decrease during the initial part of the D&R event. From this time, the RD can be regarded as fully developed because the newly created Ar+ ions can produce secondary electrons from the grounded surfaces.

Stabilization of the reverse discharge

Now, the RD is burning between the substrate, which is the source of secondary electrons emitted mainly after Ar+ ion impacts, and the target center, where the electrons are collected (see figure 9(d)). Afterward, the density of Ti+ ions gradually decreases (see the stable intensity of Ar+ in figure 7e-f and the video) as they flow out to the grounded substrate and walls. The ionization of Ti atoms in the ionization zone of the RD (see figure 9d) cannot replenish the decrease of the Ti+ density, as the target is not sputtered. Ar+ ions, created by the ionization of Ar atoms in the ionization zone, start to play the dominant role in the RD.
This also means that the substrate is dominantly bombarded by Ar+ ions with elevated energies during the RD. Despite the gradual decrease in the discharge current, the Ar and Ar+ ALP intensities stay relatively unchanged. This shows that the RD can generate a sufficient density of Ar+ ions to sustain itself by the secondary electrons emitted mainly from the substrate after the Ar+ ions impact it.
Conditions for the creation of a double-layer structure and reverse discharge
Let us summarize which conditions would accelerate the formation of the DL structure and the ignition of the RD in BP-HiPIMS discharges during PP. One condition is an effective magnetic mirror in front of the target, but this condition is fulfilled in almost all magnetron discharges. Magnetrons with a balanced magnetic field would be more effective in accumulating electrons behind the magnetic trap. In those cases (if the construction of the magnetron is classical, with a protruding anode ring around the target), electrons behind the magnetic trap cannot easily flow to the target edge, as they are repelled by the grounded anode, which is close to or even in the path of the magnetic field lines leading electrons to the target edge (see figure 9a). In these cases, electrons must diffuse across the magnetic field lines or go directly through the magnetic mirror in the magnetic funnel. The second condition is to have ionic species capable of effective SEE from the grounded surfaces near the target, to build up the electron density by SEE in front of the magnetic trap. This means that PP must be initiated almost immediately after the NP end, as this prevents the outflow of Ar+ ions and doubly-ionized metal atoms to larger distances from the target or even onto the grounded surfaces near the target. A shorter NP duration can also increase the probability of DL structure formation, as a lower Ar density reduction (weaker rarefaction) during NP allows Ar+ ions to have a higher density close to the target at the PP initiation. Moreover, if the power during the NP is high, a relatively high density of doubly-ionized metal ions may accumulate in front of the target. Smaller magnetron sizes also boost the DL formation, since the diffusion lengths of the ions inducing SEE from the grounded surfaces near the target are shorter. This is why DL formation over relatively short times is often observed in small-size balanced magnetrons driven by a relatively short NP with a minimal delay between the NP end and the PP initiation, e.g., as seen in Refs. [15,21,30].
The creation of the DL and the ignition of the RD during PP may be beneficial for film formation if the RD is ignited almost immediately after the PP initiation. Metallic atoms sputtered during NP would be ionized in the RD ionization zone, and they may gain energy in the DL structure when they cross from the H-side with a higher potential to the L-side with a lower potential. Unfortunately, as the metallic atoms are not replenished by sputtering from the target, the RD relatively quickly starts to ionize Ar atoms, and the deposited film is bombarded by energetic Ar+ ions that may induce defects in the growing film.
Conclusions
The time-resolved Langmuir probe diagnostics and optical emission spectroscopy imaging for different plasma species (Ti and Ar atoms as well as singly ionized Ti and Ar ions) have been carried out during the long positive voltage pulse (duration of 500 µs) in a bipolar HiPIMS discharge (with a positive voltage of 100 V). Comparing our results with those in the literature makes it possible to identify the main phenomena leading to the formation of the double-layer structure and the reverse discharge ignition. Our findings may be summarized in these points:
• After the positive voltage pulse initiation, a high difference between the values of the plasma potential and the floating potential is registered, which persists only around 1 µs and is accompanied by a high electron temperature (almost up to 50 eV). After that, both potentials monotonically increase (except for a short time when a temporary decrease of the floating potential is registered). The light emission from the plasma species does not show any unexpected phenomena.
• Roughly at the second quarter of the positive voltage pulse, a decrease in the floating potential (by up to 40 V), followed after a few µs by a decrease in the plasma potential (by up to 26 V), is observed in a large volume of the discharge plasma. This is accompanied by large peaks in the electron temperature (up to 20 eV) and an elongation of the electron density decay times (from 115 to 476 µs).
The changes in the plasma parameters are followed by the presence of anode light patterns located between the target center and the magnetic null of the magnetron at the discharge centerline.The reverse discharge is ignited.
• After between 45 µs and 115 µs, depending on the distance from the target, the plasma and floating potentials return to their increases from the times before their sudden decrease, but the values of the potentials are lowered compared to those before the decrease (by 17 V on average). The electron temperatures rise (up to 2 eV, compared to values up to 0.2 eV during the initial part of the positive pulse). The anode light patterns stabilize and persist until the positive voltage pulse ends.
The intensity of the light patterns of Ti ions and atoms gradually decreases, as their densities decrease and are not replenished by sputtering from the target. After the positive voltage pulse ends, these light patterns vanish entirely.
• The secondary electron emission induced dominantly by Ar+ ions striking the grounded surfaces and the mirror effect of the magnetron magnetic field were identified as probable causes of the creation of the charge double-layer structure and the maintenance of the reverse discharge. The influx of the secondary electrons to the target vicinity induces the plasma potential decrease behind the magnetic trap, which results in a quicker formation of the double-layer structure in front of the target. The increased electron density near the target at the discharge centerline, owing to the magnetic mirror effect, together with the energy that electrons gain during their transition through the double-layer structure, allows the ionization of atoms to start and the reverse discharge to ignite.
• Since the reverse discharge burns mainly due to the creation of Ar+ ions, which can supply the reverse discharge with a sufficient number of secondary electrons emitted from the grounded substrate after the Ar+ ion impact, the growing film on the substrate is bombarded during the reverse discharge mainly by these ions with elevated energies. This may lead to the creation of defects or even resputtering of the growing film. Thus, the reverse discharge should be avoided when high-quality, densified films are required. On the other hand, when porous films should be deposited, the reverse discharge may be beneficial.
Figure 1. A schematic diagram of the experimental system used.
Figure 3. The natural logarithms of the electron probe currents, I_e (in amps), measured at the probe potentials, U_p, for selected times during PP at the distance of 100 mm from the target. The bold broken lines show fits of equation (1) in logarithmic form to the measured data. The short horizontal lines mark the natural logarithms of the electron saturation currents determined by the fitting procedure.
Figure 4. An illustration of the target, anode, and grounded substrate positions on the OES images. The approximate location of the Langmuir probe tip and the position of the magnetic null are also marked.
Figure 5 shows the magnetron voltage, U_d, and discharge current, I_d, waveforms during NP and PP. The averaged target power density in NP is S_da = 0.96 kW cm⁻², and it is calculated from the recorded U_d and I_d waveforms by the equation
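The equation itself is not reproduced in this excerpt. A standard definition of the averaged target power density, stated here as an assumption (the instantaneous power averaged over the NP duration $t_{\mathrm{NP}}$ and divided by the target area $A_t$), is

```latex
S_{da} = \frac{1}{t_{\mathrm{NP}}\, A_t} \int_{t_{\mathrm{NP}}} U_d(t)\, I_d(t)\, \mathrm{d}t .
```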
Figure 5. Waveforms of the magnetron voltage, U_d (blue line), and the discharge current, I_d (red line), during NP and PP. The left scales hold in front of the black vertical line; behind this line, the right scales hold.
Figure 6. Time evolutions of local plasma parameters during PP at the discharge centerline at the distances z = 35 mm, 60 mm, and 100 mm from the target. Panel (a) shows the waveforms of the magnetron voltage, U_d, and discharge current, I_d, for comparison purposes. Panel (b) shows time evolutions of the plasma potential, V_p (full line), and the floating potential, V_f (broken line). Panels (c) and (d) show time evolutions of the electron temperature, T_e, and the electron density, n_e, respectively. Vertical broken lines a-f mark the OES measurement times of t_m = 130 µs, 205 µs, 245 µs, 270 µs, 290 µs, and 600 µs (see also figure 7).
Figure 7. Light emissions from Ti and Ar atoms, and Ti+ and Ar+ ions, recorded by the emICCD camera for the selected times t_m = 130 µs, 205 µs, 245 µs, 270 µs, 290 µs, and 600 µs, which correspond to the vertical broken lines a-f in figure 6. The value of 1 corresponds to the most intense emission during PP, and it was calculated independently for each species to emphasize the structure of the light patterns. The calibrated intensities of Ar+ ions, Ti atoms, and Ti+ ions are 31.7, 20, and 11.2 times lower than the intensity of Ar atoms, respectively.
Figure 8. Embedded video constructed from all the images captured by the emICCD camera during OES imaging of plasma species (Ti and Ar atoms and singly charged ions) during PP. The multiplication factors for the light intensity are also given for all species to achieve the normalized intensity. Time evolutions of the plasma (V_p, full lines) and floating (V_f, broken lines) potentials are given at the probe tip distances from the target z = 35 mm (red lines), 60 mm (green lines), and 100 mm (blue lines) for reference purposes. The time of measurement is denoted t_m.
Figure 9. Schematic depiction of the dominant area of plasma radiation and proposed dominant flows of Ti+ and Ar+ ions (red full lines, marked by i+), electrons (blue broken lines, marked by e−), and secondary electrons (green slim broken lines, marked by sec. e−). The volume of atom ionization is also schematically illustrated in panels (c) and (d).
Prion protein inhibits fast axonal transport through a mechanism involving casein kinase 2
Prion diseases include a number of fatal progressive neuropathies (sporadic, familial or infectious) involving conformational changes in cellular prion protein (PrPc). Pathological evidence indicates that neurons affected in prion diseases follow a dying-back pattern of degeneration. However, specific cellular processes affected by PrPc that explain such a pattern have not yet been identified. Results from cell biological and pharmacological experiments in isolated squid axoplasm and primary cultured neurons reveal inhibition of fast axonal transport (FAT) as a novel toxic effect elicited by PrPc. Pharmacological, biochemical and cell biological experiments further indicate that this toxic effect involves casein kinase 2 (CK2) activation, providing a molecular basis for the toxic effect of PrPc on FAT. CK2 was found to phosphorylate and inhibit the light chain subunits of the major motor protein conventional kinesin. Collectively, these findings suggest CK2 as a novel therapeutic target to prevent the gradual loss of neuronal connectivity that characterizes prion diseases.
Introduction
Prion diseases include a number of fatal sporadic, familial and infectious neuropathies affecting humans and other mammals [1]. As observed in most adult-onset neurodegenerative diseases [2], neurons affected in prion diseases follow a dying-back pattern of degeneration, where synaptic dysfunction and loss of neuritic connectivity represent early pathogenic events that long precede cell death [3,4]. Toxic effects of prion protein (PrP) have been shown in various cellular and animal models [5][6][7]. An intriguing characteristic of prion diseases is the nature of the prion, a pathogen devoid of nucleic acid [8]. The infectious form of prion disease involves a conformation-related conversion of the cellular form of PrP (PrPc) to a mildly protease-resistant, aggregated, and self-propagating species termed PrP scrapie (PrPSc) [1,9]. However, genetic and experimental evidence suggests that additional factors affecting PrP conformation may similarly promote neuronal pathology. For example, mutant PrP-related familial forms of prion diseases have been identified which do not involve the PrPSc conformation [1,10]. In addition, aggregated, non-infectious oligomeric PrP has also been shown to induce neurotoxicity [4,9,11,12]. Further, results from our prior work indicate that intracellular accumulation of full-length PrPc (PrP-FL) alone suffices to induce progressive neuronal toxicity in cultured neurons and severe ataxia in mice [5,[13][14][15][16]. Collectively, these observations suggest that a variety of factors, including increased PrPc dosage and conformation-dependent conversion of PrPc to various neurotoxic species, may underlie prion disease pathology, thus providing a common framework for seemingly diverse prion disease variants. (PLOS ONE | https://doi.org/10.1371/journal.pone.0188340, December 20, 2017)
The dying-back pattern of degeneration observed in neurons affected in prion diseases strongly suggests that pathogenic forms of PrP may interfere with cellular processes relevant to the maintenance of neuronal connectivity, such as fast axonal transport (FAT). The unique dependence of neuronal cells on FAT has been documented by genetic findings that link loss-of-function mutations in molecular motors to dying-back degeneration of selected neuronal populations [17][18][19][20][21][22][23]. Significantly, microscopic analysis documented deficits in anterograde and retrograde FAT in PrPSc-inoculated mice concurrent with the development of prion disease symptoms [24,25]. However, whether pathogenic PrPc directly affects FAT has not yet been evaluated, and mechanisms underlying the FAT deficits observed in prion diseases remain largely unknown.
A large body of experimental evidence indicates that various misfolded neuropathological proteins compromise FAT by promoting alterations in the activity of protein kinases involved in the regulation of microtubule-based motor proteins [26][27][28]. Consistent with findings in a variety of adult-onset neurodegenerative diseases, aberrant patterns of protein phosphorylation represent a well-established hallmark of prion diseases. Further, several kinases known to affect FAT are reportedly deregulated in prion diseases, including GSK3 [29], PI3K [30], JNK [31], and casein kinase 2 (CK2) [32,33]. Based on these precedents, we set out to determine whether PrP-FL inhibits FAT directly and, if so, whether specific protein kinases mediate such effects.
Cell culture
Hippocampal neuronal cultures were prepared from wild-type B6SJL mouse embryos at day 16 of gestational age [34]. After dissection, the cortical or hippocampal tissue was incubated in 0.25% trypsin in Hank's solution for 16 min at 37˚C, followed by dissociation and plating of the cell suspension in culture dishes or on glass coverslips covered with poly-D-lysine (0.5 mg/ml), at a density of 53 cells/cm² for immunocytochemistry or 350 to 1050 cells/cm² for biochemical analysis. The cultures were plated in DMEM plus 10% iron-supplemented calf serum (HyClone, Logan, UT) for 2 hours, and the medium was then replaced with Neurobasal medium supplemented with B27 (Life Technologies, Grand Island, NY).
Animals were housed in the University of Illinois at Chicago Biological Resource Laboratory. All animal work was done according to guidelines established by the NIH and is covered by appropriate institutional animal care and use committee protocols from the University of Illinois at Chicago Animal Care Committee (ACC). Committee functions are administered through the Office of Animal Care and Institutional Biosafety (OACIB) within the Office of the Vice Chancellor for Research. All procedures are within guidelines established by the NIH for the use of vertebrate animals and were approved by our institutional animal use committee prior to the execution of experiments. All methods for euthanasia are consistent with recommendations of the NIH and the American Veterinary Medical Association and have been approved by our institutional animal use committee (ACC). For all experiments, mice were first anesthetized with halothane and then sacrificed by decapitation on a guillotine without being allowed to regain consciousness. In all cases, tissues were removed for analysis after sacrifice.
Atomic force microscopy
Peptide solutions were characterized using a NanoScope IIIa scanning probe workstation equipped with a MultiMode head using a vertical-engage E-series piezoceramic scanner (Veeco, Santa Barbara, CA). AFM probes were single-crystal silicon microcantilevers with a 300-kHz resonant frequency and a 42 N/m spring constant, model OMCL-AC160TS-W2 (Olympus). 10 μl of 0.1 M NaOH was spotted onto mica and rinsed with 2 drops of deionized H2O; then a 10-μl sample solution of PrP or PrP-FL (from a 20 μM stock solution) was spotted on the freshly cleaved mica, incubated at room temperature for 3 minutes, rinsed with 20 μl of filtered (Whatman Anotop 10) MilliQ water (Millipore), and blown dry with tetrafluoroethane (CleanTex MicroDuster III). Image data were acquired at scan rates between 1 and 2 Hz with drive amplitude and contact force kept to a minimum. Data were processed to remove the vertical offset between scan lines by applying zero-order flattening polynomials using Nanoscope software (version 5.31r1, Veeco).
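The zero-order flattening step can be illustrated with a minimal sketch (our assumption about what the Nanoscope flatten operation does at polynomial order zero, not the vendor's actual code): each scan line's mean height is subtracted, which removes the vertical offset between lines.

```python
def flatten_zero_order(image):
    """Remove the vertical offset between AFM scan lines.

    image: list of scan lines, each a list of height values.
    Zero-order flattening fits a degree-0 polynomial (a constant,
    i.e. the mean height) to each line and subtracts it.
    """
    return [
        [h - sum(line) / len(line) for h in line]
        for line in image
    ]

# Two scan lines with different offsets end up on a common baseline:
# flatten_zero_order([[1, 2, 3], [11, 12, 13]]) → [[-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]]
```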
100 nM staurosporine, 100 nM K252a, 50 nM okadaic acid, 50 nM microcystin, 100 mM potassium phosphate and mammalian protease inhibitor cocktail [Sigma]), lysates were clarified by centrifugation and proteins were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) on 4-12% Bis-Tris gels (NuPAGE minigels, Invitrogen) using MOPS running buffer (Invitrogen), and transferred to polyvinylidene fluoride (PVDF) membranes as previously described [38]. Immunoblots were blocked with 5% nonfat dried milk in phosphate-buffered saline, pH 7.4, and probed with appropriate polyclonal or monoclonal antibodies. When phosphorylation-sensitive antibodies were used, 50 mM sodium fluoride was added to the blocking and primary antibody solutions to prevent dephosphorylation. Primary antibody binding was detected with horseradish peroxidase-conjugated anti-mouse and anti-rabbit secondary antibodies (Jackson ImmunoResearch) and visualized by chemiluminescence (ECL, Amersham). For relative quantification, the level of immunoreactivity was determined by measuring the optical density (average pixel intensity) of the corresponding band using ImageJ software (ImageJ 1.42q, NIH, http://rsbweb.nih.gov/ij). Isolation of membrane vesicle fractions from axoplasms was done as described before [39]. Two axoplasms from the same squid were prepared and incubated with the appropriate effectors (PrP 106-126 in perfusion buffer or perfusion buffer alone), and vesicle fractions were evaluated by immunoblot using H2 and Trk antibodies. Trk served as a protein loading control and vesicle fraction marker.
Motility studies in isolated squid axoplasm
Axoplasms were extruded from giant axons of the squid, Loligo pealeii, at the Marine Biological Laboratory (MBL) as described previously [36,[40][41][42]. Squid axoplasms were extruded at the Rowe building of the MBL (Woods Hole, MA). Squid were handled in accordance with procedures dictated by the MBL Laboratory Animal Facility. Our laboratory located at the MBL has the proper authorization from the manager of the Marine Resources Department at the MBL for the housing and euthanasia of squid. The MBL Laboratory Animal Facility is USDA registered, and the MBL has an approved animal welfare assurance (A3070-01) from the Office for the Protection of Research Risks. The constitution of the Institutional Animal Care and Use Committee (IACUC) is in accordance with USPHS policy. In brief, a healthy translucent squid of approximately 30 cm in length is held by its mantle, and the head is severed above its eyes using scissors, followed immediately by destruction of the brain without sedation [43]. The mantle is cut open along the midline, and the viscera and pen are removed carefully to avoid damaging the giant axons. The fins are removed with scissors and the skin is peeled off with tissue forceps. The pair of axons lying parallel to the midline on each side of the open mantle is then identified. Both axons are dissected very carefully, avoiding any contact with the giant axons, as this may damage the axolemma. The proximal end of the giant axon (near the stellate ganglion) and the distal end are tied off with two different colors of cotton thread to help assure the orientation of the axons. Once both ends are tied off tightly, the giant axons are cut 5 mm away from the knots to release them (a pair of sister axoplasms). Any connective tissue is gently teased away, with extreme care not to damage the axonal membrane. The axon is placed on a coverslip, the proximal end (white thread) is cut, and the axon is held by the black thread while a polyethylene tube is pressed near the distal end (black thread). The axon is then pulled steadily by the black thread to extrude the axoplasm. Spacers are placed on both sides of the extruded axoplasm and a coverslip is placed on top, without shearing the axoplasm, to create a chamber in which to perfuse the axoplasm with the effectors diluted in buffer X/2. Extruded isolated axoplasms were 400-600 μm in diameter and provided approximately 5 μl of axoplasm. Synthetic PrP peptides, recombinant full-length PrP (PrP) and inhibitors were diluted into X/2 buffer (175 mM potassium aspartate, 65 mM taurine, 35 mM betaine, 25 mM glycine, 10 mM HEPES, 6.5 mM MgCl2, 5 mM EGTA, 1.5 mM CaCl2, 0.5 mM glucose, pH 7.2) supplemented with 2-5 mM ATP; 20 μl of this mix was added to perfusion chambers. Preparations were analyzed on a Zeiss Axiomat with a 100×, 1.3 NA objective and DIC optics. Hamamatsu Argus 20 and Model 2400 CCD cameras were used for image processing and analysis. Organelle velocities were measured with a Photonics Microscopy C2117 video manipulator (Hamamatsu) as described previously [44]. Approximately 20 squid were sacrificed.
Live imaging analysis of mitochondrial axonal transport

Hippocampal neurons from 3 DIV cultures were transfected (Lipofectamine 2000, Invitrogen) with a plasmid encoding a yellow fluorescent protein attached to a mitochondrial targeting sequence (mitoYFP, OriGene) to allow in vivo organelle visualization. 4 hours after transfection, cultures were treated as indicated in each case and placed in a recording chamber at 37˚C and 5% CO2 with phenol red-free Neurobasal medium (Gibco). Time-lapse images of axonal mitochondria were acquired on an Olympus IX81 inverted microscope equipped with a Disk Spinning Unit (DSU), epifluorescence illumination (150 W xenon lamp) and a microprocessor. Fast image acquisition was achieved with a 60× oil immersion objective and an ORCA AG (Hamamatsu) CCD camera. Time-lapse images were recorded over 10 min at a rate of 1 frame every 3 sec. Mitochondrial movement was analyzed visually with the Multi Kymograph plugin of Fiji (http://fiji.sc/) and by counting the proportion and direction of fragments that moved more than 3 μm over an axonal segment of 30 μm. We considered axons, the major processes, to be those processes that were at least 40-50 μm longer than any other process in a given hippocampal neuron. Typically, the axons we measured were between 120-150 μm in length. To confirm the identity of axonal processes, we stained 3 DIV hippocampal neurons with an antibody against the axonal resident protein Tau (tau-1) and alpha-tubulin (S1 Fig). We consider movement towards the tip of the axon the anterograde direction and movement towards the cell body the retrograde direction. Instantaneous velocities of mobile mitochondria were calculated over 3 frames during 10 seconds in the anterograde and retrograde directions. Data correspond to three independent experiments per condition.
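The motility criteria described above can be sketched as follows. This is a hypothetical reimplementation for illustration only; the actual analysis used the Fiji Multi Kymograph plugin, and the function and threshold names here are our own.

```python
FRAME_INTERVAL_S = 3.0   # one frame every 3 s, as in the acquisition above
MOVE_THRESHOLD_UM = 3.0  # fragments moving more than 3 µm count as mobile

def classify_fragment(positions_um):
    """Classify one mitochondrion from its per-frame positions (µm).

    Positions increase toward the axon tip, so a positive net
    displacement is anterograde and a negative one is retrograde.
    """
    net = positions_um[-1] - positions_um[0]
    if net > MOVE_THRESHOLD_UM:
        return "anterograde"
    if net < -MOVE_THRESHOLD_UM:
        return "retrograde"
    return "stationary"

def instantaneous_velocities(positions_um, window_frames=3):
    """Velocity (µm/s) over sliding windows of `window_frames` frames."""
    dt = window_frames * FRAME_INTERVAL_S
    return [
        (positions_um[i + window_frames] - positions_um[i]) / dt
        for i in range(len(positions_um) - window_frames)
    ]
```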
Purification of membrane vesicle fractions from squid axoplasms by iodixanol vesicle flotation assay
After incubating axoplasms with the appropriate effectors (10 μM PrP 106-126 or PrP scrambled) for motility assays in X/2 buffer plus 1 mM ATP in a 25 μl final volume, after 50 minutes the axoplasms were moved to a low-protein-binding 1.5 ml centrifuge tube containing 200 μl of homogenization buffer [10 mM HEPES, pH 7.4, 1 mM EDTA, 0.25 M sucrose, 1/100 protease inhibitor cocktail for mammalian tissue (Sigma; No. P8340), 1/100 phosphatase inhibitor cocktail set II (Calbiochem; No. 524627), 2 μM K252a, and 1 μM PKI], and homogenized by three passages through a 27G needle and two passages through a 30G needle attached to a 1 ml syringe. Axoplasm homogenates were adjusted to 30% iodixanol by mixing 200 μl of axoplasm homogenate with 300 μl of solution D (50% (w/v) iodixanol (Sigma), 10 mM MgCl2, 0.25 M sucrose). A 500 μl layer of solution E (25% (w/v) iodixanol, 10 mM MgCl2, 0.25 M sucrose) was gently loaded on top of the lysate adjusted to 30% iodixanol, followed by a 100 μl layer of solution F (5% (w/v) iodixanol, 10 mM MgCl2, 0.25 M sucrose). Samples were centrifuged at 250,000g for 30 minutes at 4˚C in an RP55S Sorvall rotor. Following the centrifugation, 200 μl, which contained the vesicles/membranes, was removed from the top and transferred to a new 1.5 ml centrifuge tube. 1.2 ml of cold methanol was added and incubated on ice for 60 minutes, then centrifuged at 14,000 RPM in a tabletop centrifuge for 30 minutes. We resuspended the precipitated vesicle/membrane fraction pellets in 40 μl of 1% SDS using an orbital rotor for 1 hour at 300 RPM. 10 μl of 6x Laemmli buffer was added, and 15 μl of each sample was analyzed by immunoblotting.
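The gradient arithmetic above (200 μl of homogenate brought to 30% iodixanol with 300 μl of 50% solution D) can be checked with a small helper; this is an illustrative calculation only, and the function name is ours.

```python
def mixed_concentration(components):
    """Final % (w/v) of a mixture.

    components: iterable of (volume_ul, percent_wv) pairs.
    The solute contributed by each component scales with volume × percent;
    dividing by the total volume gives the final percentage.
    """
    total_volume = sum(volume for volume, _ in components)
    total_solute = sum(volume * percent for volume, percent in components)
    return total_solute / total_volume

# 200 µl homogenate (0% iodixanol) + 300 µl solution D (50% w/v)
# → 30% iodixanol, as stated in the protocol:
# mixed_concentration([(200, 0), (300, 50)]) → 30.0
```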
CK2 in vitro kinase assay
The in vitro kinase assay mixture contained, in a 50 μl final volume: 100 μM CK2 synthetic R3A2D2SD5 peptide, 2 U (1.05 ng) CK2αβ (NEB Cat# P6010S), 1X reaction buffer (20 mM Tris-HCl, 50 mM KCl, 10 mM MgCl2, pH 7.5 at 25˚C), and 100 μM cold ATP containing 1.5 mCi [γ-32P]ATP (1 Ci = 37 GBq), brought to a final 50 μl with 20 mM HEPES, pH 7.4. We added the different PrP constructs (PrP-FL and PrP 106-126) at 2 μM final concentration. Incubation was carried out for 20 minutes at 30˚C. Reactions were stopped by the transfer of 10 μl of the reaction to P81 phosphocellulose circles, which were washed three times in 75 mM phosphoric acid, dried, and analyzed by scintillation counting.
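Converting the scintillation counts from the spotted aliquot into a phosphotransfer rate follows from the known specific activity of the ATP pool. The sketch below is a generic calculation under the assay geometry described above (10 μl spotted from a 50 μl reaction containing 100 μM ATP, 20 min incubation); it is not the authors' analysis code, and the background handling is an illustrative assumption.

```python
def kinase_activity_pmol_per_min(sample_cpm, total_input_cpm,
                                 atp_conc_um=100.0, reaction_vol_ul=50.0,
                                 spotted_vol_ul=10.0, time_min=20.0,
                                 background_cpm=0.0):
    """pmol of phosphate transferred per minute in the whole reaction."""
    # pmol of ATP in the spotted aliquot: µM × µl = pmol
    atp_pmol_spotted = atp_conc_um * spotted_vol_ul
    # fraction of the total input counts present in the spotted aliquot
    input_cpm_spotted = total_input_cpm * spotted_vol_ul / reaction_vol_ul
    specific_activity = input_cpm_spotted / atp_pmol_spotted  # cpm per pmol
    transferred_pmol = (sample_cpm - background_cpm) / specific_activity
    # scale back to the full reaction volume and express per minute
    return transferred_pmol * (reaction_vol_ul / spotted_vol_ul) / time_min
```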
Statistical analysis
Statistical comparisons were made using GraphPad Prism 6 software. All experiments were repeated at least three times, using different brain specimens, extruded axoplasms or cell cultures derived from embryos from at least three different rats or mice, and at least 3 different axoplasms. Data represent mean ± SEM. Mean differences were considered significant at p < 0.05. Multiple group comparisons were performed by one-way ANOVA with post hoc Tukey tests. For pair comparisons, Student's t-tests were used.
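The comparisons described above, which the authors performed in GraphPad Prism, can equivalently be run with SciPy; the sketch below is an illustrative equivalent, not the original analysis.

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """One-way ANOVA across groups; Tukey HSD post hoc when significant."""
    f_stat, p_value = stats.f_oneway(*groups)
    result = {"anova_F": f_stat, "anova_p": p_value,
              "significant": p_value < alpha}
    if result["significant"] and len(groups) > 2:
        # Pairwise post hoc comparisons (available in SciPy >= 1.8)
        result["tukey"] = stats.tukey_hsd(*groups)
    return result

def compare_pair(a, b):
    """Student's t-test for a two-condition comparison."""
    return stats.ttest_ind(a, b)
```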
PrP inhibits fast axonal transport
Several reports document FAT deficits in animal models of prion diseases, consistent with the dying-back pattern of degeneration observed in these diseases [11,24,25,45,46]. Various neurotoxic effects were associated with intracellular accumulation of wild-type, non-infectious PrP-FL [5], but whether PrP-FL could directly affect FAT was not previously tested. Towards this end, we performed vesicle motility assays in isolated squid axoplasms. By using video-enhanced contrast DIC microscopy, the isolated axoplasm preparation allows for accurate quantitation of anterograde (conventional kinesin-dependent) and retrograde (cytoplasmic dynein-dependent) FAT rates [40,47]. Because the plasma membrane is removed from the axon, both recombinant forms of PrP and PrP-derived synthetic peptides (Fig 1A) can be perfused into the axoplasm and their effect on FAT directly evaluated [40].
Perfusion of PrP-FL protein in axoplasm (2μM) triggered a significant reduction in both anterograde and retrograde FAT (Fig 1B), and a similar inhibitory effect was also observed when PrP-FL was perfused at much lower concentration (PrP-FL 100nM, S2 Fig). This finding prompted us to map specific PrP-FL domains mediating the toxic effect. The positively charged central domain (CD, amino acids 94-134) has been shown to play a role in the neurotoxic effects elicited by pathogenic forms of PrP [49][50][51]. Li and coworkers showed that the PrP residues 105-125 may constitute a neurotoxic functional domain [49]. Furthermore, Simoneau and coworkers determined that the 106-126 hydrophobic domain at the surface of oligomeric full length PrP was essential for toxicity [12]. Extending these findings, experimental data documented toxic effects of a PrP peptide encompassing residues 106-126 on primary hippocampal, cortical and cerebellar cultured neurons [52][53][54][55][56][57]. Together, these findings prompted us to evaluate whether the CD domain may mediate the toxic effect of PrP-FL on FAT. Consistent with this possibility, recombinant PrP-ΔCD did not affect FAT when perfused in axoplasm ( Fig 1C). Further, a synthetic peptide comprising amino acids 106-126 of PrP (PrP 106-126 ) triggered a dramatic inhibition of FAT (Fig 1D), whereas a scrambled version of this peptide (PrP-Scram) did not (Fig 1E). Quantitative analysis of FAT average rates obtained from 30-50 minutes after perfusion demonstrated a significant reduction in both anterograde and retrograde FAT rates induced by PrP-FL and PrP 106-126 , but not by control PrP-Scram or PrP-ΔCD (Fig 1F and 1G, and S1 Table). Collectively, these experiments indicate that PrP-FL inhibits FAT, and that the CD of PrP c is both necessary and sufficient to trigger this toxic effect.
PrP induces alterations in mitochondrial axonal transport
Based on results from experiments in Fig 1, we next evaluated whether PrP alters FAT of mitochondria in mammalian cultured neurons. Because labeling of mitochondria with Mito Tracker Red or tetramethylrhodamine ethyl ester dyes can interfere with mitochondrial mobility [58,59], we transfected primary mouse embryonic hippocampal neurons in culture with a plasmid encoding a mitochondrial resident protein fused with yellow fluorescent protein (mito-YFP). At day 3 in vitro (3 DIV), we incubated transfected neurons for one hour with 3μM PrP 106-126 (Fig 2A) or with control PrP-Scram (Fig 2B), and analyzed mitochondrial motility for 10 minutes using time-lapse microscopy.
Consistent with the marked reduction of FAT observed in axoplasms treated with PrP 106-126 peptide, kymograph analysis revealed a marked reduction of mitochondria mobility in neurons treated with PrP 106-126 (Fig 2A), compared to neurons treated with PrP-Scram ( Fig 2B). Specifically, the average distance traveled in the anterograde direction was significantly reduced in PrP 106-126 (1.22± 0.33μm) compared to PrP-Scram treated cell (5.17± 1.23μm) (Fig 2C). Similarly, the percentage of motile mitochondria in either direction was significantly reduced in
The protein kinase CK2 mediates PrP-induced FAT inhibition
Several phosphotransferases have been identified that regulate FAT by modifying specific functional motor protein subunits [60][61][62][63]. Among protein kinases tested in the isolated axoplasm preparation, casein kinase 2 (CK2) inhibited FAT with an inhibitory profile similar to that induced by PrP-FL and PrP 106-126 (Fig 1B and 1C) [27], prompting us to evaluate whether the inhibition of FAT induced by PrP-FL or PrP 106-126 was mediated by CK2. To this end, we co-perfused PrP-FL and PrP 106-126 with 2-dimethylamino-4,5,6,7-tetrabromo-1H-benzimidazole (DMAT), a highly specific and potent ATP-competitive CK2 inhibitor [64] that effectively inhibits CK2 activity in the axoplasm preparation [27]. Remarkably, co-perfusion of either PrP-FL or PrP 106-126 with DMAT completely prevented the inhibitory effect on FAT (Fig 3A and 3B). Quantitation of average FAT rates 30 to 50 minutes after perfusion confirmed this result.

[Fig 1 legend fragment] ...perfusion, compared to perfusing X/2 buffer alone [48] (data not shown in this manuscript). (C) Perfusion of a full-length PrP construct lacking amino acids 111 to 134 (PrP-ΔCD) showed no effect on FAT. (D) Perfusion with PrP 106-126, a 21 amino acid peptide corresponding to the PrP CD, inhibited bidirectional FAT with a profile of inhibition almost identical to the one induced by PrP-FL. (E) Perfusion of the PrP 106-126-Scram control peptide, encompassing the same amino acids but arranged in a scrambled order, did not alter FAT. Graphs showing quantitation of average rates of anterograde (F) and retrograde (G) FAT obtained 30-50 minutes after PrP perfusion indicate that PrP-FL and the 21 amino acid peptide corresponding to its central domain induce bidirectional FAT inhibition when perfused. The letter "n" represents the number of independent axoplasms perfused per construct. Light blue and green dots in graphs F and G represent outlier values.
Interestingly, these results were consistent with prior reports showing that PrP interacts with CK2 and modulates its activity [32,33]. To evaluate whether PrP-FL could directly activate CK2, we conducted in vitro kinase assays using recombinant CK2 and the highly specific CK2 peptide substrate R3A2D2SD5 as a radioactive phosphate acceptor [65]. Remarkably, PrP-FL induced significant activation of the CK2 tetramer (Fig 3E), and similar activation was triggered by PrP 106-126 (Fig 3E), suggesting that the inhibitory effect of PrP-FL and PrP 106-126 on FAT may result from direct activation of CK2.
Next, we evaluated whether CK2 mediated the inhibition of mitochondria mobility induced by PrP 106-126 in mammalian neurons. To this end, we simultaneously treated primary neurons in culture with both PrP 106-126 (Fig 3F) and the CK2 inhibitor DMAT (2μM; Fig 3G) and measured mitochondrial mobility as in Fig 2. Kymograph analysis showed a consistent increase of mitochondria mobility in neurons co-incubated with PrP 106-126 plus DMAT (Fig 3G, lower panel) compared to neurons treated with PrP 106-126 and DMSO vehicle (Fig 3F, lower panel). As expected, treatment of neurons with DMAT alone did not alter mitochondria FAT (S3 Fig). Quantitative analysis confirmed that average distances traveled by individual mitochondria in either the anterograde (6.56±2.09μm) or retrograde (11.05±2.74μm) direction were significantly higher in neurons co-treated with DMAT compared to average distances of individual mitochondria from PrP 106-126 treated neurons (anterograde: 1.21±0.38μm; retrograde: 3.97±1.44μm) (Fig 3H). Additionally, the percentage of moving mitochondria (Fig 3I) was significantly higher in DMAT-treated neurons (45.51±7.29%) than in PrP 106-126 treated neurons (14.72±8.17%). Collectively, results from these experiments indicated that the inhibitory effects of PrP-FL and PrP 106-126 on FAT are mediated by CK2.
PrP-induced CK2 activation promotes conventional kinesin phosphorylation and release from vesicular cargoes

CK2 has been shown to directly phosphorylate the kinesin light chain (KLC) subunits of the major motor protein conventional kinesin [27, [66][67][68][69]]. Based on that precedent and on results from experiments in Fig 3, we evaluated whether PrP toxicity involves alterations in KLC phosphorylation. To this end, we perfused axoplasms with PrP 106-126 (Fig 4A and 4B) and also incubated primary hippocampal neurons with PrP 106-126 (Fig 4C and 4D).
Discussion
The molecular basis for prion disease (PrD) pathology remains unclear. Although the infectious form of PrP has received the most attention [75], prion infection is quite infrequent in humans. The majority of human cases (99%) are associated with mutations in the gene encoding PrP or occur sporadically [4]. In cases where the PrP Sc conformation is not required to induce pathogenesis, genetic and experimental studies suggest that the spontaneous accumulation of either mutant or wild type PrP can induce neuronal dysfunction and toxicity [4,10,[76][77][78]]. In addition, aggregated, non-infectious oligomeric PrP has also been shown to induce neurotoxicity without involving PrP Sc [4,9,11,12]. Atomic force microscopy-based structural analysis of PrP-FL and PrP 106-126 indicates that both PrP constructs present a globular or oligomeric conformation (S4 Fig). Furthermore, Chiesa and collaborators demonstrated that 5 to 10-fold overexpression of wild type PrP can cause neuronal dysfunction and synaptic abnormalities through an aggregated, non-infectious PrP species [11]. Results from multiple laboratories indicate that cytosolic accumulation of full-length cellular PrP (PrP-FL) suffices to induce progressive neuronal toxicity and severe ataxia in cultured neurons and in living mice [5,[13][14][15][16]]. This toxic phenomenon associated with increased dosage of PrP is not restricted to prion diseases, as other human progressive neuropathies, including Alzheimer's and Parkinson's diseases, are associated with aggregation of proteins induced by overexpression of wild type polypeptides [79][80][81]. Collectively, these observations suggest that a variety of molecular factors, including increased PrP dosage and conformation-dependent conversion of PrP to a neurotoxic species, may underlie prion disease pathology, thus providing a common pathological framework for seemingly diverse prion disease variants.
There is no consensus on the specific cause of neuronal degeneration in PrD, but various pathological mechanisms have been postulated, including mitochondrial dysfunction and activation of neuronal apoptosis [82]. Although activation of apoptotic pathways damages neurons in prion diseases [55,83,84], this is a generic explanation that does not provide insight into pathogenic mechanisms. Moreover, Chiesa and collaborators demonstrated that abolishing neuronal apoptosis in a transgenic model of familial prion disease effectively prevents neuronal loss, but does not prevent dying-back axonopathy and synaptic loss or delay the clinical symptoms [85]. This evidence suggests that, while apoptosis may be a component of prion diseases, changes in other vital neuronal processes may trigger the loss of synapses and the clinical symptoms characteristic of these diseases.
[Fig 4 legend fragment] ...PrP-Scram treated axoplasms and neurons, respectively. Note a significant reduction (B) of immunoreactivity when neurons were treated with PrP 106-126 compared to PrP-Scram (n = 5 independent experiments; p = 0.0313, significance assessed at p < 0.05). (D) Significant reduction of 63-90 immunoreactivity in axoplasms incubated with PrP 106-126 compared to control PrP-Scram (n = 3 independent experiments; p = 0.0355, significance assessed at p < 0.05). (E) Vesicles purified by vesicle flotation assays from sister axoplasms perfused with control PrP-Scram or PrP 106-126 synthetic peptide were assayed by Western blot for KHC and TrkB. TrkB was used as a membrane protein marker and loading control. (F) Quantitation bar graphs show a significant reduction of kinesin-1 association with purified vesicles in PrP 106-126 incubated extruded axoplasms compared to control PrP-Scram treated axoplasms (n = 3 independent experiments; significance assessed at p < 0.05). Taken together, these experiments suggest that PrP 106-126 increases the intracellular activity of CK2, which in turn results in KLC phosphorylation and kinesin-1 release from its cargo vesicles. https://doi.org/10.1371/journal.pone.0188340.g004

Experimental evidence suggests that alterations in FAT might represent an early pathogenic event in prion diseases [45,46,85,86] as well as other disorders linked to misfolded proteins [26,73,74,[87][88][89][90][91]]. Sanchez-Garcia and co-workers showed that neurons expressing the PrP-M205,212S mutant form exhibit disrupted FAT and reduced synaptic accumulation of specific synaptic proteins important for axonal growth, vesicular fusion, secretion and neurotransmission [92,93]. Similarly, Senatore and coworkers showed that mutant PrP suppressed neurotransmission in cerebellar granule neurons by altering the delivery of voltage-gated calcium channels [94].
Finally, Ermolayev and coworkers recently showed a direct and early link between prion clinical symptoms and FAT inhibition induced by different prion strains [24]. However, these studies did not provide mechanisms by which PrP affects FAT. Data from Ma and collaborators showed that accumulation of full length PrP (PrP-FL) within the cytosol is neurotoxic in vitro and in vivo [5,7]. Evidence from other groups showed that intracellular accumulation of wild type PrP leads to neuronal dysfunction and synaptic abnormalities [11,56]. To test whether the neurotoxicity of intracellular PrP-FL was directly associated with FAT inhibition, we took advantage of the isolated squid axoplasm preparation. This unique ex vivo model facilitated the initial discovery of kinesin-1 [95], and kinase-based regulatory mechanisms for FAT [96,97].
Here we present direct experimental evidence that PrP-FL, at physiologically plausible concentrations (100nM to 2μM), is a strong inhibitor of FAT. Previous work had mapped the toxicity of PrP primarily to the central domain (CD) [75,[98][99][100]]. Consistent with these studies, the observed effects of PrP on FAT required the CD, as perfusion of a PrP construct lacking most of the central domain (PrP-ΔCD) did not affect FAT. Furthermore, perfusion of a cell-permeable 21-mer synthetic peptide corresponding to the positively charged CD (PrP 106-126) [101] showed a toxic inhibitory effect comparable to that elicited by PrP-FL. Together, these ex vivo experiments suggest that the PrP CD is necessary and sufficient to inhibit FAT.
Under normal physiological conditions, the concerted activity of kinases and phosphatases regulates FAT by controlling the functional activities of molecular motors [60][61][62][63]. Under pathological circumstances, misregulated signaling pathways can alter motor functions, leading consequently to altered FAT and dysfunctional synaptic transmission [26-28, 36, 39, 48, 67, 71, 73, 74, 87, 89, 102-108], harmful events that result in progressive synaptic dysfunction and dying back neuropathy. The ability of PrP c to inhibit anterograde FAT at concentrations lower than that of conventional kinesin [109] argues against the possibility that PrP effects resulted from steric interference. Instead, alterations in regulatory signaling pathways for FAT appeared to be a more plausible mechanism.
Cell biological and pharmacological data show that toxic PrP can affect the activity of various phosphotransferases capable of regulating FAT, including GSK3β. Here we showed that co-perfusion of CK2 inhibitors with either PrP-FL or PrP 106-126 completely abolishes the inhibitory effects of PrP on FAT. Although the CK2 inhibitor DMAT abolished the inhibition of mitochondria anterograde FAT induced by PrP, it also showed activation of retrograde FAT in combination with PrP 106-126, an unexpected result that deserves future analysis. Although the precise mechanism by which PrP activates CK2 remains to be determined, in vitro CK2 kinase assays showed that recombinant PrP-FL and the synthetic peptide PrP 106-126 are potent CK2 activators, suggesting that the activation of CK2 in vivo may result from a direct interaction between PrP and CK2.
Deficits in FAT of membrane-bounded organelles (MBOs) are responsible for, or at least significant contributors to, multiple human neuropathies displaying dying back degeneration of neurons, including hereditary spastic paraplegia, Alzheimer's, Parkinson's, Huntington's, amyotrophic lateral sclerosis, and prion diseases [23,24,26,27,36,73,74,87,106,107,[110][111][112]]. Mitochondria are critical MBOs transported in neuronal cells, as they generate the ATP needed for a wide variety of vital metabolic processes including FAT, neuronal growth, regeneration, and survival [113,114]. A large body of experimental data has documented deficits in FAT of mitochondria in various human neuropathies associated with altered kinase activities [73,[115][116][117]]. Consistent with these reports, we showed a consistent reduction of mitochondria motility in PrP 106-126 treated neurons versus PrP-Scram treated ones: both the percentage of moving mitochondria and the average distance traveled were reduced. Although the regulatory mechanisms that govern the transport of membranous vesicles and mitochondria might not be the same [117], the effects were not restricted to mitochondria alone, as there was a marked reduction in the bulk of vesicles moving in PrP-treated squid axoplasms. Interestingly, we observed a different profile of inhibition between mitochondria and squid axoplasm vesicles. While PrP affected both directions of transport for these vesicles, it inhibited anterograde FAT to a greater extent than retrograde FAT for mitochondria, likely due to differences in the regulatory mechanisms between these MBOs. Consistent with evidence suggesting that PrP can activate CK2 both in vitro and in vivo and that activation of CK2 results in FAT inhibition, the potent and specific pharmacological CK2 inhibitor DMAT prevented PrP-induced FAT inhibition both in isolated axoplasm and in cultured neurons.
The remaining challenge was to determine how PrP and CK2 can compromise FAT. Previous studies had indicated that phosphorylation of KLCs by GSK3β and CK2 promotes release of conventional kinesin from MBOs and FAT inhibition [27,34,60]. Accordingly, biochemical experiments in this work revealed increased KLC phosphorylation in squid axoplasms and cultured mammalian neurons treated with PrP, as revealed by reduced immunoreactivity for 63-90, a monoclonal antibody that recognizes a CK2 dephosphoepitope within KLCs.
GSK3β and CK2-mediated phosphorylation of KLCs was shown to promote detachment of conventional kinesin from MBOs [27,39,60]. Consistent with these precedents, levels of kinesin-1 associated with axonal MBOs were significantly reduced in axoplasms perfused with PrP 106-126, relative to PrP-Scram-perfused ones. Similar results were observed upon perfusion of recombinant CK2 and oligomeric amyloid beta (oAβ), which induces endogenous CK2 activation [27]. That PrP-FL inhibited both anterograde and retrograde FAT suggests that CK2 may also affect cytoplasmic dynein [66], but further studies are needed to address this possibility.
Experiments in this work are in agreement with the idea that PrP-FL and its central peptide domain PrP 106-126 inhibit FAT through a molecular mechanism involving abnormal activation of endogenous CK2, phosphorylation of KLCs, and release of conventional kinesin from transported MBO cargoes (Fig 5). In all likelihood, CK2 substrates other than molecular motors may contribute to prion pathology. CK2 has hundreds of reported substrates [118] many of which may contribute to axonal degeneration and synaptic loss in the context of prion disease [119]. Accordingly, oAβ-mediated increases in CK2 activity disrupt synaptic transmission and CK2 inhibitors restore it [105].
Results from this work demonstrate that pharmacological inhibition of CK2 prevents FAT inhibition induced by PrP. Thus, CK2 inhibition may represent a novel therapeutic intervention for prionopathies and other progressive neuropathies associated with abnormal CK2 activity and defects in FAT [27,105,120]. The recent development of blood-brain barrier permeable and highly selective CK2 inhibitors capable of accessing the brain makes this notion particularly compelling [121,122]. Finally, our results provide a basis for exploring the more complex pathology of a PrP transgenic mouse model in the near future.
Single Cell Analysis of Gastric Cancer Reveals Non-Defined Telomere Maintenance Mechanism
Telomere maintenance mechanisms (TMMs) are important for cell survival and homeostasis. However, most related cancer research studies have used heterogeneous bulk tumor tissue, which consists of various single cells, and the cell type properties cannot be precisely recognized. In particular, cells exhibiting non-defined TMM (NDTMM) indicate a poorer prognosis than those exhibiting alternative lengthening of telomere (ALT)-like mechanisms. In this study, we used bioinformatics to classify TMMs by cell type in gastric cancer (GC) in single cells and compared the biological processes of each TMM. We elucidated the pharmacological vulnerabilities of NDTMM type cells, which are associated with poor prognosis, based on molecular mechanisms. We analyzed differentially expressed genes in cells exhibiting different TMMs in two single-cell GC cohorts and the pathways enriched in single cells. NDTMM type cells showed high stemness, epithelial–mesenchymal transition, cancer hallmark activity, and metabolic reprogramming with mitochondrial abnormalities. Nuclear receptor subfamily 4 group A member 1 (NR4A1) activated parkin-dependent mitophagy in association with tumor necrosis factor-alpha (TNFA) to maintain cellular homeostasis without TMM. NR4A1 overexpression affected TNFA-induced GC cell apoptosis by inhibiting Jun N-terminal kinase/parkin-dependent mitophagy. Our findings also revealed that NR4A1 is involved in cell cycle mediation, inflammation, and apoptosis to maintain cell homeostasis, and is a novel potential therapeutic target in recalcitrant GC.
Background
Cell immortality, one of the main characteristics of cancer cells, is mediated through telomere maintenance mechanisms (TMMs) [1], which are activated dependently or independently of the telomerase (TEL) enzyme [2]. In approximately 85% of human tumors, TEL ensures telomere maintenance, whereas the ALT mechanism only occurs in 10-15% [3].
Telomere maintenance has been widely investigated in pan-cancer studies [4]. A previous study using bulk RNA-sequencing (RNA-seq) analysis of pan-cancer data from the Cancer Genome Atlas (TCGA) to classify TMMs into four types was the first to identify non-defined TMM (NDTMM) [5]. NDTMM is associated with a poor prognosis in glioblastoma [6]. Although TMM is essential for the immortalization of cancer cells, NDTMM has been shown to occur frequently [7], with no TMM observed in the actual bulk tumor tissue. In particular, alternative lengthening of telomeres (ALT) is associated with poor prognosis of gastric cancer (GC) and a stem-like molecular mechanism [8]. However, most previous studies used bulk RNA-seq, which limits their relevance and applicability because specific cell types were not considered [5]. Each TMM differs in its immortalization process and progression into aggressive drug-resistant cancer cells. Although ALT-positive (ALT+) cells have been reported in fibroblasts, few studies have investigated cell types and TMM in other immune or stromal cells. Previous studies investigating telomere maintenance in ALT cells analyzed the proliferative potential and telomere dynamics of GM847 ALT cells (SV40-immortalized human skin fibroblasts) co-cultured with normal fibroblasts or TEL+ immortalized human cells [7]. The results revealed that ALT phenotypic repressors were present in normal and some TEL+ immortal cells [9]. Another study identified ALT-associated PML bodies (APBs) in TERC+ keratinocytes and squamous cell carcinomas (SCCs) of mice, demonstrating that ALT and TEL can coexist in the same cell [10]. In the current study, we used a bioinformatics approach to comprehensively classify TMM by cell type at the single-cell level, and comparatively analyzed the biological processes of each TMM type.
Data Preprocessing and Pathway Enrichment Analysis
Two GC single-cell cohorts were used in this study [11]. Data for eight original tumor samples of single cells collected from six patients with stomach cancer were downloaded from the Ji research group website (https://dna-discovery.stanford.edu/research/datasets/ (accessed on 1 January 2022)) [11]. Seurat software v. 4.0.3 was used for the single-cell analysis [12]. Quality filters were applied to 12,422 cells from the eight samples. We retained cells that met the following criteria for downstream analysis: (1) a unique feature count of ≥200 and (2) a mitochondrial read percentage of <5%.
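The quality filter described above can be sketched in NumPy. This is a simplified stand-in for Seurat's cell subsetting, not the authors' code; the toy matrix, gene assignments, and relaxed threshold are invented for illustration, and we assume the mitochondrial criterion retains cells below the 5% cutoff:

```python
import numpy as np

def qc_filter(counts, mito_mask, min_features=200, max_mito_pct=5.0):
    """Return a boolean mask of cells passing QC.

    counts: (cells x genes) raw count matrix
    mito_mask: boolean array over genes marking mitochondrial genes
    """
    n_features = (counts > 0).sum(axis=1)  # unique feature count per cell
    mito_pct = 100.0 * counts[:, mito_mask].sum(axis=1) / counts.sum(axis=1)
    return (n_features >= min_features) & (mito_pct < max_mito_pct)

# Toy matrix: 3 cells x 4 genes, last gene treated as "mitochondrial";
# min_features lowered to 3 so the toy example is meaningful
counts = np.array([[5, 3, 2, 0],    # 3 features, 0% mito  -> keep
                   [4, 0, 0, 6],    # 2 features, 60% mito -> drop
                   [1, 1, 1, 0]])   # 3 features, 0% mito  -> keep
keep = qc_filter(counts, np.array([False, False, False, True]), min_features=3)
```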
We identified eight different cell types. Single-cell libraries were created using the Chromium Single-Cell 3′ Library & Gel Bead kit v2 (10× Genomics) according to the manufacturer's instructions, and then sequenced on Illumina sequencers (i.e., NovaSeq, HiSeq, and NextSeq) using a single-cell RNA-seq (scRNA-seq) protocol. We examined 84 metabolic pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database to determine their activity. Markov affinity-based graph imputation of cells (MAGIC) was used to impute missing values and restore the structure of the single-cell data [13]. We used the single-sample gene set enrichment method in the R package gene set variation analysis (GSVA) [14] to acquire the pathway activity of each cell.
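As a rough, rank-based stand-in for the per-cell pathway activity scores that GSVA/ssGSEA produce (not the actual GSVA algorithm, which uses a kernel-smoothed enrichment statistic), one can average the within-cell expression ranks of a pathway's genes; the matrix and gene indices below are invented:

```python
import numpy as np

def pathway_activity(expr, pathway_genes):
    """Mean within-cell expression rank of pathway genes, scaled to [0, 1].

    expr: (genes x cells) expression matrix
    pathway_genes: list of integer indices of the pathway's genes
    """
    # Rank each gene within its cell (0 = lowest expression)
    ranks = expr.argsort(axis=0).argsort(axis=0)
    return ranks[pathway_genes, :].mean(axis=0) / (expr.shape[0] - 1)

# Toy data: 4 genes x 2 cells; the "pathway" is genes 2 and 3,
# highly expressed in cell 0 and lowly expressed in cell 1
expr = np.array([[0.1, 3.0],
                 [0.5, 2.0],
                 [2.0, 1.0],
                 [4.0, 0.2]])
scores = pathway_activity(expr, [2, 3])
```

A score near 1 means the pathway's genes sit at the top of that cell's expression ranking, which is the intuition behind calling a pathway "active" in a given cell.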
We calculated the GSVA score to analyze TMM pathway enrichment, performing 10,000,000 permutation runs to assess accuracy and significance (false discovery rate [FDR] < 0.01). The TMM-related gene signature was obtained from a previous study [15]. The second single-cell cohort (5927 cells) was downloaded from the Gene Expression Omnibus (GEO) database (GSE150290) and comprised tumor samples from 23 patients [16]. The method used to select marker genes for cell type classification was described in previous studies [11,16].
Analysis of DEGs and Single-Cell Meta-Analysis
DEsingle was used to examine the differentially expressed genes (DEGs) between cells based on their TMM type [20]. For the significantly affected genes selected by the DEG analysis, we analyzed enriched pathways with METASCAPE [21], together with protein-protein interactions (PPI), molecular complex detection (MCODE), transcription factors (TFs), and other analyses. The R packages RaceID and StemID were used to assess single-cell stemness [22]. Stemness-related signatures were obtained from previous studies [23], and cell types in the Y497 bulk microarray samples [18] were categorized using the CIBERSORT software [24].
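DEsingle models single-cell dropout explicitly; as a much simpler illustration of the idea of ranking DEGs between two TMM groups, one can order genes by the absolute log2 fold change of group mean expression (the pseudocount and toy matrices here are invented, and this is not the DEsingle method):

```python
import numpy as np

def rank_by_lfc(group_a, group_b, pseudocount=1.0):
    """Order gene indices by |log2 fold change| of group mean expression.

    group_a, group_b: (cells x genes) expression matrices for two TMM types
    """
    lfc = (np.log2(group_a.mean(axis=0) + pseudocount)
           - np.log2(group_b.mean(axis=0) + pseudocount))
    order = np.argsort(-np.abs(lfc))  # most-changed genes first
    return order, lfc

# Toy data: gene 0 is up in group A, gene 1 is unchanged
a = np.array([[10.0, 2.0], [12.0, 2.0]])
b = np.array([[1.0, 2.0], [1.0, 2.0]])
order, lfc = rank_by_lfc(a, b)
```

In practice one would additionally require a significance test (DEsingle reports adjusted p-values) before calling a gene differentially expressed.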
Drug Target Analysis
To investigate gene regulatory interactions using genes overexpressed in the NDTMM type as input, we analyzed the DEGs [20] to generate lists of genes with high or low expression in each TMM type and queried ConsensusPathDB (CPDB; http://cpdb.molgen.mpg.de/ (accessed on 29 March 2022)) [25]. Target genes and drugs were identified through a search of the Cancer Cell Line Encyclopedia (CCLE) [26].
Single-Cell Analysis Revealed TMM Type in GC
We analyzed single-cell data from two GC cohorts (12,422 and 5927 cells). By comparing cell type categorization and TMM type in single-cell data subjected to imputation, we determined the TMM types of single cells. TMM types were classified into four categories in order to determine the most prevalent type in each cell, and six pathways were assessed using the gene profile from our previous pan-cancer TMM study [5].
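One way to read "determine the most prevalent type in each cell" is as thresholding the per-cell TEL and ALT pathway scores. The decision rule below is a hypothetical sketch, not the authors' classifier; the cutoffs and score scale are invented for illustration:

```python
def assign_tmm(tel_score, alt_score, tel_cut=0.5, alt_cut=0.5):
    """Classify a cell into one of the four TMM types from its pathway scores.

    tel_score/alt_score: per-cell pathway activity (assumed scaled to [0, 1])
    tel_cut/alt_cut: hypothetical activity cutoffs
    """
    tel_on = tel_score >= tel_cut
    alt_on = alt_score >= alt_cut
    if tel_on and alt_on:
        return "TEL+ALT-like"
    if tel_on:
        return "TEL"
    if alt_on:
        return "ALT-like"
    return "NDTMM"  # neither pathway active: non-defined TMM
```

Under this reading, NDTMM is the residual category: cells in which neither the telomerase nor the ALT pathway signature is appreciably active.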
In adenocarcinoma cells, telomerase (TEL) and ALT activity were observed simultaneously, and ALT activity was high in B, T, and NK cells. Telomerase activity was relatively higher in endothelial cells and granulocytes than in other cells (Figure 1A). This trend was similar to that observed in the second GC single-cell cohort (Figure 1B), in which telomerase activity was higher in endothelial cells than in other cells (FDR < 0.01). In gland mucous cells (GMC) and fibroblasts, ALT activity was relatively high. According to the analysis of the frequency of TMM types in different cells, 39% and 10% of adenocarcinoma cells showed the ALT-like and NDTMM types, respectively, whereas very few cells exhibited telomerase activity (Figure 1C).
This was similar to a study that reported approximately 30-40% and 30% ALT activity in a GC cohort and in tumor cells of another cohort, respectively [5]. Specifically, in endothelial cells, granulocytes, and macrophages, the number of cells exhibiting ALT activity was low, whereas the number exhibiting telomerase activity was relatively high. These results suggest that the TMM differed between cell types, with each cell type using its preferred mechanism for telomere maintenance. In addition, the various cell types also included cells with a previously unknown TMM (NDTMM).
We also analyzed the relationship of the four TMM and specific cell types to EMT. NDTMM showed the highest EMT activity in adenocarcinoma cells. In fibroblasts from the same cohort, telomerase exhibited the highest EMT activity. In another cohort, pit mucous cells (PMC) showed a similar tendency to that of adenocarcinoma cells, and although it was not significantly different from the ALT-like type, EMT was highest in the NDTMM type. In GMCs, ALT-like and NDTMM types exhibited similar trends ( Figure 1D,E). We considered TMM to be closely related to cell growth, maintenance, differentiation, and proliferation, and a clear difference existed in MKI67 expression.
The ALT-like and NDTMM type cells showed relatively low proliferation in adenocarcinoma; however, telomerase (TEL) and TEL+ALT-like type cells showed high proliferation. A similar trend was observed in PMC cells, and TEL+ALT-like type cells predominantly showed high proliferation (Figure 1F). Next, we analyzed the correlation between immune cells and TMM in a 497 Yonsei Hospital cohort (Y497) from the bulk transcriptome data, and the correlations were clearly separated into positive and negative. Our analysis revealed that ALT chromatin-like type-related cells were T regulatory, naïve B, and memory B cells, and the remaining immune cells were positively correlated with both the TEL and ALT pathways (Supplementary Figure S1A).
We stratified patients by TMM type using bulk transcriptome data. In both the 497 Yonsei [18] and TCGA-STAD cohorts, the four TMM types were not significantly distinguishable; however, the same pattern showed the best prognosis in the TEL only group, and the poorest prognosis was observed in the NDTMM type patients (Supplementary Figure S1B). Therefore, these results indicate that the TMM of each cell type, which was not identified in the bulk sample, was detected at the single-cell level. Thus, TMM heterogeneity was observed by cell type.
Cancer Hallmark Activity and TMM at Single-Cell Level
We divided adenocarcinoma cells into four TMM types and analyzed the differences in each cell type with respect to the characteristics observed in the bulk (Figure 2A). Initially, 924 adenocarcinoma cells from the first cohort were classified into four TMM types, and 50 cancer hallmark pathways were analyzed. Contrary to our expectations, >60% of the cancer hallmark activity was confirmed in the NDTMM type cells. In the ALT-like type cells, both the KRAS and Wnt/β-catenin signaling pathways were highly enriched (Figure 2B). The TEL type was high but the NDTMM type was low, similar to findings with the ALT-like type (Figure 2H). However, with the necrosis signature, the NDTMM and ALT-like types showed the highest and lowest activity, respectively (Figure 2H). These results confirmed that the NDTMM type cells had short telomeres just before death, possessed high EMT and stemness, and were highly related to cancer hallmarks. Conversely, the ALT-like type cells possessed elongated telomeres, and exhibited low necrosis and entropy.
Differentially Expressed Biological Pathways for TMM Type
Currently, information about other biological pathways for each TMM type in GC is limited, and we previously reported ALT-like type TMM and epigenetic characteristics on the basis of bulk RNA-seq analysis in GC [8]. In this study, differentially enriched pathways for each TMM type at the adenocarcinoma single-cell level were analyzed. The ALT-like type was enriched for NABA ECM-affiliated pathways and regulation of lymphocyte activation, and the NDTMM type was enriched for Parkinson's disease, neutrophil degranulation, and vascular endothelial growth factor-alpha (VEGFA) signaling pathways (Figure 3A). Interestingly, for NDTMM, pathways such as multicellular organismal and cellular homeostasis maintenance were enriched.
In contrast, for the TEL+ALT-like type, pathways such as RNA metabolism, Huntington's disease, and translation were enriched ( Figure 3B). We analyzed the PPI of genes that were differently expressed between the NDTMM and ALT-like type using MCODE, and the genes were classified into nine MCODE clusters. Genes were mainly enriched in We predicted the telomere length based on the expression of the promyelocytic leukemia (PML) gene, and determined that the higher the PML expression, the shorter the telomere length and the higher the ability to maintain telomeres. In our adenocarcinoma data, TEL+ALT-like was the most common TMM type and, interestingly, the prevalence of the NDTMM type was higher than that of the ALT-like type. In the ALT-like type cells, the telomeres were predicted to be long, whereas they were assumed to be shortened in the NDTMM type cells ( Figure 1C).
We analyzed the stemness of cells in the PMC using StemID [22], which computes an entropy-based stemness score. Cluster 9 showed the highest entropy, whereas clusters 1 and 7 showed low entropy (FDR < 0.05, Figure 2D). The classification of the four TMM types for each cluster according to the degree of entropy revealed numerous TEL+ALT-like type cells in the high-entropy cluster, whereas ALT-like and NDTMM type cells were relatively common in clusters 1 and 7. In cluster 1, ALT-like and NDTMM type cells accounted for 50% and 20%, respectively (Figure 2E).
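StemID's stemness score is based on transcriptome entropy: roughly, the Shannon entropy of each cell's normalized transcript distribution, with evenly spread expression (high entropy) indicating multipotent, stem-like cells. A simplified illustration of that idea, not StemID's exact implementation:

```python
import math

def transcriptome_entropy(counts):
    """Shannon entropy (nats) of a cell's transcript distribution.
    Uniform expression across genes -> high entropy (stem-like);
    expression concentrated in few genes -> low entropy (differentiated)."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

stem_like = [10, 10, 10, 10]       # evenly spread expression
differentiated = [37, 1, 1, 1]     # dominated by one gene
assert transcriptome_entropy(stem_like) > transcriptome_entropy(differentiated)
```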
We performed a stemness marker analysis of adenocarcinomas in order to determine the stemness marker genes associated with TMM. The Yamanaka factor was mainly high in the TEL+ALT-like type, whereas the expression level of the normal stem cell marker genes (PROM1, ABCG2, and CD34) was high in the NDTMM type. Furthermore, the expression of the cancer stem cell marker gene was high in the NDTMM and TEL+ALT-like types, with similar ratios ( Figure 2F). Next, we analyzed the expression of the oncogenes and tumor suppressor genes. In the case of ALT-like type cells, most genes exhibited low activity. In the NDTMM, TEL, and TEL+ALT-like type cells, the opposite trend was observed ( Figure 2G).
TMM is related to cell survival, and autophagy signature analysis confirmed that the TEL type was high but the NDTMM type was low, similarly to findings with the ALT-like type (Figure 2H). However, with the necrosis signature, the NDTMM and ALT-like types showed the highest and lowest activity, respectively (Figure 2H). These results confirmed that the NDTMM type cells had short telomeres just before death, possessed high EMT and stemness, and were highly related to cancer hallmarks. Conversely, the ALT-like type cells possessed elongated telomeres, and exhibited low necrosis and entropy.
Differentially Expressed Biological Pathways for TMM Type
Currently, information about other biological pathways for each TMM type in GC is limited, and we previously reported ALT-like type TMM and epigenetic characteristics on the basis of bulk RNA-seq analysis in GC [8]. In this study, differently enriched pathways for each TMM type at the adenocarcinoma single-cell level were analyzed. The ALT-like type was enriched for NABA ECM-affiliated pathways and regulation of lymphocyte activation, and the NDTMM type was enriched for Parkinson's disease, neutrophil degranulation, and vascular endothelial growth factor-alpha (VEGFA) signaling pathways ( Figure 3A). Interestingly, for NDTMM, pathways such as multicellular organismal and cellular maintenance homeostasis were enriched.
In contrast, for the TEL+ALT-like type, pathways such as RNA metabolism, Huntington's disease, and translation were enriched ( Figure 3B). We analyzed the PPI of genes that were differently expressed between the NDTMM and ALT-like type using MCODE, and the genes were classified into nine MCODE clusters. Genes were mainly enriched in neutrophil degranulation, generation of precursor metabolites and energy, and ATP metabolism ( Figure 3C). It has been recognized that these genes create an environment for NDTMM type cells to generate a large amount of energy for survival. The maintenance of the gastrointestinal epithelium and epithelial structure of adenocarcinoma cells to maintain survival in a TMM-deficient environment is also of interest.
Gene ontology (GO) term analysis of the nine MCODE clusters revealed that neutrophil degranulation, multicellular organismal homeostasis, and tissue homeostasis were enriched. REG3A, H1F0, and KRT7 were highly expressed in the ALT-like type cells (Supplementary Table S1). We compared genes that were significantly different between the NDTMM and TEL+ALT groups using DEG analysis. In the NDTMM type, the expression of INPP1, TRIM15, FAM83H, FUOM, and ASL was high, and in the TEL+ALT-like type, the expression of NOP56, C8orf59, NUDC, and CD44 was high (Figure 3D).
Based on these results, we conducted an MCODE analysis of the PPI of genes that were highly significantly expressed in the TEL+ALT-like type, which were classified into 12 clusters. These clusters were enriched for RNA metabolism, Huntington's disease, and human immunodeficiency virus (HIV) infection (Figure 3E). Among the TFs identified in the ALT-like and NDTMM types, general transcription factor IIE subunit 2 (GTF2E2) and proteasome 20S subunit beta 5 (PSMB5) were associated with the ALT-like type, and PSMB5 was also found in the NDTMM type (Figure 3F). Specifically, early growth response 2 (EGR2) [27] was identified as a TEL inhibitory TF.
Landscape of Metabolic Reprogramming of TMM Types
In ALT+ cells, telomere length is increased and PGC-1, a key regulator of mitochondrial biogenesis and function, is amplified or overexpressed in ALT+ tumors, which are very sensitive to PGC-1 or SOD2 knockdown. In order to improve anti-TEL cancer therapy, genetic modeling of TEL elimination exposes vulnerabilities that accidentally suppress mitochondrial homeostasis and oxidative defense systems [28]. However, currently, few studies have reported metabolic reprogramming based on other TMM types. We systematically analyzed the metabolic reprogramming of adenocarcinoma and other cell types at the single-cell level using 84 metabolic pathways from KEGG [29].
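Per-cell pathway activity of the kind analyzed here is commonly scored as the mean expression of a pathway's genes relative to the cell's overall expression. A simplified pure-Python sketch of that scoring idea (the gene symbols and expression values below are illustrative assumptions, not data from the study):

```python
def pathway_activity(cell_expr, pathway_genes):
    """Score one pathway in one cell as the mean expression of the
    pathway's genes divided by the mean expression of all detected
    genes; a score of 1.0 means average activity."""
    in_path = [v for g, v in cell_expr.items() if g in pathway_genes]
    overall = sum(cell_expr.values()) / len(cell_expr)
    return (sum(in_path) / len(in_path)) / overall

# Hypothetical normalized counts for one cell
cell = {"IDH1": 4.0, "SDHA": 6.0, "ACTB": 2.0, "GAPDH": 4.0}
tca_genes = {"IDH1", "SDHA"}          # hypothetical pathway gene set
score = pathway_activity(cell, tca_genes)   # pathway mean 5.0 / overall mean 4.0
```

Repeating this over the 84 KEGG metabolic gene sets and averaging within each TMM type yields the kind of activity matrix visualized in Figure 4.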
In the ALT-like type of adenocarcinoma cells, linoleic acid metabolism, nitrogen, glycosphingolipid biosynthesis, and glycosaminoglycan-related pathway activities were high. The tricarboxylic acid (TCA) cycle and oxidative phosphorylation (OXPHOS) showed high activity in the TEL type cells, whereas in the NDTMM type cells, high activity was observed in >50% of the pathways. The metabolic pathway activity was not as high as expected in the TEL+ALT-like group, which also showed higher activity in riboflavin metabolism, glycosaminoglycan biosynthesis, and heparan sulfate than in other processes.
The innate immune cells also demonstrated a pattern of metabolic reprogramming that was clearly distinguished according to TMM type ( Figure 4A). NDTMM and TEL type B cells showed high metabolic reprogramming and used various energy sources, whereas the ALT-like type cells showed high ether lipid metabolism activity ( Figure 4B). NDTMM type T cells showed high nitrogen, linoleic acid, histidine, and phenylalanine metabolism, and the TEL+ALT-like type demonstrated high OXPHOS and TCA activity ( Figure 4C). NK cells showed three types of TMM.
In the ALT-like type cells, nitrogen metabolism, steroid metabolism, and arginine activity were high, and in the NDTMM type, glycosaminoglycan biosynthesis and heparan sulfate, thiamine, and biotin metabolism were high ( Figure 4D). The master regulators of mitochondrial bioenergetics, PGC1A and PGC1B, showed the highest activity in NDTMM type adenocarcinoma cells ( Figure 4E). However, in the mitochondrial bioenergetics signature analysis, the TEL+ALT-like type and TEL type showed high activity, whereas the ALT type exhibited the lowest activity. This trend was also observed in the PMC of other GC single-cell cohorts, showing the highest and lowest activity in the TEL+ALT-like and NDTMM types, respectively ( Figure 4F). We analyzed the metabolic reprogramming of GMC, intestinal metaplasia (IM, MSC), and PMC in single-cell cohorts of different GCs.
The ALT-like type GMC showed high nitrogen metabolism activity, similar to that of T and NK cells. Taurine and hypotaurine metabolism, as well as mucin type o-glycan biosynthesis, were high in both the ALT-like and NDTMM type cells ( Figure 4G). In the IM (MSE), TCA was high in the TEL+ALT-like group, and the ALT-like and NDTMM groups showed similar trends ( Figure 4H). In PMC, NDTMM and ALT-like types showed similar trends, and TEL+ALT-like type showed high activity in one-carbon metabolism, fatty acid biosynthesis, phenylalanine metabolism, and cysteine and methionine metabolism, unlike other types ( Figure 4I).
These results demonstrate that the energy metabolism pathways changed according to TMM type at the single-cell level, and differed within the same cell type. Cells select TMM to maintain cellular homeostasis and, thus, different metabolic reprogramming methods are used as energy sources for survival. In summary, these results may provide insights into cell type-specific TMM inhibition and the development of metabolic inhibitory drugs.
Vulnerabilities of NDTMM Type for Cancer Therapy
Our categorization of adenocarcinoma cells into four TMM types and examination of their features confirmed that NDTMM and ALT-like types were vital to the development of aggressive GC cells. We analyzed gene regulatory and drug-target interactions using the induced network modules of CPDB to identify a drug target for the NDTMM type [25] (Figure 5A).
When the intermediate node z-score threshold was set to 100, the drug targets and drugs at the upper level were as follows: chembl1449836, nuclear receptor subfamily 4 group A member 1 (NR4A1), MCL1/sodium benzoate, sodium phenylacetate, ASL, ASS1/GRASSYSTATIN A, PMID8410973C3, chembl1088572, CTSD, CASP4, PGC, and PTP4A1 ( Figure 5B). We investigated NR4A1 as the drug target because it plays an important role in the development of mitochondrial abnormalities in NDTMM type cells, as well as in cell maintenance in the absence of TMM. We also confirmed the association of NR4A1 with survival in the TCGA-STAD dataset ( Figure 5C). We also identified prospective therapeutics for NR4A1 (nilotinib, AZD6244, PF-2341066, and paclitaxel; in stomach cancer cell line, Figure 5D).
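CPDB ranks candidate intermediate nodes of an induced network module by a z-score, so thresholding at 100 reduces the module to high-confidence targets. The filtering step can be sketched as follows; the node names and z-scores below are made up for illustration, not the values reported by CPDB:

```python
def top_targets(node_zscores, threshold=100.0):
    """Keep intermediate nodes whose z-score meets the threshold,
    sorted from strongest to weakest candidate."""
    kept = [(n, z) for n, z in node_zscores.items() if z >= threshold]
    return sorted(kept, key=lambda nz: nz[1], reverse=True)

# Illustrative z-scores only
scores = {"NR4A1": 240.0, "CTSD": 150.0, "CASP4": 120.0, "WEAK_NODE": 35.0}
print(top_targets(scores))  # WEAK_NODE is filtered out
```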
Tumor necrosis factor (TNF) causes mitochondrial dysfunction in GC, as demonstrated by Cyt-c leakage, mitochondrial membrane potential (MMP) collapse, and energy metabolism disturbance, which all activate cellular death processes. TNF therapy resulted in GC cell death, but also induced protective mitophagy. TNF boosted the activity of c-JNK, which raised parkin expression, subsequently triggering mitophagy to remove damaged mitochondria and prevent cell death. Mitophagy was reduced by overexpression of NR4A1, making GC cells more susceptible to TNF-induced cell death [30].
Discussion
Telomere maintenance is essential for cancer cell survival and proliferation [31]. However, our study demonstrates that cancer cells maintain their own survival and proliferation by changing and using the cancer microenvironment [32] to their advantage, even where TMM is absent. We defined four types of TMM, but the current knowledge about the NDTMM type in GC is sparse. Although the ALT-like mechanism was associated with a poorer survival prognosis in GC, the NDTMM type was associated with a more deleterious condition than the ALT-like type was. Overall, the NDTMM type displayed strong cancer hallmark activity, as well as the highest EMT and stemness among the four TMM types.
This feature of the NDTMM type led us to infer the length of telomeres based on PML expression. NDTMM type cells had telomeres that were shorter than those of the ALT-like type cells, but longer than those of the TEL+ALT-like type cells. NDTMM type cells were previously thought to be cancer cells immediately before death due to autophagy or high necrosis that could not maintain their telomeres. However, contrary to our hypotheses, the pathways that were enriched in this cell type were mostly metabolic and cellular homeostasis and maintenance pathways. Despite a lack of TMM, cancer cells can continuously attempt to avoid death.
ALT-like cells were previously considered to possess strong mitochondrial bioenergetic activity. However, our findings revealed that NDTMM type cells were more active than ALT-like cells in more than half of the metabolic pathways. This tendency was observed in adenocarcinoma and other innate immune cells, and cell-specific metabolic reprogramming of the NDTMM type was exploited as an energy source. NR4A1 was determined to be a potential therapeutic target based on our observation of the regulatory interactions of the differential genes that were substantially expressed in the NDTMM type cells. In TMM-free cancer cells, NR4A1 prevents cell death via JNK/parkin-dependent mitophagy. Overexpression of NR4A1 causes a mitochondrial energy imbalance by suppressing the expression of the mitochondrial respiratory complex.
Our findings also demonstrated that TNF therapy stimulated parkin-dependent mitophagy, which, in excess, inhibits mitochondrial apoptosis, reducing the cytotoxicity of TNF. In contrast, NR4A1 overexpression reduced parkin-dependent mitophagy by inhibiting JNK. In addition, our study demonstrates the potential of NR4A1 as a novel drug target for killing NDTMM type cancer cells by inhibiting JNK/parkin-dependent mitophagy, making GC cells more susceptible to TNF-induced apoptosis. To the best of our knowledge, this is the first study to show that the NDTMM type is more detrimental in GC than the ALT-like type. Our findings will contribute to the development of precision medicine strategies for patients with refractory GC and provide unique insights into the development of anticancer therapies.
Conclusions
This study demonstrates the survival, maintenance, and proliferation of GC cells in the absence of a TMM. Cancer cells were classified into four types based on TMM, and the NDTMM type has been sparingly studied. We comparatively analyzed TMM activity, DEGs, PPI, and enriched GO terms among individual TMM type cells in GC, and demonstrated that the NDTMM type cells maintained their survival, proliferation, and homeostasis by altering their cell environments. Furthermore, we identified the NR4A1 gene as a potential therapeutic target for the development of efficient anticancer therapies.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells11213342/s1, Figure S1: Correlation of immune cells and TMM pathway in Y497 cohort; Table S1: List of differentially expressed genes (DEGs) between ALT-like type and NDTMM type.
Group-Invariant Solutions for the Generalised Fisher Type Equation
In this paper, we construct the group-invariant (exact) solutions for the generalised Fisher type equation using both classical Lie point and nonclassical symmetry techniques. The generalised Fisher type equation arises in the theory of population dynamics. The diffusion term and the coefficient of the source term are given as power law functions of the spatial variable. We introduce the modified Hopf-Cole transformation to simplify a nonlinear second order ordinary differential equation (ODE) into a solvable linear third order ODE.
Introduction
In this paper, the focus is on the generalised Fisher type equation arising in population dynamics. The analysis of the generalised Fisher equation has been carried out using Lie point symmetries (see e.g. [1]) and the construction of conservation laws (see e.g. [2]). These types of equations have appeared in many fields of study; for example, reaction-diffusion equations arise in heat transfer problems [3], biology [4] [5], and the transmission of nerve signals [6]. Reaction-diffusion equations such as the generalised Fisher equation describe how the spatial distribution of the concentration of a substance changes, whereby the diffusion term causes the spread over the surface. Fisher [4] used a nonlinear reaction-diffusion equation to model the population growth of mutant genes over a period of time. One can take Fisher's equation [4] and, with simple modifications, derive the Fitzhugh-Nagumo equation [7]. Moreover, one can modify the Fitzhugh-Nagumo equation in order to obtain Huxley's equation [4].
A significant amount of work has been done in studying reaction-diffusion equations, in particular from the classical Lie symmetry analysis point of view (see e.g. [8]) and via nonclassical symmetry techniques [7] [9] [10]. It turns out that reaction-diffusion equations such as the generalised Fisher equation admit genuine nonclassical symmetries if the source term is given by a cubic (see e.g. [7] [10]). In recent work [3] [11], the authors assume a diffusivity which depends on the space variable. In this case, the diffusivity may be given as a power law function of the space variable for the given reaction-diffusion equation to admit nonclassical symmetries. This paper is arranged as follows. In Section 2, we provide the mathematical models for problems arising in population dynamics. In Section 3, we provide a brief account of the symmetry methods. In Sections 4 and 5, we provide the nonclassical and classical Lie point symmetry reductions, respectively. In Section 6, we briefly provide remarks on the conservation laws of the equation in question. The discussion and concluding remarks are given in Section 7.
Mathematical Description
For a diploid population having two available alleles at the locus in question (A_1 and A_2), there are three possible genotypes: A_1A_1, A_1A_2 and A_2A_2, where A_1 is the allele under observation. The three equations (1) describe the change in the genotype frequencies, where γ_ij is the reproductive success rate of genotype A_iA_j, μ is the common death rate, and the frequency of the allele A_1 is given by Equation (2). Differentiating Equation (2) with respect to t, the three genotype equations (1) collapse into a single equation that describes the change in frequency of the new mutant gene. If the total population density is constant in space, Equation (3) reduces to the Fitzhugh-Nagumo equation. When deriving the models with the continuous method, there is an extra convective term due to the assumption that the total population density is not uniform spatially.
Consider the conditions that Fisher examined, so that the allele in question is completely recessive. This implies that the genotypes AA and Aa have the same phenotype, and therefore the same reproductive success rate. Let the alleles represented by A and a be set as the alleles A_2 and A_1, respectively. Hence, A_1 is the allele under consideration.
where h = γ_11 − γ_22. One can see that Equation (4) is also a reaction-diffusion-convection equation. Under a further assumption, Equation (4) reduces to a generalised Fisher type equation, which models the propagation of impulses along nerve axons. In this paper we focus on the reaction-diffusion equation (6), which we refer to as the governing equation. The variable u may be viewed as representing the population.
The diffusivity k(x) may be given as a quadratic in x for equations such as Equation (6) to admit genuine nonclassical symmetries (see e.g. [3] [11]). In this paper, the coefficient of the cubic source term q(u) is given by the power law x^{2n}, where n is a real constant.
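The displayed form of the governing equation did not survive in this copy. A plausible reconstruction, consistent with the stated spatially dependent diffusivity, power-law source coefficient and cubic source term (an assumption about the layout, not the authors' exact display), is:

```latex
% Assumed form of the governing reaction-diffusion equation (6):
% diffusivity k(x) and a cubic source q(u) weighted by x^{2n}.
u_t = \left( k(x)\, u_x \right)_x + x^{2n}\, q(u),
\qquad q(u) = u^3 .
```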
Symmetry Methods for Differential Equations
In this section, we restrict the discussion to the symmetry analysis of second order differential equations. Consider, for example, a second order differential equation of the form (7), where the subscripts denote all possible first and second derivatives of u with respect to t and x. Finding the classical Lie point symmetries of Equation (7) implies seeking infinitesimal transformations of the form (8), generated by a vector field X. Note that the transformations in (8) are equivalent to the one-parameter Lie group of transformations that leaves Equation (7) unchanged or invariant. The action of X is extended to all derivatives appearing in the equation in question through the appropriate prolongation. The infinitesimal criterion for invariance of the given equation, Equation (10), yields an overdetermined system of linear homogeneous equations which can be solved algorithmically. Note that the solution of Equation (10) yields the classical Lie point symmetries admitted by Equation (7). The full theory of the determination of Lie point symmetries may be obtained in, among other texts, [12]-[14]. If the invariance is sought subject to a further constraint known as the invariant surface condition (ISC), then one obtains a system of nonlinear determining equations which may yield the nonclassical symmetry generators [15].
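The standard objects referred to above, written explicitly (the notation is reconstructed from the surrounding text, so it may differ from the original displays):

```latex
% Second order PDE: \Delta(t, x, u, u_t, u_x, u_{xx}, \dots) = 0.
% Infinitesimal generator (the vector field X):
X = \xi^{1}(t,x,u)\,\partial_t + \xi^{2}(t,x,u)\,\partial_x
    + \eta(t,x,u)\,\partial_u
% Infinitesimal criterion for invariance, with X^{[2]} the second
% prolongation of X:
X^{[2]}\,\Delta \,\big|_{\Delta = 0} = 0
% Invariant surface condition (ISC) used for nonclassical symmetries:
\xi^{1}\, u_t + \xi^{2}\, u_x = \eta
```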
Nonclassical Symmetry Reductions
In this section, we consider nonclassical symmetry reductions of the generalised Fisher type equation given in Equation (6). Here, q(u) is given as a cubic function of u (see e.g. [7]) and both the diffusivity and the coefficient of the source term are given by power law functions of the space variable.
Nonclassical Symmetry Reduction Given n = 1
In this subsection, we consider Equation (6) with n = 1 and q(u) = u^3. Assuming ξ^1 = 1, the infinitesimal criterion for invariance of the form (12) results in an overdetermined system of nonlinear determining equations, split in powers of u. The solution of these determining equations yields the admitted nonclassical symmetry generator and the associated ISC (14). Using the governing equation and the ISC (14), we introduce the "modified" Hopf-Cole transformation. Solving for the arbitrary functions, the result (21) is substituted back into the ISC given by Equation (14). Hence, we obtain, in terms of the original variables, the general nonclassical symmetry exact solution (22), where a_1 and a_2 are arbitrary constants. Solution (22) is depicted in Figures 1-3.
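For context, the classical Hopf-Cole transformation, which the "modified" version above adapts (the paper's exact modification did not survive in this copy), is the standard map that linearises Burgers' equation:

```latex
% Classical Hopf-Cole transformation (background).
% If \phi solves the linear heat equation
%   \phi_t = \nu\,\phi_{xx},
% then
u = -2\nu\,\frac{\phi_x}{\phi}
% solves Burgers' equation  u_t + u\,u_x = \nu\,u_{xx}.
```

The modified transformation here plays the analogous role of trading a nonlinear second order ODE for a linear (third order) one.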
In this case, the exact (group-invariant) solution is given by Equation (24).
Nonclassical Symmetry Reduction Given n ≠ 1
Suppose that one considers the more general case where n ∈ ℝ. The governing equation then becomes Equation (25). Following the steps above, the admitted genuine nonclassical symmetry and its associated ISC are obtained. In this case, it turns out that the "modified" Hopf-Cole transformation turns the transformed Equation (29) into a solvable linear third order ODE, and the exact solution follows in terms of the original variables. If n = 0, then the problem is equivalent to the one considered in [3], and n = 1 yields the problem discussed in the previous section.
Classical Lie Point Symmetry Reductions Given n = 1
We consider the case where n = 1 and q(u) = u in Equation (6). In this case, the admitted Lie algebra is three dimensional and spanned by the vector fields X_1, X_2 and X_3. It is easy to show that this Lie algebra is closed. Reductions are possible by any linear combination of these symmetries. Usually, one may determine the optimal system of subalgebras of these classical Lie point symmetries to obtain reductions which are not connected by any point transformation; however, here we restrict the analysis to three cases only. Note that the symmetry generator X_3 led to hard-to-solve reductions, and thus its use is omitted.
Reduction by X_2
The characteristic equations corresponding to this scaling symmetry give the functional form of the group-invariant solutions, where f satisfies a second order ODE. Solving this ODE, the group-invariant solution for the governing equation is obtained; the solution in Equation (37) is depicted in Figure 4 and Figure 5. Given a source term with k being an arbitrary constant, the transformation ū = u − k can be made to Equation (38), and in terms of the original variables the exact (group-invariant) solution for Equation (38) follows.
Reduction by X_1
The time translation symmetry leads to the steady state problem, with the model given by a modified Emden-Fowler equation which is hard to solve exactly. Note that ODE (41) admits a scaling symmetry which may be used to reduce the order of this equation by one.
Reduction by the Combination c_1X_1 + c_2X_2
To construct the exact (group-invariant) solution, we consider the linear combination of the symmetries X_1 and X_2. The characteristic equation corresponding to this combination yields the basis of invariants and, thus, the functional form of the group-invariant solution, where F satisfies a second order ODE (43). Equation (43) admits a rotation symmetry, and we implement the method of differential invariants to reduce the order of Equation (43) by one. The first prolongation of X gives the characteristic equations and hence the invariants; with G = G(z), we obtain the reduced Equation (47). A further simplification can be made to Equation (47), whereby G is replaced by G + z, so that Equation (47) can be rewritten as Equation (48). Suppose c_1 = c_2; then Equation (48) becomes variable separable and the solution for G can be found. Substituting back for G gives the solution (49), wherein an integration constant vanished for simplicity.
In terms of the original variables, we obtain a group-invariant (particular) solution (50), where k_1 is an arbitrary constant. The ODE (47) is difficult to solve exactly when c_1 ≠ c_2. Solution (50) is depicted in Figure 6.
Classical Lie Point Symmetry Reduction Given n ≠ 1
In this case, Equation (25) admits a two dimensional Lie symmetry algebra spanned by the base vectors X_1 and X_2.
Reduction by X_1
The time translation led to the steady state problem given by an Emden-Fowler type ODE, which is harder to solve exactly.
Reduction by X_2
Reduction by this symmetry generator led to a functional form of the group-invariant solution.
A Note on Conservation Laws of Equation (25)
It is worth noting that Equation (25) has conservation laws given by conserved vectors which imply a linear heat equation with a spatially dependent diffusion term. This combination of conserved vectors is obtained by both the direct and multiplier methods.
Some Discussions and Concluding Remarks
In this paper, some new group-invariant (exact) solutions for a reaction-diffusion equation with a spatially dependent diffusivity and source-term coefficient have been constructed using both classical and nonclassical symmetry techniques. Figures 1-6 depict the change in the mutant population with respect to either time or space or both. The effects of the parameters appearing in the plotted exact solutions on the population are displayed. We have introduced the modified Hopf-Cole transformation to transform a nonlinear second order ODE into a simpler-to-solve linear third order ODE. To the best of our knowledge, this transformation has never been used in the recorded literature.
Artificially engineered nanostrain in FeSexTe1-x superconductor thin films for supercurrent enhancement
Although nanoscale deformation, such as nanostrain in iron-chalcogenide (FeSexTe1−x, FST) thin films, has attracted attention owing to its enhancement of general superconducting properties, including critical current density (Jc) and critical transition temperature, the development of this technique has proven to be an extremely challenging and complex process thus far. Herein, we successfully fabricated an epitaxial FST thin film with uniformly distributed nanostrain by injection of a trace amount of CeO2 inside an FST matrix using sequential pulsed laser deposition. By means of transmission electron microscopy and geometric phase analysis, we verified that the injection of a trace amount of CeO2 forms nanoscale defects, with a nanostrained region of tensile strain (εzz ≅ 0.02) along the c-axis of the FST matrix. This nanostrained FST thin film achieves a remarkable Jc of 3.5 MA/cm2 under a self-field at 6 K and a highly enhanced Jc under the entire magnetic field with respect to those of a pristine FST thin film.

The maximum amount of current that can be carried by iron-based superconductors can be improved by introducing deliberate defects during fabrication. Iron–selenium–tellurium thin films have recently been identified as high-temperature superconductors that may pass large quantities of current. Sanghan Lee from the Gwangju Institute of Science and Technology in South Korea and co-workers have now used lasers to deposit small amounts of cerium oxide atoms between layers of an iron–selenium–tellurium thin film. The cerium oxide particles create defects, such as selenium-deficient sites, that slightly strain the crystal lattice and let more current move through the film. Mapping these nanostrain effects with electron microscopy enabled the team to find conditions needed to enhance superconductivity while minimizing cerium oxide by-products.

To enhance the supercurrent of iron-chalcogenide (FST) superconductor thin films, we induced nanostrain in FST thin films.
The nanostrain was generated around nanoscale defects which were formed by artificially inserting a trace amount of oxide inside the FST matrix during the growth of the FST thin film using sequential pulsed laser deposition. In particular, the critical current density (Jc) of the nanostrained FST thin films was significantly improved without dominant degradation of the critical transition temperature.
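The tensile nanostrain εzz ≅ 0.02 quoted above is, in geometric phase analysis, simply the relative deviation of the local c-axis lattice spacing from an unstrained reference. The arithmetic can be sketched as follows; the lattice constants used are illustrative assumptions, not measured values from the study:

```python
def strain_zz(c_local, c_ref):
    """Out-of-plane strain: relative change of the local c-axis
    spacing against the unstrained reference spacing."""
    return (c_local - c_ref) / c_ref

# Illustrative numbers: an assumed FST c-axis reference of 6.00 angstrom,
# locally dilated by ~2% around a nanoscale defect.
eps = strain_zz(6.12, 6.00)   # positive value -> tensile strain of about 0.02
```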
Introduction
Superconductors are essential materials for high magnetic field applications, such as those in nuclear fusion energy devices, magnetic resonance imaging, and superconducting magnetic energy storage systems. In recent years, iron-based superconductors (FeSCs) have attracted attention for use in high magnetic field applications because of their high upper-critical field (Hc2) and low magnetic anisotropy (γ) [1,2]. Moreover, FeSC epitaxial thin films have demonstrated enhanced overall superconducting properties compared with those of the corresponding bulk materials [3-9]. Among several FeSCs, iron chalcogenides (FeSexTe1−x, FST), which are simple PbO-type chalcogenides with layered-like structures [10], are excellent candidates for use as practical superconducting materials for several reasons. First, the critical transition temperature (Tc) of FST increases abruptly with increasing Se ratio, owing to the suppression of the phase separation generally observed in Se-rich FST bulk, when FST is fabricated as an epitaxial thin film [11,12]. In addition, an FeSe monolayer can achieve a Tc of 100 K, which is the maximum Tc for FeSCs [13]. Second, FST thin films exhibit promising critical current densities (Jc), greater than 1 MA/cm2, under self-field regardless of the substrate, including coated conductor substrates [5,14]. This result indicates that these films can potentially be used as superconductor tape. However, Jc enhancement via an artificial pinning center is a critical requirement for use of FST in high magnetic field applications.
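Jc values of this magnitude are commonly extracted from magnetization loops via the Bean critical-state model. A sketch for a rectangular thin film using the standard CGS-style Bean formula is below; the sample dimensions and loop width are hypothetical, and the paper may instead report transport-measured Jc:

```python
def bean_jc(delta_m, a_cm, b_cm):
    """Bean critical-state estimate of Jc (A/cm^2) for a rectangular film:
    delta_m is the magnetization-loop width in emu/cm^3, and
    a_cm <= b_cm are the in-plane sample dimensions in cm."""
    assert a_cm <= b_cm, "convention: a is the shorter side"
    return 20.0 * delta_m / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))

# Hypothetical 0.2 cm x 0.3 cm film with a loop width of 30,000 emu/cm^3
jc = bean_jc(3.0e4, 0.2, 0.3)   # on the order of a few MA/cm^2
```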
Several approaches have been used to improve the J c of FST to date, such as the use of a buffer layer 14 , oxygen annealing 15 , and ion irradiation 16,17 . In particular, low-energy proton irradiation (190 keV) is an effective method for this purpose because this method causes nanoscale cascade defects accompanied by nanostrain that simultaneously enhances both the T c and J c in an FST thin film 16 . However, proton irradiation is a complicated ex situ process that is not suitable for practical applications. Therefore, a straightforward in situ process for forming artificially controlled nanostrain is necessary to improve the J c of FSTs.
Nanostrain has been generated via the introduction of various defect formations to date. For example, the insertion of a desired material with a slightly different lattice constant can induce strain through the formation of a secondary phase 18,19 ; further, the doping of certain elements can generate lattice changes with nanoscale strain 20 . The formation of cascades 16 or point defects 21 by ion irradiation induces deformation of a lattice through nanoscale defects. In FST thin films, since large-scale and excessive numbers of defects can degrade the entire superconducting matrix 22 and FST has a short coherence length (~2 nm) 23 , the formation of minimally sized defects is required for inducing nanostrain to prevent Cooper pair breaking while improving J c .
Herein, we report that nanostrain was successfully formed in an epitaxial FST thin film through the formation of minimal nanoscale defects using sequential pulsed laser deposition (S-PLD), which can artificially insert a desired material while fabricating an epitaxial thin film 4,24,25 . We injected precisely controlled trace amounts of CeO 2 to minimize the residual insertion of material while forming nanoscale defects. CeO 2 was used as the insertion material because it exhibits not only good chemical stability but also an in-plane lattice constant compatible with that of FST 14,[26][27][28] ; hence, the degradation of superconductivity can be minimized compared with inserting other oxides, even if residual CeO 2 remains in the FST. The crystallinity and structure of the CeO 2 -injected FST (Ce-FST) were confirmed using X-ray diffraction (XRD) measurements. The nanostrain was analyzed using atomic-resolution scanning transmission electron microscopy (STEM) with geometric phase analysis (GPA). The nanostrained Ce-FST thin film exhibits a significantly enhanced J c compared with that of a pristine FST (P-FST) thin film.
Sample preparation
We fabricated both P-FST and Ce-FST thin films on a (001)-oriented CaF 2 substrate by PLD using a KrF (248 nm) excimer laser (Coherent, COMPEX PRO 205F) in a vacuum of 2 × 10 −5 Pa at 400°C. We used an FeSe 0.45 Te 0.55 target made by an induction melting method. The FST thin films were grown using a laser energy density of 3 J/cm 2 , a pulse repetition rate of 3 Hz, and a target-to-substrate distance of 4 cm.
The method for fabricating the Ce-FST thin films is as follows. We first deposited a 20-nm (445 laser pulses) FST layer on the CaF 2 substrate. Then, CeO 2 was deposited on the FST layer.
These processes were repeated four times in total, and finally, an FST layer was deposited on the top surface. The total thickness of all the FST thin films was 100 nm (2225 laser pulses).
CeO 2 was deposited between the FST layers with a dependence on p (2, 5, 10, and 20), where p is the number of laser pulses for the inserted CeO 2 (laser energy density of 1.5 J/cm 2 and a repetition rate of 1 Hz). The target changing time to switch between the FST and CeO 2 targets was 10 s, which was the drive time when the laser was turned off and then on again. The composition of the FST thin film was considered to be approximately FeSe 0.7 Te 0.3 , based on our previous report 11 .
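The pulse counts above fix the effective deposition rates; a quick consistency check of the quoted numbers (pure arithmetic, no assumptions beyond the values in the text: 445 pulses per 20-nm FST layer, and 20 p of CeO 2 corresponding to 2.5 unit cells):

```python
# Deposition-rate arithmetic implied by the pulse counts quoted above:
# 445 pulses -> 20 nm of FST, and 20 p of CeO2 -> 2.5 unit cells.
fst_rate_nm = 20 / 445        # nm of FST per laser pulse
ceo2_rate_uc = 2.5 / 20       # unit cells of CeO2 per pulse
print(round(fst_rate_nm, 3), ceo2_rate_uc * 2)  # prints: 0.045 0.25
```

So 2 p of CeO 2 amounts to 0.25 unit cell, consistent with the "smaller than 0.5 unit cell" regime discussed below.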
Characterization
To characterize the crystal structure, θ-2θ, azimuthal phi, and rocking-curve scans were measured using a four-circle XRD (PANalytical, X'Pert pro, λ = 1.5406 Å). We also performed an additional θ-2θ scan at beamline 3A of the Pohang Accelerator Laboratory with a six-circle XRD (λ = 1.148 Å). The STEM images and energy dispersive spectroscopy (EDS) maps were obtained with a Cs-corrected FEI Titan Themis G2 at an accelerating voltage of 300 kV with a beam current of 70 pA, a convergence semiangle of 15 mrad, and a collection semiangle in the range of 80-379 mrad. To obtain GPA maps of the Ce-FST and P-FST thin films, the same parameters were used for all calculations (same Fourier vector; resolution: 5 nm; smoothing: 10 nm; color scale: −0.1 to 0.12). The resistivity-temperature measurements were performed using a physical property measurement system (Quantum Design). T onset c and T zero c were determined using the 0.9 ρ n and 0.01 ρ n criteria, respectively, where ρ n is the resistivity at 23 K. The magnetization J c was measured using a vibrating sample magnetometer (VSM, Oxford) by applying a magnetic field perpendicular to the film. This parameter was estimated using the Bean model for a thin film: J c = 15ΔM/Vr, where V is the thin film volume in cubic centimeters, r is the equivalent radius of the sample (πr 2 = a × b; a and b are the width and length of the sample, respectively), and ΔM is the width of the magnetic moment from the M-H loop (for further information, see Supplementary Fig. S1). The transport J c was obtained by direct transport measurement of the patterned FST sample using a standard four-probe method.
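The Bean-model estimate J c = 15ΔM/Vr can be sketched as a small helper; the numerical inputs in the usage line are hypothetical illustrations, not values from this work:

```python
import math

def bean_jc(delta_m, width_cm, length_cm, thickness_cm):
    """Bean-model magnetization Jc for a thin film, Jc = 15*dM/(V*r),
    with V = a*b*t in cm^3 and the equivalent radius r from pi*r^2 = a*b."""
    a, b = width_cm, length_cm
    volume = a * b * thickness_cm            # film volume in cm^3
    r = math.sqrt(a * b / math.pi)           # equivalent radius in cm
    return 15.0 * delta_m / (volume * r)     # Jc in A/cm^2 when dM is in emu

# hypothetical inputs: 0.3 cm x 0.3 cm film, 100 nm (1e-5 cm) thick, dM = 2e-3 emu
print(f"{bean_jc(2e-3, 0.3, 0.3, 1e-5):.2e} A/cm^2")
```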
Results and discussion
Crystalline phases
Figure 1 shows a schematic diagram of two different Ce-FST thin films. First, if the amount of inserted CeO 2 is very small (2 p, smaller than 0.5 unit cell), nanoscale defects can be formed inside the FST thin film, not a CeO 2 layer, given that the inserted 2 p CeO 2 is an infinitesimal amount that is insufficient to form nucleation clusters or layers inside an FST. These nanoscale defects can generate nanostrain inside an FST thin film (left side of Fig. 1). The mechanism is discussed later in detail. If the inserted CeO 2 (20 p, 2.5 unit cells) is sufficient to form a CeO 2 layer in an FST thin film, a CeO 2 layer is formed between the FST layers without nanostrain (right side of Fig. 1). Thus, a 20 p Ce-FST thin film forms a superlattice FST thin film with CeO 2 .
We performed a θ-2θ scan using XRD to identify the out-of-plane crystalline qualities of the Ce-FST thin films. Figure 2a shows the out-of-plane θ-2θ XRD spectra of the P-FST and Ce-FST thin films dependent on p (2, 5, 10, and 20). The θ-2θ scans clearly show only (00l) peaks for the Ce-FST thin films along with CaF 2 (00l) peaks. Notably, no other phase peaks are present despite the periodically injected CeO 2 , because the amount of CeO 2 inserted is too small to be detected by XRD. Figure 2b shows an enlarged section of Fig. 2a close to the (001) peak of the Ce-FST thin films. The (001) peak of 2 p Ce-FST is noticeably shifted more to the left than that of P-FST, indicating that 2 p Ce-FST experiences tensile strain along the c-axis. Intriguingly, the degree of shift of the (001) peak returns to zero with increasing p. This result indicates that the strain relaxes in Ce-FST thin films with increasing p. Additionally, the same shift tendency is observed in the other (00l) peaks of the Ce-FST thin films (for further information, see Supplementary Fig. S2).
We additionally measured the θ-2θ of the 2 p Ce-FST and 20 p Ce-FST thin films using a synchrotron-based XRD to further verify the crystalline structure (for further information, see Supplementary Fig. S3). As shown in Fig. S3, only (00l) peaks are observed in both the 2 p Ce-FST and 20 p Ce-FST thin films, and the (001) peaks display satellite peaks, which have been generally observed in superlattice thin films 4 . Since 20 p Ce-FST thin films can have a superlattice structure with CeO 2 , satellite peaks can be observed. However, in the 2 p Ce-FST thin film, it is difficult to form a superlattice structure with the formation of an intact CeO 2 layer in the FST matrix because a trace amount of CeO 2 is injected into the FST matrix. Thus, we speculate that the satellite peaks of the 2 p Ce-FST thin film are due to small changes, such as nanostrain at the interfaces between the FST layers.
Additionally, we measured the rocking curve of the (001) reflection of both the P-FST and 2 p Ce-FST thin films using the four-circle XRD to compare the out-of-plane crystalline qualities and the mosaicity (Fig. 2c). The calculated full-width at half-maximum (FWHM) of the (001) reflection is 0.67° for the 2 p Ce-FST and 0.55° for the P-FST. The difference between the FWHMs of the P-FST and 2 p Ce-FST thin films is minimal, and the FWHM of 2 p Ce-FST is similar to those of other reported FeSe x Te 1−x thin films 5,9,11 . This result indicates that the 2 p Ce-FST thin film grew well along the c-axis despite the insertion of oxide material into the FST matrix.
To confirm the in-plane texture and epitaxial quality, we performed an azimuthal phi scan using the four-circle XRD. Figure 2d shows the azimuthal ɸ scan of the (113) peak from the CaF 2 substrate and the (112) peak from the 2 p Ce-FST thin film.
Strain analysis
To determine the nanoscale strain caused by the infinitesimal CeO 2 injection at the interface of each FST layer, we analyzed atomic-resolution-STEM images of the 2 p Ce-FST thin film. Figure 3a shows a cross-sectional atomic-resolution-STEM image of the 2 p Ce-FST thin film. As shown in Fig. 3a, no other dominant phases, such as CeO 2 particles, are observed except for the FST phase. Although we double checked for the presence of CeO 2 particles in the 2 p Ce-FST thin film, there are no CeO 2 layers, CeO 2 particles, or large-scale defects present (for further information, see Supplementary Fig. S4). However, fine bright lines are observed at 20-nm vertical intervals in the 2 p Ce-FST thin film. To analyze the fine bright lines in the 2 p Ce-FST thin films, we performed GPA based on the atomic-resolution-STEM image in Fig. 3a. GPA is generally used to show strain distributions and to determine the deformation of the lattice constant in crystalline structures 29 . Figure 3b shows an extracted strain map of the out-of-plane strain (ε zz ) of the identical region in Fig. 3a. The GPA map undoubtedly displays a strained region with 20-nm vertical intervals; the thickness of the nanostrained region is approximately 5-10 nm. To further analyze the strain, we plotted the line profile of the strain of the 2 p Ce-FST thin film based on the GPA results. As shown in Fig. 3c, nanostrains are observed with 20-nm vertical intervals. This nanostrain is tensile strain (ε zz ≅ 0.02) along the c-axis, and the position of the nanostrain agrees well with the location of the site where we intentionally inserted CeO 2 .
For a more accurate comparison, we performed STEM analysis of the P-FST and 20 p Ce-FST thin films. The P-FST thin film exhibits a relatively clear phase, as shown in Fig. 3d; there is no particular strain field in the out-of-plane GPA strain map from the atomic-resolution-STEM image of the P-FST film (Fig. 3e). Figure 3f shows the line profile of the out-of-plane strain of the P-FST thin film. As shown in Fig. 3f, the strain in the P-FST thin film fluctuates around zero.
The 20 p Ce-FST thin film exhibits a clear CeO 2 layer between the 20-nm intervals of FST layers (Fig. 3g). Figure 3h shows an out-of-plane GPA image of the 20 p Ce-FST thin film. Figure 3i shows a line profile of the out-of-plane strain in the 20 p Ce-FST thin film. The large strain contrast at the CeO 2 layer in Fig. 3h, i is an artifact that is caused by the structural difference between FST and CeO 2 . Relatively small strain fields (<0.02) are irregularly observed near the CeO 2 layer in the GPA map of the 20 p Ce-FST thin film. Interestingly, nanostrain is observed in the STEM image of the 2 p Ce-FST thin film, although there are no CeO 2 layers or particles visible. Thus, it is important to demonstrate why the injected trace amount of CeO 2 forms nanostrain in the FST matrix and why there are no CeO 2 particles in the 2 p Ce-FST thin film.
In general, nanostrain is induced at various types of defect perimeters 16,[18][19][20][21] . Interestingly, lattice distortion points (dashed circle in GPA maps of Fig. 3) such as dislocation cores and damaged FST layers are prominently observed in the nanostrain region in the GPA image of the 2 p Ce-FST thin film. Additionally, there are a few nanoscale defects that are formed irrespective of CeO 2 insertion in the FST thin films, as shown in Fig. 3d. These nanoscale defects can cause nanostrain in FST thin films. However, it is difficult to form nanostrain over a broad region by means of only these nanoscale defects because these defects form a localized strain field 18 .
To further understand the origin of the nanostrained region, we analyzed an enlarged STEM image of the nanostrained region with no lattice distortion points using EDS mapping. Figure 4a-c shows the corresponding STEM images.
One interesting discovery is that not only fine decreases in both the Se and Fe ratios but also a fine increase in the Te ratio are observed in the nanostrained region in the EDS maps of the 2 p Ce-FST thin film when these EDS maps are analyzed using plot profiles (for further information, see Supplementary Fig. S5). The decrease in the Se ratio can cause an increase in the lattice constant 11 . This result indicates that nanostrain can be induced near the Se-deficient region that is generated by infinitesimal CeO 2 insertion.
Thus, it is important to demonstrate why Se deficiency is observed in nanostrained regions without residual CeO 2 particles. In the PLD system, the laser ablation of the target forms a plume that contains ionized species with high kinetic energy. These ionized energetic species cause resputtering and the formation of fine defects on the surface in the initial stage before these species form clusters or layers 30 . In this resputtering stage, it is impossible for the inserted CeO 2 to form an intact CeO 2 layer; instead, resputtering forms nanoscale defects and damaged FST layers (or a transition layer). These phenomena are observed in not only our 20 p Ce-FST thin film, as shown in Fig. 4d, but also in other studies when a CeO 2 layer is deposited into or onto an FST thin film 26,27,31 . Furthermore, Se deficiency can be generated in the resputtering stage, provided that the atomic ratio of FST abnormally changes during thin film growth because of instability in the Fe-Se bonding 11 . Collectively, nanostrain can be induced by nanoscale defects, such as lattice distortion points, a damaged FST layer, and Se deficiency, that are formed by inserting an infinitesimal amount of CeO 2 .
Furthermore, we examined whether the formation of nanostrain is affected by pausing (10 s) for an exchange of the FST and CeO 2 targets because Se and Te are sensitive and volatile in FST thin films 11 . The paused FST thin film was fabricated following the same fabrication process as that of the 2 p Ce-FST thin film except for CeO 2 injection; the CeO 2 plume was screened during the laser ablation of the CeO 2 target. The θ-2θ scan of the paused FST thin film using synchrotron-based XRD shows well-oriented (001) peaks without satellite peaks, indicating that the pausing time has a negligible effect on the formation of nanostrain in the FST matrix (for further information, see Supplementary Fig. S3).
Superconductivity measurements
We measured the temperature dependence of the resistivity to obtain the T c to compare the superconducting properties of the P-FST and Ce-FST thin films (Fig. 5a). The measured T onset c values are 21.3, 20.4, 19.0, 16.9, and 16.7 K for the P-FST, 2 p Ce-FST, 5 p Ce-FST, 10 p Ce-FST, and 20 p Ce-FST thin films, respectively. In particular, the T c values of the FST thin films decrease with increasing p of CeO 2 . The primary reason for the T c degradation in the Ce-FST thin films is the degradation of the crystalline quality with increasing amounts of inserted CeO 2 (for further information, see Supplementary Fig. S6). Figure 5b, c, and S7 (for further information, see Supplementary S7) show the resistivity as a function of temperature up to 9 T with H//c for the 2 p Ce-FST, P-FST, and other Ce-FST thin films, respectively. Interestingly, the suppression of the T c of the 2 p Ce-FST thin film (ΔT zero c; field = 2.6 K), which is dependent on the magnetic field (ΔT zero c; field = T zero c; 9T −T zero c; 0T ), is lower than that of the P-FST thin film (ΔT zero c; field = 3.2 K); the measured T zero c; 0T and T zero c; 9T are 19.8 and 16.6 K, respectively, for the P-FST thin film and 18.9 and 16.3 K, respectively, for the 2 p Ce-FST thin film. This result indicates that the 2 p Ce-FST thin film has a lower magnetic field dependence than the P-FST thin film, although the T c of 2 p Ce-FST is lower than that of the P-FST thin film.
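The field-induced suppression ΔT zero c; field quoted above follows directly from the listed T zero c values:

```python
# Field-induced suppression of T_zero_c: dT = T_zero(0 T) - T_zero(9 T),
# using the values quoted in the text (in kelvin).
tc = {"P-FST": (19.8, 16.6), "2p Ce-FST": (18.9, 16.3)}
suppression = {k: round(t0 - t9, 1) for k, (t0, t9) in tc.items()}
print(suppression)  # {'P-FST': 3.2, '2p Ce-FST': 2.6}
```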
We estimated the H irr and H c2 of the Ce-FST and P-FST thin films using the 0.01 ρ n criterion and the 0.9 ρ n criterion with ρ n = ρ(23 K) as a function of the normalized temperature (t = T/T onset c ) to characterize the temperature dependence of the characteristic fields (Fig. 5d). The improved H irr of 2 p Ce-FST is indicative of the beneficial effect of the periodic nanostrained region with nanoscale defects as pinning centers under high magnetic fields. In contrast, the H c2 and H irr of the other Ce-FST films (5, 10, and 20 p) are degraded after CeO 2 insertion, indicating that CeO 2 particles and layers can degrade H irr and H c2 in an FST thin film.
The J c of both the 2 p Ce-FST and the P-FST thin films was measured to verify the effect of the nanostrain as a pinning center on the supercurrents in the FST thin films (Fig. 6). Figure 6a, b shows the magnetic field dependence of the magnetization J c of the 2 p Ce-FST and the P-FST thin films at various temperatures (4.2, 7, 10, and 12 K) up to 13 T (H//c). The magnetization J c of the 2 p Ce-FST thin film has a value of 3.2 MA/cm 2 in a self-field and 0.44 MA/cm 2 under 13 T at 4.2 K. The self-field J c of the 2 p Ce-FST thin film is the highest value for an iron-chalcogenide superconductor to the best of our knowledge 15,32 . The magnetization J c of the P-FST thin film has a value of 2.3 MA/cm 2 in a self-field and 0.23 MA/cm 2 under 13 T at 4.2 K. The magnetization J c of the P-FST thin film is comparable to or higher than other reported values 15,16,27 . The transport J c of both the P-FST and Ce-FST thin films was measured at 6 and 10 K to verify the magnetization J c derived using the Bean model (Fig. 6c). The 2 p Ce-FST shows a transport J c of 3.5 MA/cm 2 in a self-field and of 0.44 MA/cm 2 under 13 T at 6 K, which is reasonably similar to the magnetization J c of the Ce-FST thin film at 4.2 K. The P-FST shows a transport J c of 0.91 MA/cm 2 in a self-field and of 0.10 MA/cm 2 under 13 T at 6 K, which is similar to the magnetization J c of the P-FST thin film at 7 K.
Additionally, the J c enhancement was calculated to confirm the effect of nanostrain in detail based on the magnetization J c (for further information, see Supplementary Fig. S8). The J c enhancement of 2 p Ce-FST compared with that of P-FST increases from 40% to 120% up to 5 T and gradually decreases under a high magnetic field. These results clearly demonstrate that 2 p Ce-FST maintains a high J c under both low and high magnetic fields. Furthermore, we measured the angular dependence of the transport J c of the 2 p Ce-FST and P-FST at a constant reduced temperature T/T c ~ 0.6 to understand the pinning effects of nanostrain with nanoscale defects. As shown in Fig. 6d, the in-plane J c is 48% higher than the perpendicular J c in the P-FST thin films. The 2 p Ce-FST film shows an enhanced in-plane J c of 1.6 MA/cm 2 , which is 60% higher than the perpendicular J c of 1.0 MA/cm 2 due to the in-plane pinning effect of the lateral nanostrain. Overall, nanostrain together with other nanoscale defects improves the J c of the 2 p Ce-FST film for all magnetic field directions.
Relationship between nanostrain and J c
Additionally, we plotted the lattice constant c and the magnetization J c at 4.2 K as a function of the p of inserted CeO 2 to further understand the relationship between nanostrain and J c (Fig. 7). The lattice constants were obtained by the Nelson-Riley method based on the XRD results and by fast Fourier transform analysis based on the STEM images (for further information, see Supplementary Figs. S9 and S10). The magnetization J c of the Ce-FST thin films (2, 5, 10, and 20 p) was measured at 4.2 K (for further information, see Supplementary Fig. S11). As shown in Fig. 7, the change in J c follows the same tendency as the change in the lattice constant c, which represents the change in strain. This tendency demonstrates that nanostrain is responsible for the enhanced J c in the 2 p Ce-FST thin film.
To understand the pinning mechanism of the 2 p Ce-FST thin film, we plotted J c (t)/J c (0) versus t for both the 2 p Ce-FST and P-FST thin films based on the magnetization J c of both films (for further information, see Supplementary Fig. S12). Both the 2 p Ce-FST and P-FST thin films show the δl-pinning type, which is caused by fluctuations in the charge-carrier mean free path. We also calculated the scaled-volume pinning force (f p ) as a function of the normalized field (h); f p is F p /F p_max , and h is H/H irr (for further information, see Supplementary Fig. S12). In general, h values of 0.2 and 0.33 indicate a surface pinning geometry and point pinning geometry, respectively 33 . If the J c of the 2 p Ce-FST thin film were improved by CeO 2 particles or other defects that cause point and volume pinning, the h value would shift to 0.33. However, the h of f p is approximately 0.2 in both the P-FST and 2 p Ce-FST thin films, indicating that the main pinning type in both films is surface pinning.
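The peak positions h = 0.2 (surface pinning) and h = 0.33 (point pinning) correspond to the standard Dew-Hughes scaling f p ∝ h^p (1 − h)^q, which peaks at h = p/(p + q); a minimal sketch with the conventional exponents (the exponents are textbook values, not fitted to this work's data):

```python
import numpy as np

def dew_hughes(h, p, q):
    """Dew-Hughes scaling f_p ~ h^p * (1 - h)^q; analytic peak at h = p/(p + q)."""
    return h**p * (1 - h)**q

h = np.linspace(1e-4, 1.0, 2001)
peaks = {}
for name, (p, q) in {"surface": (0.5, 2.0), "point": (1.0, 2.0)}.items():
    peaks[name] = float(h[np.argmax(dew_hughes(h, p, q))])
print({k: round(v, 2) for k, v in peaks.items()})  # {'surface': 0.2, 'point': 0.33}
```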
In a P-FST thin film that has a pure FST phase without defects, the interlayer spacing between Fe-Se(Te) planes can be an intrinsic pinning center due to the short coherence length of FST 34 . Since this interlayer spacing has a two-dimensional lateral geometry, the pinning type of the interlayer spacing in the P-FST thin film is surface pinning geometry. Interestingly, the 2 p Ce-FST thin film also has a surface pinning geometry even though its J c is relatively improved compared with that of the P-FST thin film. This result means that the 2 p Ce-FST thin film has a pinning type similar to that of the P-FST thin film. The difference between the P-FST thin film and the 2 p Ce-FST thin film is that the c-lattice of the 2 p Ce-FST thin film is expanded by the nanostrain compared with that of the P-FST thin film, indicating that the interlayer spacing of the 2 p Ce-FST thin film at the nanostrained region is larger than that of the P-FST thin film. Thus, the interlayer spacing of 2 p Ce-FST can be a more effective pinning center than that of P-FST.
To evaluate the efficiency of our method, we compared the pinning characteristics with those of previous papers [14][15][16][35][36][37][38] . Figure 8a shows the transport J c of 2 p Ce-FST and P-FST at 6 K for H//c together with the J c of other reported superconductors. The 2 p Ce-FST thin film exhibits a J c higher than the other reported J c values of FST thin films [14][15][16][35][36][37][38] . We also estimated the pinning force (F p ) to characterize the effect of nanostrain with nanoscale defects. Figure 8b shows the magnetic field dependence of the vortex pinning force (F p = J c × B) of both the 2 p Ce-FST and the P-FST thin films up to 13 T (H//c) at 6 K together with the reported F p of other superconductors [14][15][16][35][36][37][38] . The 2 p Ce-FST and P-FST thin films show maximum pinning forces (F p,max ) of 57.8 GN/m 3 under 11.5 T and 14.2 GN/m 3 under 11 T at 6 K, respectively. In particular, the 2 p Ce-FST thin film shows an F p,max ~400% higher than that of the P-FST thin film at 13 T (H//c). In addition, the 2 p Ce-FST thin film exhibits a higher F p than the other reported FST films, even though our samples were measured at a relatively high temperature of 6 K.
Fig. 7 Relationship between J c and nanostrain. As a function of p in the FST thin films: c-lattice constants calculated from STEM and XRD results, magnetization J c under self-field, and magnetization J c under 13 T.
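A quick unit check of F p = J c × B using the 13 T transport value for 2 p Ce-FST at 6 K shows the magnitude is consistent with the F p,max quoted above (1 MA/cm 2 = 1 × 10 10 A/m 2 , so F p comes out directly in GN/m 3 ):

```python
# Unit bookkeeping for Fp = Jc x B at 13 T for 2p Ce-FST at 6 K.
jc = 0.44e10        # 0.44 MA/cm^2 expressed in A/m^2
b = 13.0            # applied field in tesla
fp = jc * b / 1e9   # pinning force density in GN/m^3
print(round(fp, 1))  # 57.2
```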
Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition
In recent times, deep neural networks have drawn much attention in ground-based cloud recognition. Yet such approaches center solely on learning global features from visual information, which yields incomplete representations of ground-based clouds. In this paper, we propose a novel method named multi-evidence and multi-modal fusion network (MMFN) for ground-based cloud recognition, which learns extended cloud information by fusing heterogeneous features in a unified framework. Namely, MMFN exploits multiple pieces of evidence, i.e., global and local visual features, together with multi-modal information.
Introduction
Clouds are collections of very tiny water droplets or ice crystals floating in the air. They exert a considerable impact on the hydrological cycle, earth's energy balance and climate system [1][2][3][4][5]. Accurate cloud observation is crucial for climate prediction, air traffic control, and weather monitoring [6].
In general, space-based satellite, air-based radiosonde and ground-based remote sensing observations are the three major approaches to cloud observation [7]. Satellite observations are widely applied in large-scale surveys; however, they cannot provide sufficient temporal and spatial resolution for localized, short-term cloud analysis over a particular area. Although air-based radiosonde observation excels at detecting cloud vertical structure, its cost is considerably high. As a result, ground-based remote sensing equipment, such as the total-sky imager (TSI) [8,9] and the all-sky imager [10,11], has developed rapidly; it provides high-resolution remote sensing images at low cost, promoting local cloud analysis.
The visual information contained in a ground-based cloud image represents the cloud only from the visual perspective, which cannot describe the cloud accurately due to the large variance in cloud appearance. It should be noted that cloud formation is a joint process of multiple natural factors, including temperature, humidity, pressure and wind speed, which we name multi-modal information. Clouds correlate strongly with multi-modal information [29,30]; for example, humidity influences cloud occurrence, and cloud shape is affected by wind. Hence, instead of focusing only on cloud visual representations, it is more reasonable to enhance recognition performance by combining ground-based cloud visual and multi-modal information. Liu and Li [31] extracted deep features by stretching the sum convolutional map obtained by pooling activations at the same position across all the feature maps in deep convolutional layers; the deep features were then integrated with multi-modal features using weights. Liu et al. [32] proposed a two-stream network to learn ground-based cloud images and multi-modal information jointly, and then employed a weighted strategy to fuse these two kinds of information.
In spite of these efforts, recognizing ground-based cloud using both cloud images and multi-modal information still remains an open issue.
Furthermore, existing public ground-based cloud datasets [26,33,34] lack data richness, which restricts research on multi-modal ground-based cloud recognition. Specifically, none of these datasets contain both ground-based cloud images and multi-modal information. Meanwhile, in practice, meteorological stations are already equipped with instruments for collecting cloud images and multi-modal information, so this information can easily be acquired.
In this paper, considering the above-mentioned issues, we propose the multi-evidence and multi-modal fusion network (MMFN) to fuse heterogeneous features, namely, global visual features, local visual features, and multi-modal information, for ground-based cloud recognition. To this end, the MMFN mainly consists of three components, i.e., main network, attentive network and multi-modal network. The main and attentive networks could mine the multi-evidence, i.e., global and local visual features, to provide discriminative cloud visual information. The multi-modal network is designed with fully connected layers to learn multi-modal features.
To optimize the existing public datasets, we release a new dataset named the multi-modal ground-based cloud dataset (MGCD), which contains both ground-based cloud images and the corresponding multi-modal information. Here, the multi-modal information refers to temperature, humidity, pressure and wind speed. The MGCD contains 8000 ground-based cloud samples collected over a long time period and is larger than any of the existing public datasets.
The contributions of this paper are summarized as follows:
• The proposed MMFN could fuse the multi-evidence and multi-modal features of ground-based cloud in an end-to-end fashion, which maximizes their complementary benefits for ground-based cloud recognition.
• The attentive network refines the salient patterns from convolutional activation maps which could learn the reliable and discriminative local visual features for ground-based clouds.
• We release a new dataset MGCD which not only contains the ground-based cloud images but also contains the corresponding multi-modal information. To our knowledge, the MGCD is the first public cloud dataset containing multi-modal information.
Methods
The proposed MMFN is used for multi-modal ground-based cloud recognition by fusing ground-based cloud images and multi-modal information. As depicted in Figure 2, it comprises three networks, i.e., the main network, the attentive network and the multi-modal network, two fusion layers (concat1 and concat2) and two fully connected layers (f c5 and f c6). In this section, we detail the main network, the attentive network and the multi-modal network, respectively. Then, the fusion strategy between visual and multi-modal features is elaborated.
Main Network
The main network is used to learn global visual features from the entire ground-based cloud image, and it evolves from the widely used ResNet-50 [20]. Figure 3 summarizes the framework of ResNet-50, which mainly consists of six components, i.e., conv1, conv2 x ∼ conv5 x and a fully connected layer. Additionally, conv2 x ∼ conv5 x consist of 3, 4, 6 and 3 residual building blocks, respectively. Taking conv3 x as an example, it contains 4 residual building blocks, each of which is made up of three convolutional layers with convolution kernels of size 1 × 1, 3 × 3 and 1 × 1, respectively. Note that the final fully connected layer of ResNet-50 is discarded in the main network. The output of conv5 x is aggregated by the average pooling layer (avgpool1), resulting in a 2048-dimensional vector that is treated as the input of the fusion layer (concat1).
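The role of avgpool1 can be sketched as follows; the 7 × 7 spatial size is the usual ResNet-50 conv5 x output for 224 × 224 inputs and is assumed here, since the input resolution is not stated in this section:

```python
import numpy as np

# Hypothetical conv5_x output for one image: 2048 activation maps of size 7 x 7
# (the standard ResNet-50 spatial size for 224 x 224 inputs, assumed here).
conv5_out = np.random.rand(2048, 7, 7)
global_feat = conv5_out.mean(axis=(1, 2))   # avgpool1: one value per channel
print(global_feat.shape)  # (2048,)
```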
Attentive Network
CNNs tend to pay more attention to local regions where the structure and texture information of clouds is reflected. Figure 4 visualizes the features of a CNN, implying that the salient parts or regions in ground-based cloud images play a decisive role in the recognition process. Hence, there is a need to extract local features from ground-based cloud images to complement the global features. The attentive network is inspired by the great potential of the attention mechanism and is used to exploit local visual features for ground-based cloud recognition. Attention is the process of selecting and gating relevant information based on saliency [35], and it has been widely investigated in speech recognition [36,37], object detection [38], image captioning [39] and many other visual recognition works [40][41][42][43]. Meanwhile, the convolutional activation maps in shallow convolutional layers contain rich low-level patterns, such as structure and texture. Hence, we design the attentive network, which consists of the attentive maps, two convolutional layers (conv2 and conv3) and one average pooling layer (avgpool2), to extract local visual features from convolutional activation maps. Specifically, we first refine salient patterns from the convolutional activation maps to obtain attentive maps that contain more semantic information. We then optimize the local visual features from the attentive maps. To learn reliable and discriminative local visual features, we propose attentive maps as the first part of the attentive network; they are generated by refining the salient patterns from the convolutional activation maps. Explicitly, we treat the convolutional activation maps of the first residual building block in conv3 x as the input of the attentive maps. Let X i = {x i,j | j = 1, 2, · · · , h × w} denote the i-th convolutional activation map, where x i,j is the response at location j, and h and w denote the height and width of the convolutional activation map.
Herein, there are 512 convolutional activation maps, and h = w = 28 in the first block of conv3 x. For the i-th convolutional activation map, we sort x i,1 ∼ x i,h×w in descending order and select the top n × n responses. Afterward, we reconstruct them into an n × n attentive map, maintaining the descending order. Figure 5 illustrates the process of obtaining an attentive map, where n is set to 5. We apply the same strategy to all the convolutional activation maps, and therefore obtain 512 attentive maps. Hence, the attentive maps gather higher responses for the meaningful content and eliminate the negative effects caused by the non-salient responses. Subsequently, the attentive maps are followed by a dropout layer. conv2, conv3 and avgpool2 are used to transform the attentive maps into a high-dimensional vector by non-linear transformations. The convolution kernels of conv2 and conv3 have sizes of 3 × 3 and 1 × 1, with strides of 2 and 1, respectively. Note that the purpose of using the 1 × 1 kernel in conv3 is to increase the output dimension. In addition, the numbers of convolution kernels in conv2 and conv3 are 512 and 2048, respectively. Both conv2 and conv3 are normalized by batch normalization followed by the Leaky rectified linear unit (Leaky ReLU). The output of avgpool2 is a 2048-dimensional vector and is fed into concat2.
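The construction of a single attentive map described above can be sketched in a few lines of NumPy; the function name and the random example map are illustrative, not from the paper's implementation:

```python
import numpy as np

def attentive_map(activation_map, n=5):
    """Build an n x n attentive map from one h x w convolutional activation
    map by keeping the top n*n responses and preserving descending order."""
    flat = activation_map.reshape(-1)
    top = np.sort(flat)[::-1][:n * n]   # top n*n responses, descending
    return top.reshape(n, n)            # row-major reshape keeps the order

# Example: one 28 x 28 map, as in the first block of conv3 x
rng = np.random.default_rng(0)
amap = attentive_map(rng.standard_normal((28, 28)), n=5)
print(amap.shape)  # (5, 5)
```

Applying the same function to all 512 activation maps yields the 512 attentive maps fed into the dropout layer.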
Hence, with the help of the main and attentive networks, the proposed MMFN could mine the multi-evidence to represent ground-based cloud images in a unified framework. This provides discriminative cloud visual information.
Multi-Modal Network
The multi-modal information indicates the procedure of cloud formation, and thus we apply the multi-modal network to learn multi-modal features for a complete cloud representation. As the input of the multi-modal network is the multi-modal information, which is represented as a vector, the network is designed with four fully connected layers, i.e., f c1 ∼ f c4, with 64, 256, 512 and 2048 neurons, respectively. Additionally, batch normalization and the Leaky ReLU activation follow each of the first three. The output of f c4 is passed through the Leaky ReLU activation and is then treated as the input of both concat1 and concat2. Herein, we denote this output of the multi-modal network as f m .
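A minimal shape check of the multi-modal network's forward pass, assuming random placeholder weights and omitting batch normalization (all variable names here are illustrative):

```python
import numpy as np

def leaky_relu(x, slope=0.1):
    return np.where(x > 0, x, slope * x)

# Layer widths of the multi-modal network (fc1 ~ fc4); the 4-dim input is
# [temperature, humidity, pressure, wind speed]. Weights are random
# placeholders -- this only illustrates the tensor shapes.
widths = [4, 64, 256, 512, 2048]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((i, o)) * 0.01
           for i, o in zip(widths[:-1], widths[1:])]

x = rng.random(4)            # one multi-modal sample, already scaled to [0, 1]
for W in weights:
    x = leaky_relu(x @ W)    # batch normalization omitted in this sketch
print(x.shape)               # (2048,) -> f_m, fed into concat1 and concat2
```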
Heterogeneous Feature Fusion
Feature fusion has proved to be a robust and effective strategy for learning rich information in various areas, such as scene classification [44], facial expression recognition [45], action recognition [46] and so on. In particular, feature fusion methods based on deep neural networks are deemed extremely powerful, and they are roughly divided into homogeneous feature fusion [47][48][49][50] and heterogeneous feature fusion [51][52][53]. For the former, many efforts concentrate on fusing homogeneous features extracted from different components of a CNN for recognition tasks. Compared with homogeneous feature fusion, heterogeneous feature fusion is rather tough and complex, because heterogeneous features possess significantly different distributions and data structures. Herein, we focus on heterogeneous feature fusion.
The outputs of the main network, the attentive network and the multi-modal network, i.e., f g , f l and f m , are treated as global visual features, local visual features and multi-modal features respectively, each of which is a 2048-dimensional vector. f g is learned from the entire cloud images and contains more semantic information because it is extracted from the deeper layers of the main network, while f l is learned from salient patterns in the shallow convolutional activation maps and contains more texture information. Different from the visual features, f m describes the clouds from the aspect of multi-modal information. Hence, these features describe the ground-based clouds from different aspects and contain complementary information. To take full advantage of their complementary strengths, we combine the multi-modal features with the global and local visual features, respectively.
In this work, we propose two fusion layers concat1 and concat2 to fuse f m with f g and f l , respectively. In concat1, the integration of f g and f m can be formulated as

F gm = g(f g , f m ), (1)

where g(·) denotes the fusion operation. In this work, g(·) is represented as

g(f g , f m ) = [λ 1 f g , λ 2 f m ], (2)

where [·, ·] means concatenating two vectors, and λ 1 and λ 2 are the coefficients to trade off the importance of f g and f m .
Similarly, the fusion features of f l and f m for concat2 can be expressed as

F lm = [λ 3 f l , λ 4 f m ], (3)

where λ 3 and λ 4 are the coefficients to balance the importance of f l and f m . The final fully connected layers f c5 and f c6 are used for the recognition task and are connected to concat1 and concat2, respectively. Each of them has K neurons, where K refers to the number of ground-based cloud categories. The output of f c5 is fed into the softmax activation, and a series of label predictions over the K categories is obtained to represent the probability of each category. The softmax activation is defined as

y k = exp(x k ) / ∑ j=1,...,K exp(x j ), (4)

where x k and y k ∈ [0, 1] are the value of the k-th neuron of f c5 and the predicted probability of the k-th category, respectively. The cross-entropy loss is employed to calculate the loss value

L 1 = − ∑ k=1,...,K q k log y k , (5)

where q k denotes the ground-truth probability; it is assigned 1 when k is the ground-truth label and 0 otherwise. As for f c6, it is similar to f c5: its output is activated by the softmax and then evaluated by the cross-entropy loss L 2 . The total cost of the proposed MMFN is computed as

L = αL 1 + βL 2 , (6)

where α and β are the weights to balance L 1 and L 2 . Hence, the optimization target of MMFN is to minimize Equation (6), and the training of MMFN is an end-to-end process, which is beneficial to fusing the multi-modal features with the global and local visual features under a single network. After training the MMFN, we extract the fused features F gm and F lm from ground-based cloud samples according to Equations (2) and (3). Finally, F gm and F lm are directly concatenated as the final representation of ground-based cloud samples. In short, the proposed MMFN has the following three properties. Firstly, the attentive network is utilized to refine the salient patterns from convolutional activation maps so as to learn reliable and discriminative local visual features for ground-based clouds. Secondly, the MMFN can process heterogeneous data.
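The fusion and loss computations can be sketched numerically as follows. The coefficient values (0.3, 0.7) and α = β = 1 are taken from the parameter analysis reported later in the paper; the feature vectors, weight matrices and sample label are random placeholders:

```python
import numpy as np

def fuse(a, b, la, lb):
    """Weighted concatenation g(.) used by concat1/concat2."""
    return np.concatenate([la * a, lb * b])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_entropy(probs, label):
    return -np.log(probs[label])

rng = np.random.default_rng(0)
f_g, f_l, f_m = (rng.standard_normal(2048) for _ in range(3))

F_gm = fuse(f_g, f_m, 0.3, 0.7)    # Equation (2) with the best (lambda1, lambda2)
F_lm = fuse(f_l, f_m, 0.3, 0.7)    # Equation (3) with the best (lambda3, lambda4)

K = 7                              # number of cloud categories in MGCD
W5 = rng.standard_normal((4096, K)) * 0.01   # placeholder fc5 weights
W6 = rng.standard_normal((4096, K)) * 0.01   # placeholder fc6 weights
L1 = cross_entropy(softmax(F_gm @ W5), label=3)
L2 = cross_entropy(softmax(F_lm @ W6), label=3)
total = 1.0 * L1 + 1.0 * L2        # Equation (6) with alpha = beta = 1
print(F_gm.shape, total > 0)       # (4096,) True
```

Concatenating F_gm and F_lm gives the 8192-dimensional final representation mentioned in the implementation details.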
Specifically, the three networks in MMFN transform the corresponding heterogeneous data into a uniform format, which provides the conditions for the subsequent feature fusion and discriminative feature learning. Thirdly, the MMFN can learn more extended fusion features for the ground-based clouds, because the heterogeneous features, i.e., the global (local) visual features and the multi-modal features, are fused by two fusion layers which are optimized under one unified framework.
Comparison Methods
In this subsection, we describe comparison methods that are conducted in the experiments including variants of MMFN, and hand-crafted and learning-based methods.
Variants of MMFN
A unique advantage of the proposed MMFN is the capability of learning supplementary and complementary features, i.e., global visual features, local visual features and multi-modal features, from ground-based cloud data. Successful extraction of the expected features is guaranteed by several pivotal components, i.e., three networks, two fusion layers and the cooperative supervision of two losses. For the purpose of demonstrating their effectiveness on MGCD, we list several variants of the proposed MMFN as follows.
variant1. The variant1 is designed to learn only the global visual features of the ground-based cloud. It directly connects a fully connected layer with 7 neurons to the main network, and the output of avgpool1, a 2048-dimensional vector, is used to represent the ground-based cloud images. Furthermore, the concatenation of the global visual features with the multi-modal information is denoted as variant1 + MI.
variant2. The variant2 is utilized to learn only the local visual features. It maintains the architecture of the main network before the second residual building block in conv3 x, together with the attentive network. Then, it adds a fully connected layer with 7 neurons after the attentive network. The output of avgpool2, likewise a 2048-dimensional vector, is regarded as the representation of the ground-based cloud. Similarly, the concatenation of the local visual features and the multi-modal information is denoted as variant2 + MI.
variant3. The variant3 is designed for integrating global and local visual features. To this end, the multi-modal network, as well as the two fusion layers of MMFN, are removed. Furthermore, the variant3 adds a fully connected layer with 7 neurons after the main network and the attentive network, respectively. The outputs of avgpool1 and avgpool2 of the variant3 are concatenated, resulting in a 4096-dimensional vector as the final representation of ground-based cloud images. The feature representation of the variant3 integrated with the multi-modal information is referred to as variant3 + MI.

variant6. For the purpose of demonstrating the effectiveness of two fusion layers, the variant6 integrates the outputs of the three networks using one fusion layer, which is followed by one fully connected layer with 7 neurons. The output of the fusion layer, a 6144-dimensional vector, is treated as the ground-based cloud representation.
variant7. To learn discriminative local visual features, the MMFN reconstructs the salient responses to form the attentive maps. The counterpart variant7 is employed to highlight the advantage of this innovative strategy. Instead of aggregating the top n × n responses, variant7 randomly selects n × n responses from each convolutional activation map.
The sketches of variant1 ∼ variant6 are shown in Figure 6.
Hand-Crafted and Learning-Based Methods
In this part, we provide descriptions of a series of methods for ground-based cloud classification, involving the hand-crafted methods, i.e., local binary patterns (LBP) [54] and completed LBP (CLBP) [55], and the learning-based method, i.e., bag-of-visual-words (BoVW) [56].
(a) The BoVW designs the ground-based cloud image representation with the idea of a bag of features framework. It densely samples SIFT features [57], which are then clustered by the K-means to generate a dictionary with 300 visual words. Based on the dictionary, each ground-based cloud image is transformed into the histogram of visual word frequency. To exploit the spatial information, the spatial pyramid matching scheme [58] is employed to partition each ground-based cloud image into an additional two levels with 4 and 16 sub-regions. Therefore, each ground-based cloud image with BoVW is represented by a 6300-dimensional histogram. This method is denoted as PBoVW.
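The 6300-dimensional size of the PBoVW representation follows directly from the numbers above: a 300-word histogram for each spatial region, with 1 + 4 + 16 regions across the three pyramid levels.

```python
# Dimensionality of the PBoVW descriptor: one 300-bin visual-word
# histogram per spatial region, summed over the pyramid levels.
visual_words = 300
regions_per_level = [1, 4, 16]
dim = visual_words * sum(regions_per_level)
print(dim)  # 6300, matching the descriptor size quoted above
```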
(b) The LBP operator assigns binary labels to the circular neighborhood of a center pixel according to the signs of the differences between the neighbors and the center. In this work, the rotation-invariant uniform LBP descriptor LBP riu2 P,R is used as the texture descriptor of ground-based cloud images, where (P, R) means P sampling points on a circle with radius R. We evaluate the cases with (P, R) set to (8, 1), (16, 2) and (24, 3), resulting in feature vectors with dimensionalities of 10, 18 and 26, respectively.
(c) The CLBP is proposed to improve LBP, and it decomposes local differences into sign and magnitude components. Besides, the local central information is considered as a third operator. These three operators are jointly combined to describe each ground-based cloud image. The (P, R) is set to (8, 1), (16, 2) and (24, 3), resulting in a 200-dimensional, a 648-dimensional and a 1352-dimensional feature vector, respectively.
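The descriptor dimensionalities quoted for LBP and CLBP can be recovered from (P, R) alone, assuming the standard bin counts: riu2 LBP has P + 2 bins, and the joint CLBP histogram (sign × magnitude × center) has (P + 2) × (P + 2) × 2 bins.

```python
# Bin counts of the rotation-invariant uniform LBP and the joint CLBP
# histogram, as functions of the number of sampling points P.
def lbp_riu2_dim(P):
    return P + 2

def clbp_joint_dim(P):
    return (P + 2) * (P + 2) * 2

for P in (8, 16, 24):
    print(P, lbp_riu2_dim(P), clbp_joint_dim(P))
# P = 8 -> 10 and 200; P = 16 -> 18 and 648; P = 24 -> 26 and 1352
```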
Implementation Details
We first resize the ground-based cloud images to 252 × 252 and then randomly crop them to the fixed size of 224 × 224. The ground-based cloud images are also augmented by random horizontal flips. Thereafter, each of them is normalized by mean subtraction. To ensure compatibility, the values of the multi-modal information, i.e., temperature, humidity, pressure and wind speed, are scaled to [0, 1].
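Scaling the multi-modal measurements to [0, 1] is presumably a per-feature min-max normalization; a sketch with illustrative ranges (the bounds below are placeholders, not values from the paper):

```python
import numpy as np

def scale_multimodal(x, lo, hi):
    """Min-max scale each of the four weather measurements to [0, 1]."""
    return (x - lo) / (hi - lo)

x  = np.array([23.5, 0.61, 1013.0, 3.2])   # temp, humidity, pressure, wind
lo = np.array([-20.0, 0.0, 950.0, 0.0])    # assumed lower bounds
hi = np.array([45.0, 1.0, 1050.0, 30.0])   # assumed upper bounds
scaled = scale_multimodal(x, lo, hi)
print(scaled.min() >= 0 and scaled.max() <= 1)  # True
```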
The main network is initialized by the ResNet-50 pre-trained on the ImageNet dataset. We adopt the weight initialization method in [59] for the convolutional layers (conv2 and conv3) and the fully connected layers. The weights of the batch normalization layers are initialized by the normal distribution with the mean and the standard deviation of 1 and 0.02, respectively. All the biases in the convolutional layers, fully connected layers and the batch normalization layers are initialized to zero. For the attentive maps, we discard the top 5 responses in each convolutional activation map to alleviate the negative effects of noise or outliers. We then form the attentive maps.
During the training phase, the SGD [60] optimizer is employed to update the parameters of the MMFN. The total training epochs are 50 with a batch size of 32. The weight decay is set to 2 × 10 −4 with a momentum of 0.9. The learning rate is initialized to 3 × 10 −4 and reduced by a factor of 0.2 at epoch 15 and 35, respectively. The slope of Leaky ReLU is fixed to 0.1, and the drop rate in the dropout layer of the attentive network is a constant of 0.5. Furthermore, the parameters in the multi-modal network are restricted to the box of [−0.01, 0.01]. After training the MMFN, each cloud sample is represented as an 8192-dimensional vector by concatenating the fusion features F gm and F lm . Then, the final representations of training samples are utilized to train the SVM classifier [61].
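The step learning-rate schedule above can be written as a small helper; the epoch-boundary convention is an assumption, since the paper does not state whether the drop happens at the start of epoch 15 or after it:

```python
# Step schedule: start at 3e-4 and multiply by 0.2 at epochs 15 and 35.
def learning_rate(epoch, base=3e-4, factor=0.2, milestones=(15, 35)):
    lr = base
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

print(learning_rate(0), learning_rate(20), learning_rate(40))
# 3e-4, 6e-5 and 1.2e-5 (up to floating point)
```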
Furthermore, several parameters introduced in this paper, i.e., the parameter n in the attentive network, the parameters λ 1 ∼ λ 4 in Equations (2) and (3), and the parameters α and β in Equation (6), are analysed in the parameter analysis below.
Data
The multi-modal ground-based cloud dataset (MGCD) was the first one composed of ground-based cloud images and multi-modal information. It was collected in Tianjin, China from March 2017 to December 2018, a period of 22 months, at different locations and times of day in all seasons, which ensured the diversity of the cloud data. The MGCD contains 8000 ground-based cloud samples, making it the largest cloud dataset. Each sample is composed of one ground-based cloud image and a set of multi-modal information, with a one-to-one correspondence between them. The cloud images were collected by a sky camera with a fisheye lens, at a resolution of 1024 × 1024 pixels in JPEG format. The multi-modal information was collected by a weather station and stored as a vector with four elements, namely temperature, humidity, pressure and wind speed.
We divided the sky conditions into seven sky types, as listed in Figure 7, including (1) cumulus, (2) altocumulus and cirrocumulus, (3) cirrus and cirrostratus, (4) clear sky, (5) stratocumulus, stratus and altostratus, (6) cumulonimbus and nimbostratus and (7) mixed cloud, following the genera-based classification recommendation of the World Meteorological Organization (WMO) as well as the visual similarities of clouds in practice. When the sky is covered by two or more cloud types, the sky type is regarded as mixed cloud. Additionally, cloud images with cloudiness of no more than 10% are categorized as clear sky. It should be noted that all cloud images were labeled by meteorological experts and ground-based cloud researchers after much deliberation.
The ground-based cloud samples from the first 11 months constitute the training set and those from the second 11 months form the test set, where each set contains 4000 samples. Figure 7 presents the samples and the number of images for each cloud category in MGCD, where the multi-modal information is embedded in the corresponding cloud image. The MGCD is available at https://github.com/shuangliutjnu/Multimodal-Ground-based-Cloud-Database.
Results
In this section, we present comparisons of the proposed MMFN with its variants and other methods on MGCD, followed by an analysis of the classification results under different parameter settings.
Comparison with Variants of MMFN
The recognition accuracies of MMFN and its variants on MGCD are presented in Table 1, from which several conclusions can be drawn. Firstly, both variant1 and variant2 achieve promising recognition accuracy. This indicates that both global visual features and local visual features are essential for cloud recognition: global visual features contain more semantic and coarse information, while local visual features contain more texture and fine information. The variant3 achieves a recognition accuracy of 86.25%, which exceeds those of variant1 and variant2 by 3.1% and 4.02%, respectively, because variant3 combines the benefits of global and local visual features. Table 1. The recognition accuracy (%) of the proposed multi-evidence and multi-modal fusion network (MMFN) as well as its variants. The notation "+" indicates the concatenation operation.
Secondly, the methods (variant1 + MI, variant2 + MI, variant3 + MI, variant4 and variant5) which employ the multi-modal information are more competitive than those (variant1, variant2 and variant3) that do not. Specifically, compared with variant1, variant1 + MI has an improvement of 1.33%; likewise, variant2 + MI and variant3 + MI have improvements of 1.47% and 0.85%, respectively. More importantly, the improvements of variant4, variant5 and MMFN are 2.75%, 1.47% and 2.38% over variant1, variant2 and variant3, respectively. We therefore conclude that jointly learning the cloud visual features and the multi-modal features under a unified framework can further improve the recognition accuracy.
Thirdly, from the comparison between variant6 and the proposed MMFN, we can discover that the recognition performance of the latter is superior to the former even though both of them learn the global visual features, local visual features and the multi-modal features under a unified framework. It indicates that fusing the multi-modal features with the global and local visual features, respectively, can sufficiently mine the complementary information among them and exploit more discriminative cloud features.
Finally, the proposed MMFN achieves better results than variant7 because the attentive maps of MMFN learn local visual features from the salient patterns in the convolutional activation maps, while variant7 randomly selects responses from them. Hence, the proposed attentive map can learn effective local visual features.
Comparison with Other Methods
The comparison results between the proposed MMFN and other methods, such as [32,62,63], are summarized in Table 2. Firstly, most results in the right part of the table are more competitive than those in the left part, which indicates that the multi-modal information is useful for ground-based cloud recognition. The visual features and the multi-modal information are supplementary to each other, and therefore their integration can provide extended information for ground-based cloud representation. Secondly, the CNN-based methods, such as CloudNet, JFCNN and DTFN, are much better than the hand-crafted methods (LBP and CLBP) and the learning-based methods (BoVW and PBoVW). This is because CNNs apply highly nonlinear transformations, which enables them to extract effective features from highly complex cloud data. Thirdly, the proposed MMFN improves over the CNN-based methods, which verifies the effectiveness of the multi-evidence and multi-modal fusion strategy. Such a strategy thoroughly investigates the correlations between the visual features and the multi-modal information and takes into consideration the complementary and supplementary information between them, as well as their relative importance for the recognition task.
Parameter Analysis
In this subsection, we analyse the parameter n in the attentive network, the parameters λ 1 ∼ λ 4 in Equations (2) and (3), and the parameters α and β in Equation (6).
We first analyse the parameter n, which determines the size of the attentive map. The recognition accuracies with different n are displayed in Figure 8. We can see that the best recognition accuracy is achieved when n is equal to 7, namely when 25% salient responses are selected from the convolutional activation map. When n deviates from 7, the corresponding recognition accuracy falls below the peak value of 88.63%.

Then we analyse two pairs of parameters, i.e., (λ 1 , λ 2 ) and (λ 3 , λ 4 ), in Equations (2) and (3). λ 1 and λ 2 balance the significance of the global visual features and the multi-modal features, respectively; similarly, λ 3 and λ 4 balance the significance of the local visual features and the multi-modal features. Appropriate settings of λ 1 ∼ λ 4 can optimize the recognition result. The recognition accuracies with different (λ 1 , λ 2 ) and (λ 3 , λ 4 ) settings are illustrated in Tables 3 and 4. From Table 3 we can see that the best recognition accuracy is obtained when (λ 1 , λ 2 ) is equal to (0.3, 0.7). Similarly, Table 4 shows that the best recognition accuracy is achieved when (λ 3 , λ 4 ) is set to (0.3, 0.7).

Afterwards, we evaluate the parameters α and β, which trade off the losses L 1 and L 2 in Equation (6). The recognition performances with different α and β settings are summarized in Table 5. It is observed that the best recognition accuracy is obtained when α = β = 1.

Besides, as more training data means better training of the network, we implement the experiment with the dataset divided into different training/test ratios, i.e., 60/40, 70/30 and 80/20, to evaluate the influence of the number of training samples on recognition accuracy; the results are illustrated in Figure 9. As shown, more training samples lead to higher recognition performance.
Overall Discussion
Cloud classification is both a basic and necessary part of climatological and weather research and provides indicative knowledge about both short-term weather conditions and long-term climate change [64]. Many algorithms have been developed for automatic cloud classification. Xiao et al. [65] aggregated texture, structure and color features, which are extracted simultaneously from ground-based cloud images and encoded by the Fisher vector as a cloud image descriptor. Afterwards, a linear SVM was employed to group 1231 ground-based cloud images into six classes with an accuracy of 87.5%. Wang et al. [7] applied a selection criterion based on the Kullback-Leibler divergence between LBP histograms of the original and resized ground-based cloud images to select the optimal resolution of the resized cloud image. The criterion was evaluated on three datasets containing 550 and 5000 ground-based cloud images from five classes, and 1500 images from seven classes, respectively. The overall classification results on these three datasets are around 65.5%, 45.8% and 44%, respectively. Zhang et al. [26] employed the CloudNet, composed of five convolutional layers and two fully connected layers, to divide 2543 ground-based cloud images into 10 categories with an average accuracy of 88%. Li et al. [62] presented a deep tensor fusion network which fuses cloud visual features and multimodal features at the tensor level so that the spatial information of ground-based cloud images can be maintained. They obtained a classification result of 86.48% over 8000 ground-based cloud samples. Liu et al. [63] fused deep multimodal and deep visual features in a two-level fashion, i.e., low-level and high-level. The low level fuses the heterogeneous features directly, and its output is regarded as part of the input of the high level, which also integrates deep visual and deep multimodal features.
The classification accuracy of the hierarchical feature fusion method was 87.9% over 8000 ground-based cloud samples.
Most of the above-mentioned studies have achieved high accuracy, but the datasets they use either contain a small number of ground-based images or are not public. The availability of sufficient ground-based cloud samples is a fundamental factor in allowing CNNs to work effectively. In addition, since cloud types change over time, an appropriate fusion of multi-modal information and cloud visual information can improve the classification performance. The JFCNN [32] achieved excellent performance with an accuracy of 93.37% by learning ground-based cloud images and multi-modal information jointly. However, the dataset used in [32] only contains 3711 labeled cloud samples, and it is randomly split into the training set and the test set with a ratio of 2:1, which means there may be high dependence between training and test samples. In this study, we create a more extensive dataset, MGCD, containing 8000 ground-based cloud samples with both cloud images and the corresponding multimodal information. All the samples are divided into a training set and a test set of 4000 samples each, where the training set is drawn from the first 11 months and the test set from the second 11 months. Hence, the training and test sets in the MGCD are independent. As salient local patterns play a decisive role in the decision-making procedure, we devise MMFN with three networks, i.e., the main network, the attentive network and the multi-modal network, to generate global visual features, local visual features and multi-modal features, and fuse them at two fusion layers. The proposed MMFN obtains the best result of 88.63%. We first assess the rationality of each component of MMFN by comparing it with its variants.
Then, we compare MMFN with other methods, including the hand-crafted methods (LBP and CLBP), the learning-based methods (BoVW and PBoVW) and the CNN-based methods (DMF, JFCNN, HMF and so on). The accuracy gaps between the proposed MMFN and the hand-crafted and learning-based methods are over 18 percentage points, and the gap between the proposed MMFN and the second-best CNN-based method, i.e., HMF, is 0.73 percentage points.
It is quite reasonable that the proposed MMFN has a competitive edge over other methods. Affected by temperature, wind speed, illumination, noise, deformation and other environmental factors, cloud images are highly variable, which makes cloud recognition difficult. A more effective strategy is therefore required to obtain extended cloud information. Hence, the proposed MMFN jointly learns the cloud multi-evidence and the multi-modal information and extracts powerful and discriminative features from ground-based cloud samples. Accordingly, the proposed MMFN significantly improves the recognition accuracy over other methods.
Potential Applications and Future Work
Generally, this research presents a new method to improve the accuracy of cloud classification using cloud images and multi-modal information, which is beneficial to regional weather prediction. Furthermore, this research may provide a novel solution to other studies related to heterogeneous information fusion, for example, image-text recognition.
In this work, we utilized four weather parameters to improve cloud classification, and we will investigate how to employ other measurements, such as cloud base height, for cloud classification in future work. Additionally, we cannot guarantee that the MMFN trained on the MGCD generalizes well to other datasets, for example, cloud samples gathered in windier, colder or warmer conditions, or at lower altitudes. Thus, we will utilize unsupervised domain adaptation to enhance the model's generalization ability in future work.
Conclusions
In this paper, we have proposed a novel method named MMFN for ground-based cloud recognition. The proposed MMFN has the ability of learning and fusing heterogeneous features under a unified framework. Furthermore, the attentive map is proposed to extract local visual features from salient patterns. To discover the complementary benefit from heterogeneous features, the multi-modal features are integrated with global visual features and local visual features respectively by using two fusion layers. We have also released a new cloud dataset MGCD which includes the cloud images and the multi-modal information. To evaluate the effectiveness of the proposed MMFN, we have conducted a range of experiments and the results demonstrate that the proposed MMFN can stand comparison with the state-of-the-art methods.
Robust Bayesian model selection for variable clustering with the Gaussian graphical model
Variable clustering is important for exploratory analysis. However, only a few dedicated methods for variable clustering with the Gaussian graphical model have been proposed. Even worse, small insignificant partial correlations due to noise can dramatically change the clustering result when evaluating, for example, with the Bayesian information criterion (BIC). In this work, we address this issue by proposing a Bayesian model that accounts for negligibly small, but not necessarily zero, partial correlations. Based on our model, we propose to evaluate a variable clustering result using the marginal likelihood. To address the intractable calculation of the marginal likelihood, we propose two solutions: one based on a variational approximation and another based on MCMC. Experiments on simulated data show that the proposed method is similarly accurate to BIC in the no-noise setting, but considerably more accurate when there are noisy partial correlations. Furthermore, on real data the proposed method provides clustering results that are intuitively sensible, which is not always the case when using BIC or its extensions.
Introduction
The Gaussian graphical model (GGM) has become an invaluable tool for detecting partial correlations between variables. Assuming the variables are jointly drawn from a multivariate normal distribution, the sparsity pattern of the precision matrix reveals which pairs of variables are independent given all other variables [Anderson, 2004]. In particular, we can find clusters of variables that are mutually independent by grouping the variables according to their entries in the precision matrix.
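The grouping described above can be sketched as finding connected components of the graph induced by the nonzero off-diagonal entries of the precision matrix. This is a pure-NumPy sketch; thresholding at exactly zero assumes a noise-free precision matrix, which is precisely the assumption the paper goes on to relax.

```python
import numpy as np

def variable_clusters(precision, tol=0.0):
    """Group variables into mutually independent clusters by taking
    connected components of the nonzero pattern of the precision matrix."""
    p = precision.shape[0]
    adj = np.abs(precision) > tol
    seen, clusters = set(), []
    for s in range(p):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(j for j in range(p) if adj[v, j] and j not in seen)
        clusters.append(sorted(comp))
    return clusters

# Block-diagonal precision: variables {0, 1} and {2, 3} form two clusters.
K = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.5, 2.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.7],
              [0.0, 0.0, 0.7, 2.0]])
print(variable_clusters(K))  # [[0, 1], [2, 3]]
```

With a small nonzero `tol`, the same routine illustrates the thresholding idea discussed in the related work.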
However, in practice, it can be difficult to find a meaningful clustering due to noise in the partial correlations. The noise can be due to sampling, which is particularly the case when the number of observations n is small, or due to small non-zero partial correlations in the true precision matrix that might be considered insignificant. In this work, we are particularly interested in the latter type of noise. In the extreme, small partial correlations might lead to a connected graph of variables, where no grouping of variables can be identified. For an exploratory analysis, such a result might not be desirable.
As an alternative, we propose to find a clustering of variables such that the partial correlation between two variables in different groups is negligibly small, but not necessarily zero. The open question, which we try to address here, is whether there is a principled model selection criterion for this scenario.
For example, the Bayesian information criterion (BIC) [Schwarz, 1978] is a popular model selection criterion for the Gaussian graphical model. However, in the noise setting it does not have any formal guarantees. As a solution, we propose here a Bayesian model that explicitly accounts for small partial correlations between variables in different clusters.
Under our proposed model, the marginal likelihood of the data can then be used to identify the correct clustering (if there is a ground truth in theory), or at least a meaningful clustering (in practice) that helps analysis. The marginal likelihood of our model does not have an analytic solution. Therefore, we provide two approximations: the first is a variational approximation, the second is based on MCMC.
Experiments on simulated data show that the proposed method is similarly accurate as BIC in the no-noise setting, but considerably more accurate when there are noisy partial correlations. The proposed method also compares favorably to two previously proposed methods for variable clustering and model selection, namely the Clustered Graphical Lasso (CGL) [Tan et al., 2015] and the Dirichlet Process Variable Clustering (DPVC) [Palla et al., 2012] method.
Our paper is organized as follows. In Section 2, we discuss previous work related to variable clustering and model selection. In Section 3, we introduce a basic Bayesian model for evaluating variable clusterings, which we then extend in Section 4 to handle noise on the precision matrix. For the proposed model, the calculation of the marginal likelihood is infeasible, and we describe our approximation strategy in Section 5. Since enumerating all possible clusterings is intractable, we describe in Section 6 a heuristic based on spectral clustering to limit the number of candidate clusterings. We evaluate the proposed method on synthetic and real data in Sections 7 and 8, respectively. Finally, we discuss our findings in Section 9.
Related Work
Finding a clustering of variables is equivalent to finding an appropriate block structure of the covariance matrix. Recently, Tan et al. [2015] and Devijver and Gallopin [2016] suggested detecting block-diagonal structure by thresholding the absolute values of the covariance matrix. Their methods perform model selection using the mean squared error of randomly left-out elements of the covariance matrix [Tan et al., 2015], and a slope heuristic [Devijver and Gallopin, 2016].
Several Bayesian latent variable models have also been proposed for this task [Marlin and Murphy, 2009, Sun et al., 2014, Palla et al., 2012]. Each clustering, including the number of clusters, is either evaluated using the variational lower bound [Marlin and Murphy, 2009], or by placing a Dirichlet process prior over clusterings [Palla et al., 2012, Sun et al., 2014]. However, all of the above methods assume that the partial correlations of variables across clusters are exactly zero.
An exception is the work in [Marlin et al., 2009], which proposes to regularize the precision matrix such that partial correlations of variables that belong to the same cluster are penalized less than those belonging to different clusters. For that purpose they introduce three hyper-parameters: λ_1 (for the within-cluster penalty), λ_0 (for across clusters), with λ_0 > λ_1, and λ_D for a penalty on the diagonal elements. The clusters do not need to be known a priori and are estimated by optimizing a lower bound on the marginal likelihood. As such, their method can also find variable clusterings even when the true partial correlation of variables in different clusters is not exactly zero. However, the clustering result is influenced by the three hyper-parameters λ_0, λ_1, and λ_D, which have to be determined using cross-validation.
Recently, the work in [Sun et al., 2015, Hosseini and Lee, 2016] relaxes the assumption of a clean block structure by allowing some variables to correspond to two clusters. The model selection issue, in particular determining the number of clusters, is either addressed with heuristics [Sun et al., 2015] or cross-validation [Hosseini and Lee, 2016].
The Bayesian Gaussian Graphical Model for Clustering
Our starting point for variable clustering is the following Bayesian Gaussian graphical model. Let us denote by p the number of variables, and by n the number of observations. We assume that each observation x ∈ R^p is generated i.i.d. from a multivariate normal distribution with zero mean and covariance matrix Σ.
Assuming that there are k groups of variables that are mutually independent, we know that, after appropriate permutation of the variables, Σ has the block-diagonal structure Σ = diag(Σ_1, …, Σ_k), where Σ_j ∈ R^{p_j × p_j}, and p_j is the number of variables in cluster j.
By placing an inverse Wishart prior over each block Σ_j, we arrive at the following Bayesian model: x_1, …, x_n ~ Normal(0, Σ) i.i.d., with Σ_j ~ InvW(ν_j, Σ_{j,0}) for j = 1, …, k, where ν_j and Σ_{j,0} are the degrees of freedom and the scale matrix, respectively. We set ν_j = p_j + 1 and Σ_{j,0} = I_{p_j}, leading to a non-informative prior on Σ_j. C denotes the variable clustering which imposes the block structure on Σ. We will refer to this model as the basic inverse Wishart prior model. Assuming we are given a set of possible variable clusterings C, we can then choose the clustering C* that maximizes the posterior probability of the clustering, i.e. C* = argmax_{C} p(C | X),
where we denote by X the observations x_1, …, x_n, and p(C) is a prior over the clusterings which we assume to be uniform. Here, we refer to p(X | C) as the marginal likelihood (given the clustering). For the basic inverse Wishart prior model the marginal likelihood can be calculated analytically, see e.g. [Lenkoski and Dobra, 2011].
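For a zero-mean Gaussian block with an inverse Wishart prior InvW(ν, Ψ), the marginal likelihood has the standard closed form p(X) = π^{-np/2} · Γ_p((ν+n)/2)/Γ_p(ν/2) · |Ψ|^{ν/2} / |Ψ + S|^{(ν+n)/2}, with S the scatter matrix. The following sketch (our own illustration, not the paper's code) scores a clustering by summing this quantity over blocks, using the paper's choices ν_j = p_j + 1, Σ_{j,0} = I:

```python
import numpy as np
from scipy.special import multigammaln

def log_marglik_block(X, nu, Psi):
    """Closed-form log marginal likelihood of a zero-mean Gaussian block
    whose covariance has an inverse Wishart prior InvW(nu, Psi)."""
    n, p = X.shape
    S = X.T @ X  # scatter matrix
    return (-0.5 * n * p * np.log(np.pi)
            + multigammaln(0.5 * (nu + n), p) - multigammaln(0.5 * nu, p)
            + 0.5 * nu * np.linalg.slogdet(Psi)[1]
            - 0.5 * (nu + n) * np.linalg.slogdet(Psi + S)[1])

def log_marglik(X, clustering):
    """Sum of per-block marginal likelihoods; clustering is a list of
    index arrays, one per cluster, with nu_j = p_j + 1 and Psi_j = I."""
    return sum(log_marglik_block(X[:, idx], len(idx) + 1, np.eye(len(idx)))
               for idx in clustering)

rng = np.random.default_rng(1)
# Two independent pairs of strongly dependent variables: {0,1} and {2,3}.
K = np.kron(np.eye(2), [[2.0, 0.9], [0.9, 2.0]])
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(K), size=500)

true_c = [np.array([0, 1]), np.array([2, 3])]
wrong_c = [np.array([0, 2]), np.array([1, 3])]
# The true partition should score higher:
print(log_marglik(X, true_c) > log_marglik(X, wrong_c))
```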
Proposed Model
In this section, we extend the Bayesian model from Equation (1) to account for non-zero partial correlations between variables in different clusters. For that purpose we introduce the matrix Σ_ε ∈ R^{p×p} that models the noise on the precision matrix. The full joint probability of our model is given as follows: x_i ~ Normal(0, Ξ) i.i.d., Σ_j ~ InvW(ν_j, Σ_{j,0}), and Σ_ε ~ InvW(ν_ε, Σ_{ε,0}), where Ξ := (Σ^{-1} + βΣ_ε^{-1})^{-1}. As before, the block structure of Σ is given by the clustering C. The proposed model is the same model as in Equation (1), with the main difference that the noise term βΣ_ε^{-1} is added to the precision matrix of the normal distribution. β > 0 is a hyper-parameter that is fixed to a small positive value accounting for the degree of noise on the precision matrix. Furthermore, we assume non-informative priors on Σ_j and Σ_ε by setting ν_j = p_j + 1, Σ_{j,0} = I_{p_j} and ν_ε = p + 1, Σ_{ε,0} = I_p.
Remark on the parameterization. We note that as an alternative parameterization, we could have defined Ξ := (Σ^{-1} + Σ_ε^{-1})^{-1}, and instead placed a prior on Σ_ε that encourages Σ_ε^{-1} to be small in terms of some matrix norm. For example, we could have set Σ_{ε,0} = (1/β) I_p.
Estimation of the Marginal Likelihood
The marginal likelihood of the data given our proposed model can be expressed as the integral of the data likelihood over the priors on Σ and Σ_ε, where Ξ := (Σ^{-1} + βΣ_ε^{-1})^{-1}. Clearly, if β = 0, we recover the basic inverse Wishart prior model, as discussed in Section 3, and the marginal likelihood has a closed-form solution due to the conjugacy of the covariance matrix of the Gaussian and the inverse Wishart prior. However, if β > 0, there is no analytic solution anymore. Therefore, we propose to use an estimate based either on a variational approximation (Section 5.2) or on MCMC (Section 5.3). Both of our estimates require the calculation of the maximum a posteriori solution, which we explain first in Section 5.1.
Remark on BIC-type approximation of the marginal likelihood. We note that for our proposed model an approximation of the marginal likelihood using BIC is not sensible. To see this, recall that BIC consists of two terms: the data log-likelihood under the model with the maximum likelihood estimate, and a penalty depending on the number of free parameters. The maximum likelihood estimate satisfies Ξ = S, where S is the sample covariance matrix. Note that without the specification of a prior, it is valid that Σ and Σ_ε are not positive definite as long as the combined matrix Ξ is, and the data log-likelihood under the model with the maximum likelihood estimate is simply Σ_{i=1}^n log Normal(x_i | 0, S), which is independent of the clustering. The number of free parameters is (p^2 − p)/2, which is also independent of the clustering. That means, for any clustering we end up with the same BIC.
Furthermore, a Laplace approximation as used in the generalized Bayesian information criterion [Konishi et al., 2004] is also not suitable, since in our case the parameter space is over the positive definite matrices.
Solution using a 3-Block ADMM. Finding the MAP can be formulated as a convex optimization problem by a change of parameterization: by defining X := Σ^{-1}, X_j := Σ_j^{-1}, and X_ε := Σ_ε^{-1}, we get a convex optimization problem in which, to simplify notation, we introduce a few constants. From this form, we see immediately that the problem is strictly convex jointly in X_ε and X. We further reformulate the problem by introducing an additional variable Z: minimize f(X_ε, X_1, …, X_k, Z). It is tempting to use a 2-block ADMM algorithm, as e.g. in [Boyd et al., 2011], which leads to two optimization problems: an update of (X_ε, X) and an update of Z. Unfortunately, in our case the resulting optimization problem for updating (X_ε, X) does not have an analytic solution. Therefore, we instead suggest the use of a 3-block ADMM, which updates X_ε, X, and Z in sequence, where U is the Lagrange multiplier, X^t, Z^t, U^t denote X, Z, U at iteration t, and ρ > 0 is the step-size parameter. Each of the above sub-optimization problems can be solved efficiently via the following strategy. Writing down the zero-gradient condition for each of the three sub-problems (with variables X_ε, X, and Z, respectively), each can be solved via an eigenvalue decomposition as follows. We need to find a matrix Y that satisfies the zero-gradient condition for a given symmetric matrix R. Since R is a symmetric matrix (not necessarily positive or negative semi-definite), we have the eigenvalue decomposition R = Q L Q^T, where Q is an orthonormal matrix and L is a diagonal matrix with real values.
Since the solution Y must also be diagonal in this basis, we have Y_ij = 0 for j ≠ i, and Equation (5) reduces to a scalar quadratic equation per eigenvalue, of which one solution is the positive root. Note that for λ > 0, we have Y_ii > 0. Therefore, the resulting Y solves Equation (4) and, moreover, is positive definite. That means we can solve the semi-definite problem with only one eigenvalue decomposition, so the cost is O(p^3). Finally, we note that in contrast to the 2-block ADMM, a general 3-block ADMM does not have a convergence guarantee for every ρ > 0. However, using a recent result from [Lin et al., 2015], we show in Appendix A that in our case the conditions for convergence are met for any ρ > 0.
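As a hedged illustration of this eigendecomposition trick (the paper's exact update constants are not reproduced in this extraction, so the equation below is a prototypical stand-in): for symmetric R and λ > 0, the matrix equation Y − λY^{-1} = R is solved by one eigendecomposition, with each eigenvalue of Y the positive root of y² − ly − λ = 0:

```python
import numpy as np

def solve_logdet_subproblem(R, lam):
    """Solve Y - lam * Y^{-1} = R for symmetric R and lam > 0 with a single
    eigendecomposition; the returned Y is positive definite."""
    L, Q = np.linalg.eigh(R)                   # R = Q diag(L) Q^T
    y = 0.5 * (L + np.sqrt(L**2 + 4.0 * lam))  # positive root of y^2 - l*y - lam = 0
    return (Q * y) @ Q.T                       # Y = Q diag(y) Q^T

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
R = 0.5 * (A + A.T)                            # symmetric, possibly indefinite
Y = solve_logdet_subproblem(R, lam=1.0)

residual = np.linalg.norm(Y - np.linalg.inv(Y) - R)
print(residual < 1e-8, bool(np.all(np.linalg.eigvalsh(Y) > 0)))
```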
Variational Approximation of the Marginal Likelihood
Here we explain our strategy for the calculation of a variational approximation of the marginal likelihood. For simplicity, let θ denote the vector of all parameters, X the observed data, and η the vector of all hyper-parameters.
Let θ̂ denote the posterior mode. Furthermore, let g(θ) be an approximation of the posterior distribution p(θ|X, η, C) that is accurate around the mode θ̂.
Then we have p(X | η, C) ≈ p(X | θ̂, η, C) p(θ̂ | η, C) / g(θ̂). Note that for the Laplace approximation we would use g(θ) = N(θ | θ̂, V), where V is an appropriate covariance matrix. However, here the posterior p(θ|X, η, C) is a probability measure over the positive definite matrices and not over R^d, which makes the Laplace approximation inappropriate.
Instead, we suggest approximating the posterior distribution p(Σ_ε, Σ_1, …, Σ_k | x_1, …, x_n, ν_ε, Σ_{ε,0}, {ν_j}_j, {Σ_{j,0}}_j, C) by the factorized distribution g(Σ_ε, Σ_1, …, Σ_k) = g_ε(Σ_ε) ∏_j g_j(Σ_j). We define g_ε(Σ_ε) such that its mode equals Σ̂_ε, the mode of the posterior probability p(Σ_ε|X, η, C) (as calculated in the previous section). Note that this choice ensures that the mode of g_ε is the same as the mode of p(Σ_ε|x_1, …, x_n, η, C). Analogously, we define g_j(Σ_j) such that its mode equals Σ̂_j, the mode of the posterior probability p(Σ_j|X, η, C). The remaining parameters ν_{g,ε} ∈ R and ν_{g,j} ∈ R are optimized by minimizing the KL-divergence between the factorized distribution g and the posterior distribution p(Σ_ε, Σ_1, …, Σ_k | x_1, …, x_n, η, C). The details of the following derivations are given in Appendix B. For simplicity let us denote g_J := ∏_{j=1}^k g_j; then the KL-divergence decomposes up to a constant c with respect to g_ε and g_j. However, the term E_{g_J, g_ε}[log |Σ^{-1} + βΣ_ε^{-1}|] cannot be computed analytically, therefore we need to resort to some form of approximation.
We therefore use an additional approximation of this expectation, detailed in Appendix B, which yields a tractable objective in which c is a constant with respect to g_ε and g_j.
From the resulting expression, we see that we can optimize the parameters of g_ε and g_j independently from each other. The optimal parameters ν̂_{g,ε} and ν̂_{g,j} are each the solution of a one-dimensional non-convex optimization problem, which we solve with Brent's method [Brent, 1971].
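A hedged illustration of this last step, with a toy non-convex objective standing in for the actual KL term (which is not reproduced here), using SciPy's implementation of Brent's method:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy one-dimensional, non-convex objective; a stand-in for the optimization
# over nu_g in the variational approximation.
def objective(nu):
    return np.cos(3.0 * nu) + 0.1 * (nu - 2.0) ** 2

# Brent's method needs a bracketing triple (a, b, c) with f(b) < f(a), f(c).
res = minimize_scalar(objective, bracket=(0.5, 1.5, 2.5), method='brent')
print(round(res.x, 3))  # local minimum inside the bracket
```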
MCMC Estimation of Marginal Likelihood
As an alternative to the variational approximation, we investigate an MCMC estimate based on Chib's method [Chib, 1995, Chib and Jeliazkov, 2001].
To simplify the description, we introduce the notation θ_1, …, θ_{k+1} for the parameter blocks Σ_1, …, Σ_k, Σ_ε. Furthermore, we define θ_{<i} := {θ_1, …, θ_{i−1}} and θ_{>i} := {θ_{i+1}, …, θ_{k+1}}. For simplicity, we also suppress in the notation the explicit conditioning on the hyper-parameters η and the clustering C, which are both fixed. Following the strategy of Chib [1995], the marginal likelihood can be expressed as in Equation (7). In order to approximate p(X) with Equation (7), we need to estimate p(θ̂_i | X, θ̂_1, …, θ̂_{i−1}). First, note that we can express the value of the conditional posterior distribution at θ̂_i as in Equation (8) (see Chib and Jeliazkov [2001], Section 2.3), where q_i(θ_i) is a proposal distribution for θ_i, and the acceptance probability of moving from state θ_i to state θ_i′, holding the other states fixed, is defined in Equation (9). Next, using Equation (8), we can estimate p(θ̂_i | X, θ̂_1, …, θ̂_{i−1}) with a Monte Carlo approximation with M samples θ_i^{q,m} ∼ q(θ_i). Finally, in order to sample from p(θ_{≥i} | X, θ̂_{<i}), we propose to use the Metropolis-Hastings within Gibbs sampler shown in Algorithm 1. MH_j(θ_j^t, ψ) denotes the Metropolis-Hastings algorithm with current state θ_j^t and acceptance probability α(θ_j, θ_j′ | ψ) from Equation (9), and θ_{≥i}^0 is a sample after the burn-in. For the proposal distribution q_i(θ_i), we use a scaled conditional posterior, where κ > 0 is a hyper-parameter of the MCMC algorithm that is chosen to control the acceptance probability. Note that if we choose κ = 1 and β = 0, then the proposal distribution q_i(θ_i) equals the posterior distribution p(θ_i | X, θ̂_1, …, θ̂_{i−1}). However, in practice, we found that the acceptance probabilities can be too small, leading to unstable estimates and division by 0 in Equation (10). Therefore, for our experiments we chose κ = 10.
Restricting the hypotheses space
The number of possible clusterings follows the Bell numbers, and therefore it is infeasible to enumerate all possible clusterings, even if the number of variables p is small. It is therefore crucial to restrict the hypotheses space to a subset of all clusterings that is likely to contain the true clustering. We denote this subset by C*.
Algorithm 1: Metropolis-Hastings within Gibbs sampler for sampling from p(θ_{≥i} | X, θ̂_{<i}).
for t from 1 to M do
  for j from i to k + 1 do
    θ_j^{t+1} := MH_j(θ_j^t, ψ)

We suggest using spectral clustering on different estimates of the precision matrix to acquire the set of clusterings C*. A motivation for this heuristic is given in Appendix C.
First, for an appropriate λ, we estimate the precision matrix using the penalized estimator X̂ = argmin_{X ≻ 0} −log|X| + tr(SX) + λ Σ_{i≠j} |X_ij|^q. In our experiments, we take q = 1, which is equivalent to the Graphical Lasso [Friedman et al., 2008] with an l1-penalty on all entries of X except the diagonal.
In the next step, we construct the Laplacian L as defined in Equation (13).
Finally, we use k-means clustering on the eigenvectors of the Laplacian L. The details of acquiring the set of clusterings C* using the spectral clustering method are summarized in Algorithm 2. In Section 7.1 we confirm experimentally that, even in the presence of noise, C* often contains the true clustering, or clusterings that are close to the true clustering.
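The pipeline can be sketched as follows. For brevity, this illustration soft-thresholds the empirical precision matrix instead of solving the graphical lasso; that substitution, and the tiny toy problem, are our assumptions, not the paper's procedure:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

np.random.seed(0)  # kmeans2 draws its initial centers from numpy's global RNG

def spectral_variable_clustering(X, k, lam=0.2):
    """Candidate-generation sketch: estimate a precision matrix, build an
    affinity W_ij = |precision_ij|, and run k-means on the eigenvectors of
    the k smallest eigenvalues of the unnormalized Laplacian."""
    p = X.shape[1]
    S = np.cov(X.T) + 1e-3 * np.eye(p)   # small ridge keeps S invertible
    K = np.linalg.inv(S)
    W = np.abs(K)
    W[W <= lam] = 0.0                    # crude stand-in for the l1 penalty
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    _, labels = kmeans2(vecs[:, :k], k, minit='++')
    return labels

rng = np.random.default_rng(3)
K_true = np.kron(np.eye(2), [[2.0, 0.9], [0.9, 2.0]])  # two clusters of 2 variables
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(K_true), size=2000)
labels = spectral_variable_clustering(X, k=2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
```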
Posterior distribution over number of clusters
In principle, the posterior distribution for the number of clusters can be calculated using p(k | X) = Σ_{C ∈ C_k} p(C | X), where C_k denotes the set of all clusterings whose number of clusters equals k. Since this is computationally infeasible, we use the approximation that sums only over C*_k, the set of all clusterings with k clusters that are in the restricted hypotheses space C*.
Algorithm 2: Spectral Clustering for variable clustering with the Gaussian graphical model.
J := set of regularization parameter values. K_max := maximum number of considered clusters.
for each λ in J do
  estimate the precision matrix with regularization λ and construct the Laplacian L
  (e_1, …, e_{K_max}) := the eigenvectors corresponding to the K_max lowest eigenvalues of L, as defined in Equation (13)
  for each k from 2 to K_max do
    C_{λ,k} := k-means clustering on the eigenvectors (e_1, …, e_k)
    C* := C* ∪ C_{λ,k}
  end for
end for
return restricted hypotheses space C*
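The approximation of p(k | X) above amounts to summing normalized marginal likelihoods of the candidate clusterings, grouped by their number of clusters. A minimal sketch with hypothetical log-marginal-likelihood scores (the numbers below are made up for illustration):

```python
import numpy as np

def posterior_num_clusters(log_ml, ks):
    """Approximate p(k | X) by grouping the candidate clusterings in the
    restricted space C* by their number of clusters. log_ml[i] is
    log p(X | C_i) and ks[i] the number of clusters of C_i."""
    log_ml = np.asarray(log_ml, dtype=float)
    ks = np.asarray(ks)
    w = np.exp(log_ml - log_ml.max())          # numerically stabilized exp
    post = {int(k): w[ks == k].sum() for k in np.unique(ks)}
    Z = sum(post.values())
    return {k: v / Z for k, v in post.items()}

# Hypothetical scores for five candidate clusterings:
post = posterior_num_clusters([-120.0, -118.5, -130.2, -119.0, -140.0],
                              [3, 4, 2, 4, 5])
print(max(post, key=post.get))  # k = 4 dominates for these scores
```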
Simulation Study
In this section, we evaluate the proposed method on simulated data for which the ground truth is available. In Section 7.1, we evaluate the quality of the restricted hypotheses space C*; in Section 7.2, we evaluate the proposed method's ability to select the best clustering in C*.
In all experiments the number of variables is p = 40, and the ground truth is 4 clusters with 10 variables each.
For generating positive-definite covariance matrices, we consider two distributions: InvW(p + 1, I_p) and Uniform_p, each of dimension p. We denote by U ∼ Uniform_p a positive-definite matrix generated by drawing a random symmetric matrix A and shifting its spectrum by its smallest eigenvalue λ_min(A) to make it positive definite. For generating Σ, we sample each block j either from InvW(p_j + 1, I_{p_j}) or from Uniform_{p_j}.
For generating the noise matrix Σ_ε, we sample either from InvW(p + 1, I_p) or from Uniform_p. The final data is then sampled from a zero-mean normal distribution whose precision matrix combines Σ^{-1} with the noise term scaled by η, where η defines the noise level.
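A hedged sketch of this generation scheme (the exact sampling equations are not reproduced in this extraction; we assume here that the data precision combines Σ^{-1} and ηΣ_ε^{-1}, mirroring the model's Ξ with β replaced by η, and we use tiny blocks instead of the paper's 4 clusters of 10 variables):

```python
import numpy as np
from scipy.stats import invwishart
from scipy.linalg import block_diag

rng = np.random.default_rng(5)
block_sizes = [3, 3]  # two clusters (the paper uses 4 clusters of 10 variables)
p = sum(block_sizes)

# Block-diagonal "signal" covariance: each block drawn from InvW(p_j + 1, I).
Sigma = block_diag(*[invwishart.rvs(df=pj + 1, scale=np.eye(pj), random_state=rng)
                     for pj in block_sizes])
# Dense noise covariance.
Sigma_eps = invwishart.rvs(df=p + 1, scale=np.eye(p), random_state=rng)

eta = 0.1  # noise level
# Assumed sampling scheme: precision = Sigma^{-1} + eta * Sigma_eps^{-1}.
Xi = np.linalg.inv(np.linalg.inv(Sigma) + eta * np.linalg.inv(Sigma_eps))
X = rng.multivariate_normal(np.zeros(p), Xi, size=1000)
print(X.shape)
```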
For evaluation we use the adjusted normalized mutual information (ANMI), where 0.0 means that any correspondence with the true labels is at chance level, and 1.0 means that a perfect one-to-one correspondence exists [Vinh et al., 2010]. We repeated all experiments 5 times and report the average ANMI score.
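scikit-learn ships an adjusted mutual information score following [Vinh et al., 2010]; a quick illustration of the two extremes (note its default arithmetic averaging may differ in detail from the paper's ANMI variant):

```python
from sklearn.metrics import adjusted_mutual_info_score

# Ground-truth cluster labels of 8 variables vs. two candidate clusterings.
truth    = [0, 0, 0, 0, 1, 1, 1, 1]
perfect  = [1, 1, 1, 1, 0, 0, 0, 0]   # the same partition, labels permuted
shuffled = [0, 1, 0, 1, 0, 1, 0, 1]   # crosses the true groups completely

a = adjusted_mutual_info_score(truth, perfect)   # 1.0: identical partitions
b = adjusted_mutual_info_score(truth, shuffled)  # around 0: chance level
print(a, b)
```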
Evaluation of the restricted hypotheses space
First, independent of any model selection criterion, we check the quality of the clusterings that are found with the spectral clustering algorithm from Section 6. We also compare to single and average linkage clustering as used in [Tan et al., 2015].
The set of all clusterings that are found is denoted by C * (the restricted hypotheses space).
In order to evaluate the quality of the restricted hypotheses space C*, we report the oracle performance calculated by max_{C ∈ C*} ANMI(C, C_T), where C_T denotes the true clustering, and ANMI(C, C_T) denotes the ANMI score when comparing clustering C with the true clustering. In particular, a score of 1.0 means that the true clustering is contained in C*.
The results of all experiments with noise level η ∈ {0.0, 0.01, 0.1} are shown in Table 1, for balanced clusters, and Table 2, for unbalanced clusters.
From these results we see that the restricted hypotheses space produced by spectral clustering contains around 100 clusterings, considerably fewer than the number of all possible clusterings. More importantly, we also see that the C* acquired by spectral clustering either contains the true clustering or a clustering that is close to the truth. In contrast, the hypotheses space restricted by single and average linkage is smaller, but more often misses the true clustering.
Evaluation of clustering selection criteria
Here, we evaluate the performance of our proposed method for selecting the correct clustering in the restricted hypotheses space C*. We compare our proposed method (variational) with several baselines and two previously proposed methods [Tan et al., 2015, Palla et al., 2012]. Except for the two previously proposed methods, we created C* with the spectral clustering algorithm from Section 6.
As cluster selection criteria, we compare our method to the Extended Bayesian Information Criterion (EBIC) with γ ∈ {0, 0.5, 1} [Chen and Chen, 2008, Foygel and Drton, 2010], the Akaike Information Criterion (AIC) [Akaike, 1973], and the Calinski-Harabasz Index (CHI) [Caliński and Harabasz, 1974]. Note that EBIC and AIC are calculated based on the basic Gaussian graphical model (i.e. the model in Equation 1, but ignoring the prior specification). Furthermore, we note that EBIC is model consistent and therefore, assuming that the true precision matrix contains non-zero entries in each element, will asymptotically choose the clustering that has only one cluster containing all variables. However, as an advantage for EBIC, we exclude that clustering. Furthermore, we note that in contrast to EBIC and AIC, the Calinski-Harabasz Index is not a model-based cluster evaluation criterion: it is a heuristic that uses as clustering criterion the ratio of the variance within and across clusters. As such, it is expected to give reasonable clustering results if the noise is considerably smaller in magnitude than the within-cluster partial correlations.
We remark that EBIC and AIC are not well defined if the sample covariance matrix is singular, in particular if n < p or n ≈ p. As an ad-hoc remedy, which works well in practice, we always add 0.001 times the identity matrix to the covariance matrix (see also Ledoit and Wolf [2004]).
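This remedy is a one-line ridge; the following sketch shows it restoring positive definiteness when n < p:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 20, 10                       # n < p: the sample covariance is singular
X = rng.standard_normal((n, p))
S = np.cov(X.T)

print(np.linalg.matrix_rank(S) < p)           # S is rank deficient
S_reg = S + 1e-3 * np.eye(p)                  # the ad-hoc ridge from the text
print(bool(np.all(np.linalg.eigvalsh(S_reg) > 0)))  # now positive definite
```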
Finally, we also compare the proposed method to two previous approaches for variable clustering: the Clustered Graphical Lasso (CGL) as proposed in [Tan et al., 2015], and the Dirichlet Process Variable Clustering (DPVC) model as proposed in [Palla et al., 2012], for which an implementation is available. DPVC models the number of clusters using a Dirichlet process. CGL performs model selection using the mean squared error for recovering randomly left-out elements of the covariance matrix, and uses for clustering either the single linkage clustering (SLC) or the average linkage clustering (ALC) method. For conciseness, we show only the results for ALC, since they tended to be better than those for SLC.
The results of all experiments with noise level η ∈ {0.0, 0.01, 0.1} are shown in Tables 3 and 4, for balanced clusters, and Tables 5 and 6, for unbalanced clusters.
The tables also contain the performance of the proposed method for β ∈ {0, 0.01, 0.02, 0.03}. Note that β = 0.0 corresponds to the basic inverse Wishart prior model, for which we can calculate the marginal likelihood analytically.
Comparing the proposed method with different β, we see that β = 0.02 offers good clustering performance in both the no-noise and the noisy setting. In contrast, model selection with EBIC and AIC performs, as expected, well in the no-noise scenario; however, in the noisy setting they tend to select incorrect clusterings. In particular, for large sample sizes EBIC tends to fail to identify correct clusterings.
The Calinski-Harabasz Index performs well in the noisy settings, whereas in the no-noise setting it performs unsatisfactorily.
In Figures 1 and 2, we show the posterior distribution without and with noise on the precision matrix, respectively. In both cases, given that the sample size n is large enough, the proposed method is able to correctly estimate the number of clusters. In contrast, the basic inverse Wishart prior model underestimates the number of clusters for large n when there is noise in the precision matrix.
Comparison of variational and MCMC estimate
Here, we compare our variational approximation with MCMC on a small-scale simulated problem where it is computationally feasible to estimate the marginal likelihood with MCMC. We generated synthetic data as in the previous section, with the only difference that we set the number of variables p to 12.
The number of samples M for MCMC was set to 10000, of which we used 10% as burn-in. For two randomly picked clusterings for n = 12 and n = 1200000, we checked the acceptance rates and convergence using the multivariate extension of the Gelman-Rubin diagnostic [Brooks and Gelman, 1998]. The average acceptance rates were around 80% and the potential scale reduction factor was 1.01.
The runtime of MCMC was around 40 minutes for evaluating one clustering, whereas for the variational approximation the runtime was around 2 seconds. The results are shown in Table 7, suggesting that the quality of the selected clusterings using the variational approximation is similar to MCMC.
Real Data Experiments
In this section, we investigate the properties of the proposed model selection criterion on three real data sets. In all cases, we use the spectral clustering algorithm from Appendix C to create cluster candidates. All variables were normalized to have mean 0 and variance 1. For all methods except DPVC, the number of clusters is considered to be in {2, 3, 4, …, min(p − 1, 15)}; DPVC automatically selects the number of clusters by assuming a Dirichlet process prior. We evaluated the proposed method with β = 0.02 using the variational approximation.
Mutual Funds
Here we use the mutual funds data, which has been previously analyzed in [Scott and Carvalho, 2008, Marlin et al., 2009]. The data contains 59 mutual funds (p = 59) grouped into 4 clusters: U.S. bond funds, U.S. stock funds, balanced funds (containing U.S. stocks and bonds), and international stock funds. The number of observations is 86.
The results of all methods are visualized in Table 8. It is difficult to interpret the results produced by EBIC (γ = 1.0), AIC, and the Calinski-Harabasz Index. In contrast, the proposed method and EBIC (γ = 0.0) produce results that are easier to interpret. In particular, our results suggest that there is a considerable correlation between the balanced funds and the U.S. stock funds, which was also observed in Marlin et al. [2009].
In Figure 3 we show a two-dimensional representation of the data that was found using Laplacian Eigenmaps [Belkin and Niyogi, 2003]. The figure supports the claim that balanced funds and U.S. stock funds have similar behavior.
Gene Regulations
We also tested our method on the gene expression data that was analyzed in [Hirose et al., 2017]. The data consists of 11 genes with 445 gene expressions. The true gene regulations are known in this case and are shown in Figure 4, adapted from [Hirose et al., 2017]. The most important fact is that there are two independent groups of genes, and any clustering that mixes these two can be considered wrong.
We show the results of all methods in Figure 5, where we mark each cluster with a different color superimposed on the true regulation structure. Here only the clusterings selected by the proposed method, EBIC (γ = 1.0), and the Calinski-Harabasz Index correctly divide the two groups of genes.
Aviation Sensors
As a third data set, we use the flight aviation data set from NASA. The data set contains sensor information sampled from airplanes during operation. We extracted the information of 16 continuous-valued sensors that were recorded for different flights, with 25,032,364 samples in total.
The clustering results are shown in Table 9. The data set does not have any ground truth, but the clustering result of our proposed method is reasonable: Cluster 9 groups sensors that measure or affect altitude; Cluster 8 correctly clusters the left and right sensors for measuring the rotation around the axis pointing through the nose of the aircraft; and in Cluster 2 all sensors that measure the angle between chord and flight direction are grouped together. It also appears reasonable that the yellow hydraulic system of the left part of the plane has little direct interaction with the hydraulic system of the right part (Cluster 1 and Cluster 4), and that the sensor for the rudder, influencing the direction of the plane, is mostly independent of the other sensors (Cluster 5).
In contrast, the clusterings selected by the basic inverse Wishart prior, EBIC, and AIC are difficult to interpret. We note that we did not compare to DPVC, since the large number of samples made the MCMC algorithm of DPVC infeasible.
Discussion and Conclusions
We have introduced a new method for evaluating variable clusterings based on the marginal likelihood of a Bayesian model that takes into account noise on the precision matrix. Since the calculation of the marginal likelihood is analytically intractable, we proposed two approximations: a variational approximation and an approximation based on MCMC. Experimentally, we found that the variational approximation is considerably faster than MCMC and also leads to accurate model selection.
We compared our proposed method to several standard model selection criteria. In particular, we compared to BIC and extended BIC (EBIC), which are often the methods of choice for model selection in Gaussian graphical models. However, we emphasize that EBIC was designed to handle the situation where p is of the order of n, and has not been designed to handle noise. As a consequence, our experiments showed that in practice its performance depends highly on the choice of the γ parameter. In contrast, the proposed method, with fixed hyper-parameters, shows better performance on various simulated and real data.
We also compared our method to two other previously proposed methods, namely the Clustered Graphical Lasso (CGL) [Tan et al., 2015] and Dirichlet Process Variable Clustering (DPVC) [Palla et al., 2012], which perform clustering and model selection jointly. However, it appears that in many situations the model selection algorithm of CGL is not able to detect the true model, even if there is no noise. On the other hand, the Dirichlet process assumption of DPVC appears to be very restrictive, again leading to many situations where the true model (clustering) is missed. Overall, our method performs better in terms of selecting the correct clustering on synthetic data with ground truth, and selects meaningful clusters on real data.
The python source code for variable clustering and model selection with the proposed method and all baselines is available at https://github.com/andrade-stats/robustBayesClustering.
A Convergence of 3-block ADMM
We can write the optimization problem in (3) as a sum of three functions f_1, f_2, and f_3. First note that the functions f_1, f_2 and f_3 are convex, proper, closed functions. Since X_ε, X_1, …, X_k ≻ 0, we have, due to the equality constraint, that Z ≻ 0. Assuming that the global minimum is attained, we can assume that Z ⪯ σI for some large enough σ > 0. As a consequence, we have that ∇²f_3(Z) = Z^{-1} ⊗ Z^{-1} ⪰ σ^{-2} I, and therefore f_3 is a strongly convex function. Analogously, f_1 and f_2 are strongly convex functions, and therefore also coercive. This allows us to apply Theorem 3.2 in [Lin et al., 2015], which guarantees the convergence of the 3-block ADMM.
B Derivation of variational approximation
Here, we give more details of the KL-divergence minimization from Section 5.2. Recall that the remaining parameters ν_{g,ε} ∈ R and ν_{g,j} ∈ R are optimized by minimizing the KL-divergence between the factorized distribution g and the posterior distribution p(Σ_ε, Σ_1, …, Σ_k | x_1, …, x_n, η, C). Expanding the KL-divergence yields an expression in which c is a constant with respect to g_ε and g_j. However, the term E_{g_J, g_ε}[log |Σ^{-1} + βΣ_ε^{-1}|] cannot be computed analytically, therefore we need to resort to an approximation. Under this approximation, we see that we can optimize the parameters of g_ε and g_j independently from each other, and obtain the optimal parameter ν̂_{g,ε} for g_ε.

Proposition 1. Optimization problems 1 and 2 have the same solution. Moreover, the m-dimensional null space of L can be chosen such that each basis vector is the indicator vector for one variable block of X.
Proof. First let us define the matrix X̃ by X̃_ij := |X_ij|^q. Then clearly, if X is block sparse with m blocks, so is X̃. Furthermore, X̃_ij ≥ 0, and L is the unnormalized Laplacian as defined in [Von Luxburg, 2007]. We can therefore apply Proposition (2) of [Von Luxburg, 2007] to find that the dimension of the eigenspace of L corresponding to eigenvalue 0 is exactly the number of blocks in X̃. Also from Proposition (2) of [Von Luxburg, 2007] it follows that each such eigenvector e_k ∈ R^p can be chosen such that it indicates the variables belonging to the same block, i.e. e_k(i) ≠ 0 iff variable i belongs to block k.
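This null-space property is easy to verify numerically. The following sketch builds the Laplacian from a small block-sparse symmetric matrix with q = 1 and counts the zero eigenvalues, which should equal the number of blocks:

```python
import numpy as np

# Block-sparse symmetric matrix with m = 2 variable blocks: {0,1,2} and {3,4}.
X = np.zeros((5, 5))
X[:3, :3] = [[2.0, 0.7, 0.3], [0.7, 2.0, 0.5], [0.3, 0.5, 2.0]]
X[3:, 3:] = [[1.0, -0.4], [-0.4, 1.0]]

W = np.abs(X)                   # tilde-X with q = 1 as affinity
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W  # unnormalized Laplacian

eigvals = np.linalg.eigvalsh(L)
num_zero = int(np.sum(eigvals < 1e-10))
print(num_zero)                 # 2: one zero eigenvalue per block
```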
Using the nuclear norm as a convex relaxation of the rank constraint, we minimize the resulting objective with an appropriately chosen λ_m. By the definition of L, L is positive semi-definite, and therefore ‖L‖_* = trace(L). As a consequence, we can rewrite the above problem accordingly. Finally, for the purpose of learning the Laplacian L, we ignore the term βX and set it to zero. This will necessarily lead to an estimate of X* that is not a clean block matrix but has small non-zero entries between blocks. Nevertheless, spectral clustering is known to be robust to such violations [Ng et al., 2002]. This leads to Algorithm 2 in Section 6.
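The step ‖L‖_* = trace(L) relies only on L being positive semi-definite (the singular values of a PSD matrix coincide with its eigenvalues), which is easy to confirm:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
L = A @ A.T                    # any positive semi-definite matrix
# nuclear norm = sum of singular values = sum of eigenvalues = trace for PSD L
assert np.isclose(np.linalg.norm(L, 'nuc'), np.trace(L))
```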
Figure 1: Posterior distribution of the number of clusters of the proposed method (top row) and the basic inverse Wishart prior model (bottom row). Ground truth is 4 clusters; there is no noise on the precision matrix.
Figure 2: Posterior distribution of the number of clusters of the proposed method (top row) and the basic inverse Wishart prior model (bottom row). Ground truth is 4 clusters; noise was added to the precision matrix.
Figure 3: Two dimensional representation of the mutual funds data.
Table 9: Evaluation of selected clusterings of the Aviation Sensor Data with 16 variables. Here the size of the restricted hypothesis space |C*| found by spectral clustering was 28.
Control Approach for Networked Control Systems with Deadband Scheduling Scheme
Due to the bandwidth constraints in networked control systems (NCSs), a deadband scheduling strategy is proposed to reduce the data transmission rate of network nodes. A discrete-time model of NCSs is established with both deadband scheduling and network-induced time-delay. By employing the Lyapunov functional and LMI approach, a state feedback H∞ controller is designed to ensure that the closed-loop system is asymptotically stable with a prescribed H∞ performance index. Simulation results show that the introduced deadband scheduling strategy can maintain the control performance of the system while effectively reducing the nodes' data transmission rate.
Introduction
Networked control systems (NCSs) are control systems in which the control loop is closed over a wired or wireless communication network. They have received a great deal of attention in recent years owing to their successful applications in a wide range of areas such as industrial automation, aerospace, and nuclear power stations [1,2]. Compared with traditional point-to-point communication, NCSs have the advantages of low cost, easy installation and maintenance, great reliability, and so forth. However, due to the introduction of communication networks, they also incur new issues such as network-induced delays, packet dropouts, and limited bandwidth resources, which make the analysis and design of NCSs more complex [3,4].
Many results for NCSs have been reported in the literature to handle network-induced delays, packet dropouts, and communication constraints; see [5][6][7][8][9][10][11][12] and the references therein. It should be pointed out that most of the available results make use of a time-driven sampling and communication scheme, since it is relatively easy to implement and there is a well-established system theory for periodic signals. However, a time-driven communication scheme is not desirable in many control applications. For example, in NCSs with limited bandwidth resources, frequent data transmission will increase the network collision probability when there are many nodes on the network, thereby increasing the communication delay and data dropouts and leading to poor performance and instability of the systems [3]. On the other hand, as is well known, in wireless networked control systems (WNCSs) a main constraint of wireless devices is the limited battery life, and wireless transmission consumes significantly more energy than internal computation [13]; thus reducing the data transmission rate is particularly important in WNCSs. For the above two cases, a time-driven communication scheme is not suitable, since its transmission rate is generally high, which results in inefficient utilization of limited resources such as network bandwidth and energy. Therefore, how to design a reasonable scheduling strategy to reduce the use of constrained resources while ensuring the performance of NCSs has become one of the research hotspots.
Deadband scheduling techniques (i.e., setting a transmission deadband for a network node so that the node does not transmit a new message when the node signal, or its change, lies within the deadband) can effectively reduce the use of network bandwidth and energy consumption, and their algorithms are easy to realize; they have therefore received increasing attention in recent years [14][15][16][17][18]. Besides, numerous related concepts have been proposed in the literature, such as send-on-delta sampling [19][20][21], event-based sampling [22][23][24], and event-triggered sampling [25,26]. Despite the many names, the basic principle is the same. In [14], a deadband method was introduced into NCSs for the first time, in which transmission deadbands were set in the sensor and controller to reduce the data transmission rate, and the deadband threshold optimization problem was also discussed. In [15], the relationship between the deadband threshold and the control performance was analyzed by simulation. The paper [23] used the deviation of two adjacent states beyond the deadband threshold to drive the nodes' data transmission, with the deadband threshold value dynamically selected in accordance with the round-trip delay. In [16,17], the stability of the system with deadband scheduling was investigated; however, the network delay was not considered in [16], while in [17] the nodes must be synchronized and the network delay must be measurable. The paper [18] proposed a signal difference-based transmission deadband scheduling strategy, established a continuous-time model of WNCSs with both the probability distribution of delay and parametric uncertainties, and designed the H∞ controller. In [26], for a class of uncertain continuous-time NCSs with quantization, the codesign of the controller and the event-triggering scheme was proposed by using a delay system approach.
Until now, although some important pieces of work on deadband scheduling schemes in NCSs have been reported, with great significance for both theoretical development and practical applications, it is worth noting that the obtained results mostly focus on system simulation and performance analysis; few papers have solved the problems of controller design and synthesis, which are more useful and challenging than the issue of performance analysis. In addition, to the best of our knowledge, few related results have been established for discrete-time NCSs with deadband scheduling, which motivates the work of this paper.
In this paper, we propose a deadband scheduling scheme to save the limited bandwidth resources while guaranteeing the desired H∞ control performance. Considering the influence of uncertain short time-delay, the NCS with both deadband scheduling and time-delay is modeled as a discrete-time system with parameter uncertainties. By the Lyapunov functional and LMI approach, the H∞ control problem is investigated. Finally, a numerical example is given to show the usefulness of the derived results.
The rest of this paper is organized as follows. Section 2 gives a discrete-time model of the closed-loop system. In Section 3, a state feedback H∞ controller is designed to ensure that the closed-loop system is asymptotically stable with a prescribed H∞ performance index. Section 4 demonstrates the validity of the proposed method through a numerical example. Conclusions are given in Section 5.
Notation. The notation used throughout this paper is fairly standard. Rⁿ denotes the n-dimensional Euclidean space. ‖·‖₂ refers to the Euclidean vector norm. l₂[0, ∞) is the space of square-summable infinite sequences. I and 0 represent the identity matrix and the zero matrix with appropriate dimensions, respectively. diag{···} stands for a diagonal matrix. δx(k) denotes the difference x(k) − x(k − 1). The superscripts "T" and "−1" represent the matrix transpose and inverse, and "∗" denotes a term that is induced by symmetry.
Problem Description and Modeling
The networked control system with deadband scheduling considered in this paper is shown in Figure 1, where the deadband schedulers (DS1, DS2) are placed in the sensor and the controller, respectively, τ_sc is the sensor-to-controller delay, and τ_ca is the controller-to-actuator delay.
Consider the following continuous plant model:

ẋ(t) = A x(t) + B₁ v(t) + B₂ ω(t),  y(t) = C x(t),  (1)

where x(t) ∈ Rⁿ is the state vector of the plant, v(t) is the input vector, y(t) is the output vector, and ω(t) is the disturbance input; A, B₁, B₂, and C are known real constant matrices with appropriate dimensions. We make the following assumptions about the system.
(1) In the smart sensor, the sampler is time-driven with a sampling period h; both the controller and the actuator are event-driven. (2) The total network-induced time-delay τ = τ_sc + τ_ca is time-varying and nondeterministic, and satisfies 0 ≤ τ ≤ h.
Thus, the discretized equation of the plant can be described as [27]

x(k + 1) = Φ x(k) + Γ₀(τ_k) v(k) + Γ₁(τ_k) v(k − 1),  (2)

where the delay-dependent uncertain terms satisfy Fᵀ(k)F(k) ≤ I. (3)

Remark 1. Limited by space, the detailed discretization process for system (1) with uncertain short time-delay is omitted in this paper and can be found in [27]. From (2) and (3), we know that the continuous plant with uncertain short time-delay in NCSs can be modeled as a discrete linear system with parameter uncertainties.
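The structure of this discretization can be sketched with the usual zero-order-hold derivation under an input delay τ ≤ h (an outline of the standard construction in [27]; the function and variable names below are illustrative, not the paper's exact matrices):

```python
import numpy as np
from scipy.linalg import expm

def zoh_with_delay(A, B, h, tau):
    """Discretize xdot = A x + B v with sampling period h and input delay tau <= h.
    Returns Phi, Gamma0, Gamma1 with x(k+1) = Phi x(k) + Gamma0 v(k) + Gamma1 v(k-1)."""
    n = A.shape[0]
    def S(T):                         # S(T) = integral_0^T e^{A s} ds via augmented expm
        M = np.zeros((2 * n, 2 * n))
        M[:n, :n] = A
        M[:n, n:] = np.eye(n)
        return expm(M * T)[:n, n:]
    Phi = expm(A * h)
    Gamma0 = S(h - tau) @ B           # part of the interval driven by the new input v(k)
    Gamma1 = (S(h) - S(h - tau)) @ B  # part still driven by the old input v(k-1)
    return Phi, Gamma0, Gamma1

# a double-integrator plant (ball-and-beam-like) with h = 0.5 and tau = 0.2
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Phi, G0, G1 = zoh_with_delay(A, B, 0.5, 0.2)
```

With τ = 0 this reduces to the standard ZOH pair and Γ₁ = 0; letting τ range over [0, h] produces the delay-dependent uncertain terms that the norm bound in (3) captures.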
Description of Deadband Schedulers
The signal is transmitted only when the difference between the current signal and the previously transmitted signal is greater than the error threshold. Accordingly, the deadband schedulers designed in this paper are error threshold-based deterministic schedulers. The working mechanism of deadband scheduler 1 (DS1) in the sensor can be described as follows, where i = 1, 2, . . ., n, and the symbols denote, respectively, the given error threshold values (each in [0, 1]), the output signals, and the input signals of DS1.
Remark 2. In this paper, the effect of active packet dropouts under the deadband scheduling scheme is modeled as a bounded uncertain item of the transmission signal. The main advantages of this modeling method are as follows: (1) the nonlinear relationship between the input and output of the deadband scheduler is converted to a linear relationship with uncertain parameters; (2) since the uncertain parameters related to the deadband threshold values are bounded, it is easier to merge the scheduling policy parameters into the system model. Similarly, the working mechanism of deadband scheduler 2 (DS2) in the controller can be described analogously; from (7), the input-output relationship of DS2 can be converted to a linear form with a bounded uncertainty satisfying F₂ᵀ(k)F₂(k) ≤ I. From the above, we know that after the introduction of deadband schedulers into NCSs the signals do not need to be transmitted at every sampling period, so the data transmission rate and the effect of bandwidth constraints on the system are reduced. In addition, the principles of the considered schedulers are simple and do not require a lot of computing and data storage.
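The error-threshold rule itself is easy to simulate; a minimal sketch (the threshold values and test signal below are illustrative, not the paper's example):

```python
import numpy as np

def deadband_transmit(signal, delta):
    """Send a sample only when it deviates from the last sent value by more than delta."""
    sent_count, last = 0, None
    held = np.empty_like(signal)
    for k, x in enumerate(signal):
        if last is None or abs(x - last) > delta:
            last = x                 # transmit a new value
            sent_count += 1
        held[k] = last               # receiver holds the last transmitted value
    return held, sent_count / len(signal)

t = np.linspace(0, 10, 1001)
x = np.exp(-0.3 * t) * np.sin(2 * t)          # a decaying closed-loop-like response
_, rate_small = deadband_transmit(x, 0.01)    # small deadband: many transmissions
_, rate_large = deadband_transmit(x, 0.10)    # large deadband: few transmissions
assert rate_large < rate_small <= 1.0
```

A larger deadband lowers the transmission rate at the cost of a coarser received signal, which is precisely the MTR-versus-IAE trade-off examined in the numerical example of Section 4.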
Design of 𝐻 ∞ Controller
In this section, we will investigate the H∞ control problem for the closed-loop system (10). Throughout this paper, we will use the following lemmas.
Main Results.
Based on the Lyapunov functional method and H∞ theory [28], the following conclusions can be obtained.
Remark 8. Notice that the matrix inequality (16) in Theorem 7 is a bilinear matrix inequality due to the presence of an inverse term. Generally, it can be solved by the linear approach [30] or the cone complementarity linearization (CCL) method [31]. By comparison, the CCL result is less conservative [32] and is therefore employed in this paper.
Corollary 9. The bilinear matrix inequality (16) can be transformed by the CCL method into the following objective-function minimization problem: find the minimizer subject to the constraints given below. Since Corollary 9 turns the nonconvex feasibility problem of Theorem 7 into the minimization of a nonlinear objective function under linear matrix inequality constraints, it can be solved by the following iterative algorithm.
Step 3. Substitute the solutions Ξ* into the matrix inequality (16). Thus, a state feedback H∞ controller can be obtained for NCSs with both deadband scheduling and uncertain short time-delay. More specifically, if there are no deadband schedulers in the NCSs shown in Figure 1, the closed-loop system in (10) reads

x(k + 1) = Φ x(k) + Γ ω(k),  y(k) = C x(k),  (31)

with the matrices defined accordingly. Then, we have the following corollary, which can be proved along similar lines as in the proof of Theorem 7.
Corollary 11. Consider the NCSs in Figure 1, but without the deadband schedulers. For a given scalar γ > 0, the closed-loop system (31) is asymptotically stable with H∞ performance γ if there exist a symmetric positive-definite matrix, a feedback gain matrix, and a scalar satisfying the corresponding matrix inequality (33). Similarly, the bilinear matrix inequality (33) can be solved by the CCL method and the iterative algorithm in Corollary 9; the details are thus omitted.
Numerical Example
In this section, a numerical example is introduced to demonstrate the effectiveness of the proposed method. Consider a ball and beam system with the parameters of [33]. In this example, we choose h = 0.5, and τ ∈ [0, h] is time-varying and nondeterministic. According to (2), the mean data transmission rate (MTR) is the ratio of the number of data packets transmitted with deadband schedulers to the number transmitted without them over the runtime, and IAE denotes the control performance of the system. Under three different random time-delay sequences, the performance of MTR and IAE is shown for the system with the above four error threshold values in Figures 2 and 3, respectively. It can easily be seen that, compared with the system without deadband schedulers (case 1), although the control performance of the system using the deadband scheduling scheme (cases 2-4) is slightly worse (Figure 3), the mean data transmission rate of the system is greatly reduced (Figure 2). Figures 4-6 show the simulation results for the system in which the error threshold values are Λ₁ = diag{0.07, 0.07} and Λ₂ = 0.05, the feedback gain matrix is [−0.4185 −0.9605] according to Table 1, and the time-delay follows the first sequence. It can be seen that the closed-loop system is asymptotically stable (Figure 4), and only part of the sampled data and control signal are transmitted under the proposed deadband scheduling scheme (Figures 5 and 6; the transmission intervals of the second signal are similar to those of the first signal in DS1 and are thus omitted).
Furthermore, under zero initial conditions, we obtain ‖y‖₂ = 0.7310 and ‖ω‖₂ = 0.3317, which yields γ* = ‖y‖₂/‖ω‖₂ ≈ 2.20. This means that the practical disturbance attenuation level is smaller than the given level γ = 5, which shows the effectiveness of the proposed H∞ controller design method.
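The reported attenuation level follows directly from the two norms, γ* = ‖y‖₂/‖ω‖₂:

```python
# practical disturbance attenuation level from the reported norms
gamma_star = 0.7310 / 0.3317
assert abs(gamma_star - 2.20) < 0.01   # matches the value quoted above
assert gamma_star < 5.0                # below the prescribed level gamma = 5
```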
Conclusions
In this paper, a discrete-time model for NCSs with both deadband scheduling and time-delay has been established and the H∞ control problem has been investigated. Based on the LMI approach, a state feedback H∞ controller has been designed to ensure that the closed-loop system is asymptotically stable with a prescribed H∞ performance index. A numerical example has been provided to show the validity of the derived results. Since the principles and algorithms of the deadband schedulers in this paper are very simple, the smart sensor and controller are easy to implement. In addition, simulation results show that the scheme can effectively reduce the nodes' data transmission rate, so it is very suitable for NCSs with limited bandwidth resources.
Figure 1: Model of networked control system with deadband scheduling.
Remark 6. Due to the introduction of a symmetric positive-definite matrix instead of a scalar in Lemma 5, the resulting conditions are expected to be less conservative than those of Lemma 4.
Figure 2: The performance of MTR.
Figure 4: State response curves of the closed-loop system.
Figure 6: Comparisons of the transmission interval of v(k).
For a given disturbance attenuation level γ = 5, based on the LMI toolbox and applying Corollaries 9 and 11, the feedback gain matrix values obtained with different error threshold values Λ₁, Λ₂ are given in Table 1. It is obvious from Table 1 that for the given level γ we can find feasible feedback gain matrix values when Λ₁ and Λ₂ lie within certain ranges.
Interactive mixture of inhomogeneous dark fluids driven by dark energy: a dynamical system analysis
We examine the evolution of an inhomogeneous mixture of non-relativistic pressureless cold dark matter (CDM), coupled to dark energy (DE) characterised by the equation of state parameter w < −1/3, with the interaction term proportional to the DE density. This coupled mixture is the source of a spherically symmetric Lemaître-Tolman-Bondi (LTB) metric admitting an asymptotic Friedman-Lemaître-Robertson-Walker (FLRW) background. Einstein's equations reduce to a 5-dimensional autonomous dynamical system involving quasi-local variables related to suitable averages of covariant scalars and their fluctuations. The phase space evolution around the critical points (past/future attractors and five saddles) is examined in detail. For all parameter values and both directions of energy flow (CDM to DE and DE to CDM) the phase space trajectories are compatible with a physically plausible early-cosmic-times behaviour near the past attractor. This result compares favourably with mixtures with interaction driven by the CDM density, whose past evolution is unphysical for DE-to-CDM energy flow. Numerical examples are provided describing the evolution of an initial profile that can be associated with idealised structure formation scenarios.
Introduction
Current observational evidence supports the existence of an accelerated cosmic expansion, likely driven by an unknown form of matter-energy, generically denoted "dark energy" (DE), and usually described by suitable scalar fields or (phenomenologically) as a fluid with negative pressure [1][2][3][4].
Observations also point to the existence of cold dark matter (CDM) clustering around galactic halos, usually described on cosmological scales by pressure-less dust, while ordinary visible matter (baryons, electrons and neutrinos) and photons (radiation) comprise less than 5% of the total contents of cosmic mass-energy.
While both dark sources only interact with ordinary matter and radiation through gravitation, it is very reasonable to assume that there is some form of interaction between them. This assumption cannot be ruled out, given our ignorance on the fundamental nature of these sources. In fact, potentially useful information on the primordial physics behind dark sources may emerge by fitting various assumptions of such interactions to observational data, given the fact that interactive DE and CDM is consistent with the dynamics of galaxy clusters [5] and the integrated Sachs-Wolfe effect [6]. Several models of coupled dark sources can also be found in the literature motivated by particle physics, thermodynamics, etc. [1,2,[7][8][9][10][11].
Observations also suggest that at sufficiently large scales the Universe is well described by linear perturbations of all sources (dark and visible) on a homogeneous Friedman-Lemaître-Robertson-Walker (FLRW) background metric, with non-linear dynamics (whether Newtonian or relativistic) needed to explain the observed local structure [12]. The interplay of local and cosmic dynamics at all scales must comply with the observed anisotropy of the Cosmic Microwave Background (CMB) [1][2][3][4]. Evidently, this dynamics depends on the assumptions made about DE and CDM, which leads to a model-dependent power spectrum that should be contrasted with observations at large scales and in structure formation (from data and from numerical simulations). The observed data should provide interesting constraints on assumptions about the dark sources. In particular, different DE models have been considered in the linear-order perturbation scheme in the literature [13][14][15][16].
Conventionally, structure formation scenarios are studied through non-linear Newtonian dynamics (analytically [17,18] and through numerical simulations [19]; see the review [20]), since CDM is assumed to be practically pressure-less and DE can be modelled (or approximated) by a cosmological constant. However, once we assume a fully dynamical DE source with non-trivial pressure and non-trivial interaction with CDM, it is necessary to utilise General Relativity (whether perturbatively or not) to obtain a valid description of its evolution, since Newtonian gravity can (at best) mimic sources with pressure when adiabatic conditions are assumed (see the discussion in [21]). While considering a scalar field is the most common approach to dynamical DE, a phenomenological description by means of fluids with negative pressure can also be useful. Ideally, fully relativistic inhomogeneous DE and CDM interacting sources should be examined through high-power numerical relativistic codes (whether assuming a continuous modelling or N-body simulations). However, since the latter codes are in their early development stages [22][23][24][25], we can resort to a more idealised (yet still relativistic and non-perturbative) description by means of inhomogeneous exact solutions of Einstein's equations.
In most of the literature the term "Lemaître-Tolman-Bondi (LTB) models" is broadly understood to denote spherically symmetric exact solutions endowed with the LTB metric and associated with a pure dust source [26][27][28]. These solutions have been extensively studied (with zero and nonzero cosmological constant) and used in a wide range of astrophysical and cosmological modelling (see extensive reviews in [29][30][31][32]). In particular, a better understanding of their theoretical properties follows by describing their dynamics in terms of "quasi-local scalars" [33][34][35][36] (to be denoted henceforth as "q-scalars"), which are related to averages of standard covariant scalars and satisfy FLRW dynamical equations and scaling laws [35].
Since the deviation from a homogeneous FLRW background can be uniquely determined (in a covariant manner [35]) by fluctuations that relate the q-scalars and the standard covariant scalars, the full dynamics of Einstein's equations is equivalent to the dynamics of the q-scalars and their fluctuations (see discussion in [36]). The q-scalars and their fluctuations allow for a consistent dynamical systems study of the models (with zero cosmological constant in [37,38] and nonzero in [39]). An important theoretical connection with cosmological perturbation theory follows from the fact that the fluctuations of the q-scalars provide an exact analytic (and covariant) generalisation of gauge invariant cosmological perturbations in the isochronous gauge [38,40].
It is less well known that LTB metrics admit energy-momentum tensors with nonzero pressure in a comoving frame. For a perfect fluid the pressure must be uniform (zero pressure gradients), which allows us to interpret the source as a mixture composed of a homogeneous DE fluid interacting with inhomogeneous dust representing CDM (see [41]). For fluids with anisotropic pressure, nontrivial pressure gradients are supported, leading to a similar description in terms of q-scalars and their fluctuations as in pure dust LTB models. For anisotropic fluids the anisotropy of the pressure can be related to the fluctuation of the q-scalar associated with the isotropic pressure. Since setting up fluid mixtures is possible and a wide variety of equations of state are admissible, LTB metrics with these sources have been used to model inhomogeneous mixtures of DE and CDM [42,43].
In order to extend earlier work in [42,43], and to explore a generalisation of previous work in [39] that considered DE as a cosmological constant, we recently undertook [44] a dynamical systems analysis of an LTB interactive mixture of CDM (dust) and DE (a fluid with p/ρ = w, w = const. < −1/3), under the assumption of an interaction driven by CDM: i.e. the interaction term J is proportional (via a dimensionless constant α) to the CDM dust density.
In the present paper we undertake a dynamical systems analysis of a similar (yet qualitatively different) configuration: we assume the same EOS for CDM and DE, but with the interaction now driven by DE, with J proportional to the DE density. As we show throughout the paper, the resulting evolution is qualitatively different in the two cases. Strictly speaking, we should also consider a quasi-homogeneous radiation source that is dominant at early cosmic times. However, as we show in Appendix A, the radiation term does not change the qualitative past evolution and has negligible effects at large cosmic times. Therefore we will not consider this source in the analysis of the CDM-DE mixture.
The phase space of both (CDM and DE driven) mixtures contains as critical points five saddles and past/future attractors, but the attractors have very different properties: • The CDM driven mixture examined in [44]: the phase space position of the past attractor (Big Bang) depends on the parameters α and w. For α > 0 (energy transfer from DE to CDM) the past attractor describes a well behaved CDM dominated scenario. However, for α < 0 (energy transfer from CDM to DE) the DE density becomes negative in phase space regions around the past attractor (irrespective of initial conditions), thus signalling an unphysical past evolution that is inconsistent with all observational data. This problem was already noticed in the literature for this type of CDM-DE mixtures based on FLRW metrics [1,2,7,8]. • The DE driven mixture examined here: as opposed to the system in [44], the past attractor is now fixed and the future attractor depends on the choice of parameters α and w. Thus, regardless of the sign of α (the directionality of the interaction energy flow), the past evolution is now a physically plausible CDM dominated scenario (compatible with observations), while the position of the future attractor describes various plausible DE dominated scenarios that depend on the parameter choices. For α < 0 the future attractor is unphysical (the CDM density becomes negative); in this case, the phase space evolution must be appropriately restricted.
Since all observations examine cosmic evolution along our past null cone, the physical plausibility of the models must be determined primarily from their past phase space evolution (the future evolution is much more open to speculation). Hence, a comparison between our results and those of [44] suggests that the DE driven mixture should be favoured, as it exhibits a wider parameter consistency with a past evolution compatible with observations. The plan of the article is summarised as follows. In Sect. 2 we present the q-scalar formalism to set up the evolution equations for an LTB metric whose source is a mixture of CDM and DE with the coupling term proportional to the DE density. In Sect. 4 the resulting dynamical system is studied and the critical points are classified for different ranges of the free parameters. In Sect. 5 we define a set of initial conditions for the system obtained from a general initial profile and then numerically solve the evolution equations for three different choices of the free parameters, leading to three different evolutions. In Sect. 6 we summarise the findings. Finally, in Appendix A we prove that adding a radiation source does not change the qualitative evolution of the CDM-DE mixture. Throughout the paper we consider units for which c = 1.
LTB spacetimes, q-scalar variables and coupled dark energy model
Following the methodology described in [42,43] (summarised in [44]), we consider an LTB metric in a comoving frame, where R = R(t, r), R′ = ∂R/∂r, K = K(r), with the total energy-momentum tensor (2) given in terms of ρ(t, r) and p(t, r), the total energy density and isotropic pressure, while the traceless anisotropic pressure tensor is Π^a_b = P(t, r) diag[0, −2, 1, 1]. Structure formation in standard cosmology occurs when the expansion of the universe is dominated by non-relativistic matter (dust) and the radiation source present has a much smaller energy density than the dust source. Since we are interested in structure formation scenarios in LTB metrics, we assume that the energy-momentum of our metric is a CDM and DE fluid mixture with the following decomposition of (2), where the indices "(m)" and "(e)" (whether above or below) respectively denote the CDM and DE mixture components (we adopt this convention henceforth). The total energy-momentum tensor is conserved, ∇_b T^{ab} = 0, but the decomposition above leads to a conservation law for the mixture components in which j^a is the coupling current that characterises the interaction of both sources. In order to keep the symmetry of the metric, we assume that this current is a vector parallel to the 4-velocity, so that j^a = J u^a and h_{ca} j^a = 0 hold. The projection along u^a follows, while the spatially projected conservation equation h^a_c ∇_b T^{cb} = 0 holds throughout the evolution.
We now have five state variables A = {ρ^(m), ρ^(e), p^(m), p^(e), J}, which depend on (t, r). As shown in [42][43][44], we associate to each of them a q-scalar A_q and a fluctuation δ_A by the following rule, where the lower bound of the integrals, x = 0, marks a symmetry centre such that R(t, 0) = Ṙ(t, 0) = 0, with Ṙ = u^a ∇_a R = ∂R/∂t. (Notice that A_q is related to the proper volume average of A with weight factor √(1 + K); see the comprehensive discussion in [35].) In particular, it is straightforward to show (see [43]) the relations that follow. Other covariant scalars associated with (1) and (2) are the Hubble expansion scalar H = (1/3)∇_a u^a = (R²R′)˙/(R²R′) and the spatial curvature K = (1/6) ³R = 2(KR)′/(R²R′), where ³R is the Ricci scalar of the constant-t hypersurfaces. (Strictly speaking, we would have to include a radiation source that is nearly homogeneous to account for a radiation dominated era at early cosmic times. We have not included this source in (3) because, as shown in Appendix A, it does not introduce significant changes in the qualitative behaviour of the solutions.) Their respective q-scalars are given accordingly, while for the interaction term we have J = J_q(1 + δ_J), whose dependence on the state variables will be determined further ahead. For the mixture (3) to describe CDM and DE we choose the following equations of state (EOS), the same as in [44]. DE (barotropic fluid): we assume that w = p^(e)/ρ^(e) < −1/3 is a constant. Given the EOS's (9) and (10), we adopt the following convention. Notice that (7) and (10) imply that only the DE source contributes to the anisotropic pressure: P = P^(e) = δ p^(e)/2.
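The averaging rule can be illustrated numerically; a sketch assuming the explicit form A_q(r) = ∫₀^r A R² R′ dx / ∫₀^r R² R′ dx used in [35] (the rule itself is elided above, and the density profile below is hypothetical):

```python
import numpy as np

def q_average(A, R):
    """Quasi-local average A_q on a radial grid: cumulative weighted mean of A
    with weight R^2 R' dx, as in the q-scalar formalism of [35]."""
    w = R**2 * np.gradient(R)        # R^2 R' dx on a uniform grid
    return np.cumsum(A * w) / np.cumsum(w)

x = np.linspace(1e-3, 10, 2000)
R = x                                # simple areal radius R(t0, r) = r
A = 1.0 + 0.5 * np.exp(-x**2)        # an inhomogeneous (overdense centre) profile
Aq = q_average(A, R)
delta = A / Aq - 1.0                 # fluctuation delta_A from the rule A = A_q (1 + delta_A)

# a homogeneous profile has A_q = A and zero fluctuation
assert np.allclose(q_average(np.ones_like(x), R), 1.0)
```

For a radially decreasing profile the fluctuation δ_A is negative away from the centre, the exact analogue of a density contrast in perturbation theory.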
As shown in [42][43][44], Einstein's equations reduce to the following system of evolution equations, together with the algebraic constraints, where κ = 8πG/3; (14) follows from (13) by applying the second rule of (6); δ_k = (K − K_q)/K_q; and J_q is the q-scalar associated (via (6)) with the energy density flux defined from the local J. Notice that (13) is the quasi-local Hamiltonian constraint, which (from (8)) takes the functional form of an FLRW Friedman equation. Also, the evolution Eqs. (12a-12c) for the q-scalars H_q, ρ^(e)_q, ρ^(m)_q are formally equivalent to FLRW equations for a CDM-DE mixture with EOS (9) and (10). This reinforces the interpretation of the q-scalars as averaged LTB scalars that mimic at every comoving shell r = r_i the corresponding scalars of an FLRW background metric. In fact, as shown in [33], an asymptotic FLRW background follows as all fluctuations δ_m, δ_e, δ_H, δ_J vanish in the limit r → ∞ for all t, which is equivalent to the fact that in this limit the full system above reduces to the FLRW evolution Eqs. (12a-12c) and the Friedman equation in (13).
The autonomous system (12a–12f) can be solved numerically for a choice of w and J_q, which determines δ_J through (6). In the present work we consider an interaction term J_q proportional to the DE density and Hubble q-scalars, where α is a dimensionless coupling constant. Note that the interaction energy flows from DE to CDM for α > 0 and from CDM to DE for α < 0.
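The sign convention above can be encoded in a small helper. This is an illustrative sketch, not code from the paper: the functional form J_q = α ρ^(e)_q H_q is assumed (any extra numerical factor can be absorbed into α), and the function name is ours.

```python
def interaction_flux(alpha, rho_e_q, H_q):
    """Assumed quasi-local interaction term J_q = alpha * rho_e_q * H_q (cf. Eq. (15)).

    During expansion (H_q > 0):
      alpha > 0: energy flows from DE to CDM (the CDM component gains),
      alpha < 0: energy flows from CDM to DE (the DE component gains).
    """
    return alpha * rho_e_q * H_q

# Sign checks for an expanding shell with positive DE density:
assert interaction_flux(0.1, 1.0, 1.0) > 0    # DE -> CDM
assert interaction_flux(-0.1, 1.0, 1.0) < 0   # CDM -> DE
assert interaction_flux(0.0, 1.0, 1.0) == 0.0  # no coupling, no transfer
```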
Interacting mixtures with the EOS (9) and (10) and the interaction energy flux term (15) were considered for an FLRW cosmology in [1,2,13,15]. This type of FLRW model provides the background for a first order gauge invariant perturbation treatment that yields linear evolution equations for the associated perturbations of all sources, including the interaction term J, which can be regarded as a phenomenological "black box" or (ideally, in principle) related to some as yet unknown early Universe physics.
As shown in [40], the dynamics of LTB metrics in the q-scalar formalism yields (through evolution equations like (12a–12f)) an exact non-linear generalisation of linear gauge invariant cosmological perturbations in the isochronous gauge (for any source compatible with the LTB metric). The advantage of using numerical solutions of (12a–12f) lies in the possibility of examining, in the non-linear regime, the connection between the assumptions on the CDM-DE interaction mediated by J and observations of structure formation at galactic, galaxy-cluster and supercluster scales.
The dynamical system
The q-scalar formalism described in the previous section allows us to define suitable dimensionless functions that transform the system (12a–12f) into an autonomous five-dimensional dynamical system amenable to a qualitative phase space analysis, analogous to that undertaken in [44].
For the density scalars we define the dimensionless q-scalars Ω_q^(m) and Ω_q^(e), analogous to the Ω factors of FLRW cosmologies, which transform the Hamiltonian constraints in (13)–(14) into the following elegant forms. Next, we introduce a dimensionless coordinate ξ(t, r) that will serve as the phase space evolution parameter, so that for all comoving curves r = r_i we have dξ = H_q dt. In terms of ξ and using the interaction term defined in (15), the system (12a–12f) is transformed into the dynamical system (20a–20e). This system can be solved numerically for a set of initial conditions for every comoving shell r = r_i once we fix the free parameters of the model, w and α; afterwards we can compute H_q(ξ, r_i). Once we have computed the phase space variables Ω_q^(m), Ω_q^(e), δ_m, δ_e, δ_H, all relevant quantities can be obtained: the q-scalars associated with the CDM and DE densities and the spatial curvature, together with its fluctuation δ_k, follow directly from (16a)–(16b), (17), (18) and (21), while all local quantities are recovered from their q-scalars and fluctuations through (6). Additionally, it is possible to recover physical time from the phase space evolution parameter ξ(t, r) by evaluating at each fixed r = r_i, though it is important to bear in mind that hypersurfaces of constant ξ and t do not coincide; thus for every scalar A we have [∂A/∂r]_t ≠ [∂A/∂r]_ξ (the appropriate integrability conditions are discussed in detail in [39]). Taking this into account, the LTB metric functions follow from evaluating these quantities.
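Since the explicit right-hand sides of (20a–20e) are not reproduced in this excerpt, the numerical strategy can be illustrated with a minimal stand-in: the flat, non-interacting FLRW limit, in which a single variable Ω_e obeys dΩ_e/dξ = −3w Ω_e(1 − Ω_e) with ξ = ln a. Its fixed points Ω_e = 0 (Einstein-de Sitter) and Ω_e = 1 (de Sitter-like) play the roles of past and future attractors; the system below is our illustrative substitute, not Eqs. (20a–20e) themselves.

```python
import numpy as np
from scipy.integrate import solve_ivp

w = -1.0  # DE equation of state parameter

def rhs(xi, y):
    """Flat, non-interacting FLRW stand-in: dOmega_e/dxi = -3 w Omega_e (1 - Omega_e)."""
    omega_e = y[0]
    return [-3.0 * w * omega_e * (1.0 - omega_e)]

# Start near the Einstein-de Sitter past attractor (Omega_m ~ 1, Omega_e ~ 0)
sol = solve_ivp(rhs, [0.0, 20.0], [1e-4], rtol=1e-10, atol=1e-12)
omega_e_final = sol.y[0, -1]
assert abs(omega_e_final - 1.0) < 1e-3  # trajectory reaches the future attractor
```

The same pattern (one `solve_ivp` call per comoving shell r = r_i, sharing the parameters w and α) extends directly to the five-dimensional system once its right-hand sides are coded.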
Homogeneous and inhomogeneous subspaces
As in [39,44], we can split the phase space of (20a–20e) into two interrelated projection subspaces. The homogeneous subspace is defined by the phase space variables Ω_q^(m) and Ω_q^(e), since these are fully determined by the evolution equations (20a)–(20b), which do not involve the other phase space variables δ_m, δ_e, δ_H. In fact, for every trajectory (fixed r) these evolution equations are formally identical to FLRW equations for the analogous variables. The inhomogeneous subspace is defined by the remaining three phase space variables δ_m, δ_e, δ_H, as these provide a measure of the departure of the local scalars from their homogeneous FLRW counterparts.
The study of the phase space will be undertaken by looking at its trajectories in terms of these two projections.
Critical points
The critical points of the system (20a–20e) and their respective eigenvalues are shown in Table 1. As expected, they depend on the free parameters w and α, save for PC1. The critical point PC1 is in fact a line parallel to the δ_e axis. The eigenvalue λ_1 of PC1 is zero, corresponding to an eigenvector that is also parallel to the δ_e axis, indicating that near this line there is no evolution of the phase space trajectories in that direction. For α < 0, the critical points PC4, PC5, PC6 and PC7 are non-physical, as their Ω_q^(m) component is negative, which would mean a negative CDM energy density. We examine the homogeneous subspace more closely below.
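The classification behind Table 1 rests on the eigenvalues of the Jacobian of the right-hand side at each critical point. A sketch of the procedure, using a finite-difference Jacobian on an assumed stand-in (the non-flat, non-interacting FLRW projection, whose Einstein-de Sitter point (Ω_m, Ω_e) = (1, 0) and de Sitter-like point (0, 1) mimic the past and future attractors for w < −1/3); the stand-in equations are ours, not (20a–20e):

```python
import numpy as np

w = -1.0  # EOS parameter (cosmological-constant-like DE)

def rhs(y):
    """Stand-in system: dOm/dxi = Om (q - 1), dOe/dxi = Oe (q - (1 + 3w)),
    with q = Om + (1 + 3w) Oe and xi = ln a (non-flat, non-interacting FLRW)."""
    om, oe = y
    q = om + (1.0 + 3.0 * w) * oe
    return np.array([om * (q - 1.0), oe * (q - (1.0 + 3.0 * w))])

def jacobian(f, y, h=1e-6):
    """Central-difference Jacobian, used to classify critical points."""
    n = len(y)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(y + e) - f(y - e)) / (2.0 * h)
    return J

def classify(point):
    lam = np.linalg.eigvals(jacobian(rhs, np.asarray(point, dtype=float))).real
    if np.all(lam > 0):
        return "past attractor"
    if np.all(lam < 0):
        return "future attractor"
    return "saddle/non-hyperbolic"

assert classify((1.0, 0.0)) == "past attractor"    # Einstein-de Sitter point
assert classify((0.0, 1.0)) == "future attractor"  # de Sitter-like point
```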
Homogeneous subspace
The homogeneous subsystem for α > 0 has the following critical points (see Fig. 1). Both PC A and PC R can be considered either as critical points of the phase space that would result from an FLRW model, or as projections of the points PC1–PC7 onto the [Ω_q^(m), Ω_q^(e)] subspace of the full five-dimensional representation. In the former case, the phase space trajectories are computed for a given set of initial conditions with δ_m = δ_e = δ_H = 0 and live entirely in the homogeneous subspace, while in the latter case the trajectories are computed with a general choice of δ_m, δ_e and δ_H and are represented in the homogeneous subspace as projections of the five-dimensional trajectories onto the [Ω_q^(m), Ω_q^(e)] subspace. Additionally, as for the interaction used in [44], we have a one-dimensional invariant subspace (a line), obtained in this case using (20a)–(20b). This invariant line contains both the saddle point and the future attractor; hence the system can evolve from the saddle point to the future attractor for suitable initial conditions. The [Ω_q^(m), Ω_q^(e)] plane is divided into two regions by this invariant line: the region where trajectories evolve from the Ω_q^(m) = 0 axis to the future attractor, and the region where the trajectories evolve from the past attractor. The latter region contains part of the attraction basin of PC A: trajectories that evolve from the past attractor to the future attractor, representing an ever expanding scenario in which initially there is only CDM, whose density is later replenished by the interaction with DE. This region also contains trajectories for which Ω_q^(m), Ω_q^(e) → ∞, which correspond to comoving layers that bounce (since Ω_q^(m), Ω_q^(e) diverge as H_q → 0). We will not consider the evolution of such trajectories.
For α < 0 the future attractor PC A lies in an unphysical phase space region marked by negative Ω_q^(m). For trajectories emerging from the past attractor, the physical evolution, which can only be defined up to the invariant line, describes an expanding scenario in which energy density flows from the CDM to the DE component until the CDM density vanishes on the comoving shells (at different times for different shells). However, the fact that the past evolution remains physical makes the coupling term (15) acceptable also when α < 0, as has been stated in the literature [45] dealing with these CDM-DE mixtures in FLRW cosmologies. This stands in sharp contrast with the coupling used in [44], where α < 0 leads to grossly unphysical past evolution, which implies considering only the coupling with α > 0 (as in FLRW cosmology scenarios).
In [46], the authors consider the ratio between the CDM and DE energy densities, ρ_m/ρ_e, in a homogeneous FLRW model. For an interaction term proportional to the DE energy density, they show that the ratio is positive definite and decreases monotonically from infinity as the universe expands. Given that every shell of our LTB metric carries a homogeneous subsystem, we can define the same ratio for the q-scalar densities at every shell as ρ^(m)_q/ρ^(e)_q = Ω_q^(m)/Ω_q^(e), and the evolution of every shell can be deduced from the trajectories in the homogeneous phase space. For α > 0, the shells whose trajectories evolve from the past attractor to the future attractor have a ratio that is positive definite and decreases from infinity (since Ω_q^(e) = 0 at the past attractor) asymptotically to the constant value α/(α − w) (as the shell reaches the future attractor). For the shells whose trajectories evolve from the Ω_q^(m) = 0 axis to the future attractor, the ratio increases from zero to the asymptotic value α/(α − w). Finally, for the shells whose trajectories evolve to infinity, the ratio decreases from infinity (at the past attractor) to a different positive constant value at the instant the shell bounces. For α < 0, all shells have trajectories that evolve from the past attractor, at which the ratio diverges. For the trajectories in the α < 0 case that evolve to the Ω_q^(m) = 0 axis, the ratio decreases asymptotically to zero, while for the shells that bounce the ratio evolves to a fixed positive value.
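The terminal CDM/DE ratio α/(α − w) quoted above can be tabulated for the parameter values used later in the paper; the helper name is ours.

```python
def terminal_ratio(alpha, w):
    """Asymptotic ratio rho_m_q / rho_e_q = alpha / (alpha - w) at the
    future attractor, for shells that reach it (alpha > 0, w < -1/3)."""
    return alpha / (alpha - w)

# alpha = 0.1 with the two EOS values used in the numerical examples:
assert abs(terminal_ratio(0.1, -1.0) - 1.0 / 11.0) < 1e-12    # w = -1
assert terminal_ratio(0.1, -1.15) < terminal_ratio(0.1, -1.0)  # more phantom w -> smaller ratio
```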
Initial conditions, scaling laws and singularities
To specify initial conditions to integrate the dynamical system (20a)–(20e) we need to provide an initial value formulation for the LTB models under consideration. Proceeding as in [44], we specify initial conditions at an arbitrary hypersurface t = t_0 (the subindex 0 will henceforth denote evaluation at t = t_0). It is useful to write the LTB metric (1) in an FLRW-like form in terms of L = L(t, r), analogous to the FLRW scale factor. Since the LTB metric admits an arbitrary rescaling of the radial coordinate, we can always define a convenient radial coordinate by specific choices of R_0(r). We can identify L = 0 as the locus of the Big Bang singularity, while Γ = 0 marks the locus of a shell crossing singularity [39]. From (8), (12a), (12b) and (12c) we can see that the q-scalars scale exactly as their equivalent FLRW scalars. Initial conditions to integrate the system (20a–20e) follow from specifying initial profiles ρ^(m)_0(r), ρ^(e)_0(r) and K_0(r) and a given choice of R_0(r). The initial profiles of the q-scalars ρ^(m)_q0(r), ρ^(e)_q0(r), K_q0(r) and the fluctuations δ_m0, δ_e0, δ_H0 follow directly from (6) with R = R_0. For simplicity, ξ can take the initial value ξ_0 = ξ(t_0, r) = 0 for all r, which sets the initial conditions for the dynamical system at ξ = 0. This choice means that ξ = 0 and t = t_0 mark the same hypersurface, though hypersurfaces of constant t and ξ differ for ξ ≠ 0 and t ≠ t_0.
Whether comoving shells expand or bounce/recollapse can be determined from (27) by looking at the roots of H_q, written in terms of a function Q(L), where we use the fact that dξ = H_q dt leads to ξ = ln(L). For each choice of initial conditions Ω_q0^(m), Ω_q0^(e) and free parameters w and α, if Q(L) has real roots for a given comoving shell, the latter will bounce at the value of ξ where Q = 0. Conversely, if Q(L) has no real roots the layer has an ever expanding evolution. For the remainder of the paper we will only consider the phase space evolution of expanding layers.
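The root criterion can be made concrete in the non-interacting limit (α = 0, w = −1), where the familiar dust-plus-Λ-like scaling laws give H_q²L² = κρ_m0/L + κρ_e0 L² − K_q0; this closed form is an assumption made here for illustration, not the general Q(L) of the paper, and L is normalised to 1 on the initial hypersurface.

```python
import numpy as np

def shell_bounces(k_rho_m0, k_rho_e0, K_q0):
    """Bounce check in the assumed non-interacting limit (alpha = 0, w = -1):
    H_q^2 L^2 = k_rho_m0 / L + k_rho_e0 * L^2 - K_q0, with L = 1 initially.
    Multiplying by L, H_q = 0 corresponds to a positive real root of
    k_rho_e0 * L^3 - K_q0 * L + k_rho_m0 = 0; a root at L > 1 means the
    expanding shell halts (bounces) before reaching arbitrarily large L."""
    roots = np.roots([k_rho_e0, 0.0, -K_q0, k_rho_m0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return bool(np.any(real > 1.0))

# A mildly curved shell expands forever; a more strongly curved one bounces:
assert not shell_bounces(3.0, 0.01, 0.5)
assert shell_bounces(3.0, 0.01, 1.5)
```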
To obtain analytic solutions we need to solve (27) and obtain Γ from the relation L'/L = (R'_0/R_0)(Γ − 1), where the radial derivatives must be evaluated at constant t (see [44]). The scaling laws for the fluctuations can be found from (26). For example, using the definition of δ_e, it is straightforward to obtain an expression that can (in principle) be used to evaluate Γ once we have a solution t = t(L, r) of (27). Since analytic solutions of (27) may only exist for very restricted values of α, w, scaling laws like (30) are not useful in general, and the evolution of the models needs to be determined numerically. If Γ = 0 we have a shell crossing singularity, so initial conditions should be chosen to avoid this (simple guidelines, such as those available for dust solutions with zero cosmological constant [29][30][31]34], cannot be given here). As shown in previous work (for example [44]), the q-scalar formalism fails at shell crossing singularities because the fluctuations δ_m, δ_e, δ_H diverge. Hence, we select initial conditions such that Γ > 0 holds throughout the full phase space evolution, so that shell crossings are avoided.
Critical points in terms of the parameters w and α
In this section the critical points of the system are studied for different choices of the free parameters w and α. As both parameters are widely used in FLRW models, we will consider a parameter range that is common in cosmology.
The constant EOS parameter w plays a similar role as in analogous CDM-DE mixtures based on FLRW models. While observational data seem to favour the Λ-CDM model, for which w = −1 holds exactly, small deviations from this value are still possible [4]. We will henceforth adopt the terminology current in the literature by referring to DE with w > −1 as "quintessence models" and w < −1 as "phantom models". The latter models present several theoretical problems, such as the violation of the second law of thermodynamics once an entropy is assigned to the phantom fluid, or the presence of a negative kinetic energy term of the phantom field (when described by a scalar field) [1,2]. In this article we will consider quintessence and phantom models with w close to −1.
For the same interaction as in the present article, the study of a CDM-DE mixture in FLRW geometry in [48] finds that the second law of thermodynamics (based on the entropy of DE as an effective field) is violated if α < 0, while the entropy is zero for a scalar field in a pure quantum state. In [6,13,14], for a similar coupling term with a positive definite coupling constant, the authors find that observational data suggest α is smaller than 0.003 at 1σ and smaller than 0.01 at 2σ, while a value of order 0.1 is ruled out at better than 99.95%. On the other hand, in [49] the evolution of linear perturbations in an FLRW background sharing our assumptions on CDM, DE and J_q leads to the bounds −0.22 > 3α > −0.90 to comply with the constraints of CMB anisotropy. These results are especially interesting, since the dynamics of LTB solutions described by q-scalars and their fluctuations can be mapped to linear perturbations on an FLRW background [40]. While the spherical symmetry of LTB models only allows for the description of a single structure, the latter can be studied exactly in the full non-linear regime. In the present paper we will consider both positive and negative values of α. For the positive values we will take α = 0.1 for illustrative purposes, even though observational data suggest this is too high a value for a similar (but not identical) coupling term [6,13,14].
The critical points PC4–PC7 share the values Ω_q^(m) = −α/w and Ω_q^(e) = 1 − α/w of the homogeneous projection, but are distinct in the inhomogeneous projection coordinates. The points PC5 and PC7 are always saddle points in the range of free parameters considered. For a choice of parameters such that 1 + w + α > 0, the critical point PC4 is a future attractor, as all its eigenvalues are negative, while PC6 is a saddle point. On the other hand, when 1 + w + α = 0, PC6 has the same components as PC4 and behaves as a non-hyperbolic point, as one of its eigenvalues is null while the rest are negative. Finally, for 1 + w + α < 0, PC6 is the future attractor while PC4 is a saddle point. Figure 1 shows the homogeneous subspace together with the critical points PC R, PC A, PC S and the invariant line for both cases: α > 0 in panel (a) (here we have chosen α = 0.1 and w = −1), and α < 0 in panel (b) (α = −0.1 and w = −1). Some numerically computed trajectories are shown for illustration purposes only. We have chosen to plot arctan(Ω_q^(m)) vs. arctan(Ω_q^(e)) in order to deal with finite values; the same criterion is used for the rest of the homogeneous projection plots. Figure 2 shows the two inhomogeneous projections of the system (20a–20e) and some numerically computed trajectories. The projection with Ω_q^(m) = −α/w and Ω_q^(e) = 1 − α/w is unphysical when α < 0. The projection with Ω_q^(m) = 1 and Ω_q^(e) = 0, on the other hand, is physically plausible and phenomenologically identical to that in panel 2d in the α > 0 case.
Energy density flow from DE to CDM (α > 0)
In this case all seven critical points are physical. The attractor is a different point for different choices of w and α, as stated above. The presence of a future attractor for α > 0 allows us to find initial profiles with inhomogeneities inside its attraction basin, i.e. the fluctuations (δ functions) evolve to constant values given by the components of the corresponding critical point. This behaviour is examined further ahead.
Quintessence and cosmological constant cases
When w ≥ −1 and α > 0, the critical point PC4 acts as a future attractor and nearby trajectories evolve towards it. On the other hand, the critical point PC2 is a past attractor, as all the eigenvalues of the system computed near PC2 are positive. The rest of the critical points are saddle points, each with its own attraction subspace generated by the corresponding eigenvectors.
Panel 1a shows the homogeneous subspace for w = −1: it is formally identical to that of the w > −1 case, except for the position of the future attractor and the shape of the invariant line. In panel 2a, the inhomogeneous projection Ω_q^(m) = −α/w, Ω_q^(e) = 1 − α/w is shown for the w > −1 case. The attractor PC4 is shown together with some trajectories in its vicinity that evolve towards it. The saddle points PC6 and PC7 are also displayed (the point PC5 is not shown, given that it is located far away and is always a saddle point). Finally, panel 2d displays the inhomogeneous subspace with Ω_q^(m) = 1, Ω_q^(e) = 0, plotted for the w = −1 case but, again, phenomenologically identical to the w > −1 case. In this projection the past attractor PC2 and the saddle points PC1, PC3 are also displayed.
Phantom dark energy case
The points PC4 and PC6 can be (respectively) a future attractor and a saddle point when 1 + w + α > 0, or (respectively) a saddle point and a future attractor when 1 + w + α < 0. When 1 + w + α = 0 both points have the same coordinates and the resulting point is non-hyperbolic, i.e. it is an attractor in some directions and there is no evolution near it in other directions. The rest of the points behave as in the previous cases for any choice of free parameters.
The homogeneous subspace is identical to that of panel 1a except for the position of the future attractor PC A and the slope of the invariant line. In panel 2c, the inhomogeneous projection Ω_q^(m) = −α/w, Ω_q^(e) = 1 − α/w is displayed for a choice satisfying 1 + w + α < 0. The point PC4 is a saddle point and the point PC6 is now the attractor of the system, in contrast with the case 1 + w + α > 0, which is similar to the quintessence and cosmological constant cases. Finally, the inhomogeneous subspace with Ω_q^(m) = 1, Ω_q^(e) = 0 is, as in the previous case, similar to that in panel 2d.
Energy density flow from CDM to DE (α < 0)
When α < 0, the energy flows from CDM to DE. In this case only PC1, PC2 and PC3 have physical meaning, while the rest of the points have Ω_q^(m) < 0. In the homogeneous subsystem, the future attractor PC A is no longer physical and consequently neither is the invariant line. The trajectories in the homogeneous subsystem evolve from the past attractor to the Ω_q^(m) = 0 axis or to infinity. When a given shell reaches the Ω_q^(m) = 0 axis, we can assume that the CDM content of that shell has been exhausted by the interaction with DE. There is no significant difference between the homogeneous subspaces for the different possibilities of the parameter w: although the trajectories follow a different curve for every choice of w and α, they all evolve from the critical point PC R. Panel 1b shows schematically the homogeneous subspace for α < 0. The behaviour of PC1, PC2 and PC3 in the inhomogeneous subspace with Ω_q^(m) = 1, Ω_q^(e) = 0 is identical to the α > 0 case, plotted in panel d of Fig. 2.
The lack of a future attractor for the δ_A functions (A = m, e, H, the phase space variables of the inhomogeneous subspace) makes it possible for some initial profiles to evolve to infinity (shell crossing) or to values δ_A < −1. From their definition in (6), the value δ_A = −1 implies A = 0 whenever A_q ≠ 0 (we assume ρ^(m)_q, ρ^(e)_q and H_q to be positive). Consequently, it is not possible to continue the evolution once a δ_A reaches values more negative than the limit −1, as this would imply negative local densities. We can argue that the evolution equations yield unphysical conditions if (somehow) δ_m, δ_e < −1 holds. In the next section we explore this problem for a specific initial profile.
Numerical example of idealised structure formation scenarios
In this section we consider suitable initial profiles of the local densities and spatial curvature to obtain numerical examples of potentially interesting structure formation scenarios. In each case we choose an appropriate form for R_0(r) defined on the interval r ∈ [0, r_max] and examine the evolution equations for fixed values of r specified by a partition of n elements of this interval.
To follow the numerical evolution of the initial profiles we define the dimensionless time parameter (different from ξ) given by t̄ = H_s t, where H_s is an arbitrary constant with inverse time dimensions (in cosmological applications it is customary to choose H_s = H_0). This rescaling of time introduces a rescaling of the remaining variables. For simplicity we will drop the bars on the normalised variables and set the arbitrary scale as H_s = 1, which fixes the energy density normalisation scale as κ/(3H_s²) = 1 (see [44]).
In order for the initial profiles to define a structure formation scenario, we need some "inner" shells (values of r around the symmetry centre r = 0) that initially expand but at some t bounce and collapse (H_q changes sign from positive to negative), whereas "outer" shells continue expanding (H_q > 0 holds for all t). The bounce is defined by H_q = 0, hence for each r we can define a value t = t_max(r) such that H_q(t_max(r), r) = 0. Notice that the dynamical systems study we have undertaken does not examine phase space trajectories of shells that bounce and then collapse (H_q → 0 evolving towards H_q < 0), as both coordinates [Ω_q^(m), Ω_q^(e)] of the homogeneous projection diverge as H_q → 0. The numerical study given in this section compensates for this limitation.
For the mixed expanding/collapsing type of evolution described above we need the following homogeneous subspace trajectories: (1) the outer ever expanding shells must evolve from the past to the future attractor (or to the Ω_q^(e) axis when α < 0); (2) inner shells must evolve from the past attractor to infinity, Ω_q^(e), Ω_q^(m) → ∞ as t → t_max. For a bounce/collapse regime (and depending on specific initial conditions), the variables of the inhomogeneous subspace may also diverge or fail to evolve to the future attractor.
As mentioned before, if δ_A = −1 on a given shell and A_q ≠ 0, then A = 0, which for positive definite quantities (densities) implies that the LTB dynamics yields an unphysical evolution for decreasing δ_A < −1. This problem tends to occur especially in the cases with α < 0, when no physical attractor is present, but it may also occur for some configurations in the α > 0 cases, where the inhomogeneous attractor is physical but the initial profiles were set with the initial δ_A functions outside its attraction basin.
We have chosen the following initial profiles to probe the models for three different sets of free parameters, together with the coordinate choice R_0(r) = tan(r). Accordingly, we consider a partition of n = 20 elements for r going from 0 to π/2.
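The shell layout implied by this choice can be sketched as follows; uniform spacing and exclusion of the endpoint r = π/2 (where R_0 = tan(r) diverges) are our assumptions, since the text does not specify the partition explicitly.

```python
import numpy as np

n = 20
# n shells r_1..r_n on (0, pi/2); the endpoint is excluded since tan(r) diverges there.
r = np.linspace(0.0, np.pi / 2.0, n + 1, endpoint=False)[1:]
R0 = np.tan(r)  # coordinate choice R_0(r) = tan(r) from the text

assert len(r) == n and np.all(np.diff(R0) > 0)  # strictly increasing areal radius
arctan_R0 = np.arctan(R0)  # compactified radius used in the paper's profile plots
assert np.all(arctan_R0 < np.pi / 2)
```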
Positive α and 1 + w + α > 0
Considering the numerical values α = 0.1 and w = −1 together with the configuration (31), the shells r_1–6 of the partition collapse while the rest evolve to the future attractor PC4, as the initial values for all the shells are in the attraction basin of PC4. Figure 3a shows the homogeneous projection of the configuration, with the invariant line of the system in grey and the initial conditions for every shell as red points. Figure 3b shows the inhomogeneous projection of the trajectories, with the initial conditions as red points. Figure 4 displays the radial profiles of the local scalars A = H, ρ^(m), ρ^(e) and J at different instants of time in the plane arctan(A) vs. arctan(R_0(r)). For t = 0.50 no shell has collapsed yet, for t = 0.70 shells 1, 2 have collapsed, for t = 1.00 shells 3, 4 have collapsed, for t = 2.00 shell 5 has collapsed and the last inner shell is about to collapse. Finally, for t = 4.00 all inner shells have already collapsed and the outer shells evolve into a homogeneous profile. For expanding trajectories the q-scalars and the fluctuations δ_A of the outer shells tend to their attractor values, while the profiles of the local scalars A tend to a constant profile, as δ_A → 0 (A = m, e, H) at the attractor PC4. It takes a long time for the functions to evolve into their attractor values. The evolution of these profiles towards a constant profile can be appreciated in panel 4a for the local scalar H at t = 4.00, 6.00, 9.00 and in panel 4b for the local scalar ρ^(m) at the same instants. The local scalar ρ^(e), which was initially constant, needs an even longer time to evolve into a constant profile, but the line representing it at t = 9.00 is clearly more homogeneous at the outer shells than at previous instants.
Positive α and 1 + w + α < 0
We choose α = 0.1 and w = −1.15. Only the shells r_1–3 collapse, while the rest evolve towards the attractor PC6. As for the other choice of parameters, the initial δ_A functions are in the attraction basin of PC6. In this case the inhomogeneous projection of the attractor is not zero, as δ_m = δ_e = 0.05 and δ_H = 0.05/2. Consequently, from (6), the profiles of ρ^(m) and ρ^(e) do not evolve to a constant profile as in the previous case, but to a profile whose r-dependence is given by ρ^(m) = ρ^(e) ∝ R^0.15 (1.05), while the local H is given by H ∝ R^0.07 (1 + 0.05/2). Figure 5a shows the homogeneous projection of the configuration, with the invariant line of the system in grey and the initial conditions for every shell as red points. Figure 5b shows the inhomogeneous projection of the trajectories evolving to the future attractor, with the initial conditions as red points. Figure 6 displays profiles of the local scalars at different instants of time. At the instant t = 0.80 no inner shell has collapsed yet, at t = 1.00 shells i = 1, 2 have already collapsed and at t = 2.00 all the inner shells have collapsed. Since the q-scalars and fluctuations δ_A of the outer shells tend to their attractor values while the trajectories expand, the profiles of the local scalars tend to the terminal profile shown in panel 6a for H at t = 4.00, 6.00 and in panel 6c for ρ^(e) at the same instants.
Negative α
For any choice of α < 0 the future attractor is no longer physical. To illustrate this case we chose α = −0.1 and w = −1. The shells r_1–4 collapse, while the outer shells, r_5–20, keep expanding up to a point where δ_m = −1 and the LTB evolution is no longer physical. In particular, for this profile and this choice of parameters the function δ_m tends to −1 very rapidly for the outer shells. Figure 7a shows the homogeneous projection of the configuration, with the invariant line of the system in grey and the initial conditions for every shell as red points. All the outer shells evolve to the Ω_q^(m) = 0 axis. Figure 7b displays the plot of δ_m vs. ξ for the outer ever expanding shells i = 5–19. The first shell to reach the value δ_m = −1 is i = 20, which is not represented in the figure as δ_m → −1 occurs immediately for this shell. The value ξ_i at which δ_m = −1 occurs (i.e. δ_m(r = r_i, ξ_i) = −1) puts an upper limit on the range of ξ for which we can use Eqs. (20a–20e) to obtain all the local scalars.
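Locating the cutoff ξ_i where δ_m first reaches −1 is a simple post-processing step on the sampled trajectories. The helper below is illustrative (name and interface are ours) and is exercised on a synthetic δ_m history, since the actual trajectories come from integrating (20a–20e).

```python
import numpy as np

def xi_at_delta_floor(xi, delta_m, floor=-1.0):
    """First xi where delta_m crosses the physical floor delta_m = -1
    (beyond it the local density rho_m = rho_m_q (1 + delta_m) would turn
    negative). Returns None if the floor is never reached; otherwise uses
    linear interpolation between the bracketing samples."""
    below = np.where(delta_m <= floor)[0]
    if below.size == 0:
        return None
    i = below[0]
    if i == 0:
        return xi[0]
    f = (floor - delta_m[i - 1]) / (delta_m[i] - delta_m[i - 1])
    return xi[i - 1] + f * (xi[i] - xi[i - 1])

# Synthetic shell history: delta_m drifts linearly from -0.2 past the floor.
xi = np.linspace(0.0, 5.0, 501)
delta_m = -0.2 - 0.2 * xi
cut = xi_at_delta_floor(xi, delta_m)
assert cut is not None and abs(cut - 4.0) < 1e-6  # -0.2 - 0.2*xi = -1 at xi = 4
```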
Conclusions
We have undertaken a full study of the phase space evolution of expanding and interacting CDM-DE mixtures, under the assumption that the interaction term (see (15)) is proportional to the DE energy density. These mixtures are the source of an inhomogeneous and spherically symmetric exact solution of Einstein's equations characterised by an LTB metric. The present article, together with the recent article [44], generalises previous work [39] for a CDM source with DE modelled by a cosmological constant.
As in [39,44], we examined the dynamics of these LTB solutions by means of q-scalars and their fluctuations, which transform in a natural way Einstein's equations into a dynamical system evolving in a 5-dimensional phase space. The latter was studied in terms of two interrelated subspace projections: the 2-dimensional homogeneous subspace, whose variables are the q-scalars (Ω_q^(m), Ω_q^(e)) analogous to covariant scalars of an FLRW model with the same type of CDM-DE mixture, and a 3-dimensional subspace involving the fluctuations of the q-scalars (δ_m, δ_e, δ_H) that control the inhomogeneity of the models (the deviation from FLRW).
The critical points associated with the phase space are listed in Table 1: a past attractor, a future attractor and 5 saddle points. All of them (save the past attractor) depend on the two constant free parameters of the solutions: the EOS parameter w and the proportionality constant α between the interaction term and the DE density, whose sign determines the direction of the interaction energy transfer (DE to CDM for α > 0 and CDM to DE for α < 0). The phase space evolution was examined for "quintessence" models (−1 < w < −1/3) and "phantom" models (w < −1), keeping in either case w close to the value −1 that is favoured by observations.
It is important to compare our results with those found in our recent study [44] involving a similar CDM-DE mixture, but with the interaction term proportional to the CDM density through the dimensionless constant α. The main difference between that assumption and the one of the present work (interaction proportional to the DE density) lies in the parameter dependence of the past attractor, which in both mixtures can be associated with the initial Big Bang singularity. For the mixture of [44] the phase space position of this attractor depends on α, w, while in the present mixture it does not, which means that it is a fixed point in the phase space.
The above mentioned difference in the phase space position of the past attractor has very important consequences, as all available cosmological observations survey our past light cone. For α < 0, and for every set of initial conditions, the past attractor in [44] was located in an unphysical phase space region (the DE density becomes negative); hence α > 0 (energy flows from CDM to DE) was the only physically plausible choice that can be (in principle) compatible with observational constraints for all choices of the EOS parameter w. By contrast, in the mixture examined here we found that regardless of the sign of α the past attractor is fixed and takes physically plausible values: an Einstein-de Sitter state with zero DE density and positive CDM density with unit Omega factor (Ω_q^(e) = 0, Ω_q^(m) = 1). Hence, both directions of the interaction energy transfer are (in principle) compatible with observations. Since the future attractors correspond to times much beyond the present cosmic time, they cannot be contrasted with observational data, and thus are more amenable to speculation. For the case examined in [44] this attractor simply marked a fixed de Sitter state (Ω_q^(m) = 0, Ω_q^(e) = 1). However, in the present study the phase space position of the future attractor depends on the choice of w, α, and for α < 0 this position is not in a physically meaningful phase space region (the CDM density becomes negative). While this can be problematic (as all trajectories must terminate in this attractor), it can still be acceptable provided we only consider the evolution of the models up to phase space points where the CDM density vanishes. Such points correspond to values for which δ_m = −1, marking different cosmic times for different comoving shells.
On the other hand, for α > 0, the shells with initial conditions in the attraction basin evolve to a point where both CDM and DE reach a terminal energy density. In this case a choice of parameters with 1 + w + α > 0 leads to a final homogeneous state, as the phase space variables of the inhomogeneous subspace (the fluctuations) tend to zero. For 1 + w + α < 0 the comoving shells evolve towards a nonzero density profile, since the fluctuations tend to nonzero values that depend on the radial coordinate.
The comparison between the results of the present work and those of [44] provides what is, perhaps, the most interesting conclusion that follows from the present article: if CDM and DE are assumed to interact, then mixtures in which the interaction term is proportional to the DE density (as in this paper) offer more possibilities for educated speculation, and thus could be preferable over those in which it is proportional to the CDM density (as in [44]). The reason is, as explained before, that the past attractor (which determines the past evolution surveyed by observations) is always physical for any reasonable choice of parameters, in contrast with the coupling used in [44], for which the choice α < 0 was not possible.
We also examined three structure formation scenarios, given (for example) by the profile in (31). These initial conditions correspond to inner expanding shells that are not in the attraction basin, while the rest of the expanding shells (the outer shells) evolve to the attractor or to the Ω_q^(m) = 0 axis. The inner shells evolve to a point where they bounce and start to collapse. The outer shells expand forever and evolve into a profile determined by the sign of 1 + w + α, as mentioned before. Some examples of these scenarios can be found in Sect. 5, where we have computed the local scalar functions over physical time t.
It is also important to compare our results (and those of [44]) with those obtained for DE models or similar CDM-DE mixtures in FLRW cosmologies [1,2,7,8,[13][14][15][16]48,49], since the dynamics of LTB solutions described by q-scalars and their fluctuations can be mapped to linear perturbations on an FLRW background [40]. While the spherical symmetry of the LTB models allows for the description of only a single structure, the evolution of the latter can be studied exactly throughout the full non-linear regime.
We believe that our results can provide interesting clues to test theoretical assumptions on dark sources (DE and CDM) in terms of observations on the scales of structure formation and in the non-linear regime. In fact, it is straightforward to generalise LTB models to non-spherical Szekeres models, which are endowed with more degrees of freedom and thus allow for a fully relativistic description and modelling of multiple structures [50][51][52]. We are currently undertaking further efforts to extend the present work to probe non-linear observational effects of theoretical assumptions on CDM and DE sources. In particular, we aim at considering "spherical collapse models", as well as less idealised non-spherical models, whose source is the type of CDM-DE mixture we have examined here, but now attempting to fit more realistic observational constraints of structure formation.
The first two critical points correspond exactly to the critical points PC_A and PC_S of the dark mixture without radiation. The critical point PC_S2^r, which corresponds to the critical point PC_R of the case without radiation, is a saddle point with two eigenvectors, associated with two positive eigenvalues, that generate the [Ω_q^(m), Ω_q^(e)] subspace, with the Ω_q^(r) axis acting as an attraction direction. Finally, there is a past attractor in which only the radiation term is nonzero.
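The classification of critical points used here (future attractor, past attractor, saddle) follows from linearizing the evolution equations and inspecting the signs of the real parts of the Jacobian eigenvalues. The sketch below applies this procedure to an illustrative quadratic toy system, not the paper's q-scalar equations; all names are for illustration only.

```python
import math

# Illustrative planar system (NOT the paper's evolution equations):
#   x' = x * (x + 2*y - 1),   y' = y * (y + 2*x - 1)
def rhs(x, y):
    return (x * (x + 2 * y - 1), y * (y + 2 * x - 1))

def jacobian(x, y):
    # Partial derivatives of the right-hand side above
    return ((2 * x + 2 * y - 1, 2 * x),
            (2 * y, 2 * x + 2 * y - 1))

def classify(x, y):
    """Classify a critical point by the real parts of the Jacobian eigenvalues."""
    (a, b), (c, d) = jacobian(x, y)
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:                      # real eigenvalues
        s = math.sqrt(disc)
        re1, re2 = (tr + s) / 2, (tr - s) / 2
    else:                              # complex pair: equal real parts
        re1 = re2 = tr / 2
    if re1 < 0 and re2 < 0:
        return "future attractor"      # sink: nearby trajectories converge to it
    if re1 > 0 and re2 > 0:
        return "past attractor"        # source: trajectories emanate from it
    return "saddle"

critical_points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1 / 3, 1 / 3)]
labels = {p: classify(*p) for p in critical_points}
```

A point with all eigenvalues negative attracts nearby trajectories (a future attractor), one with all eigenvalues positive is a source (a past attractor), and mixed signs give a saddle, as for the critical point PC_S2^r discussed above.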
Some trajectories of this 3-dimensional homogeneous subspace evolve from the past attractor (where the radiation energy density dominates the expansion) directly to the future attractor. The existence of these trajectories would be the only significant change with respect to the radiationless evolution that we have described. However, while those trajectories are theoretically possible, they are not of physical interest because (i) they do not affect the late-time evolution near the future attractor and (ii) they do not lead to structure formation (assuming this radiation fluid becomes a photon gas after baryon-photon decoupling).
There exist several initial conditions of the homogeneous subspace that lead to trajectories evolving from the past attractor to the proximity of the point PC_S2^r (where CDM dominates the expansion and both the radiation and DE sources have much smaller energy densities than the CDM source) and then evolving in the [Ω_q^(m), Ω_q^(e)] plane to the future attractor for α > 0 (or to the Ω_q^(e) axis when α < 0), or to infinity. The evolution of the latter trajectories, from the instant they reach the proximity of PC_S2^r, is identical to that presented in the scenario without radiation. In this sense, the addition of a radiation source extends the past evolution of the trajectories of the radiationless case presented in this work, allowing them to emerge from a true past attractor where the homogeneous radiation dominates the expansion.
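The trajectory described above (from the radiation-dominated past attractor, past the CDM-dominated saddle, towards the DE-dominated state) can be reproduced numerically. The sketch below integrates the homogeneous density parameters of a non-interacting (α = 0) radiation + CDM + DE mixture with w = −1; this is a simplified stand-in for the paper's q-scalar system, not its exact equations.

```python
# Illustrative phase-space trajectory for a non-interacting (alpha = 0)
# radiation + CDM + DE mixture in a flat FLRW background, evolved in
# e-folds N.  The density parameters obey
#     dOmega_i/dN = 3 * Omega_i * (w_eff - w_i),
# with w_eff = (1/3)*Omega_r - Omega_de  (w_r = 1/3, w_m = 0, w_de = -1).

def evolve(om_r, om_m, om_de, n_folds=20.0, dn=1e-3):
    """Forward-Euler integration; returns the list of (Omega_r, Omega_m, Omega_de)."""
    history = []
    for _ in range(int(n_folds / dn)):
        w_eff = om_r / 3.0 - om_de
        om_r += dn * 3.0 * om_r * (w_eff - 1.0 / 3.0)
        om_m += dn * 3.0 * om_m * w_eff
        om_de += dn * 3.0 * om_de * (w_eff + 1.0)
        history.append((om_r, om_m, om_de))
    return history

# start deep in the radiation era with tiny CDM and DE seeds
traj = evolve(om_r=1.0 - 1e-3, om_m=1e-3 - 1e-24, om_de=1e-24)
max_om_m = max(m for _, m, _ in traj)  # intermediate CDM-dominated stage
final = traj[-1]                       # late-time DE-dominated state
```

Starting deep in the radiation era with tiny CDM and DE seeds, the trajectory passes through a long CDM-dominated stage (the analogue of lingering near PC_S2^r) before settling into the DE-dominated state.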
"Physics"
] |
From digital world to real life: a robotic approach to the esophagogastric junction with a 3D printed model
Background Three-dimensional (3D) printing may represent a useful tool to provide a good representation of the surgical scenario before surgery, particularly in complex cases. Recently, this technology has been utilized to plan operative interventions in spinal surgery, neurosurgery, and cardiac surgery, but few data are available in the literature about its role in upper gastrointestinal surgery. The feasibility of this technology is described here in a single case of gastroesophageal reflux disease with complex anatomy due to a markedly tortuous descending aorta. Methods A 65-year-old Caucasian woman was referred to our Department complaining of heartburn and pyrosis. A chest computed tomography showed a tortuous thoracic aorta with consequent compression of the esophagus between the vessel and the left atrium. A "dysphagia aortica" was diagnosed. Thus, anti-reflux surgery with separation of the distal esophagus from the aorta was planned. To define the strict relationship between the esophagus and the mediastinal organs, a life-size 3D printed model of the esophagus, including the proximal stomach, the thoracic aorta and the diaphragmatic crus, based on the patient's CT scan, was manufactured. Results The robotic procedure was performed with the da Vinci Surgical System and lasted 175 min. The surgeons had navigational guidance during the procedure, since they could consult the 3D processed images, electronically superimposed in a "picture-in-picture" mode over the surgical field displayed on the monitor as well as on the robotic headset. There was no injury to the surrounding organs and, most importantly, the patient had an uncomplicated postoperative course. Conclusions The present clinical report highlights the feasibility, utility and clinical effects of 3D printing technology for preoperative planning and intraoperative guidance in surgery, including the esophagogastric field.
However, the lack of published data requires more evidence to assess the effectiveness and safety of this novel surgical-applied printing technology.
Introduction
The importance of intraoperative safety for both patients and surgeons and the concept of "tailored surgery" have become main topics in surgical research over the past few years [1]. Patient-centered preoperative planning is required to achieve accurate knowledge of the target anatomy, helping surgeons anticipate critical steps and potential complications [2]. In recent years, the rise of robot-assisted surgery for a variety of procedures has significantly reshaped surgical practice [3][4][5], although the safety of the patient remains essential. The robotic platform enables surgeons to operate more accurately during difficult procedures than conventional laparoscopy, providing high-resolution three-dimensional (3D) operative views, improved depth perception, and superior instrument handling [6,7]. 3D printing, in addition to standard medical imaging, may represent an invaluable tool to provide a good representation of the surgical scenario, particularly in challenging cases [8]. Additionally, 3D models give the surgeon an opportunity to review, plan, and study the procedure in detail even days before the surgery.
Since its first description in the 1980s, 3D printing has been largely confined to maxillofacial and orthopedic surgery, in particular for implants and prostheses [9,10]. Recently, the technology has been extended to spinal surgery, neurosurgery and cardiac surgery [11][12][13][14][15]. Evidence in gastrointestinal surgery is still lacking. Here we report the first case combining 3D printing technology and robotic esophagogastric surgery, in a patient with gastroesophageal reflux disease and complex anatomy due to a markedly tortuous descending aorta.
Materials and methods
A 65-year-old Caucasian woman presented with heartburn and pyrosis. She also experienced intermittent mild dysphagia for solid food. The patient had no history of other relevant diseases and no abnormal family or medication history of note. She denied prior weight loss, halitosis, ingestion of caustic substances, excessive alcohol consumption, hematemesis, and melena. The physical examination was unremarkable. No abnormalities were found at conventional esophageal barium swallow, while upper endoscopy revealed evidence of Barrett's esophagus, histologically confirmed as uncomplicated intestinal metaplasia. Esophageal high-resolution manometry (HRM) showed lower esophageal sphincter (LES) hypotension, shortening of the abdominal LES, ineffective peristalsis and a type 2 esophagogastric junction (EGJ) subtype (small hiatus hernia). Additionally, a series of transmitted cardiac pulsations unrelated to swallows was found in the distal esophagus just above the EGJ. A thoracic CT scan was helpful in defining the anatomical relations of the esophagus with the major thoracic vessels. It highlighted a compression of the esophagus between a tortuous thoracic aorta and the left atrium, allowing the diagnosis of dysphagia aortica. Thus, anti-reflux surgery with separation of the distal esophagus from the aorta was planned. To define the strict relationship between the esophagus and the mediastinal organs, a life-size 3D printed model of the esophagus, including the proximal stomach, the thoracic aorta and the diaphragmatic crus, based on the patient's CT scan, was manufactured. The imaging data were segmented to outline the relevant structures, then converted to a 3D triangulated surface mesh file suitable for fabrication with the assistance of a 3D printing and manufacturing company (3dific Srl, Perugia, Italy) (Fig. 1).
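The segmentation-to-mesh step can be illustrated in miniature: a binary voxel volume (standing in for the segmented CT data) is converted into a triangulated surface in the ASCII STL format consumed by stereolithography printers. The voxel-face approach below is a deliberate simplification of what a production pipeline does (real workflows use smoother surface extraction such as marching cubes), and all names are illustrative.

```python
# Convert a set of filled voxels into an ASCII STL surface mesh by emitting
# two triangles for every voxel face that is exposed (i.e. has no filled
# neighbour on the other side).

def voxel_stl(volume, name="segment"):
    """volume: set of (x, y, z) integer coordinates of filled voxels."""
    # For each of the six axis-aligned neighbour offsets, the four corners
    # (as offsets from the voxel origin) of the shared face, wound outward.
    faces = {
        (1, 0, 0):  [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],
        (-1, 0, 0): [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
        (0, 1, 0):  [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],
        (0, -1, 0): [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
        (0, 0, 1):  [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
        (0, 0, -1): [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],
    }
    tris = []
    for (x, y, z) in volume:
        for n, corners in faces.items():
            if (x + n[0], y + n[1], z + n[2]) in volume:
                continue  # interior face, not part of the surface
            p = [(x + c[0], y + c[1], z + c[2]) for c in corners]
            tris.append((n, (p[0], p[1], p[2])))  # split the quad into
            tris.append((n, (p[0], p[2], p[3])))  # two triangles
    lines = [f"solid {name}"]
    for n, tri in tris:
        lines.append(f"  facet normal {n[0]} {n[1]} {n[2]}")
        lines.append("    outer loop")
        for v in tri:
            lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A 2x2x2 block of "tissue": 8 voxels, 24 exposed unit faces, 48 facets.
block = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
stl = voxel_stl(block)
```

Each exposed voxel face contributes two triangles, so the 2x2x2 test block yields 24 faces and 48 facets; the resulting text file can be opened by any STL viewer or slicer.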
The segmentation and design of the model were reviewed for accuracy by the surgeon and engineer. The model was then realized in 48 working hours by means of a stereolithography 3D printer selectively curing resin (Fig. 2). The cost of producing the 3D printed model was 230.00 Euros. The 3D model of the esophago-gastric junction allowed the surgeon to locate preoperatively the general position and proximity of the tortuous thoracic aorta with respect to the esophagus and the surrounding tissues. In particular, the surgical team measured the positions of the thoracic aorta and the esophago-gastric structures on the 3D model before operating, then applied this patient-tailored surgical anatomy in the surgical field. The model enabled the surgeons to verify the position of critical structures, to discuss all possible approaches and strategies, and to plan the critical maneuvers. This supported the decision that the operation could be performed safely through less invasive techniques, without resorting to a conventional open approach for such a complex case. Institutional review board approval was not required, since the privacy and personal identity of the patient were protected, i.e., all data were analyzed anonymously, and the patient was treated with approved diagnostic and therapeutic procedures according to generally accepted standards of care. Nevertheless, the patient gave written informed consent to participate in this study.
Results
The robotic procedure was performed with the da Vinci Surgical System (Xi, Intuitive Surgical Inc., Sunnyvale, CA, USA). Manipulation of the 3D prototype, whenever deemed necessary by the on-console surgeon, ensured adequate orientation in the surgical field and identification of critical anatomic landmarks (Fig. 3). Furthermore, the surgeons had navigational guidance during the procedure, since they could consult the 3D processed images electronically superimposed, in a "picture-in-picture" mode, over the surgical field displayed on the monitor system and on the robotic headset. The procedure started with the division of the gastro-hepatic ligament and dissection of the phreno-esophageal membrane to expose and dissect the diaphragmatic pillars. A retro-esophageal window was created for the positioning of an umbilical tape, pulling up the esophagus and exposing the hiatus. Periesophageal mediastinal dissection was initiated bluntly, taking care to preserve the aortic plane behind the esophagus. This step was carried out very carefully, and the use of thermal devices was limited during dissection to prevent any vascular injury. The separation of the esophagus from the aorta was continued as far as possible into the mediastinum, until 3 cm of the esophagus were transposed into the abdomen under no tension (Fig. 4). The crura were then closed from the right of the esophagus with interrupted non-absorbable sutures placed 8 to 10 mm apart, 10 mm back from the crural edge. A Nissen fundoplication was then completed. There was no injury to the surrounding organs. The total operative time was 175 min. The patient had an uncomplicated postoperative course and was discharged home on the third postoperative day.
Discussion
In this case report, we assessed the feasibility, utility and clinical impact of a novel surgically applied technology for preoperative planning and intraoperative guidance in a rare case of dysphagia aortica treated by a robotic approach. Although three-dimensional printing was described for the first time more than three decades ago, its diffusion in digestive surgery has increased only in recent years [16]. Nevertheless, few data are available in the literature about its role in esophageal surgery. Dickinson KJ et al. [11] described for the first time the application of 3D modelling to complex esophageal cases. In a patient with a previous left pneumonectomy and thoracic aorta replacement by means of a Dacron graft, complicated by an aorto-esophageal fistula requiring aortic bypass and esophageal diversion with feeding gastrostomy, three-dimensional modelling facilitated the complex definitive endoscopic-surgical treatment. Preoperative treatment simulation and intraoperative model manipulation enabled the endoscopic mucosal resection and argon plasma ablation of the excluded esophagus, followed by transection of the distal esophagus, providing accurate real-time recognition of critical anatomic landmarks. In the same way, another 3D model was used in a patient with multiple esophageal diverticula, facilitating the choice of the most appropriate surgical strategy in the preoperative setting. To the best of our knowledge, no other study addressing the application of 3D printed models in upper gastrointestinal surgery has been conducted, and no consistent data regarding their usefulness are available. Here we demonstrated the feasibility of creating a rigid life-size 3D printed model of the esophagus, including the thoracic aorta, the upper third of the stomach, and the diaphragmatic crus, using images imported from standard computerized tomography, for the purpose of preoperative procedure planning and external guidance in the intraoperative setting.
The preoperative study of the patient-tailored surgical anatomy represents one of the cornerstones of 3D printing. Hamabe et al. [17] created a 3D printed pelvic model to improve the comprehension of pelvic anatomy during laparoscopic surgery for rectal cancer. As a result, the preliminary understanding of complex anatomical relations was reflected in a safer and more effective intervention. In the same way, Garcia-Granero A et al. [18] proposed a 3D model for preoperative planning of the superior mesenteric and ileocolic vascular pattern to facilitate the lymphadenectomy during complete mesocolic excision for right colon cancer. They argued that walking down the vascular roots before laparoscopic right hemicolectomy could reduce the risk of venous injury and intra-operative hemorrhage. The advantage of this technological development, creating 3D models and tools for surgical use from CT image elaboration, is the recognition, in real size, of the interspatial relationships between the anatomical structures of the target area [19]. Furthermore, the availability of the model several days prior to surgery represents a mainstay of 3D printing technology: all surgical team members can discuss the surgical strategy, evaluating all possible approaches and solutions to perform the operation and defining critical maneuvers, in a calmer setting than the surgical theatre. In the reported case, the 3D model improved the preoperative discussions, with better evaluation of the target anatomy, and helped the surgeon decide to proceed with a robotic rather than a laparoscopic, open transabdominal or transthoracic approach.

[Fig. 3: Model in the operating theatre with the operating robot ready to use. Fig. 4: Intraoperative view of the patient anatomy highlighting the "picture-in-picture" mode. E, Esophagus; Ao, Aorta; RP, Right Pillar; L, Liver.]
During the operation, the visualization of the tortuous thoracic aorta and its proximity to the distal esophagus on the 3D model was fundamental to the safe outcome of the procedure, allowing the surgeons to recognize in detail the relative positions of critical structures. In this way, a real model in the surgeon's hands overcame the absence of tactile feedback of the robotic technology and the limitations of manipulating 3D CT images on a screen. Reducing operative time represents another important advantage [20]. The preoperative team discussion and the preliminary intervention plan, with the evaluation of all possible solutions, the definition of dissection planes, and the simulation of critical maneuvers, allowed the surgeon to focus on other key points, resulting in safer surgery [20,21]. Moreover, it is important to emphasize that the 3D printed model influences the planning of the kind of surgery. Specifically, without the preoperative recognition of critical structures and of the interspatial relationships in the target area afforded by the 3D model, the operation would not have been conducted with the robotic approach. Accordingly, encouraging results of pre-surgical simulation on patient-specific tissue-like 3D models have been reported in recent years [22]. Recent technological advances make it possible to print virtually any human part, using soft and deformable materials that mimic the physical properties of human tissues [8,19,23]. Surgeons of any discipline and level of expertise have the chance to improve their surgical skills through multiple repetitions of the same maneuvers outside the operating room, getting ready for a real intervention before it is carried out on the patient. Von Rundstedt et al. [22] described their experience with 10 patients with renal tumors who underwent robot-assisted laparoscopic partial nephrectomy after preoperative rehearsal using 3D silicone patient-specific renal models.
They demonstrated comparable tumor resection time, resected tumor volume and morphology between the model and the patient's kidney, suggesting that the simulation platform may represent an invaluable tool for surgical decision-making, preoperative rehearsal and surgical training. Similar results were reported by Pugliese L et al. [19] after preoperative simulation of robotic live-donor nephrectomy and robotic correction of a splenic artery aneurysm. They highlighted that training on the same patient's anatomy could increase confidence during the operation. Of course, this would be one of the major advantages over both conventional and virtual reality-based surgical simulators for minimally invasive techniques. Moser A et al. [24] conducted a study aiming at underlining the difference in experience between digital and physical handling in brick-puzzle games. Interestingly, they reported that, in the real-world experiment, physical resistance conveys a kind of inertia through the tactile sense of the fingers. In virtual reality, by contrast, this advantage is lost even when the software includes a feedback mechanism. They postulated that the computer may not reproduce the same detailed awareness that is created via our many sensory experiences. Virtual feedback will therefore always be unsatisfactory, because it is not anchored in real life [25].
The main limitations of 3D printing include production time and relatively high production costs.
The printing process, from the elaboration of the images to the production of the model, can take from hours to several days [21,26,27]. Naturally, the more complex the model, the more time is needed for its production. For these reasons, 3D printing currently remains a prerogative of elective surgery [26,27]. It is of note that, since its primary use consists of preoperative planning, training, and simulation, the 3D printed model should be prepared in advance in order to be properly applied. Variable costs depend on the type of printing technique, the materials used, and the workload of dedicated staff [20,21,26,27]. However, sharing the printing platform and optimizing the production staff across multiple users could reduce the expenditure [26].
Conclusions
Technological innovations such as 3D printing and robotic surgery represent progress that was unimaginable until a few years ago. The present report underlines the feasibility, advantages and clinical impact of 3D printing technology for preoperative planning and intraoperative guidance in esophagogastric surgery. However, the lack of published data requires more evidence to assess the effectiveness and safety of this novel surgically applied printing technology.
"Medicine",
"Engineering"
] |
Semiclassical S-matrix for black holes
We propose a semiclassical method to calculate S-matrix elements for two-stage gravitational transitions involving matter collapse into a black hole and evaporation of the latter. The method consistently incorporates back-reaction of the collapsing and emitted quanta on the metric. We illustrate the method in several toy models describing spherical self-gravitating shells in asymptotically flat and AdS space-times. We find that electrically neutral shells reflect via the above collapse-evaporation process with probability exp(-B), where B is the Bekenstein-Hawking entropy of the intermediate black hole. This is consistent with interpretation of exp(B) as the number of black hole states. The same expression for the probability is obtained in the case of charged shells if one takes into account instability of the Cauchy horizon of the intermediate Reissner-Nordström black hole. Our semiclassical method opens a new systematic approach to the gravitational S-matrix in the non-perturbative regime.
Introduction
Almost forty years of intensive research leave the black hole information paradox [1,2] as controversial as ever. Although the argument based on the AdS/CFT correspondence [3,4] indicates that quantum gravity is dual to a healthy CFT and therefore unitary, the process of black hole evaporation still presents an apparent mismatch between the principles of low-energy gravity and those of quantum theory. In particular, recent AMPS version of the paradox [5,6] suggests that certain measurements of Hawking quanta reveal a firewall around the black hole which destroys infalling observers and violates the equivalence principle (see [7][8][9] for related works). Thus, a systematic approach to the processes of black hole formation and evaporation is needed.
A plausible source of confusion is the perturbative expansion around the classical black hole background. This expansion is certainly valid at short time scales, but it has been argued [10] to give an inappropriate quantum state of Hawking radiation at the late stages of black hole evaporation, when the information is released. Indeed, the classical black hole does not correspond to a well-defined asymptotic state of quantum gravity; at best it can be regarded as a metastable state. Used as a zeroth-order approximation for quantum calculations, it is likely to introduce inconsistencies.
A consistent approach to black hole unitarity considers a two-stage scattering process involving collapse and black hole evaporation, see Fig. 1. The initial and final states Ψ_i and Ψ_f of this process represent free matter particles and free Hawking quanta in flat space-time. Unlike the black hole, these are the true asymptotic states of quantum gravity related by an S-matrix [11][12][13]. The latter must be unitary if black hole formation does not lead to information loss. To see that the scattering setup is natural for unitarity tests, one can imagine a gedanken experiment at a future trans-Planckian collider, where collision of a few energetic particles forms a micro black hole, the latter evaporates, and its decay products are registered. Experimentalists analyse the scattering amplitudes in various channels and verify whether they obey the relations imposed by unitarity.
The importance of the collapse stage for the resolution of the information paradox was emphasized before [13][14][15][16][17][18]. However, no working scheme for calculating the black hole S-matrix from first principles has been formulated so far. Interesting approaches to the gravitational S-matrix have been developed in Refs. [19,20] (see also references therein). Based on perturbative calculations, they demonstrate that scattering of two trans-Planckian particles is accompanied by an increasingly intensive emission of soft quanta as the regime of black hole formation is approached. However, the validity of the perturbative expansion in the black hole-mediated regime is not fully understood. To circumvent these obstacles, we focus on the case when both the final and initial states of the scattering process are made of a large number of soft particles. We assume that the total energy of the particles exceeds the Planck scale, so that the intermediate black hole has a mass well above Planckian. Then the overall process is expected to be described semiclassically in low-energy gravity. Below we develop a systematic semiclassical method to calculate the gravitational S-matrix elements.
A straightforward application of the semiclassical approach to scattering through an intermediate black hole is problematic. The reason is traced to the mismatch between the asymptotic states of classical and quantum gravity. We want to evaluate the amplitude of transition between the initial and final asymptotic states with wave functionals Ψ_i[Φ_i] and Ψ_f[Φ_f]. The path integral in Eq. (1.1) runs over all fields Φ of the theory, including matter fields, metrics and ghosts from gauge-fixing of the diffeomorphism invariance; S is the action. In the asymptotic past and future the configurations Φ in Eq. (1.1) must describe a collection of free particles in flat space-time. However, this condition is not satisfied by the saddle-point configuration Φ_cl saturating the integral (1.1) in the semiclassical limit ℏ → 0. Indeed, Φ_cl extremizes S, i.e. solves the Einstein-Hilbert equations and the classical equations for the matter fields. Since black holes are stable asymptotic states in classical gravity, the solution Φ_cl starts with matter in flat space-time and arrives at a black hole in the asymptotic future. It fails to describe the second part of the process - the evaporation of the black hole - and as such, does not satisfy the final-state boundary conditions in Eq. (1.1). One concludes that the amplitude (1.1) cannot be computed with the standard saddle-point technique even when the conditions for the semiclassical approximation are fulfilled.
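For orientation, the amplitude referred to as Eq. (1.1) has the standard path-integral form sketched below; this is a schematic reconstruction, and the paper's exact expression (including the gauge-fixing and ghost factors mentioned in the text) may differ:

```latex
\langle \Psi_f \,|\, \hat{S} \,|\, \Psi_i \rangle
  \;=\; \int \mathcal{D}\Phi \;
  \Psi_i[\Phi_i]\, \Psi_f^{*}[\Phi_f]\;
  \mathrm{e}^{\, i S[\Phi]/\hbar}
```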
To overcome this obstacle, we use the modified semiclassical method of Refs. [21][22][23] (see [24,25] for the seminal ideas and [26,27] for field theory applications). The key idea is to constrain the integration in the path integral (1.1) to scattering configurations Φ in which the mass is concentrated in a compact volume for a fixed time T_0, as measured by the asymptotic observer. Since T_0 is finite, this constraint explicitly eliminates configurations with eternal black holes from the domain of integration. The resulting constrained path integral is saturated by a saddle-point solution with the correct asymptotic behavior, corresponding to free particles in the past and future flat space-times. One can say that the constraint forces the intermediate black hole to decay. At the final step of the computation one recovers the original amplitude by integrating over T_0, i.e. over the one-parameter family of saddle-point configurations corresponding to different values of the black hole lifetime.
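The last step, the integration over the one-parameter family labeled by T_0, is in practice a saddle-point (Laplace-type) evaluation. The sketch below checks this approximation numerically for an illustrative one-dimensional "action" S(T) = cosh(T) - 1, which is not the paper's shell action.

```python
import math

# Illustrative action with a single minimum at T* = 0:  S(T) = cosh(T) - 1,
# so S(T*) = 0 and S''(T*) = 1.
def action(t):
    return math.cosh(t) - 1.0

def numeric_integral(hbar, t_max=2.0, n=40000):
    """Midpoint-rule evaluation of the integral of exp(-S(T)/hbar) over T."""
    dt = 2.0 * t_max / n
    return sum(math.exp(-action(-t_max + (i + 0.5) * dt) / hbar) * dt
               for i in range(n))

hbar = 0.01
# Laplace (saddle-point) estimate: exp(-S(T*)/hbar) * sqrt(2*pi*hbar / S''(T*))
laplace = math.sqrt(2.0 * math.pi * hbar)
direct = numeric_integral(hbar)
rel_err = abs(direct - laplace) / direct
```

For hbar = 0.01 the Laplace estimate agrees with the direct integral to roughly 0.1 percent, the expected O(hbar) accuracy of the leading saddle-point term.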
Two points must be emphasized. First, in our approach one works with the saddle-point configurations satisfying the asymptotic boundary conditions and thus encapsulating the black hole decay in the leading order of the semiclassical expansion. This is a crucial difference from the fixed-background semiclassical methods, where black hole evaporation is accounted for only at the one-loop level. Second, the saddle-point configurations saturating the scattering amplitudes are in general complex and do not admit a straightforward interpretation as classical geometries. In particular, they are meaningless for an observer falling into the black hole. Indeed, the latter observer measures local correlation functions given by the path integrals in the in-in formalism - with different boundary conditions and different saddle-point configurations as compared to those in Eq. (1.1). This distinction lies at the heart of the black hole complementarity principle [29].
Our approach is completely general and can be applied to any gravitational system with no symmetry restrictions. However, the task of solving nonlinear saddle-point equations is rather challenging. Below we illustrate the method in several exactly tractable toy models describing spherical gravitating dust shells. We consider neutral and charged shells in asymptotically flat and anti-de Sitter (AdS) space-times. Applications to field theory that are of primary interest are postponed to future.
Although the shell models involve only one collective degree of freedom - the shell radius - they are believed to capture some important features of quantum gravity [30][31][32][33]. Indeed, one can crudely regard thin shells as narrow wavepackets of an underlying field theory. In Refs. [33][34][35] the emission of Hawking quanta by a black hole is modeled as tunneling of spherical shells from under the horizon. The respective emission probability, Eq. (1.2), includes back-reaction of the shell on the geometry; here B_i and B_f are the Bekenstein-Hawking entropies of the black hole before and after the emission. It has been argued in [36] that this formula is consistent with unitary evolution.
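The tunneling probability referred to as Eq. (1.2) is of the Parikh-Wilczek type; a reconstruction consistent with the surrounding text (it reduces to P = e^{-B} when B_i = B and B_f = 0) reads:

```latex
\Gamma \;\propto\; \exp\!\left( B_f - B_i \right)
```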
In the context of shell models we consider scattering processes similar to those in Fig. 1: a classical contracting shell forms a black hole and the latter completely decays due to quantum fluctuations into an expanding shell. The initial and final states Ψ_i and Ψ_f of the process describe free shells in flat or AdS space-times. Our result for the semiclassical amplitude (1.1) has the saddle-point form (1.3). The probability is P_fi ∝ exp(−2 Im S_reg/ℏ). We show that for neutral shells it coincides with Eq. (1.2), where B_i is set equal to the entropy of the intermediate black hole and B_f = 0. This is consistent with the result of Refs. [30][31][32][33], since the first stage of the process, i.e. formation of the intermediate black hole, proceeds classically. For charged black holes the same result is recovered once we take into account instability of the inner Cauchy horizon of the intermediate Reissner-Nordström black hole [37][38][39][40][41][42]. Our results are therefore consistent with the interpretation of Hawking radiation as tunneling. However, we obtain important additional information: the phases of the S-matrix elements, which explicitly depend, besides the properties of the intermediate black hole, on the initial and final states of the process.
The paper is organized as follows. In Sec. 2 we introduce a general semiclassical method to compute S-matrix elements for scattering via black hole formation and evaporation. In Sec. 3 we apply the method to transitions of a neutral shell in asymptotically flat space-time. We also discuss the relation of these scattering processes to the standard thermal radiation of a black hole. This analysis is generalized in Sec. 4 to a neutral shell in asymptotically AdS space-time, where scattering of the shell admits an AdS/CFT interpretation. A model with an electrically charged shell is studied in Sec. 5. Section 6 is devoted to conclusions and discussion of future directions. Appendices contain technical details.

Figure 2. The contour used in the calculation of the S-matrix elements. Quantum transition from t_i to t_f is preceded and followed by the free evolution.
2 Modified semiclassical method

2.1 Semiclassical S-matrix for gravitational scattering

The S-matrix is defined as Ŝ = Û₀(0, t_f) Û(t_f, t_i) Û₀(t_i, 0) , (2.1) where Û is the evolution operator; the free evolution operators Û₀ on both sides transform from the Schrödinger to the interaction picture. In our case Û describes the quantum transition in Fig. 1, while Û₀ generates evolution of free matter particles and Hawking quanta in the initial and final states. The time variable t ∈ [t_i, t_f] is chosen to coincide with the time of an asymptotic observer at infinity. Using path integrals for the evolution operators and taking their convolutions with the wave functionals of the initial and final states, one obtains the path integral representation 2 (2.2) for the amplitude (2.1), where Φ = {φ, g_µν} collectively denotes matter and gravitational fields 3 along the time contour in Fig. 2. The interacting and free actions S and S₀ describe evolution along different parts of the contour. The initial- and final-state wave functionals Ψ_i and Ψ_f depend on the fields Φ∓ ≡ Φ(t = 0∓) at the endpoints of the contour. In the second equality of Eq. (2.2) we combined all factors in the integrand into the "total action" S_tot[Φ]. Below we mostly focus on the nonlinear evolution from t_i to t_f and take into account contributions from the dashed parts of the contour in Fig. 2 at the end of the calculation.
To distinguish between different scattering regimes, we introduce a parameter P characterizing the initial state [43] - say, its average energy. If P is small, the gravitational interaction is weak and the particles scatter trivially without forming a black hole. In this regime the integral in Eq. (2.2) is saturated by the saddle-point configuration Φ_cl satisfying the classical field equations with boundary conditions related to the initial and final states [44]. However, if P exceeds a certain critical value P_*, the classical solution Φ_cl corresponds to formation of a black hole. It therefore fails to interpolate towards the asymptotic out-state Ψ_f living in flat space-time. This marks a breakdown of the standard semiclassical method for the amplitude (2.2).
To deal with this obstacle, we introduce a constraint in the path integral which explicitly guarantees that all field configurations Φ from the integration domain have flat space-time asymptotics. Namely, we introduce a functional T_int[Φ] with the following properties: it is (i) diff-invariant; (ii) positive-definite if Φ is real; (iii) finite if Φ approaches flat space-time at t → ±∞; (iv) divergent for any configuration containing a black hole in the asymptotic future. Roughly speaking, T_int[Φ] measures the "lifetime" of a black hole in the configuration Φ. Possible choices of this functional will be discussed in the next subsection; for now let us assume that it exists. Then we consider the identity 1 = ∫ dT₀ δ(T_int[Φ] − T₀) = ∫ dT₀ (dε/2π) exp[ iε (T_int[Φ] − T₀) ] , (2.3) where in the second equality we used the Fourier representation of the δ-function. Inserting Eq. (2.3) into the integral (2.2) and changing the order of integration, we obtain Eq. (2.4). The inner integral over Φ in Eq. (2.4) has the same form as the original path integral, but with the modified action S_ε[Φ] = S[Φ] + iε T_int[Φ] . (2.5) This implies that Φ_ε has correct flat-space asymptotics. The integral over T₀ is saturated at ε = 0. Importantly, we do not substitute ε = 0 into the saddle-point equations for Φ_ε, since in that case we would recover the original classical equations together with incorrect asymptotics of the saddle-point solutions. Instead, we understand this equation as the limit ε → +0 (2.7) that must be taken at the last stage of the calculation. The condition Re ε > 0 is required for convergence of the path integral (2.4). We obtain the saddle-point expression (1.3) for the amplitude with the exponent 4 S_reg, where the limit ε → +0 is taken at the end of the calculation. To summarize, our method breaks computation of the S-matrix elements into two steps. First, one modifies the action according to Eq. (2.5), where Re ε > 0, and solves the corresponding classical equations of motion. The modified solutions Φ_ε automatically approach flat space-time in the asymptotic past and future.
Second, one evaluates the action on the modified solutions and sends ε → +0, obtaining the leading semiclassical exponent of the S-matrix element. A remark is in order. Since the modification adds complex terms to the action, the modified saddle-point configurations Φ_ε are also complex. Typically, the space of complex saddle-point solutions is complicated and selecting the physical solution poses a non-trivial challenge. To this purpose we use the method of continuous deformations. Namely, we pick a real classical solution Φ₀ describing scattering at a small value of the parameter P < P_*. By construction, Φ₀ approaches flat space-time at t → ∓∞ and gives the dominant contribution to the integral (2.4). Next, we modify the action and gradually increase ε from 0 to positive values, constructing a continuous branch of modified solutions Φ_ε. At ε → +0 these solutions reduce to Φ₀ and therefore saturate the integral (2.4). We finally increase the value of P to P > P_*, assuming that the continuously deformed saddle-point configurations Φ_ε remain physical 5 . In this way we obtain the modified solutions and the semiclassical amplitude at any P. We stress that our continuation procedure cannot be performed with the original classical solutions which, if continued to P > P_*, describe formation of black holes. On the contrary, the modified solutions Φ_ε interpolate between the flat-space asymptotics at any P. They are notably different from the real classical solutions at P > P_*.
2.2 The functional T_int[Φ]
Let us construct the appropriate functional T_int[Φ]. This is particularly simple in the case of reduced models with spherically-symmetric gravitational and matter fields. The general spherically-symmetric metric has the form ds² = g_ab(y) dy^a dy^b + r²(y) dΩ² , where dΩ is the line element on a unit two-sphere and g_ab is the metric in the transverse two-dimensional space 6 . Importantly, the radius r(y) of the sphere transforms as a scalar under the diffeomorphisms of the y-manifold. Therefore the functional T_int[Φ] = ∫ d²y √(−g) w(r) F(∆) , ∆ ≡ g^ab ∂_a r ∂_b r , (2.10) is diff-invariant. Here w(r) and F(∆) are non-negative functions, so that the functional (2.10) is positive-definite. We further require that F(∆) vanishes if and only if ∆ = 1. Finally, we assume that w(r) significantly differs from zero only at r ≲ r_w, where r_w is some fixed value, and falls off sufficiently fast at large r. An example of functions satisfying these conditions is given in Eq. (2.11). To understand the properties of the functional (2.10), we consider the Schwarzschild frame where r is the spatial coordinate and the metric is diagonal. In this frame the functional (2.10) takes the form of an integral of w(r) F(∆) over t and r, with ∆ = g^11. Due to the fast falloff of w(r) at infinity the integral over r in this expression is finite. However, convergence of the time integral depends on the asymptotics of the metrics in the past and future. In flat space-time g_11 = 1 and the integrand in Eq. (2.10) vanishes.

4 Below we consider only the leading semiclassical exponent. The prefactor in the modified semiclassical approach was discussed in [21][22][23]. 5 In other words, we assume that no Stokes lines [45] are crossed in the course of deformation. This conjecture has been verified in multidimensional quantum mechanics by direct comparison of semiclassical and exact results [21-25, 46, 47]. 6 We use the signature (−, +, . . .) for the metrics g_µν and g_ab. The Greek indices µ, ν, . . . are used for the four-dimensional tensors, while the Latin ones a, b, . . . = 0, 1 are reserved for the two-dimensional space of the spherically reduced model.
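The required behavior of T_int is easy to realize. A minimal numerical sketch, assuming the illustrative choices w(r) = exp(−r²/r_w²) and F(∆) = (∆ − 1)² (our assumptions for illustration, not necessarily the paper's Eq. (2.11)), evaluates the r-integral of the T_int integrand per unit Schwarzschild time: it vanishes identically in flat space-time and stays positive, hence linearly divergent in t, whenever a static horizon persists.

```python
import math

def w(r, r_w=10.0):
    # Illustrative weight, concentrated at r <~ r_w (assumption, not the paper's choice).
    return math.exp(-(r / r_w) ** 2)

def F(delta):
    # Non-negative, vanishes iff delta == 1, as required in the text.
    return (delta - 1.0) ** 2

def rate(r_h, r_min, r_max, n=10000):
    """Integral over r of w(r) F(Delta) per unit Schwarzschild time.

    In the Schwarzschild frame Delta = (nabla r)^2 = f(r) = 1 - r_h/r for a
    static exterior, and sqrt(-g) = 1 for the 2d (t, r) part, so T_int grows
    linearly in t at this rate whenever the rate is nonzero.
    """
    dr = (r_max - r_min) / n
    total = 0.0
    for i in range(n):
        r = r_min + (i + 0.5) * dr
        delta = 1.0 - r_h / r  # flat space-time corresponds to r_h = 0, delta = 1
        total += w(r) * F(delta) * dr
    return total

flat_rate = rate(r_h=0.0, r_min=0.1, r_max=50.0)
bh_rate = rate(r_h=2.0, r_min=2.1, r_max=50.0)
print(flat_rate)  # 0: the integrand vanishes in flat space-time
print(bh_rate)    # positive: T_int diverges linearly as t -> +infinity
```

The check makes conditions (iii) and (iv) concrete: finiteness of T_int hinges entirely on whether the metric relaxes to ∆ = 1 inside the support of w(r).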
Thus, the integral over t is finite if g_ab approaches the flat metric at t → ±∞. Otherwise the integral diverges. In particular, any classical solution with a black hole in the final state leads to a linear divergence at t → +∞, because the Schwarzschild metric is static and g_11 ≠ 1. Roughly speaking, T_int can be regarded as the Schwarzschild time during which matter fields efficiently interact with gravity inside the region r < r_w. If matter leaves this region in finite time, T_int takes finite values. It diverges otherwise. Since the functional (2.10) is diff-invariant, these properties do not depend on the particular choice of the coordinate system. The above construction will be sufficient for the purposes of the present paper. Beyond spherical symmetry one can use functionals T_int[Φ] that involve, e.g., an integral of the square of the Riemann tensor, or the Arnowitt-Deser-Misner (ADM) mass inside a large volume.
3 Neutral shell in flat space-time
3.1 The simplest shell model
We illustrate the method of Sec. 2 in the spherically symmetric model of gravity with a thin dust shell as matter. The latter is parameterized by a single collective coordinate - the shell radius r(τ) - depending on the proper time along the shell τ. This is a dramatic simplification as compared to the realistic case of matter described by dynamical fields. Still, one can interpret the shell as a toy model for the evolution of narrow wavepackets in field theory. In particular, one expects that the shell model captures essential features of gravitational transitions between such wavepackets. 7

7 Note that our approach does not require complete solution of the quantum shell model, which may be ambiguous. Rather, we look for complex solutions of the classical equations saturating the path integral.
The minimal action for a spherical dust shell is S_shell = −m ∫ dτ , (3.1)
where m is the shell mass. However, such a shell always collapses into a black hole and hence is not sufficient for our purposes. Indeed, as explained in Sec. 2.1, in order to select the physically relevant semiclassical solutions we need a parameter P such that an initially contracting shell reflects classically at P < P_* and forms a black hole at P > P_*. We therefore generalize the model (3.1). To this end we assume that the shell is assembled from particles with nonzero angular momenta. At each point on the shell the velocities of the constituent particles are uniformly distributed in the tangential directions, so that the overall configuration is spherically-symmetric 8 . The corresponding shell action is [49] S_shell = −∫ dτ m_eff(r) , m_eff(r) = (m² + L²/r²)^{1/2} , (3.2) where L is a parameter proportional to the angular momentum of the constituent particles.
Its nonzero value provides a centrifugal barrier reflecting classical shells at low energies.
Decreasing this parameter, we arrive at the regime of classical gravitational collapse. In what follows we switch between the scattering regimes by changing the parameter L ≡ P^{−1}. For completeness we derive the action (3.2) in Appendix A.
The gravitational sector of the model is described by the Einstein-Hilbert action with the Gibbons-Hawking term (3.4). Here the metric g_µν and curvature scalar R are defined inside the space-time volume V with the boundary 9 ∂V. The latter consists of a time-like surface at spatial infinity r = r_∞ → +∞ and space-like surfaces at the initial and final times t = t_i,f → ∓∞. In Eq. (3.4) σ are the coordinates on the boundary, h is the determinant of the induced metric, while K is the extrinsic curvature involving the outer normal. The parameter κ equals +1 (−1) at the time-like (space-like) portions of the boundary. To obtain zero gravitational action in flat space-time, we subtract the regulator K₀, which is equal to the flat-space extrinsic curvature of the boundary [50]. For the sphere at infinity K₀ = 2/r_∞, while the initial- and final-time hypersurfaces have K₀ = 0. The Gibbons-Hawking term (3.4) will play an important role in our analysis. Let us first discuss the classical dynamics of the system. Equations of motion follow from variation of the total action with respect to the metric g_µν and the shell trajectory y^a(τ). In the regions inside and outside the shell the metric satisfies vacuum Einstein equations and therefore, due to the Birkhoff theorem, is given by the flat and Schwarzschild solutions, respectively, see Fig. 3a.
Introducing the spherical coordinates (t_−, r) inside the shell and Schwarzschild coordinates (t_+, r) outside, one writes the inner and outer metrics in the universal form ds² = −f_±(r) dt_±² + dr²/f_±(r) + r² dΩ² , with f_− = 1 , f_+ = 1 − 2M/r . (3.6, 3.7) The parameter M is the ADM mass, which coincides with the total energy of the shell. In what follows we will also use the Schwarzschild radius r_h ≡ 2M. For the validity of the semiclassical approach we assume that the energy is higher than Planckian, M ≫ 1. The equation for the shell trajectory is derived in Appendix B by matching the inner and outer metrics at the shell worldsheet with the Israel junction conditions [51, 52]. It can be cast into the form of an equation of motion for a particle with zero energy in an effective potential, ṙ² = −V_eff(r) . (3.8) This potential goes to −∞ at r → 0 and asymptotes to a negative value 10 1 − M²/m² at r = +∞, see Fig. 4. At large enough L the potential crosses zero at the points A and A′ - the turning points of classical motion. A shell coming from infinity reflects from the point A back to r = +∞. When L decreases, the turning points approach each other and coalesce at a certain critical value 11 L = L_*. At even smaller L the turning points migrate into the complex plane, see Fig. 5 (upper left panel), and the potential barrier disappears. Now a classical shell coming from infinity goes all the way to r = 0. This is the classical collapse. Now, we explicitly see an obstacle to finding the reflected semiclassical solutions at L < L_* with the method of continuous deformations. Indeed, at large L the reflected solutions r = r(τ) are implicitly defined as τ = ∫_C dr (−V_eff(r))^{−1/2} , (3.10) where the square root is positive at r → +∞ + i0. The indefinite integral is performed along the contour C running from r = +∞ − i0 to r = +∞ + i0 and encircling the turning point A - the branch point of the integrand (see the upper left panel of Fig. 5). As L is lowered, the branch point moves and the integration contour stays attached to it.

10 Recall that the shell energy M is always larger than its rest mass m.
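The critical value L_* can be located numerically. The sketch below assumes the junction-condition form of the potential, V_eff(r) = 1 − (M/m_eff + m_eff/(2r))², which follows from the Israel matching described above and reproduces the stated asymptotics (V_eff → −∞ at r → 0, V_eff → 1 − M²/m² at r → ∞); here we specialize to the massless shell, m_eff = L/r, in units G = c = 1. The normalization is illustrative.

```python
# Locate the critical angular-momentum parameter L_* at which the two
# turning points A, A' of the effective potential coalesce (massless shell):
#   V_eff(r) = 1 - (M r / L + L / (2 r^2))^2   (assumed junction-condition form)

def barrier_height(M, L, n=4000):
    # Maximum of V_eff over a grid of r > 0; a barrier exists while it is positive.
    best = -float("inf")
    for i in range(1, n):
        r = 0.01 + i * 0.01
        m_eff = L / r
        v = 1.0 - (M / m_eff + m_eff / (2.0 * r)) ** 2
        best = max(best, v)
    return best

def L_star(M, lo=0.1, hi=100.0):
    # Bisect on L: the barrier (and the turning points) exists only for L > L_*.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if barrier_height(M, mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return hi

print(L_star(1.0))  # ~ 3.375; minimizing M r/L + L/(2 r^2) gives L_* = 27 M^2 / 8
```

The analytic value follows by minimizing M r/L + L/(2r²) over r and demanding that the minimum equal 1, which yields L_* = 27M²/8 under the assumed potential.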
However, at L = L_*, when the branch points A and A′ coalesce, the contour C is undefined. It is therefore impossible to obtain reflected semiclassical solutions at L < L_* from the classical solutions at L > L_*.
3.2 Modification
To find physically relevant reflected trajectories at L < L_*, we use the method of Sec. 2 and add an imaginary term iε T_int to the action. We consider T_int of the form (2.10), where the function w(r) is concentrated in the vicinity of r = r_w. The radius r_w is chosen to be large enough, in particular, larger than the Schwarzschild radius r_h and the position r_A of the right turning point A. Then the Einstein equations are modified only at r ≈ r_w, whereas the geometries inside and outside of this layer are given by the Schwarzschild solutions with masses M′ and M, see Fig. 3b. To connect these masses, we solve the modified Einstein equations in the vicinity of r_w. Inserting the general spherically symmetric metric in the Schwarzschild frame (3.11) into the (tt) component of the Einstein equations, we obtain Eq. (3.12). The solution is given in Eq. (3.13) 12 . This gives the relation M′ = M + i ε̃ , (3.14) where ε̃ > 0 is the new parameter of the modification. As before, the ADM mass M of the system is conserved in the course of the evolution. It coincides with the initial and final energies of the shell which are, in turn, equal, as will be shown in Sec. 3.3, to the initial- and final-state energies in the quantum scattering problem. Thus, M is real, while the mass M′ of the Schwarzschild space-time surrounding the shell acquires a positive imaginary part 13 . The shell dynamics in this case is still described by Eq. (3.8), where M is replaced by M′ in the potential (3.9). Below we find semiclassical solutions for small ε̃ > 0. In the end ε̃ will be sent to zero. Let us study the effect of the modification (3.14) on the semiclassical trajectories r = r(τ) in Eq. (3.10). At L > L_* the complex terms in V_eff are negligible and the reflected trajectory is obtained with the same contour C as before, see the upper left panel of Fig. 5. The modification of V_eff becomes important when L gets close to L_* and the two turning points A and A′ approach each other.
Expanding the potential in the neighborhood of the maximum, we write V_eff(r) ≈ V_max − µ² (r − r_max)² , (3.15) where V_max, µ and r_max depend on L and M′. For real M′ = M the extremal value V_max is real and crosses zero when L crosses L_*, whereas the parameters µ² > 0 and r_max remain approximately constant. The shift of M′ into the upper complex half-plane gives a negative imaginary part to V_max, Im V_max < 0 , (3.16) where the inequality follows from the explicit form (3.9). Now, it is straightforward to track the motion of the turning points using Eq. (3.15) as L decreases below L_*. Namely, A and A′ are shifted into the lower and upper half-planes, as shown in Fig. 5 (upper right panel). Importantly, these points never coalesce. The physically relevant reflected solution at L < L_* is obtained by continuously deforming the contour of integration in Eq. (3.10) while keeping it attached to the same turning point 14 . As we anticipated in Sec. 2, a smooth branch of reflected semiclassical solutions parameterized by L exists in the modified system.

12 The function f̃ is time-independent due to the (tr) equation. 13 In this setup the method of Sec. 2 is equivalent to analytic continuation of the scattering amplitude into the upper half-plane of complex ADM energy, cf. [25].
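Both claims - Im V_max < 0 and the splitting of A, A′ into opposite half-planes - can be checked directly. The sketch below again assumes the junction-condition potential V_eff = 1 − (M′r/L + L/(2r²))² for a massless shell (a reconstruction; parameter values are illustrative) with M′ = M + iε̃:

```python
import cmath

# Turning points A, A' of the epsilon-modified potential, massless shell:
#   V_eff(r) = 1 - h(r)^2,  h(r) = M' r / L + L / (2 r^2),  M' = M + i*eps.
# (Assumed junction-condition form; units G = c = 1.)

def turning_points(M, L, eps):
    Mp = M + 1j * eps
    h = lambda r: Mp * r / L + L / (2 * r * r)
    V = lambda r: 1 - h(r) ** 2
    dV = lambda r: -2 * h(r) * (Mp / L - L / r ** 3)
    r_max = (L * L / Mp) ** (1.0 / 3.0)            # stationary point of h, max of V_eff
    V_max = V(r_max)
    mu = cmath.sqrt(3 * L * h(r_max) / r_max ** 4)  # mu^2 = -V''(r_max)/2
    roots = []
    for sign in (+1, -1):
        r = r_max + sign * cmath.sqrt(V_max) / mu   # quadratic-expansion estimate
        for _ in range(50):                         # Newton polish on V_eff(r) = 0
            r = r - V(r) / dV(r)
        roots.append(r)
    return V_max, roots

# Slightly below the critical value L_* = 27 M^2 / 8 = 3.375 for M = 1:
V_max, (rA, rAp) = turning_points(M=1.0, L=3.2, eps=0.01)
print(V_max.imag)         # negative: Im V_max < 0, cf. Eq. (3.16)
print(rA.imag, rAp.imag)  # opposite signs: A and A' sit in different half-planes
```

With ε̃ > 0 the two roots never collide, so the integration contour in Eq. (3.10) can be deformed continuously through L = L_*, exactly as described in the text.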
If L is slightly smaller than L_*, the relevant saddle-point trajectories reflect at Re r_A > r_h and hence never cross the horizon. A natural interpretation of the corresponding quantum transitions is over-barrier reflection from the centrifugal potential. However, in the limit L → 0 the centrifugal potential vanishes. One expects that the semiclassical trajectories in this limit describe complete gravitational transitions proceeding via formation and decay of a black hole.
We numerically traced the motion of the turning point A as L decreases from large to small values, see Fig. 5 (lower panel). It approaches the singularity 15 r = 0 at L → 0. This behavior is confirmed analytically in Appendix C. Thus, at small L the contour C goes essentially along the real axis, making only a tiny excursion into the complex plane near the singularity. It encircles the horizon r = r_h from below.

14 In the simple shell model we can take ε̃ = 0 once the correspondence between the solutions at L > L_* and L < L_* is established. This may be impossible in more complicated systems [21, 22, 24, 25] where the relevant saddle-point trajectories do not exist at ε = 0 and one works at nonzero ε till the end of the calculation. 15 For the validity of low-energy gravity the turning point should remain in the region of sub-Planckian curvature, R_µνλρ R^µνλρ ∼ M²/r⁶ ≪ 1. This translates into the requirement r_A ≫ M^{1/3}, which can be satisfied simultaneously with L ≪ L_* provided the total energy is higher than the Planck mass, M ≫ 1.

Figure 6. The time contour corresponding to the semiclassical solution at small L. Solid and dashed lines correspond to interacting and free evolution, respectively, cf. Fig. 2.
3.3 S-matrix element
The choice of the time contour. The action S_reg entering the amplitude (1.3) is computed along a contour in the complex plane of the asymptotic observer's time t ≡ t_+.
Since we have already found the physically relevant contour C for r(τ), let us calculate the Schwarzschild time t_+(r) along this contour. We write t_+ = ∫_C dr (f_+ − V_eff)^{1/2} / [ f_+ (−V_eff)^{1/2} ] , (3.17) where the indefinite integral runs along C. In Eq. (3.17) we used the definition of the proper time, which implies f_+ ṫ_+² − ṙ²/f_+ = 1, and expressed ṙ² from Eq. (3.8). The integrand in Eq. (3.17) has a pole at the horizon r = r_h, f_+(r_h) = 0, which is encircled from below, see Fig. 5, lower panel. The half-residue at this pole contributes iπr_h to t_+ each time the contour C passes close to it. The contributions have the same sign: although the contour C passes the horizon in the opposite directions, the square root in the integrand changes sign after encircling the turning point. An additional imaginary contribution comes from the integral between the real r-axis and the turning point A; this contribution vanishes at L → 0. The image C_t of the contour C is shown in Fig. 6, solid line. Adding free evolution from t_+ = 0− to t_+ = t_i and from t_+ = t_f to t_+ = 0+ (dashed lines), we obtain the contour analogous to the one in Fig. 2. One should not worry about the complex value of t_f in Fig. 6: the limit t_f → +∞ in the definition of the S-matrix implies that S_reg does not depend on t_f. Besides, the semiclassical solution r = r(t_+) is an analytic function of t_+, and the contour C_t can be deformed in the complex plane as long as it does not cross the singularities 16 of r(t_+). Below we calculate the action along C_t because the shell position and the metrics are real in the initial and final parts of this contour. This simplifies the calculation of the Gibbons-Hawking terms at t_+ = t_i and t_+ = t_f.

16 In fact, C_t is separated from the real time axis by a singularity where r(t_+) = 0. This is the usual situation for tunneling solutions in quantum mechanics and field theory [24, 25]. Thus, S_reg cannot be computed along the contour in Fig. 2; rather, C_t or an equivalent contour should be used.
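The half-residue iπr_h can be verified in isolation: near the horizon the integrand of t_+(r) behaves as 1/f_+ = r/(r − r_h), and a small semicircle passing below the pole contributes iπr_h regardless of its radius. A minimal numerical check (contour radius is illustrative):

```python
import cmath, math

def half_residue(r_h=2.0, rho=0.1, n=20000):
    """Integrate dr / f_+(r), f_+ = 1 - r_h/r, along a small semicircle that
    passes BELOW the pole at r = r_h (theta from pi to 2*pi), as the contour C
    does near the horizon."""
    total = 0.0 + 0.0j
    for k in range(n):
        th0 = math.pi + math.pi * k / n
        th1 = math.pi + math.pi * (k + 1) / n
        r0 = r_h + rho * cmath.exp(1j * th0)
        r1 = r_h + rho * cmath.exp(1j * th1)
        rm = 0.5 * (r0 + r1)                    # midpoint rule
        total += (rm / (rm - r_h)) * (r1 - r0)  # 1/f_+ = r / (r - r_h)
    return total

val = half_residue()
print(val.imag)  # ~ pi * r_h = 2*pi, the half-residue quoted in the text
```

Only the imaginary part is universal; the real part of the integral depends on the radius of the excursion and merges with the rest of the contour.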
Interacting action. Now we evaluate the action of the interacting system S(t_i, t_f) entering S_reg. We rewrite the shell action in the form (3.19). An important contribution comes from the Gibbons-Hawking term at spatial infinity r = r_∞ → +∞. The extrinsic curvature reads Eq. (3.20). The first term in it is canceled by the regulator K₀ in Eq. (3.4). The remaining expression is finite at r_∞ → +∞ and yields Eq. (3.21), where we transformed to an integral running along the contour C using Eq. (3.17). Note that this contribution contains an imaginary part, Eq. (3.22). Finally, in Appendix D we evaluate the Gibbons-Hawking terms at the initial- and final-time hypersurfaces. The result is Eq. (3.23), where r_i,f are the radii of the shell at the endpoints of the contour C. The latter radii are real, and so are the terms (3.23). Summing up the above contributions, one obtains Eq. (3.24). This expression contains linear and logarithmic divergences when r_i,f are sent to infinity. Note that the divergences appear only in the real part of the action and thus affect only the phase of the reflection amplitude but not its absolute value.
Initial- and final-state contributions. The linear divergence in Eq. (3.24) is related to free motion of the shell in the asymptotic region r → +∞, whereas the logarithmic one is due to the 1/r tails of the gravitational interaction in this region. Though the 1/r terms in the Lagrangian represent vanishingly small gravitational forces in the initial and final states, they produce logarithmic divergences in S(t_i, t_f) when integrated over the shell trajectory. To obtain a finite matrix element, we include 17 these terms in the definition of the free action S₀. In Appendix E the latter action is computed for the shell with energy M, Eq. (3.25), where r_{1,2} are the positions of the shell at t_+ = 0∓ and p_{i,f} are the initial and final shell momenta with 1/r corrections, Eq. (3.26). The path integral (2.2) for the amplitude involves the free wavefunctions Ψ_i(r_1) and Ψ_f(r_2) of the initial and final states. We consider the semiclassical wavefunctions (3.27) of the shell with fixed energy E, where p_{i,f} are the same as in Eq. (3.26). In fact, the energy E is equal to the energy of the semiclassical solution, E = M. Indeed, the path integral (2.2) includes integration over the initial and final configurations of the system, i.e. over r_1 and r_2 in the shell model. The condition for the stationary value of r_1 reads Eq. (3.28). Collecting the contributions (3.24)-(3.27), we obtain the regularized action S_reg, Eq. (3.29). It is straightforward to check that this expression is finite in the limit r_{i,f} → +∞. In Fig. 7 we plot its real and imaginary parts as functions of L for the case of a massless shell (m = 0). In the most interesting case of vanishing centrifugal barrier, L → 0, the only imaginary contribution to S_reg comes from the residue at the horizon r_h = 2M in Eq. (3.29), recall the contour C in Fig. 5. The respective value of the suppression exponent is 2 Im S_reg = π r_h² = 4πM² . (3.30) This result has important physical implications. First, Eq. (3.30) depends only on the total energy M of the shell but not on its rest mass m. Second, the suppression coincides with the Bekenstein-Hawking entropy of a black hole with mass M.
The same suppression was obtained in [33, 34] for the probability of emitting the total black hole mass in the form of a single shell. We conclude that Eq. (3.30) admits a physical interpretation as the probability of the two-stage reflection process where the black hole is formed in classical collapse with probability of order 1, and decays afterwards into a single shell with exponentially suppressed probability. One may be puzzled by the fact that, according to Eq. (3.29), the suppression receives equal contributions from the two parts of the shell trajectory crossing the horizon in the inward and outward directions. Note, however, that the respective parts of the integral (3.29) do not have individual physical meaning. Indeed, we reduced the original two-dimensional integral for the action to the form (3.29) by integrating over sections of constant Schwarzschild time. Another choice of the sections would lead to an expression with a different integrand. In particular, using constant-time slices in Painlevé or Finkelstein coordinates one obtains no imaginary contribution to S_reg from the inward motion of the shell, whereas the contribution from the outward motion is doubled. The net result for the probability is, of course, the same. 18 The above result unambiguously shows that the shell model, if taken seriously as a full quantum theory, suffers from the information paradox. Indeed, transition between the only two asymptotic states in this theory - contracting and expanding shell - is exponentially suppressed. Either the theory is intrinsically non-unitary, or one has to take into consideration an additional asymptotic state of a non-evaporating eternal black hole formed in the scattering process with probability 1 − P_fi.
On the other hand, the origin of the exponential suppression is clear if one adopts a modest interpretation of the shell model as describing scattering between narrow wavepackets in field theory. The Hawking effect implies that the black hole decays predominantly into configurations with high multiplicity of soft quanta. Its decay into a single hard wavepacket is entropically suppressed. One can therefore argue [36] that the suppression (3.30) is compatible with unitarity of field theory. However, the analysis of this section is clearly insufficient to make any conclusive statements in the field-theoretic context.
As a final remark, let us emphasize that besides the reflection probability our method allows one to calculate the phase of the scattering amplitude, Re S_reg. At L = m = 0 it can be found analytically, Eq. (3.31); it explicitly depends on the parameter r_0 of the initial- and final-state wavefunctions.
3.4 Relation to the Hawking radiation
In this section we deviate from the main line of the paper, which studies transitions between free-particle initial and final states, and consider scattering of a shell off an eternal preexisting black hole. This will allow us to establish a closer relation of our approach to the results of [33, 34] and the Hawking radiation. We focus on the scattering probability and thus consider only the imaginary part of the action. The analysis essentially repeats that of the previous sections with several differences. First of all, the inner and outer space-times of the shell are now Schwarzschild with the metric functions f_−(r) = 1 − 2M_BH/r , f_+(r) = 1 − 2(M_BH + M)/r , (3.32) where M_BH is the eternal black hole mass and M denotes, as before, the energy of the shell. The inner and outer metrics possess horizons at r_h^− = 2M_BH and r_h^+ = 2(M_BH + M), respectively. The shell motion is still described by Eq. (3.8), where the effective potential is obtained by substituting the expressions (3.32) into the first line of Eq. (3.9). Next, the global space-time has an additional boundary r = r_∞ → +∞ at the second spatial infinity of the eternal black hole, see Fig. 8. We have to include the corresponding Gibbons-Hawking term (3.33), cf. Eq. (3.21).

18 Note that our semiclassical method is free of uncertainties [53][54][55] appearing in the approach of [33].
Finally, the eternal black hole in the initial and final states contributes into the free action S₀. We use the Hamiltonian action of an isolated featureless black hole in empty space-time [56], Eq. (3.35). The integration contour C is similar to that in Fig. 5 (lower panel): it bypasses the two horizons r_h^− and r_h^+ in the lower half of the complex r-plane. In the interesting limit of vanishing centrifugal barrier, L → 0, the imaginary part of the action is again given by the residues at the horizons, 2 Im S_reg = B_+ − B_− , (3.36) where B_± = π(r_h^±)² are the entropies of the intermediate and final black holes. This suppression coincides with the results of [33, 34].
At M_BH = 0 the process of this section reduces to reflection of a single self-gravitating shell, and the expression (3.36) coincides with Eq. (3.30). In the other limiting case M ≪ M_BH the shell moves in the external black hole metric without back-reaction. The reflection probability in this case reduces to the Boltzmann exponent, P ∝ exp(−M/T_H) , (3.37) where we introduced the Hawking temperature T_H = 1/(8πM_BH). One concludes that reflection of low-energy shells proceeds via infall into the black hole and Hawking evaporation, whereas at larger M the probability (3.36) includes back-reaction effects.
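The Boltzmann limit can be checked from the entropies alone: with B_± = π(r_h^±)², r_h^− = 2M_BH and r_h^+ = 2(M_BH + M), the exact exponent B_+ − B_− approaches M/T_H as M/M_BH → 0. A quick numerical sketch (units G = c = ℏ = 1):

```python
import math

def exact_exponent(M, M_BH):
    # 2 Im S = B_+ - B_-, with B = pi * r_h^2 and r_h = 2 * (mass).
    B_plus = math.pi * (2.0 * (M_BH + M)) ** 2   # intermediate black hole
    B_minus = math.pi * (2.0 * M_BH) ** 2        # final black hole
    return B_plus - B_minus

def boltzmann_exponent(M, M_BH):
    T_H = 1.0 / (8.0 * math.pi * M_BH)  # Hawking temperature
    return M / T_H

M_BH = 10.0
for M in (1.0, 0.1, 0.01):
    ratio = exact_exponent(M, M_BH) / boltzmann_exponent(M, M_BH)
    print(M, ratio)  # ratio -> 1 as M / M_BH -> 0
```

Expanding B_+ − B_− = 8πM_BH M (1 + M/2M_BH) makes the statement explicit: the correction factor 1 + M/2M_BH is precisely the back-reaction effect mentioned in the text.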
3.5 Space-time picture
Let us return to the model with a single shell considered in Secs. 3.1-3.3. In the previous analysis we integrated out the non-dynamical metric degrees of freedom and worked with the semiclassical shell trajectory (t + (τ ), r(τ )). It is instructive to visualize this trajectory in regular coordinates of the outer space-time. Below we consider the case of ultrarelativistic shell with small angular momentum: L → 0 and M m. One introduces Kruskal coordinates for the outer metric, We choose the branch of the square root in these expressions by recalling that M differs from the physical energy M by an infinitesimal imaginary shift, see Eq. (3.14). The initial part of the shell trajectory from t + = t i to the turning point A (Figs. 5, 6) is approximately mapped to a light ray V = V 0 > 0 as shown in Fig. 9. Note that in the limit L → 0 the turning point A is close to the singularity r = 0, but does not coincide with it. At the turning point the shell reflects and its radius r(τ ) starts increasing with the proper time τ . This means that the shell now moves along the light ray U = U 0 > 0, and the direction of τ is opposite to that of the Kruskal time U +V . The corresponding evolution is represented by the interval (A, t f ) in Fig. 9. We conclude that at t + = t f the shell emerges in the opposite asymptotic region in the Kruskal extension of the black hole geometry. This conclusion may seem puzzling. However, the puzzle is resolved by the observation that the two asymptotic regions are related by analytic continuation in time. Indeed it is clear from Eqs. (3.38) that the shift t + → t + − 4πM i corresponds to total reflection of Kruskal coordinates U → −U , V → −V . Precisely this time-shift appears if we extend the evolution of the shell to the real time axis (point t f in Fig. 6). At t + = t f the shell emerges in the right asymptotic region 22 with future-directed proper time τ . The process in Fig. 
9 can be viewed as a shell-antishell annihilation which is turned by the analytic continuation into the transition of a single shell from t i to t f . Now, we write down the space-time metric for the saddle-point solution at m = 0 and L → 0. Recall that in this case the shell moves along the real r-axis. We therefore introduce global complex coordinates (r, t + ), where t + belongs to C t and r is real positive. The metric is given by analytic continuation of Eqs. (3.6), (3.7), where we changed the inner time t − to t + by matching them at the shell worldsheet r = r shell (t + ). Importantly, the metric (3.39) is regular at the origin r = 0 which is never reached by the shell. It is also well defined at r h = 2M due to the imaginary part of M ; in the vicinity of the Schwarzschild horizon r h the metric components are essentially complex. Discontinuity of Eq. (3.39) at r = r shell (t + ) is a consequence of the δ-function singularity in the shell energy-momentum tensor. This makes the analytic continuation of the metric ill-defined in the vicinity of the shell trajectory. We expect that this drawback disappears in the realistic field-theory setup where the saddle-point metric will be smooth (and complex-valued) in Schwarzschild coordinates.
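The sign flip of the Kruskal coordinates under the imaginary time shift can be checked directly. The sketch below uses the textbook exterior Kruskal map (our convention; the paper's Eqs. (3.38) are not reproduced above) and verifies that t_+ → t_+ − 4πiM sends (U, V) → (−U, −V):

```python
import cmath

M = 1.0  # black hole mass, Planck units (illustrative value)

def kruskal(t, r_star, m=M):
    """Exterior Kruskal coordinates in the standard textbook convention
    (our assumption): U = -exp(-(t - r_*)/(4M)), V = exp((t + r_*)/(4M)),
    where r_* is the tortoise coordinate."""
    U = -cmath.exp(-(t - r_star) / (4 * m))
    V = cmath.exp((t + r_star) / (4 * m))
    return U, V

U0, V0 = kruskal(2.3, 5.0)
U1, V1 = kruskal(2.3 - 4j * cmath.pi * M, 5.0)
# The imaginary shift t -> t - 4*pi*i*M reflects both coordinates: U -> -U, V -> -V.
assert abs(U1 + U0) < 1e-12 and abs(V1 + V0) < 1e-12
```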
Reflection probability
In this and subsequent sections we subject our method to further tests in more complicated shell models. Here we consider a massless shell in 4-dimensional AdS space-time. The analysis is similar to that of Sec. 3, so we go quickly over the details. The shell action is still given by Eq. (3.2) with m_eff = L/r, while the Einstein-Hilbert action is supplemented by the cosmological constant term. Here Λ ≡ −3/l², l is the AdS radius. The Gibbons-Hawking term has the form (3.4), where now the regulator at the distant sphere is chosen to cancel the gravitational action of empty AdS_4. The metric inside and outside the shell is AdS and AdS-Schwarzschild, respectively, where M is the shell energy. The trajectory of the shell obeys Eq. (3.8) with the effective potential given by the first line of Eq. (3.9). The ε-modification again promotes M in this expression to M = M + iε̃. Repeating the procedure of Sec. 3.2, we start from the reflected trajectory at large L. Keeping ε̃ > 0, we trace the motion of the turning point as L decreases 23. The result is a family of contours C spanned by the trajectory in the complex r-plane. These are similar to the contours in Fig. 5. In particular, at L → 0 the contour C mostly runs along the real axis, encircling the AdS-Schwarzschild horizon r_h from below, as in the lower panel of Fig. 5. Calculation of the action is somewhat different from that in flat space. First, the space-time curvature is now non-zero everywhere. The trace of the Einstein equations gives 24 R = 4Λ. The Einstein-Hilbert action takes the form (4.5). The last term, diverging at r_∞ → ∞, is canceled by the similar contribution (4.6) in the Gibbons-Hawking term at spatial infinity. Second, unlike the case of asymptotically flat space-time, the Gibbons-Hawking terms at the initial- and final-time hypersurfaces t_+ = t_i,f vanish, see Appendix D. Finally, the canonical momenta 25 of the free shell in AdS are negligible in the asymptotic region r → +∞.
Thus, the terms involving p_i,f in the free action (3.25) and in the initial and final wavefunctions (3.27) are vanishingly small if the normalization point r_0 is large enough. This leaves only the temporal contributions in the free actions, Eq. (4.8). (Footnote 23: Alternatively, one can start from the flat-space trajectory and continuously deform it by introducing the AdS radius l. Footnote 24: In the massless case the trace of the shell energy-momentum tensor vanishes, T^μ_μ = 0.) Summing up Eqs. (4.5), (4.6), (4.8) and the shell action (3.2), we obtain Eq. (4.9), where the integration contour in the last expression goes below the pole at r = r_h. The integral (4.9) converges at infinity due to the fast growth of the functions f_+ and f_−. In particular, this convergence implies that there are no gravitational self-interactions of the shell in the initial and final states, due to screening of infrared effects in AdS. The imaginary part of Eq. (4.9) gives the exponent of the reflection probability. It is related to the residue of the integrand at r_h. We again find that the probability is exponentially suppressed by the black hole entropy.
Remarkably, the dependence of the reflection probability on the model parameters has combined into r h which is a complicated function of the AdS-Schwarzschild parameters M and l.
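To make the dependence of r_h on M and l concrete, the sketch below finds the horizon of the standard AdS_4-Schwarzschild metric function f(r) = 1 − 2M/r + r²/l² (assumed here; the paper's explicit metric functions are elided above) by bisection and evaluates the entropy exponent B = πr_h²:

```python
import math

def ads_schwarzschild_horizon(m, l):
    """Horizon r_h solving f(r) = 1 - 2*M/r + (r/l)^2 = 0 (standard
    AdS4-Schwarzschild metric function, Planck units), found by bisection.
    f(0+) -> -infinity and f(2M) = (2M/l)^2 > 0, so the root lies in (0, 2M)."""
    f = lambda r: 1 - 2 * m / r + (r / l) ** 2
    lo, hi = 1e-9, 2 * m
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def entropy_exponent(m, l):
    """Bekenstein-Hawking exponent B = pi*r_h^2 suppressing the reflection."""
    return math.pi * ads_schwarzschild_horizon(m, l) ** 2
```

In the flat-space limit l → ∞ the root approaches the Schwarzschild value 2M, while for l comparable to M the horizon is substantially smaller, illustrating the "complicated function of M and l" noted above.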
AdS/CFT interpretation
Exponential suppression of the shell reflection has a natural interpretation within the AdS/CFT correspondence [3,57,58]. The latter establishes a relationship between gravity in AdS and a strongly interacting conformal field theory (CFT). Consider three-dimensional CFT on a manifold with topology R × S², parameterized by time t and spherical angles θ. This is the topology of the AdS_4 boundary, so one can think of the CFT_3 as living on this boundary. Let us build the CFT dual for transitions of a gravitating shell in AdS_4. Assume the CFT_3 has a marginal scalar operator Ô(t, θ); its conformal dimension is ∆ = 3. This operator is dual to a massless scalar field φ in AdS_4. Consider now the composite operator Ô_M(t_0), where G_M(t) is a top-hat function of width ∆t ≫ 1/M. This operator is dual to a spherical wavepacket (coherent state) of the φ-field emitted at time t_0 from the boundary towards the center of AdS [59,60]. The correlator (4.12) of these operators is proportional to the amplitude for reflection of the contracting wavepacket back to the boundary. If the width of the wavepacket is small enough, ∆t ≪ l, the φ-field can be treated in the eikonal approximation and the wavepacket follows a sharply defined trajectory. In this way we arrive at the transition of a massless spherical shell in AdS_4, see Fig. 10.
Exponential suppression of the transition probability implies respective suppression of the correlator (4.12). However, the latter suppression is natural in CFT_3 because the state created by the composite operator Ô_M(0) is very special. Subjected to time evolution, it evolves into a thermal equilibrium which poorly correlates with the state destroyed by Ô†_M(πl). Restriction of the full quantum theory in AdS_4 to a single shell is equivalent to a brute-force amputation of states with many soft quanta in unitary CFT_3. Since the latter are mainly produced during thermalization, the amputation procedure leaves us with exponentially suppressed S-matrix elements.
Elementary shell
Another interesting extension of the shell model is obtained by endowing the shell with electric charge. The corresponding action is the sum of Eq. (3.5) and the electromagnetic contribution (5.1), where A_μ is the electromagnetic field with stress tensor F_μν = ∂_μ A_ν − ∂_ν A_μ and Q is the shell charge. This leads to the Reissner-Nordström (RN) metric outside the shell and empty flat space-time inside, Eq. (5.2). Other components of A_μ are zero everywhere.

Figure 11. Motion of the turning points and the contour C defining the trajectory for (a) the model with elementary charged shell and (b) the model with discharge.

Importantly, the outside metric has two horizons r_h^(±)
at Q < M. At Q > M the horizons lie in the complex plane, and the shell reflects classically. Since these classical reflections proceed without any centrifugal barrier, we set L = 0 henceforth. The semiclassical trajectories will be obtained by continuous change of the shell charge Q. The evolution of the shell is still described by Eq. (3.8) with the effective potential constructed from the metric functions (5.2), Eq. (5.4). This potential always has two turning points on the real axis, Eq. (5.5). The shell reflects classically from the rightmost turning point r_A at Q > M. In the opposite case Q < M the turning points are covered by the horizons, and the real classical solutions describe black hole formation. We find the relevant semiclassical solutions at Q < M using the ε-modification. Since the modification term (2.10) does not involve the electromagnetic field, it does not affect the charge Q, giving, as before, an imaginary shift to the mass, M → M + iε̃. A notable difference from the case of Sec. 3 is that the turning points (5.5) are almost real at Q < M. The semiclassical trajectories therefore run close to the real r-axis 27 for any Q. On the other hand, the horizons (5.3) approach the real axis from below and from above, respectively. Since the semiclassical motion of the shell at Q < M proceeds with almost real r(τ), we can visualize its trajectory in the extended RN geometry, see Fig. 12. The shell starts in the asymptotic region I, crosses the outer and inner horizons r_h^(±), repels from the time-like singularity due to the electromagnetic interaction, and finally re-emerges in the asymptotic region I′. At first glance, this trajectory has a different topology as compared to the classical reflected solutions at Q > M: the latter stay in the region I at the final time t_+ = t_f. (Footnote 27: The overall trajectory is nevertheless complex because t_+ ∈ C_t, see below.) However, following Sec.
3.5 we recall that the Schwarzschild time t_+ of the semiclassical trajectory is complex in the region I′, where we used Eq. (3.17) and denoted by t_i and t_f the values of t_+ at the initial and final endpoints of the contour C in Fig. 11a. Continuing t_f to real values, we obtain the semiclassical trajectory arriving in the region I′ in the infinite future 28, cf. Sec. 3.5. This is what one expects, since the asymptotic behavior of the semiclassical trajectories is not changed in the course of continuous deformations. Let us now evaluate the reflection probability. Although the contour C is real, it receives imaginary contributions from the residues at the horizons. The imaginary part of the total action comes 29 from Eq. (3.29) and the electromagnetic term (5.1). The latter is rewritten using the shell current j^μ, the Maxwell equations ∇_μ F^μν = 4πj^ν, and integration by parts. From Eq. (5.2) we find Eq. (5.8). (Footnote 28: Indeed, the coordinate systems that are regular at the horizons r_h^(±) ... However, they are real and do not contribute to Im S_tot.)
Combining this with Eq. (3.29), we obtain Eq. (5.9). After a non-trivial cancellation we again arrive at a rather simple expression. However, this time 2Im S_tot is not equal to the entropy of the RN black hole, B_RN = π (r_h^(+))². The physical interpretation of this result is unclear. We believe that it is an artifact of viewing the charged shell as an elementary object. Indeed, in quantum mechanics of an elementary shell the reflection probability should vanish at the brink Q = M of classically allowed transitions. It cannot be equal to B_RN, which does not have this property, unlike the expression (5.9). We now explain how the result is altered in a more realistic setup.
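The mismatch at the brink of classically allowed transitions can be made explicit with the standard RN horizon formulas (a sketch under the usual convention r_h^(±) = M ± √(M² − Q²), Planck units):

```python
import math

def rn_horizons(m, q):
    """Outer/inner horizons of the Reissner-Nordstrom metric (Planck units),
    r_h^(+-) = M +/- sqrt(M^2 - Q^2); real only for Q <= M."""
    if q > m:
        raise ValueError("Q > M: horizons are complex, reflection is classical")
    d = math.sqrt(m * m - q * q)
    return m + d, m - d

def rn_entropy(m, q):
    """Bekenstein-Hawking entropy of the outer horizon, B_RN = pi*(r_h^(+))^2."""
    r_plus, _ = rn_horizons(m, q)
    return math.pi * r_plus ** 2

# At the brink Q = M the horizons merge at r = M, but B_RN = pi*M^2 stays finite,
# so B_RN cannot equal the reflection exponent, which must vanish there.
```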
Model with discharge
Recall that the inner structure of charged black holes in theories with dynamical fields is different from the maximal extension of the RN metric. Namely, the RN Cauchy horizon r_h^(−) suffers from instability due to mass inflation and turns into a singularity [38][39][40]. Besides, pair creation of charged particles forces the singularity to discharge [37,41,42]. As a result, the geometry near the singularity resembles that of a Schwarzschild black hole, and the singularity itself is space-like. The part of the maximally extended RN space-time including the Cauchy horizon and beyond (the grey region in Fig. 12) is never formed in classical collapse.
Let us mimic the above discharge phenomenon in the model of a single shell. Although gauge invariance forbids non-conservation of the shell charge Q, we can achieve essentially the same effect on the space-time geometry by switching off the electromagnetic interaction at r → 0. To this end we assume spherical symmetry and introduce a dependence of the electromagnetic coupling on the radius. This leads to the action (5.10), where e(x) is a positive form-factor starting from e = 0 at x = 0 and approaching e → 1 at x → +∞. We further assume e(x) < x; (5.11) the meaning of this assumption will become clear shortly. Note that the action (5.10) is invariant under gauge transformations, as well as diffeomorphisms preserving the spherical symmetry. The width of the form-factor e(r/Q) in Eq. (5.10) scales linearly with Q, to mimic larger discharge regions at larger Q.
The new action (5.10) leads to the solution (5.12) outside the shell. The space-time inside the shell is still empty and flat. As expected, the function f_+ corresponds to the RN metric at large r and the Schwarzschild one at r → 0. Moreover, the horizon r_h satisfying f_+(r_h) = 0 is unique due to the condition (5.11). It starts from r_h = 2M at Q = 0, monotonically decreases with Q, and reaches zero at Q_* = 2M/a(0). At Q > Q_* the horizon is absent and the shell reflects classically. The subsequent analysis proceeds along the lines of Secs. 3, 4. One introduces the effective potential for the shell motion, cf. Eq. (5.4), where b² ≡ −(da/dx)|_{x=0} is positive according to Eq. (5.12). As Q decreases within the interval, the turning point makes an excursion into the lower half of the r-plane, goes below the origin and returns to the real axis on the negative side, see Fig. 11b. For smaller charges r_A is small and stays on the negative real axis. The contour C defining the trajectory is shown in Fig. 11b. It bypasses the horizon r_h from below, goes close to the singularity, encircles the turning point and returns back to infinity. This behavior is analogous to that in the case of the neutral shell. Finally, we evaluate the imaginary part of the action. The electromagnetic contribution is similar to Eq. (5.8). However, in contrast to Sec. 5.1, the trace of the gauge field energy-momentum tensor does not vanish, due to the explicit dependence of the gauge coupling on r (cf. Eq. (B.3b)). This produces non-zero scalar curvature R = −8π (T_EM)^μ_μ in the outer region of the shell, and the Einstein-Hilbert action receives an additional contribution, where in the second equality we integrated by parts. Combining everything together, we obtain (cf. Eq. (5.9)), where a non-trivial cancellation happens in the last equality for any e(x).
To sum up, we accounted for the discharge of the black hole singularity and recovered the intuitive result: the reflection probability is suppressed by the entropy of the intermediate black hole 30 .
Conclusions and outlook
In this paper we developed a consistent semiclassical method to calculate the S-matrix elements for the two-stage transitions involving collapse of matter into a black hole and decay of the latter into free particles. We applied the method to a number of models with matter in the form of thin shells and obtained sensible results for transition amplitudes. We discussed the respective semiclassical solutions and their interpretation. We demonstrated that the probabilities of the two-stage shell transitions are exponentially suppressed by the Bekenstein-Hawking entropies of the intermediate black holes. If the shell model is taken seriously as a full quantum theory, this result implies that its S-matrix is non-unitary. However, the same result is natural and consistent with unitarity if the shells are interpreted as describing scatterings of narrow wavepackets in field theory. It coincides with the probability of black hole decay into a single shell found within the tunneling approach to Hawking radiation [33,34] and is consistent with interpretation of the Bekenstein-Hawking entropy as the number of black hole microstates [36]. Considering the shell in AdS 4 space-time we discussed the result from the AdS/CFT viewpoint. We consider these successes as an encouraging confirmation of the viability of our approach.
In the case of charged shells our method reproduces the entropy suppression only if instability of the Reissner-Nordström Cauchy horizon with respect to pair-production of charged particles is taken into account. This suggests that the latter process is crucial for unitarity of transitions with charged black holes at the intermediate stages.
It will be interesting to apply our method to field theory. Let us anticipate the scheme of such an analysis. As an example, consider a spherically symmetric scalar field φ minimally coupled to gravity 31. Its classical evolution is described by the wave equation, while the Einstein equations reduce to constraints. One can use the simplest Schwarzschild coordinates (t, r), which are well-defined for complex r and t, though other coordinate systems may be convenient for practical reasons. One starts from wavepackets with small amplitudes φ_0 which scatter trivially in flat space-time. Then one adds the complex term (2.5), (2.10) to the classical action and finds the modified saddle-point solutions. Finally, (Footnote 30: We do not discuss the phase of the scattering amplitude, as it essentially depends on our choice of the discharge model. Footnote 31: Another interesting arena for application of the method is two-dimensional dilaton gravity [61].)
one increases φ_0 and obtains saddle-point solutions for the black hole-mediated transitions. The space-time manifold, if needed, should be deformed to complex values of coordinates, away from the singularities of the solutions. We argued in Sec. 2 that the modified solutions are guaranteed to approach flat space-time at t → +∞ and, as such, describe scattering. The S-matrix element (1.3) is then related to the saddle-point action S_reg in the limit of vanishing modification ε → +0. The above procedure reduces evaluation of S-matrix elements to the solution of two-dimensional complexified field equations, which can be performed on present-day computers. At this point one may wonder whether the leading-order semiclassical results will be useful for addressing the unitarity of the S-matrix. At first sight, the unity operator S†S = 1 does not appear to be "semiclassical." However, its matrix elements in the coherent-state representation have the perfect exponential form ⟨a|1|b⟩ = exp(∫dk a*_k b_k), (6.1) where |a⟩ and |b⟩ are eigenstates of the annihilation operators with eigenvalues a_k and b_k [44]. Comparison of Eq. (6.1) with the leading semiclassical exponent of ⟨a|S†S|b⟩ will provide a strong unitarity test for the gravitational S-matrix.
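A single-mode toy version of Eq. (6.1) is easy to verify numerically: for non-normalized coherent states |b⟩ = Σ_n b^n/√(n!) |n⟩ the matrix element of the identity is exp(a*b). The sketch below (our construction, not taken from [44]) checks this in a truncated Fock space:

```python
import cmath
import math

def coherent_overlap(a, b, nmax=60):
    """Overlap <a|b> of non-normalized single-mode coherent states
    |b> = sum_n b^n / sqrt(n!) |n>, truncated at nmax Fock levels.
    The sqrt(n!) factors cancel pairwise, leaving sum_n (a* b)^n / n!."""
    return sum((a.conjugate() * b) ** n / math.factorial(n) for n in range(nmax))

# Single-mode analogue of Eq. (6.1): <a|1|b> = exp(a* b).
a, b = 0.7 + 0.2j, 0.4 - 0.5j
assert abs(coherent_overlap(a, b) - cmath.exp(a.conjugate() * b)) < 1e-12
```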
A A shell of rotating dust particles
Consider a collection of dust particles uniformly distributed on a sphere. Each particle has mass δm and absolute value δL of angular momentum. We assume no preferred direction in the particle velocities, so that their angular momenta sum up to zero. This configuration is spherically symmetric, as is the collective gravitational field. Since the spherical symmetry is preserved in the course of classical evolution, the particles remain distributed on a sphere of radius r(τ) at any time τ, forming an infinitely thin shell. Each particle is described by the action (A.1), where in the second equality we substituted the spherically symmetric metric (2.9) and introduced the time parameter τ. To construct the action for r(τ), we integrate out the motion of the particle along the angular variable ϕ using conservation of the angular momentum, δL = δm r²φ̇ / √(−g_ab ẏ^a ẏ^b − r²φ̇²). (A.2)
It would be incorrect to express φ̇ from this formula and substitute it into Eq. (A.1). To preserve the equations of motion, we instead perform the substitution in the Hamiltonian δH = p_a ẏ^a + δL φ̇ − δ𝓛, where p_a and δL are the canonical momenta for y^a and ϕ, whereas δ𝓛 is the Lagrangian in Eq. (A.1). Expressing φ̇ from Eq. (A.2), we obtain δH = p_a ẏ^a + √(−g_ab ẏ^a ẏ^b) √(δm² + δL²/r²).
From this expression one reads off the action for r(τ ), where we fixed τ to be the proper time along the shell. We finally sum up the actions (A.5) of individual particles into the shell action where N is the number of particles, m = N δm is their total mass and L = N δL is the sum of absolute values of the particles' angular momenta. We stress that L is not the total angular momentum of the shell. The latter is zero because the particles rotate in different directions.
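Under our reconstruction of Eq. (A.2) and the substitution above, the elimination of the angular velocity can be checked numerically: solving the momentum constraint for φ̇ and evaluating δLφ̇ − δ𝓛 should reproduce the effective mass √(δm² + δL²/r²) multiplying √(−g_ab ẏ^a ẏ^b). A minimal sketch with illustrative numbers:

```python
import math

def phidot_from_momentum(X, r, dm, dL, tol=1e-14):
    """Solve the angular momentum constraint
    dL = dm * r^2 * phidot / sqrt(X - r^2 * phidot^2) for phidot by bisection,
    where X stands for -g_ab ydot^a ydot^b > 0."""
    mom = lambda pd: dm * r**2 * pd / math.sqrt(X - r**2 * pd**2)
    lo, hi = 0.0, math.sqrt(X) / r * (1 - 1e-12)   # mom grows from 0 to +inf here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mom(mid) < dL else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative numbers: dL*phidot - Lagrangian reproduces the effective mass.
X, r, dm, dL = 1.7, 2.3, 0.9, 1.4
pd = phidot_from_momentum(X, r, dm, dL)
routhian = dL * pd + dm * math.sqrt(X - r**2 * pd**2)  # dL*phidot - (-dm*sqrt(...))
assert abs(routhian - math.sqrt(X) * math.sqrt(dm**2 + (dL / r)**2)) < 1e-9
```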
B Equation of motion for the shell
In this appendix we derive the equation of motion for the model with the action (3.5). We start by obtaining an expression for the shell energy-momentum tensor. Let us introduce coordinates (y^a, θ^α) such that the metric (2.9) is continuous 32 across the shell. Here θ^α, α = 2, 3 are the spherical angles. Using the identity, we recast the shell action (3.2) as an integral over the four-dimensional space-time, Eq. (B.2). Here τ is regarded as a general time parameter. The energy-momentum tensor of the shell is obtained by varying Eq. (B.2) with respect to g_ab and r²(y), Eqs. (B.3). (Footnote 32: The Schwarzschild coordinates in Eq. (3.6) are discontinuous at the shell worldsheet.)
where in the final expressions we again set τ equal to the proper time. It is straightforward to see that the τ-integrals in Eqs. (B.3) produce δ-functions of the geodesic distance n from the shell. We finally arrive at an expression in which T^α_shell β ∝ δ^α_β due to spherical symmetry. The equation of motion for the shell is a consequence of the Israel junction conditions, which follow from the Einstein equations. These conditions relate t^μν_shell to the jump in the extrinsic curvature across the shell [51,52]. Here h^μ_ν is the induced metric on the shell, K_μν is its extrinsic curvature, and the subscripts ± denote quantities outside (+) and inside (−) the shell. We define both (K_μν)_± using the outward-pointing normal, n^μ ∂_μ r > 0. Transforming the metric (3.6) into the continuous coordinate system, we obtain an expression where the dot means derivative with respect to τ. From Eq. (B.6) we derive two equations; only the first is independent, since the second is proportional to its time derivative. We conclude that the Einstein equations are fulfilled in the entire space-time provided the metrics inside and outside the shell are given by Eqs. (3.6), (3.7) and Eq. (B.8) holds at the shell worldsheet. The latter equation is equivalent to Eqs. (3.8), (3.9) from the main text. The action (3.5) must also be extremized with respect to the shell trajectory y^a(τ). However, the resulting equation is a consequence of Eq. (B.8). Indeed, the shell is described by a single coordinate r(τ), and its equations of motion are equivalent to conservation of the energy-momentum tensor. The latter conservation, however, is ensured by the Einstein equations.
All turning points approach zero at L → 0 except for r 1,2 in the massive case. Numerically tracing their motion as L decreases from L * , we find that the physical turning point A of the reflected trajectory is r 6 in both cases.
D Gibbons-Hawking terms at the initial-and final-time hypersurfaces
Since the space-time is almost flat at the beginning and end of the scattering process, one might naively expect that the Gibbons-Hawking terms at t_+ = t_i and t_+ = t_f are vanishingly small. However, this expectation is incorrect. Indeed, it is natural to define the initial and final hypersurfaces as t_+ = const outside of the shell and t_− = const inside it. Since the metric is discontinuous in the Schwarzschild coordinates, the inner and outer parts of the surfaces meet at an angle, which gives rise to non-zero extrinsic curvature, see Fig. 13. For concreteness we focus on the final-time hypersurface. In the Schwarzschild coordinates the normal vectors to its inner and outer parts are given by Eqs. (D.1). It is easy to see that the extrinsic curvature K = ∇_μ ξ^μ is zero everywhere except for the two-dimensional sphere at the intersection of the hypersurface with the shell worldsheet. Let us introduce a Gaussian normal frame (τ, n, θ^α) in the vicinity of the shell, see Fig. 13. Here τ is the proper time on the shell, n is the geodesic distance from it, and θ^α, α = 2, 3, are the spherical angles. In this frame the metric in the neighborhood of the shell is essentially flat; corrections due to nonzero curvature are irrelevant for our discussion.
To find the components of ξ^μ_+ and ξ^μ_− in Gaussian normal coordinates, we project them on τ^μ and n^μ, the tangent and normal vectors of the shell. The latter in the inner and outer Schwarzschild coordinates have the form (D.2). Evaluating the scalar products of (D.1) and (D.2), we find Eqs. (D.3). As expected, the normals ξ^μ_± do not coincide at the position of the shell. To compute the surface integral in the Gibbons-Hawking term, we regularize the jump by replacing (D.3) with ξ^μ = ch ψ(n) τ^μ − sh ψ(n) n^μ, (D.4) where ψ(n) is a smooth function interpolating between ψ_− and ψ_+. The expression (3.4) takes the form (D.5), where in the second equality we used ds = dn/ch ψ for the proper length along the final-time hypersurface and K = ∂_μ ξ^μ = −ch ψ ψ′ for its extrinsic curvature. Next, we express ψ_±(r) from the shell equation of motion (3.8) and expand Eq. (D.5) at large r. Keeping only the non-vanishing terms at r = r_f → +∞, we obtain Eq. (3.23) for the final-time Gibbons-Hawking term. For the initial-time hypersurface the derivation is the same; the only difference is in the sign of ξ^μ, which is now past-directed. However, this is compensated by the change of sign of ṙ. One concludes that the Gibbons-Hawking term at t_+ = t_i is obtained from the one at t_+ = t_f by the substitution r_f → r_i.
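The independence of the regularized corner integral from the interpolating function ψ(n) can be checked numerically: with K = −ch ψ ψ′ and ds = dn/ch ψ, the hyperbolic factors cancel and the integral collapses to the difference of boundary angles. A sketch (our discretization, illustrative profiles):

```python
import math

def gh_corner_integral(psi_minus, psi_plus, profile, n_steps=20000, width=1.0):
    """Numerically integrate K ds across the smoothed corner, with
    K(n) = -ch(psi(n)) * psi'(n) and proper length ds = dn / ch(psi(n)).
    `profile` maps [0,1] -> [0,1] and fixes the interpolation shape."""
    psi = lambda n: psi_minus + (psi_plus - psi_minus) * profile((n + width) / (2 * width))
    h = 2 * width / n_steps
    total = 0.0
    for i in range(n_steps):
        n = -width + (i + 0.5) * h
        dpsi = (psi(n + h / 2) - psi(n - h / 2)) / h     # finite-difference psi'(n)
        total += -math.cosh(psi(n)) * dpsi * h / math.cosh(psi(n))   # K * ds
    return total

linear = lambda x: x
smooth = lambda x: 0.5 * (1 - math.cos(math.pi * x))
# The result depends only on the boundary angles, not on the interpolation:
assert abs(gh_corner_integral(0.3, 1.1, linear) - (0.3 - 1.1)) < 1e-6
assert abs(gh_corner_integral(0.3, 1.1, smooth) - (0.3 - 1.1)) < 1e-6
```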
Note that expression (D.5) is valid also in the model of Sec. 4 describing the massless shell in AdS. It is straightforward to see that in the latter case the Gibbons-Hawking terms vanish at r_i,f → ∞ due to the growth of the metric functions (4.3) at large r.

E Shell self-gravity at order 1/r

Let us construct the action for a neutral shell in asymptotically flat space-time taking into account its self-gravity at order 1/r. To this end we recall that the shell is assembled from particles of mass δm, see Appendix A. Every particle moves in the mean field of the other particles. Thus, a new particle added to the shell changes the action of the system 33 by (Footnote 33: Angular motion of the particle gives 1/r² contributions to the Lagrangian which are irrelevant in our approximation.)
where v = dr/dt_+ is the shell velocity in the asymptotic coordinates, M̄ is its energy, and we expanded the proper time dτ up to first order in 1/r in the second equality. At the leading order in 1/r, M̄ is given by Eq. (E.2), where m̄ is the shell mass before adding the particle. Now, we integrate Eq. (E.1) from m̄ = 0 to the actual shell mass m and obtain the desired action. From this expression one reads off the canonical momentum and energy of the shell, Eqs. (E.4), (E.5). Expressing the shell velocity from Eq. (E.5) and substituting 34 it into Eq. (E.4), we obtain Eq. (3.26) from the main text.
Two-pulse space-time photocurrent correlations at graphene p-n junctions reveal hot carrier cooling dynamics near the Fermi level
Two-pulse excitation at a graphene p-n junction generates a time-dependent photocurrent response that we show functions as a novel ultrafast thermometer of the hot electron temperature, Te(t). The extracted hot electron cooling rates are consistent with heat dissipation near the Fermi level of graphene occurring by an acoustic phonon supercollision mechanism.
Introduction
With uniform broad spectral coverage, fast response and high carrier mobility, graphene-based p-n junctions are a promising new material for next-generation optoelectronics such as photodetectors, bolometers and plasmonic devices. The response of such devices depends critically on the hot electron gas temperature and its corresponding heat dissipation rate [1,2]. Below a threshold energy of ~200 meV relative to the Fermi energy, electrons are predicted to dissipate heat predominantly by the emission of acoustic phonons [2]. However, the energy dissipated by acoustic phonon emission is very small, owing to the large mismatch between the slow sound velocity of the phonons and the fast Fermi velocity of the electrons. As a consequence, a cooling bottleneck has been predicted, with long relaxation times exceeding ~300 ps [2]. Thus far, empirical measurements of the hot electron cooling rate have been limited to transient absorption (TA) spectroscopy. While such measurements exhibit faster cooling kinetics, the transient signal remains complicated by many competing contributions, including electron thermalization, optical phonon emission and intraband induced absorption [2,3]. To enhance our sensitivity to the desired cooling processes near the Fermi level, we require a technique that is independent of the spectral probe window and directly measures the hot electron temperature.
Recently, it was shown that photocurrent (PC) generated at graphene p-n junctions can be used as a thermometer of the hot electron temperature (T_e) via the photothermal effect [1,7]. To capture the timescales of hot electron cooling, we show how the PC generated by a femtosecond two-pulse excitation serves as a novel ultrafast thermometer of the transient electron temperature, T_e(t). Unlike existing measurements, the resulting transient photocurrent (TPC) kinetics are approximately independent of the excitation wavelength and directly measure the temperature of hot electronic carriers near the Fermi energy.
Experimental methods
We fabricate p-n junctions from large-grain graphene grown by the chemical vapor deposition (CVD) method, with a device carrier mobility of ~8000 cm^2 V^-1 s^-1. A tunable back gate (BG) and top gate (TG) couple to graphene electrostatically, defining two p-n junctions where the PC production is maximal. The collected PC amplitude is plotted as the laser is raster scanned over the p-n junction (see superimposed map in Fig. 1a).
We optically excite the graphene p-n junction region with 180 fs pulses produced by two independently tunable, synchronously locked oscillators plus a NIR optical parametric oscillator (OPO) system. We simultaneously collect the change in reflectivity (∆R(t)/R, TA) and the electrical current generated (∆I_12(t), TPC) as functions of the pulse delay time t at a lattice temperature T_l = 10 K (Fig. 1a). ∆R(t)/R and ∆I_12(t) are also measured spatially by raster scanning the collinear laser pulses. The resulting TPC spatial maps are shown in Fig. 1b and show the decay of the measured signal in time and space.
Results and Discussion
Similar to previous TA measurements on bulk graphene, we find the transient bleach signal at the graphene p-n junction is fast and roughly biexponential (τ_1 = 0.33 ps and τ_2 = 3.2 ps, see Fig. 1c). In stark contrast, the simultaneously acquired TPC signal decays roughly inversely with time, with long tails extending out to 100 ps. Similar strong TPC responses have been observed for exfoliated graphene source-drain [4] and graphene p-n junction [5] devices. Also unlike TA, we find the kinetic decay of ∆I_12(t) is approximately independent of both excitation energies investigated (from 0.82 to 1.55 eV) and of pulse ordering (1.25 eV pump, 1.45 eV probe or vice versa). This insensitivity to the spectral excitation window suggests that TPC originates from the hot electron temperature near the Fermi level. To test this observation, we apply the photothermal PC generation model to quantitatively extract the transient hot electron temperature. For a single-pulse excitation, we collect the time-dependent photothermal charge given by Eq. (1), where T_1 is the initial hot temperature of the thermalized distribution, T_l is the lattice temperature and β is a proportionality constant. Using a two-pulse excitation scheme, we obtain the TPC response simply by integrating piecewise about our delay time t. The resulting TPC response function shows how T_e(t) can be extracted from the measured charge Q_12(t), provided that the underlying hot electron cooling rate law is nonlinear.
We find the PC generated at the p-n junction increases nonlinearly with the square root of photon flux and decays on a much faster timescale than the acoustic phonon emission model predicts (see Fig. 1d) [2,7]. Alternatively, Song et al. predict that impurities and lattice disorder can relax the momentum conservation constraint, resulting in a more rapid energy relaxation (~1-10 ps); they call this process supercollisions (SC), and its signature is given by the cooling rate law [6]: dT_e/dt = -A(T_e^3 - T_l^3)/T_e. To date, however, this theory has not been tested. For low lattice temperatures (T_e >> T_l ≈ 10 K), the above rate law gives T_e(t) = T_0/(1 + t/τ_0) and predicts that the cooling rate τ_0^-1 = A·T_0 varies with the square root of incident power. Substituting T_e(t) into equation (1), we fit our TPC data (solid line, Fig. 1d) and extract A = 4.9x10^8 K^-1 s^-1 [7]. Excellent fits to the kinetic decay across a wide range of electronic temperatures support a dominant mechanism of SC-assisted hot electron cooling. In Fig. 1d, we further show the extracted hot electron cooling times (τ_0) vary from 1 to 6 ps and scale with the inverse square root of incident laser flux, as predicted. We find we can accurately predict the PC response in graphene by using the SC cooling model [6] and our TPC response function [7]. This suggests a dominant disorder-assisted SC mechanism for hot electron cooling near the Fermi level.
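The closed-form solution quoted above follows from the SC rate law when T_e ≫ T_l. The sketch below (illustrative T_0 of our choosing, with the A value extracted in the text) integrates dT_e/dt = −A(T_e³ − T_l³)/T_e numerically and compares it with T_0/(1 + t/τ_0):

```python
import math

A = 4.9e8      # K^-1 s^-1, supercollision coefficient extracted in the text
T_l = 10.0     # K, lattice temperature
T0 = 1000.0    # K, illustrative initial hot-electron temperature (our choice)

def cool(T_init, t_end, steps=200000):
    """Euler integration of the supercollision law dT_e/dt = -A*(T_e^3 - T_l^3)/T_e."""
    dt, T = t_end / steps, T_init
    for _ in range(steps):
        T += -A * (T**3 - T_l**3) / T * dt
    return T

tau0 = 1.0 / (A * T0)     # ~2 ps, inside the 1-6 ps range reported in the text
t = 5 * tau0
# For T_e >> T_l the closed form is T_e(t) = T0 / (1 + t/tau0):
assert abs(cool(T0, t) - T0 / (1 + t / tau0)) / T0 < 1e-3
```

Since τ_0 = 1/(A·T_0) and the initial temperature grows with the square root of the absorbed fluence, the sketch also makes the predicted inverse-square-root flux scaling of τ_0 explicit.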
From BIM to GIS at the Smithsonian Institution
Building Information Models (BIM files) are a basic prerequisite in modern architecture and building management for the successful execution of construction engineering projects. The facilities department of the Smithsonian Institution maintains more than six hundred buildings. All facilities are digitally available in an ESRI ArcGIS environment, connected to database information about single rooms with usage and further maintenance information. These data are available organization-wide through an intranet viewer, but only in a two-dimensional representation. The goal of the project carried out was the development of a workflow from the available BIM models to the given GIS structure. The test environment was the BIM models of the buildings of the Smithsonian museums along the Washington Mall. Based on new software editions of Autodesk Revit, FME and ArcGIS Pro, the workflow from BIM to the GIS data structure of the Smithsonian was successfully developed and may be applied for the setup of a future 3D intranet viewer.
Introduction
Building Information Models (BIM files) are a basic prerequisite in modern architecture and building management for the successful execution of construction engineering projects. Very detailed 3D data are generated during the BIM process and could be integrated into a GIS data structure. However, the compatibility of both worlds is loaded with many questions and problems (Liu et al. 2017; Kolbe et al. 2011). The facilities department of the Smithsonian Institution maintains more than six hundred buildings. All facilities are digitally available in an ESRI ArcGIS environment, connected to database information about single rooms with usage and further maintenance information. These data are available organization-wide through an intranet viewer, but only in a two-dimensional representation. Independent of the two-dimensional GIS dataset, there are many activities at the Smithsonian to use very detailed BIM files of their buildings for different facility tasks and visualizations (Kendra 2017). During his time as a Smithsonian Fellow in 2016, the author developed a workflow to integrate the given BIM models of the Smithsonian museums along the Washington Mall into a GIS system. The workflow development was based, on the one hand, on the experience of a large 3D city model creation process (Günther-Diringer 2016), in which five historical time steps of the German city of Karlsruhe were created; on the other hand, new possibilities offered by new software editions such as Autodesk Revit, FME and ArcGIS Pro were applied.
Background
At the ICC Conference 2013 in Dresden, Germany, a poster called "3D City Model: 300 Years Karlsruhe: 1715 - 1834 - 2015" was presented by the author. This project, with extended digital 3D city models of different time steps (1739 - 1834 - 1915 - 1945 and 2015), was finalized for the 300-year anniversary of Karlsruhe and installed as a self-running application at the Karlsruhe city museum in 2016. The technical aspects of this project, which lasted over four years, were published in the German Cartographic News (Günther-Diringer 2016). During this project a workflow from 2D geospatial base data to extended high-end 3D city models was developed, see Fig. 2.
BIM models of the Smithsonian Institution
The goal of the project carried out was the development of a workflow from the available BIM models to the given GIS structure. The test environment was the BIM models of the buildings of the Smithsonian museums along the Washington Mall. The BIM models were available with file sizes from 25 MB (Smithsonian Castle) up to 1 GB (National Museum of African American History and Culture) in the Autodesk Revit file format, a typical BIM application.
Available software configuration
To integrate the BIM models into GIS, the mentioned workflow (Fig. 2) had to be modified. The input BIM data were available as Autodesk Revit files. Autodesk Revit is a standard software for BIM modeling (Autodesk 2017). On the GIS side of the project, next to the ESRI ArcGIS Desktop software, the new ArcGIS Pro 1.3 was used (Esri 2017). With its new 3D capabilities, ArcGIS Pro was an appropriate target GIS system for the available BIM models. With the integration of ArcGIS Pro, the workflow had to be adapted as well: because of the now available integrated 3D capabilities, many of the necessary tasks of 3D city modelling can now be carried out in the same GIS environment as the 2D geodata editing. However, due to the file size and the complexity of the single BIM files, it is not useful to import the full BIM model, for example via the fbx format, into the GIS. In comparison to an available open data set of 50,000 buildings of Washington DC (DC.gov 2017: dcatlas.dcgis.dc.gov/download/bldgPly_3D.zip) with about 95 MB, a single BIM model was up to 1 GB. The BIM data had to be restructured and reduced, but without the loss of the internal topological structure and the necessary connections to the database information. For this task, the FME (Feature Manipulation Engine) of Safe Software was used (Safe 2017). FME is an ETL software (Extract, Transform, Load) which supports more than 400 different data formats and offers extended transformation functionality through more than 465 transformers.
Applied transformation workflow
After installation of FME on a computer where Autodesk Revit was installed as well, an FME export plug-in became available in Autodesk Revit, and through various setting options the file size of the original Revit file could be reduced, e.g. from 25 MB to 7 MB. BIM models usually do not have geocoding information. Normally the origin of their coordinate system, the project base point, is in the center of the building. However, this problem could be solved with FME as well. The coordinates and the geodetic datum of the respective building have to be available (at least one point at a building corner). In Revit, the available survey point, by default identical with the project base point, has to be shifted to the building corner where the coordinates are known. Inside the FME software, the transformer "Offsetter" transformed the building coordinates by the appropriate shift. Later, in ArcGIS, the correct geodetic datum has to be chosen, and the imported model was placed at its correct position. A whole pipeline of transformers can be set up with an appropriate file export format, e.g. Adobe 3D PDF or ArcGIS file gdb. At the end of the process, the file size was reduced to 1/6 of the original and can be imported into ArcGIS Pro as a multipatch object for 3D visualization, or as a polygon object for combined 2D/3D visualization if the single rooms of one storey are of interest (Fig. 6 and Fig. 7).
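The geocoding step above amounts to a rigid translation: every model vertex is shifted by the offset between one building corner known in both coordinate systems. A minimal sketch of that operation (all coordinate values below are hypothetical, not taken from the Smithsonian project, and FME's "Offsetter" of course does this inside the transformer pipeline):

```python
# Shift model coordinates from a local origin (the Revit project base point)
# to real-world projected coordinates, given one surveyed building corner
# known in both systems. Values are illustrative only.

def offset_vertices(vertices, local_ref, world_ref):
    """Translate (x, y, z) vertices so that local_ref maps onto world_ref."""
    dx = world_ref[0] - local_ref[0]
    dy = world_ref[1] - local_ref[1]
    dz = world_ref[2] - local_ref[2]
    return [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]

# A building corner known in both systems (hypothetical values, meters):
local_corner = (12.5, 40.0, 0.0)               # Revit model coordinates
world_corner = (323_500.0, 4_306_800.0, 10.0)  # projected coordinates

model = [(0.0, 0.0, 0.0), (12.5, 40.0, 0.0)]
print(offset_vertices(model, local_corner, world_corner))
```

The correct geodetic datum still has to be assigned afterwards in the GIS, as described in the text.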
ArcGIS Pro integration
Through the FME transformation process, the generated geodatabase files could be loaded into ArcGIS Desktop as well as ArcGIS Pro 1.3 and combined with other georelated data. The imported 2D polygon file, created with the FME workflow in Fig. 7, could be displayed separated by the different floors, with the single rooms as an important base geometry. Due to the preserved IDs, the single geometries could be linked to the available facility database with the necessary information about usage, maintenance and other important topics. With the integration of the BIM models into the ArcGIS Pro environment, all of the available GIS functionality could be carried out in combination with the BIM data. Fig. 10 shows an example of a combination of the geocoded BIM data with results of the very sophisticated Smithsonian X 3D project: "Smithsonian X 3D launches a set of use cases which apply various 3D capture methods to iconic collection objects, as well as scientific missions" (Smithsonian Institution 2017). The downloadable 3D scan "Woolly Mammoth" (3d.si.edu/downloads/55) was imported, geocoded and placed in one of the rooms of the Smithsonian Castle, together with an available "Cosmic Buddha" statue. Fig. 10. ArcGIS Pro view with a single room and combined 3D scanned objects.
For several years the Woolly Mammoth has not been available in any of the Smithsonian exhibitions. By combining single scanned 3D objects, the available BIM room structure and the given georeferencing of the objects, a digital preservation of former exhibitions could be achieved as a future application. Another advantage of the integration into a GIS structure is the combination with other georelated data. Together with the open data set of 50,000 buildings of Washington DC mentioned in chapter 2.2, the imported BIM data gave an excellent view for further planning applications (Fig. 11). For the production of high-end animation videos, it was not necessary to leave the GIS environment. Due to the new extended animation possibilities in ArcGIS Pro, a lot of functionality was now available, with the decisive advantage that the voluminous geodata could be processed in the same system and did not have to be exported to another software package for 3D construction and video processing.
Conclusions
The workflow from BIM to the GIS data structure of the Smithsonian was successfully developed and may be applied for the setup of the future 3D intranet viewer. There is the possibility to integrate CityGML, an XML-based standard data format for 3D city models (OGC 2017), into the workflow. The FME software offers sample workflows from BIM to GIS based on the CityGML format. But this format has not been in use at the Smithsonian facilities up to now, and therefore it was not applied in the described workflow. Next to the technical aspect, the future application of the workflow depends very much on organizational issues. Which kind of facility data should be held at which level of detail, to support which tasks, and to serve which user group inside and outside the Smithsonian Institution? A lot of questions have to be answered before the time-consuming process to set up the possible future 3D intranet viewer can be started. But during this process, different departments have to communicate with each other. The 3D realistic view of their common data will give them all better possibilities for solving their complex future tasks.
Fig. 3 .
Fig. 3. National Museum of African American History and Culture: photo and BIM-file in different detail views (Source: Günther-Diringer, Smithsonian Institution).
Fig. 4 .
Fig. 4. Example workflow in FME. With further processing inside FME, export to Adobe 3D PDF or ArcGIS file geodatabase format (.gdb) was possible. The file size was further reduced to 2 MB (pdf) and 4 MB (gdb), respectively.
Fig. 6 .
Fig. 6. FME transformation from the Revit export with geocoding offset to a 3D multipatch object in an ArcGIS file gdb.
Fig. 7 .
Fig. 7. FME transformation from the Revit export with geocoding offset and separation by different storeys to a 2D polygon object in an ArcGIS file gdb.
Fig. 9 .
Fig. 9. ArcGIS Pro with imported polygon file of single floor with visible IDs of the single rooms.
Fig. 11 .
Fig. 11. BIM models of the Washington Mall combined with open geodata.
Fig. 12 presents a few screenshots of a video which shows the historical development of the Mall from 1791, before the founding of Washington DC, up to the new Museum of African American History and Culture, opened in 2016.
Fig. 12 .
Fig. 12. Screenshots of the video generated by ArcGIS Pro animation functionality.
I am grateful to Dan Cole, GIS Coordinator at the Smithsonian Institution, for organizing my research term as a Smithsonian Fellow from March to August 2016 at the National Museum of Natural History in Washington DC.
Amplification effects of ground motion due to the local geology of the building site in the cities of Kyiv, Kryvyi Rih, Odesa
The results of the research presented in the article elucidate the importance of taking into account the amplitude-frequency characteristics of soil stratum models in the context of the seismic safety of construction objects in different regions of Ukraine. The analysis of the calculated amplitude-frequency characteristics of soil models for different regions of Ukraine reveals that the amplification of vibrations by soils has a complex nature that depends on many factors and can differ significantly between construction sites. We show that during the earthquake-resistant design of buildings and structures, it is necessary to properly take into account the properties of the soil complexes under the site, which can significantly increase oscillations at "resonant" frequencies. The article examines in detail the features of the amplitude-frequency characteristics of soil strata for different geological conditions in Kyiv, Kryvyi Rih and Odesa. The purpose of this study is to compare these characteristics and develop recommendations for reducing risks associated with seismic hazards.
Introduction
Many studies of damage to buildings and structures caused by earthquakes have shown that the local soil conditions of a site can affect the amplitude of seismic ground motions on the surface during an earthquake. The main reasons are: (1) the occurrence of resonance phenomena when the natural frequency of buildings is close to the dominant period of the territory; (2) the local structure of the soil, which acts as a filter, selectively amplifying or attenuating the frequency components of the ground motion (when the ground motion is transferred from the bedrock to the surface, its frequency components and amplitude values change significantly). Thus, the influence of local soil conditions on ground oscillations is a fundamental issue in seismology, as it has important theoretical and practical significance for the protection of people, buildings and structures from earthquakes [4].
For example, during the 1985 Mexico City earthquake (Ms = 8.1), medium-rise buildings in Mexico City, 400 km from the epicenter, were severely damaged, while low-rise and high-rise buildings (more than 23 stories) remained undamaged [9]. This example illustrates the need to research the resonant amplification of dominant frequencies by soils and their impact on the safety and stability of buildings and structures during earthquakes. The obtained filtering properties of the soil stratum play a key role in preventing resonance phenomena during seismic impacts. Resonance can occur when the frequency of seismic oscillations coincides with the natural frequency of oscillations of the soil or of engineering structures. As a result of resonance phenomena, the amplitudes of oscillations can increase significantly, which leads to damage or destruction of structures. Analysis of the filtering properties of the soil allows this factor to be taken into account and construction solutions to be developed that minimize the likelihood of resonance effects. This enhances infrastructure reliability and improves people's safety during seismic events.
Thus, the study of the impact of local soils on seismic fluctuations is an important and relevant area of research that contributes to the safety and sustainable development of cities and infrastructure in conditions of seismic activity. Understanding the influence of the soil stratum on seismic vibrations and taking into account resonance phenomena play an important role in modern earthquake-resistant construction, providing more effective and safer protection against earthquakes and reducing seismic risks.
Category III-IV soils in terms of seismic properties have significant nonlinear properties, which manifest differently depending on the intensity and frequency composition of the seismic impact. The nonlinear behavior of the soil leads to a change, sometimes very significant, in the forms and spectra of seismic waves in the soil strata. Resonance frequencies of soils appear to depend on the intensity of the impact and, in the case of sufficiently intense earthquakes, may differ from the values determined from records of seismic noise or weak seismic events.
During intense seismic impacts, the geological properties of soils change, which may be associated, for example, with the movement of groundwater, the breaking of structural bonds between soil particles, and other phenomena.
The article examines the features of the amplitude-frequency characteristics of calculated models of soil strata for the local soil conditions of Kyiv, Kryvyi Rih and Odesa, with the aim of comparing them and developing measures to reduce seismic risks. Particular attention is paid to the city of Kyiv, which has the largest concentration of construction sites within the territory of Ukraine.
Method
The character of the passage of seismic vibrations through the sedimentary layers of the soil stratum during distant earthquakes depends on the angle of incidence of seismic waves on the bottom of the sedimentary cover, the directional diagram of energy radiation from the source, the thickness of the sedimentary layers at construction (exploitation) sites, the geological and geophysical properties of the layers, and the geomorphological structure of the studied area. Large ground accelerations can cause liquefaction and nonlinear effects in soils that, in turn, can cause damage to structures depending on the geometry of the layers and their physical-mechanical and seismic properties. The soil can amplify the intensity of seismic vibrations at some frequencies and weaken it at others. Consider how the soil stratum changes the amplitude and frequency composition of seismic vibrations propagating through it [8].
At the first stage of building models of the soil environment, geological sections were analyzed, constructed according to the data of well investigations within the studied construction sites. Soil stratum models include both physical (velocities of elastic waves, density, absorption decrements, etc.) and geometric (thickness of layers, shape of boundaries) characteristics. The interfaces between soils of different composition and physical-mechanical properties, which make up the engineering-geological sections, are characteristically horizontal or close to horizontal.
The use in the calculations of the dependences of the shear modulus and the absorption coefficient on shear deformation makes it possible to take into account the nonlinear reaction of the soil stratum to seismic influences [4].
Calculations of the amplitude-frequency characteristics of the soil stratum models under the construction sites were carried out using the ProShake software package [7; 10], developed for one-dimensional modeling of the response of the upper part of the geological section to seismic influences. The amplitude-frequency characteristics were built for models based on the data of wells drilled within the construction sites. The initial data for the construction of the soil environment models were taken from the technical reports of the engineering-geological surveys for the research sites. The parameters of the deeper layers, down to the crystalline basement, were taken from literature and archival sources.
When constructing the amplitude-frequency characteristic of the geological environment model, the method of equivalent linear modeling of the soil stratum response to seismic influences was used. The behavior of each layer of the soil model during the calculations was described by the Kelvin-Voigt (viscoelastic) model. Each layer of the soil model was characterized by parameters such as layer thickness, velocities of longitudinal and transverse waves, density, and nonlinear dependences of the shear modulus and absorption coefficient on shear deformation. The interfaces between soils of different composition and physico-mechanical properties, which make up the engineering-geological sections, are characteristically horizontal or close to horizontal [2; 12].
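As a rough illustration of the kind of amplitude-frequency characteristic discussed here, the textbook amplification function of a single uniform damped soil layer over rigid bedrock can be sketched as follows. This is a drastic simplification of the multi-layer equivalent-linear model used in ProShake, and the layer thickness, shear-wave velocity and damping ratio below are assumed values for illustration only:

```python
import math

def layer_amplification(f, H, vs, damping):
    """Amplification |F(f)| of vertically propagating shear waves in a
    uniform damped layer (thickness H, shear velocity vs, damping ratio)
    over rigid bedrock: 1/sqrt(cos^2(kH) + (damping*kH)^2), kH = 2*pi*f*H/vs.
    A textbook single-layer approximation, not the ProShake computation."""
    kH = 2.0 * math.pi * f * H / vs
    return 1.0 / math.sqrt(math.cos(kH) ** 2 + (damping * kH) ** 2)

# Illustrative layer: 30 m of soil, Vs = 180 m/s, 5 % damping (assumed)
H, vs, xi = 30.0, 180.0, 0.05
f0 = vs / (4.0 * H)          # fundamental site frequency = Vs / (4H)
peak = layer_amplification(f0, H, vs, xi)
print(f"f0 = {f0:.2f} Hz, peak amplification = {peak:.1f}")
```

Even this crude model reproduces the qualitative behavior described in the text: a resonant peak at the fundamental site frequency whose height is controlled by the damping (absorption) of the layer.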
The use in the calculations of the dependences of the shear modulus and the absorption coefficient on shear deformation makes it possible to take into account the possible nonlinear behavior of soils during an earthquake. The obtained data on the filtering properties of the soil stratum at the studied sites make it possible to develop more effective and safer construction solutions, as well as to reduce the cost of earthquake-resistant construction by avoiding resonant amplification by the sedimentary layer of seismic oscillations at the natural periods of the designed buildings and structures [4; 12]. This allows the features of each specific geological environment to be taken into account and optimal conditions to be created for protecting infrastructure and people's lives from seismic threats.
Results and discussion
Consider the calculated amplitude-frequency characteristics of the soil stratum models under some high-rise construction sites in Kyiv (Figure 1).
Figure 1.
Amplification ratio curve for soil of construction sites in the city of Kyiv.
Figure 1 illustrates the amplitude-frequency characteristics of soil stratum models beneath construction sites in Kyiv, showing one or two peaks within the frequency range of 0.2 to 2.0 Hz. The observed amplification at these low frequencies (0.2-2.0 Hz) is crucial to consider during the design of earthquake-resistant high-rise structures in Kyiv. This is particularly significant due to the elevated risk posed to tall buildings by low-frequency vibrations originating from subcrustal seismic events, notably those occurring in the Eastern Carpathians, such as the Vrancea zone in Romania.
Figure 2 presents a map illustrating the distribution of calculated Peak Ground Accelerations (PGA) on the free surface of the soil stratum for the city of Kyiv, depicting maximum predicted seismic impacts of up to 0.06 g.
Figure 2. Map depicting the distribution of estimated PGA on the free surface in the territory of Kyiv with maximum seismic impacts up to 0.06 g [12].
Figure 2 illustrates that during seismic impacts with a maximum amplitude of input oscillations of 0.06 g, the PGA on the free surface within Kyiv's territory varies from 0.038 g to 0.062 g. Generally, ground motions with higher PGA values are deemed more destructive than those with lower peak accelerations. However, extremely high PGA values of short duration and high frequency may not cause significant damage to certain elongated structures with low natural frequencies of oscillation.
It is essential to consider that, from an engineering perspective, a high PGA value might not be relevant in the case of a singular high-amplitude event or when the oscillation frequency with a high PGA value falls outside the natural oscillation frequencies of the building.Therefore, when interpreting research results, it is crucial to account for the spectral composition of seismic oscillations.
Next, we delve into specific models of the geological environment in the cities of Kyiv, Kryvyi Rih, and Odesa.
Amplitude-Frequency Characteristics of the Geological Environment for the city of Kyiv, specifically at the construction site located at 210 Kharkivske Highway.
The soil stratum model includes both physical (elastic wave velocities, densities, absorption decrements, etc.) and geometric (layer thicknesses, shape of boundaries) characteristics.
One engineering-geological district was allocated within the construction site during the seismic microzoning works; therefore, a single averaged model of the geological environment under the construction site was built. The groundwater level (GW) is 6.0 m.
The model of the geological environment was built as horizontally layered and vertically heterogeneous. The initial data for building the model were taken from the technical report on the engineering-geological investigations of the research site; its parameters are shown in Table 1. According to the data of the engineering-geological surveys (down to 33 meters), the sections within the entire area of the planned construction are composed of alternating sands with different physical and mechanical properties. The interfaces between soils of different composition and physical-mechanical properties, which make up the engineering-geological sections, are characteristically horizontal or close to horizontal.
Down to the explored depth of 33 m, a number of horizontal layers were conditionally selected (based on the comparison of seismic and geological rock boundaries). Some of these layers included several interlayers similar in lithology. When combining several layers into one seismological stratum of the model, the physical parameters (densities and velocities of elastic waves) were calculated as weighted averages for each layer according to the formulas recommended by the current SBC B.1.1-12:2014 "Construction in Seismic Areas" [3], e.g. the travel-time weighted average shear velocity Vs,av = (Σk Hk) / (Σk Hk/Vs,k), where Hk is the thickness of the k-th layer and Vs,k its shear-wave velocity.
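The weighted-average calculation described above can be sketched numerically. The layer thicknesses and velocities below are hypothetical, and the formula is the standard travel-time average, assumed here to correspond to the cited SBC recommendation:

```python
def time_averaged_vs(layers):
    """Travel-time (thickness-weighted harmonic) average shear-wave velocity
    for a stack of (thickness_m, vs_m_per_s) layers -- the usual form behind
    Vs30-type site parameters."""
    total_thickness = sum(h for h, _ in layers)
    travel_time = sum(h / vs for h, vs in layers)
    return total_thickness / travel_time

# Hypothetical sand layers down to 33 m (thickness m, Vs m/s):
layers = [(6.0, 150.0), (12.0, 220.0), (15.0, 300.0)]
print(f"Vs,av = {time_averaged_vs(layers):.0f} m/s")
```

Note that this travel-time average weights slow layers more heavily than a simple thickness-weighted mean would, which is why it is the preferred measure for site response.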
Values of the absorption decrements of longitudinal and transverse waves in the layers were estimated according to literature data [1; 2], in accordance with the corresponding empirical dependences. Data for building the model of the environment from a depth of 33 meters down to the crystalline basement were supplemented from literature and archival sources.
Thus, the averaged model of the soil stratum under the planned construction site at 210, Kharkivske Highway has 10 horizontal-parallel soil layers with different physical parameters, separated from each other by seismological boundaries.
Calculations of the amplitude-frequency characteristic of the soil stratum model under the construction site were carried out using the ProShake software package [7; 10], developed for one-dimensional modeling of the response of the upper part of the geological section to seismic influences. The amplitude-frequency characteristic was built for a model based on the data of wells drilled within the construction site. The initial data were taken from Table 1 and the technical report on the engineering-geological investigations of the research site. The parameters of the deeper layers down to the crystalline basement were taken from literature and archival sources.
Figure 3 shows the amplitude-frequency characteristic for the model of the soil stratum under the investigated construction site on the 210, Kharkivske Highway.
The amplitude-frequency characteristic of the soil environment under the site at 210, Kharkivske Highway in the Darnytskyi District of Kyiv is characterized by a frequency range of resonant amplification of oscillations by local soil conditions from 0.3 to 0.98 Hz, with a maximum amplification factor of 10.8. In this frequency range, one clear maximum is observed at a frequency of 0.48 Hz.
Figure 3.
Amplification ratio curve for the T-component of surface oscillations of the soil stratum model under the research site at 210, Kharkivske Highway in the Darnytskyi District of Kyiv, for the case of a transverse wave incident from below on the base of the half-space (Table 1).
Calculated accelerograms simulating earthquakes in the Vrancea zone at the research site were synthesized using a regularized inverse Fourier transform algorithm [4]. During their generation, various combinations were used of theoretical envelope amplitude spectra of the calculated accelerograms, normalized frequency characteristics of the environment, and phase spectra obtained from various records of real subcrustal earthquakes from the Vrancea zone.
The obtained data were transferred to designers and builders for further assessment of the impact of seismic vibrations on building structures and engineering systems, as well as for modeling the behavior of the structure under a specific seismic load, presented in the form of sets of calculated accelerograms and response spectra.
Amplitude-frequency characteristics of the geological environment for the city of Kryvyi Rih
(seismological station "Kryvyi Rih")
The amplitude-frequency characteristic was calculated for the location of the Kryvyi Rih seismological station, where the geological section is represented by a sedimentary cover with a thickness of up to 30 m overlying the crystalline rocks of the Precambrian basement [5; 6].
Accelerograms on the free surface of the soil were calculated from records on bedrock for the Myroliubivka district of the city of Kryvyi Rih, where the seismic station "Kryvyi Rih" is located, in order to assess the influence of the soil stratum on the seismic hazard parameters at its free surface. To solve the problem, two earthquake records on bedrock were chosen, with PGA = 0.06 g and PGA = 0.1 g (where g is the acceleration of gravity, 1 g = 9.81 m/s²). The equivalent-linear method of modeling the soil response to seismic loads and the ProShake software package [7] were used.
The model of the soil stratum of the Myroliubivka district of the city of Kryvyi Rih is given in table 2.
Figure 4 shows the calculated accelerograms on the free surface of the Myroliubivka district. The obtained results indicate an increase of the PGA on the free surface relative to the PGA on bedrock (in both cases) by approximately 4 times. Figure 5 shows the variation of PGA with depth, in the direction from bedrock to the free surface.
The Fourier amplitude spectrum plot reveals that the highest PGAs occur within the frequency range of 1.2 Hz to 1.75 Hz, as depicted in figure 6.
Thus, the soil stratum in the Myroliubivka area, composed mainly of loams and clays, exhibits resonant properties in the frequency range from 1.2 Hz to 1.75 Hz. Theoretical modeling showed an increase in the amplitude of seismic oscillations in this range by approximately 4 times. Since civil buildings have their own resonance frequencies in the same range (1-2 Hz), during construction it is necessary to conduct detailed studies on the prevention of resonance effects and on ensuring the seismic resistance of buildings.
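A first screening of the resonance risk mentioned above can be sketched with a common empirical rule of thumb (not from the article): the fundamental period of an ordinary building is roughly 0.1 s per story, so its fundamental frequency is about 1/(0.1·N) Hz, which can be compared against the 1.2-1.75 Hz site band quoted for the Myroliubivka district:

```python
def building_frequency(stories):
    """Rule-of-thumb fundamental frequency of a building: period T = 0.1*N
    seconds for N stories. An empirical screening estimate, not a
    structural calculation."""
    return 1.0 / (0.1 * stories)

def resonance_risk(stories, site_band=(1.2, 1.75)):
    """True if the rule-of-thumb building frequency falls inside the
    resonant band of the site (default band from the Kryvyi Rih example)."""
    lo, hi = site_band
    return lo <= building_frequency(stories) <= hi

for n in (3, 6, 12):
    flag = "risk" if resonance_risk(n) else "outside site band"
    print(f"{n} stories: {building_frequency(n):.2f} Hz, {flag}")
```

Such a screen only indicates which buildings warrant the detailed resonance studies called for in the text; it does not replace them.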
Amplitude-frequency characteristics of the geological environment for the city of Odesa
Figures 7-9 present the amplitude-frequency characteristics constructed for models of the soil strata under the construction sites of high-rise buildings in the city of Odesa and in the Odesa region [11; 12].
Analyzing the amplitude-frequency characteristics of the soils under the construction sites of the Odesa region, it can be seen that they have a large number of well-defined maxima and a wide frequency range of possible resonant amplification of seismic vibrations. From the amplitude-frequency characteristics of the soil stratum models presented in Figure 8, the frequency ranges of possible resonant amplification of seismic vibrations by local soil conditions were determined for each construction site. For all construction sites of the Odesa region, the first maximum of the frequency characteristic is observed in the low-frequency range from 0.15 Hz to 0.35 Hz, which is evidently a manifestation of the large thickness (from 1400 to 1600 m) of the sedimentary deposits. The subsequent maxima of the frequency characteristics of the soil strata under the construction sites of the Odesa region are observed in the frequency range from 0.5 Hz to 10 Hz. Thus, it can be concluded that in earthquake-resistant design it is necessary to carry out detailed studies of the resonant properties of the construction-site soils regardless of the number of floors of the buildings and the complexity of the structure, since the natural oscillation frequencies of both single-story and high-rise buildings usually lie in this frequency range.
For all studied sites, the influence of the physical and mechanical properties of the sedimentary layer on the seismic effect at the surface of the studied construction site was analyzed under possible seismic impacts with different maximum peak accelerations, which with a 90% probability will not be exceeded in the next 50 years.
Based on the obtained results, recommendations were formulated to prevent the occurrence of resonance effects in the designed objects due to the coincidence of the frequencies of the maximum oscillations in incident seismic oscillations with the maxima of the frequency characteristics of the soil and the frequencies of the natural oscillations of buildings and structures.
Conclusions
Based on the analysis of amplitude-frequency characteristics of soil models across various construction sites in different cities of Ukraine, it is evident that vibration amplification by soils exhibits a complex pattern. This phenomenon depends on numerous factors and can vary significantly between construction locations. Therefore, a crucial inference is that earthquake-resistant design for buildings and structures must appropriately consider the filtering properties of soil complexes. This entails acknowledging the potential substantial increase in vibrations at "resonant" frequencies.
When calculating frequency characteristics, it is essential to factor in the influence of the rheological properties of soil strata and to employ nonlinear methods to determine their frequency characteristics. Calculated accelerograms should also consider the vibrations originating from earthquake epicenters and the filtering properties of the site's soil complexes.
The inclusion of frequency characteristics that fully reflect the influence of the soil stratum beneath the future building allows for cost reduction in construction while simultaneously enhancing the seismic resistance of structures. This can be achieved by developing design solutions that prevent the alignment of the natural frequencies of the intended building with the maxima of the frequency characteristic of the soil stratum.
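That design check reduces to comparing the maxima of the soil frequency characteristic with the modal frequencies of the planned structure. A toy sketch, assuming hypothetical frequency lists and an assumed 15% proximity criterion (the text prescribes no numeric tolerance):

```python
def resonance_conflicts(soil_peaks_hz, building_freqs_hz, tol=0.15):
    """Flag building natural frequencies that fall within a relative
    tolerance of a maximum of the soil frequency characteristic.
    The 15% default tolerance is an illustrative assumption."""
    conflicts = []
    for fb in building_freqs_hz:
        for fs in soil_peaks_hz:
            if abs(fb - fs) <= tol * fs:
                conflicts.append((fb, fs))
    return conflicts

# Hypothetical soil maxima from a site survey vs. modal frequencies of a design
soil_peaks = [0.25, 1.8, 4.2]      # Hz, invented survey values
building_modes = [0.9, 1.9, 6.0]   # Hz, invented modal-analysis values
print(resonance_conflicts(soil_peaks, building_modes))
# → [(1.9, 1.8)]  (the 1.9 Hz mode lies within 15% of the 1.8 Hz soil maximum)
```

A design flagged this way would be adjusted (stiffness, mass distribution, base isolation) so that no mode remains near a soil maximum.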
A comparison of the resonant properties of soils in Kyiv, Kryvyi Rih, and Odesa reveals valuable insights into their seismic potential. Our research indicates that, from a seismicity standpoint, the local soil conditions in Odesa are the most perilous. A key indicator of this is the wide frequency range of resonant amplification in Odesa compared to Kyiv and Kryvyi Rih. This suggests that Odesa's soils possess features such as catacombs, landslides, and a high groundwater level that contribute to seismic vibration amplification across a broader frequency spectrum than in the other cities. The substantial sedimentary layer in Odesa, in contrast to Kyiv and Kryvyi Rih, significantly influences the resonance amplification results.
Consequently, our research underscores that Odesa faces hazardous local ground conditions in a seismic context, characterized by a wide frequency range of resonant amplification of seismic oscillations spanning from 0.1 Hz to 10 Hz. This information is vital for consideration during construction and seismic protection planning in the region.
The data obtained on the filtering properties of the soil stratum at the studied sites, which determine quantitative seismic hazard characteristics, both ensure the stability of the designed structures and substantially reduce earthquake-resistant construction costs by averting resonant amplification by the sedimentary layer of seismic oscillations at the natural periods of the designed buildings and structures.
δp and δs are the decrements of absorption of longitudinal and transverse waves, respectively.
Figure 4. Recalculated accelerograms from PGA=0.06g and PGA=0.1g to the free surface of the Myroliubivka area.
Figure 5. Change of PGA with depth, in the direction from bedrock to the free surface of the Myroliubivka area.
Figure 6. Amplitude Fourier spectrum of seismic oscillations on the free surface of the Myroliubivka district.
Figure 7. Amplification ratio curve for soils of different sites in Odesa [11].
Figure 8. Amplification ratio curve sample for the T-component of surface oscillations of models of the soil stratum of engineering-geological districts I, II and III at the site at the address: Odesa, 30, Academic street, for the case of a transverse wave incident from below on the base of the half-space. Conventions: 1: district I; 2: district II; 3: district III.
Figure 9. Amplification ratio curve for soil of the construction site of the complex of berths No. 5, 6, 7, 8 in the Port of Pivdenyi.
Table 1. Averaged model of the geological environment at the site on Kharkivske Highway, 210, Darnytskyi District, Kyiv.
Table 2. Model of the soil stratum of the Myroliubivka district of the city of Kryvyi Rih.
Loss and retention of resistance genes in five species of the Brassicaceae family
Background Plants have evolved disease resistance (R) genes encoding nucleotide-binding site (NB) and leucine-rich repeat (LRR) proteins with N-terminals represented by either Toll/Interleukin-1 receptor (TIR) or coiled-coil (CC) domains. Here, a genome-wide study of the presence and diversification of CC-NB-LRR and TIR-NB-LRR encoding genes, and of shorter domain combinations, in 19 Arabidopsis thaliana accessions and in Arabidopsis lyrata, Capsella rubella, Brassica rapa and Eutrema salsugineum is presented. Results Out of 528 R genes analyzed, 12 CC-NB-LRR and 17 TIR-NB-LRR genes were conserved among the 19 A. thaliana genotypes, while only two CC-NB-LRRs, including ZAR1, and three TIR-NB-LRRs were conserved when comparing the five species. The RESISTANCE TO LEPTOSPHAERIA MACULANS 1 (RLM1) locus confers resistance to the Brassica pathogen L. maculans, the causal agent of blackleg disease, and has undergone conservation and diversification events, particularly in B. rapa. In contrast, the RLM3 locus, important in the immune response towards Botrytis cinerea and Alternaria spp., has recently evolved in the Arabidopsis genus. Conclusion Our genome-wide analysis of the R gene repertoire revealed a large sequence variation in the 23 cruciferous genomes. The data provide further insights into evolutionary processes impacting this important gene family. Electronic supplementary material The online version of this article (doi:10.1186/s12870-014-0298-z) contains supplementary material, which is available to authorized users.
Background
As sessile organisms, plants have adapted to their changing surroundings, and their survival is based primarily on timely evolved immune responses. The first line of defense occurs at the plant cell surface with the recognition of conserved microbial groups such as lipopolysaccharides and peptidoglycans, commonly referred to as pathogen- or microbe-associated molecular patterns (PAMPs/MAMPs). The MAMPs are recognized by cognate pattern-recognition receptors (PRRs) and trigger immediate immune responses leading to basal PAMP-triggered immunity (PTI) [1,2]. Known PRRs fall into one of two receptor classes: transmembrane receptor kinases and transmembrane receptor-like proteins, the latter of which lack any apparent internal signaling domain [3]. Notably, PRRs are components of multiprotein complexes at the plasma membrane under tight control by protein phosphatases and other regulatory proteins [4]. In a number of cases specialized pathogens are able to overcome basal PTI by either circumventing the detection of PAMPs or interfering with PTI by delaying, suppressing or reprogramming host responses via delivery of effector molecules inside host cells. As a counter-mechanism, plants deploy intracellular resistance (R) proteins that detect the presence of these effectors directly or indirectly, leading to effector-triggered immunity (ETI). The RPM1-INTERACTING PROTEIN 4 (RIN4) is a well-studied key player in the former situation [5,6], whereas direct interaction is exemplified by the R genes and effectors in the rice-Magnaporthe oryzae pathosystem [7,8].
The plant resistance proteins are modular, that is, they consist of combinations of conserved elements, some with features shared with animals (reviewed in [9][10][11]). The majority of R proteins are typically composed of a nucleotide-binding site (NB) with a leucine-rich repeat (LRR) domain of variable length at the C-terminus. These NB-LRR proteins are divided into two classes on the basis of their N-terminal sequences, consisting either of a coiled-coil (CC) sequence or of a domain that shares sequence similarity with the Drosophila melanogaster TOLL and human interleukin-1 receptors, referred to as TIR. These blocks of conserved sequences have remained throughout evolution and can still be identified in organisms as diverse as eubacteria, archaea, metazoans and bryophytes [12]. Despite this high degree of conservation, the R proteins confer resistance to a broad spectrum of plant pathogens, including viruses, bacteria, fungi, oomycetes and nematodes [13][14][15].
NB-encoding resistance genes have been annotated in many monocot and dicot species, pioneered by Arabidopsis thaliana [16]. The current wealth of sequenced plant genomes has revealed R genes to be one of the largest plant gene families. In the reference genome of A. thaliana, 149 R proteins harbor an LRR motif, of which 83 are composed of TIR-NB-LRR and 51 of CC-NB-LRR domains [17,18]. Several shorter proteins comprising one or two domains are also present, represented by 19 TIR-NB encoding genes and 30 genes with TIR-X domains. In total, A. thaliana has approximately 200 proteins with one to three R gene-associated protein domain combinations.
In this study we took advantage of the accumulating genome information in A. thaliana and performed genome-wide analyses of R genes in 19 A. thaliana genomes. We further expanded the analysis by including the genomes of the related species Arabidopsis lyrata, Capsella rubella, Brassica rapa and Eutrema salsugineum. In addition, we selected two loci harboring resistance to Brassica fungal pathogens in order to trace their evolutionary patterns. We found that 29 R genes formed a core set within A. thaliana, whereas as few as five R genes were retrieved from the genomes of all five species. One of those five genes, the HOPZ-ACTIVATED RESISTANCE 1 (ZAR1) gene, known to possess novel signaling requirements, is also present in other plant families within the Rosid clade. The RESISTANCE TO LEPTOSPHAERIA MACULANS 1 (RLM1) locus was partly conserved in A. lyrata and C. rubella and greatly diversified in B. rapa and E. salsugineum, while the RLM3 locus has recently evolved in the Arabidopsis genus. This work provides perspectives on R gene diversity and on the choice of reference genotype in comparative genomic analysis.
Results

Pfam homology and COILS server searches on the predicted 148 NB-LRR-encoding genes [18] resulted in a reduced list of 124 R genes in Col-0 for further analysis, comprising 48 CC-NB-LRRs (CNLs) and 76 TIR-NB-LRRs (TNLs) (Additional file 1: Table S1). Between 97 (Edi-0) and 109 (Hi-0 and Po-0) of these R genes were found within the genomes of the 18 newly sequenced A. thaliana accessions (Figure 1A, B). No additional R genes besides those present in Col-0 were found in the trace sequence archives of the 18 genomes.
In a comparison of the 48 CNL encoding genes in Col-0, between 27 (Edi-0) and 40 (Hi-0) were recovered in the selected accessions (Figure 1A). The protein products of the remaining genes orthologous to the CNL proteins in Col-0 were either missing one or several domains (CN, NL, N or L) or were completely absent in at least one accession (Figure 1C). Representatives of known defense-related genes that were absent included RPS5 in Edi-0, No-0 and Sf-2, and ADR1 in Zu-0. For gene abbreviations, see Additional file 2: Table S2. In the TNL group, the number of complete TNL genes varied between 49 (No-0) and 59 (Po-0 and Wu-0) (Figure 1B, D). Examples of missing genes were RPP5 in Ct-1, Mt-0, Oy-0 and Wu-0, and SNC1 in Can-0, Edi-0, No-0, Rsch-4, Tsu-0 and Wu-0.
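The domain-combination shorthand used here (CNL, TNL, CN, NL, T, N, L) follows mechanically from which domains are detected in a given accession's protein. A minimal sketch of that labeling step, assuming the per-protein domain calls (Pfam/COILS output) are already available:

```python
def classify_architecture(domains):
    """Map a set of detected domains to the shorthand used in the text.
    domains: iterable containing any of 'TIR', 'CC', 'NB', 'LRR'.
    Returns e.g. 'TNL', 'CNL', 'NL', 'T', or None if nothing was detected.
    (Illustrative labeling only; domain detection itself is upstream.)"""
    d = set(domains)
    label = ""
    if "TIR" in d:       # TIR and CC are mutually exclusive N-terminal classes
        label += "T"
    elif "CC" in d:
        label += "C"
    if "NB" in d:
        label += "N"
    if "LRR" in d:
        label += "L"
    return label or None

print(classify_architecture({"TIR", "NB", "LRR"}))  # TNL
print(classify_architecture({"CC", "NB", "LRR"}))   # CNL
print(classify_architecture({"NB", "LRR"}))         # NL
```

Applied per accession, such labels make it straightforward to tabulate which orthologs are complete and which have lost a domain, as in Figure 1C, D.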
In summary, a rather wide distribution of R gene repertoires was found among the 19 A. thaliana accessions. Out of the 124 R genes in Col-0, 41 genes had orthologs in the other 18 accessions. However, 12 of these genes lacked one or two domains in at least one accession. For example, RPP13 had lost its LRR domain in No-0, Rsch-4, Wil-2 and Zu-0. In the remaining core set of 12 CNL and 17 TNL encoding genes, all randomly distributed over the genome (Additional file 3: Figure S1), nine genes (ADR1-L1, ADR1-L2, LOV1, RPS2, RPS4, RPS6, SUMM2, TTR1 and ZAR1) are known to be implicated in various plant defense responses. To expand the analysis of R genes in A. thaliana, we monitored possible conservation of R genes across lineages in Brassicaceae represented by A. lyrata, C. rubella, B. rapa and E. salsugineum. Pfam homology and COILS server searches identified 404 proteins with CNL or TNL architecture (Additional file 1: Table S1). The number of predicted CNL and TNL encoding genes varied greatly: E. salsugineum (67), C. rubella (75), A. thaliana Col-0 (124), A. lyrata (127), and B. rapa (135), numbers that do not reflect the genome sizes or the number of predicted gene models in the individual species.
Orthologous sequences in the five species were identified by phylogenetic analysis of the NB domains in the CNL and TNL sequences. In the resulting phylogenetic tree, 57 clades with orthologs from at least two plant species were formed (Additional file 4: Figure S2 and Additional file 5: Table S3). Within these 57 clades, multicopy genes from single species were also found, identified as in-paralogous sequences within that specific species. The placement of the sequences outside the 57 clades was not resolved. Within the orthologous sequences a bias towards the TNL group was seen, with 52 out of 76 A. thaliana TNL sequences having an ortholog in one or more species, while only 17 out of 48 CNLs had an ortholog. Excluding in-paralogous genes, the highest number of orthologous sequences was identified between A. thaliana and A. lyrata (Figure 2), consistent with earlier findings [20,21]. From the A. thaliana core set of 29 genes, 7 CNL and 9 TNL genes were also found within two or more species, including ADR1-L1, ADR1-L2, RPS2, RPS6, TTR1 and ZAR1.
Conserved CNLs and TNLs

One of these clades (no. 5; Additional file 4: Figure S2) contained a gene implicated in defense responses, known as ZAR1 and required for recognition of the Pseudomonas syringae T3SE HopZ1a effector [22]. ZAR1 has homologs in several species within the Rosid clade as well as in Vitis vinifera and Solanum species, and in our dataset ZAR1 was well conserved, with a Ka/Ks ratio of 0.4 supporting purifying selection. Two other genes, At5g66900 and At5g66910, were found in the same clade (no. 12; Additional file 4: Figure S2), suggesting that they are paralogous to each other and possibly have redundant functions. In this clade, B. rapa and E. salsugineum were represented by three and two genes, respectively, while there was a single gene from A. lyrata and C. rubella. Phylogenetic analysis of the CDS sequences revealed that only the At5g66900 gene was conserved among the five species (Additional file 6: Figure S3). The RPS2 gene was earlier found in several Brassica species, including B. montana, B. rapa and B. oleracea [23,24], and most likely has a homolog (945467, identity of 94%) in A. lyrata [20]. In our dataset, the A. thaliana RPS2 gene was also identified in E. salsugineum but not in C. rubella. However, a BLASTN homology search revealed similarity between RPS2 and a region annotated on the anti-sense strand as a gene without any domains in C. rubella (Carubv10005994m). The high similarity (identity of 88.7%) suggested a possible third CNL gene conserved among the five species.
In summary, orthology was observed in all five species for two CNL genes (At3g50950 and At5g66900), with the possible addition of RPS2, and for three TNL genes (At4g19510, At5g45230 and At5g17680). Within the 19 genomes of A. thaliana, only the CNL genes were conserved in this particular genomic comparison. No known function has been attributed to four out of the five conserved genes, including their orthologs.
Conservation and diversification of the RLM1 locus

L. maculans is a hemibiotrophic fungal pathogen and the causal agent of the widespread blackleg disease of Brassica crops [25]. The RLM1 locus in A. thaliana Col-0 was earlier identified as playing an important role in the immune response [26] and contains seven genes with TNL architectures spanning between At1g63710 and At1g64360 (Additional file 7: Figure S4). Two genes, RLM1A and RLM1B, were found to be responsible for RLM1 activity, with RLM1A as the main player in the immune response [26]. No function is known for the remaining five genes, RLM1C-RLM1G. Diversification of resistance loci in different accessions has been demonstrated in several cases [21,27,28] and, to expand our knowledge of RLM1, we studied the presence and diversification of RLM1 in our genomic data set.
Here, we found RLM1A to be present in all 18 A. thaliana accessions, encoding all three domains in fourteen accessions (Can-0, Ct-1, Edi-0, Hi-0, Ler-0, Mt-0, No-0, Po-0, Sf-2, Tsu-0, Wil-2, Ws-0, Wu-0 and Zu-0) (Additional file 8: Table S4). This is in agreement with their resistance phenotype [29]. In general, the RLM1A genes in 17 accessions had very few variable sites compared to RLM1A in Col-0 (p-distance 0.2 to 0.9%). Ws-0 was atypical and diverged most, with 230 variable sites in comparison to RLM1A in Col-0, resulting in a p-distance of 13.8% (Figure 3A and Additional file 9: Table S5). No RLM1A homologs were identified in the A. lyrata, B. rapa and E. salsugineum genomes. One un-annotated RLM1A candidate was found in the C. rubella genomic sequence, and RNA expression data for the LRR region [30] suggest that this gene is expressed and
might have a potential role in defense responses. To support our findings, PCR amplification and sequencing of the RLM1A region in A. lyrata, B. rapa and C. rubella confirmed that only C. rubella has maintained RLM1A. B. rapa species are not known to host resistance to L. maculans [31], except the weedy relative B. rapa ssp. sylvestris [32,33]. In order to clarify the presence of RLM1A, we used RLM1A-specific primers to amplify this region in B. napus cv. Surpass 400, harboring resistance traits from the wild B. rapa relative, the gene progenitor, and, for comparison, in a known susceptible B. rapa genotype. Here, only B. rapa ssp. sylvestris contained a genomic sequence highly similar to the RLM1A gene of A. thaliana (identity 81%).
The RLM1B gene has a minor role in the immune response and is flanked by RLM1C and RLM1D. In most of the 18 accessions these three TNL genes encoded proteins lacking one or more domains in comparison to Col-0, especially RLM1D (Additional file 8: Table S4). One possible candidate orthologous to RLM1C was found in the genomic sequence of C. rubella, but using the annotation of A. thaliana for comparison the potential gene had multiple stop codons. Similarity was found for the RLM1B to RLM1C genes in the genomes of A. lyrata, B. rapa and E. salsugineum (Additional file 7: Figure S4). Given the lack of orthology between species, this chromosomal region seems to be under positive selection, showing a reduction of the RLM1B to RLM1D genes within A. lyrata and E. salsugineum. In B. rapa, by contrast, an expansion was observed, with five TNL genes and one TN gene annotated in the RLM1B-RLM1D region showing similarity to the RLM1B and RLM1C genes of A. thaliana Col-0.
The most conserved sequences within the A. thaliana accessions were the RLM1E, RLM1F and RLM1G genes, which displayed only a few modifications (p-distance 0.5-0.8%) (Additional file 9: Table S5). Further conservation was observed for RLM1F and RLM1G in A. lyrata, which contains two orthologs of the RLM1F and RLM1G genes, with Ka/Ks ratios of 1.3 and 0.8 in comparison to A. thaliana Col-0. Additionally, similarity was found between RLM1G and a genomic region in C. rubella (Ka/Ks ratio of 0.7), and transcript data have previously revealed that RLM1G is expressed in C. rubella [30]. In B. rapa, five TNL encoding genes were found to be orthologous to RLM1F and RLM1G (clade no. 21, Additional file 4: Figure S2), but only two were found in the RLM1 locus. The three other TNL encoding genes were located elsewhere, with no synteny to the RLM1 locus. No orthology was found for the RLM1E to RLM1G genes in E. salsugineum.
Overall, in the A. thaliana accessions the RLM1 locus is conserved in the RLM1E to RLM1G region and appears to have experienced diversification in the RLM1A to RLM1D stretch. An exception was Wu-0, in which the RLM1 locus was highly similar to that in Col-0, with an average p-distance of only 0.2% (Additional file 9: Table S5). In the other four species, several of the RLM1 genes have diversified in comparison to A. thaliana as well as to each other. The exceptions are the conserved RLM1G in both A. lyrata and C. rubella and RLM1F in A. lyrata, while RLM1A was also found in C. rubella.
The RLM3 locus is unique for A. thaliana and A. lyrata
The RLM3 gene is of importance for immune responses not only to L. maculans but also to Botrytis cinerea and Alternaria species [34]. The gene encodes TIR and NB domains but lacks an LRR domain. Instead, the C-terminal end contains three copies of the DZC (disease resistance, zinc finger, chromosome condensation) or BRX (brevis radix) domain, originally described as having a role in root development [35]. In addition to RLM3, 18 genes in A. thaliana Col-0 encode TN proteins without LRR domains [18]. However, RLM3 is the only TN gene in the A. thaliana reference genome that contains BRX domains. To gain more insight into the TN encoding genes in A. thaliana Col-0, a Pfam homology and COILS server search was employed, designed to exclude genes with truncated TIR or NB domains, resulting in eleven TN genes (Additional file 1: Table S1). The presence of the TN encoding genes was further investigated in the 18 additional A. thaliana genomes.
Overall, we found between six (Wil-2) and eleven (Hi-0, Po-0 and Zu-0) genes encoding both the entire TIR and NB domains (Figure 3B). Of the eleven TN genes in Col-0, seven were present in all 18 accessions, with three encoding the complete TN. The remaining four genes encoded modifications (T or N) in at least one accession (Figure 3C). At1g72850 was absent in most accessions (Can-0, Edi-0, Mt-0, No-0, Oy-0, Wil-2 and Ws-0) and encoded only a TIR domain in Bur-0, Ct-1 and Sf-2. When we expanded the Pfam homology searches we found seven TNs in A. lyrata, one in C. rubella, sixteen in B. rapa and no TN encoding gene in E. salsugineum. Within the phylogenetic tree, five clades with orthologous proteins were identified (Additional file 4: Figure S2). None of the clades contained proteins from all four species.
A complete RLM3 sequence was present in 13 out of 19 A. thaliana accessions, including Col-0, and no transcripts lacking one or more domains were identified. The high Ka/Ks ratio of 2.3 suggests that RLM3 is under positive selection in the 13 accessions. Examination of the chromosome region spanning the RLM3 locus revealed that approximately 8,200 bp present in Col-0 were completely absent in six accessions (Kn-0, Rsch-4, Tsu-0, Wil-2, Ws-0 and Wu-0), while the flanking genes At4g16980 and At4g17000 were present (Figure 3D). The At4g17000 gene has experienced mutations and small deletions resulting in early stop codons. The approximately 400 bp between At4g16980 and At4g17000 not found in the Col-0 genomic sequence showed minor polymorphisms between these six accessions, indicating that the deletion of RLM3 resulted from a single event.
An RLM3-like gene was found in A. lyrata (clade no. 3; Additional file 4: Figure S2), suggesting the presence of RLM3 before the split from A. thaliana ~13 Mya [36]. In contrast, no RLM3 homolog was found in the C. rubella, B. rapa and E. salsugineum genome sequences. To further trace a possible origin of RLM3, the BRX domain was used in phylogenetic analysis, but no orthology could be found to sequences within the kingdom Plantae (Additional file 10: Figure S5). We conclude that RLM3 has most likely evolved entirely within the genus Arabidopsis.
Discussion
In this report we describe a genome-wide survey of the large R gene family in 19 A. thaliana accessions and four related species in the Brassicaceae family. The comparisons of the A. thaliana accessions revealed a great variation in gene numbers and a biased loss of LRR domains. Interestingly, the Col-0 genome was the most R gene-dense accession in the dataset. We checked for biases in the re-sequencing and gene annotation process of the additional A. thaliana genotypes but could not identify any obvious explanation for the loss of R genes in these accessions. This is in line with a recent genome study comprising de novo assembly of 180 A. thaliana accessions, which revealed large variation in genome size, with 1.3-3.3 Mb of new sequences and 200-300 additional genes per genotype [37]. The differences were, however, found to be mainly due to 45S rDNA copies, and no new R genes absent from Col-0 were reported.
Col-0 is a direct descendant of Col-1 and was selected from a Landsberg population based on its fertility and vigorous plant growth [16]. The same population was used in irradiation experiments, resulting in the Landsberg erecta (Ler) accessions. It has now become clear that the original Landsberg population contained a mixture of slightly different genotypes, explaining the observed difference in R gene repertoire between Col-0 and Ler-0. The genetic variation among A. thaliana accessions, as observed in our dataset, has a long history of being exploited for R gene mapping and cloning. Characterization of resistance genes to P. syringae (RPM, RPS), together with RPP genes against the oomycete Hyaloperonospora arabidopsidis, has been at the forefront and also advanced the understanding of interactions with pathogen effectors. The RPP1 locus of the Ws-0 and Nd-1 accessions recognizes different H. arabidopsidis isolates, an observation that led to the discovery of the avirulence gene ATR1 and six divergent alleles [38]. Sequence alignment with ATR1-syntenic genes in Phytophthora sojae and P. infestans in turn revealed the RxLR translocation core motif, adding another dimension to the genetic makeup of host-pathogen pairs and effector biology.
Within the 18 accessions of A. thaliana, a large number of R genes were missing one or more domains in comparison to Col-0, with the loss of LRR domains as the most common alteration. Modulation of the LRR sequences, together with gene conversion, domain swapping and deletion events, are suggested strategies for a plant to co-evolve with a pathogen. LRR domains have been identified in a diverse variety of bacterial, protist and fungal species, together representing thousands of genes [12]. Fusion of the LRR domain with the NB domain is of a more recent origin than LRR fusion with receptor-like kinases, which is seen only in the land plant lineage. The LRR domain is suggested to have evolved several times, resulting in eight specific classes which differ in sequence length and similarity within the variable segment of the LRR domain [39,40]. One of the LRR classes, referred to as Plant Specific LRRs, has been shown to be under diversifying selection in several R proteins [41][42][43][44]. This type of sequence diversification most likely reflects co-evolution with pathogen effectors, proteins known to directly or indirectly interact with the LRR motifs [7,[45][46][47]. The importance of the presence or absence of a particular LRR domain has also been demonstrated. In the absence of the P. syringae effector AvrPphB, the LRR domain of RPS5 inhibits the activity of the CC and NB domains [48]. Consequently, loss of the LRR suppressor activity results in plant cell death due to constitutive RPS5 activity. It was therefore not surprising that none of the RPS5 homologs in our dataset lacked the LRR domain. RPS2, RPS4 and RPS6 sequences were highly conserved between accessions, and the LRR domains showed a low degree of polymorphism (Ka/Ks ratios between 0.64 and 0.76). In the case of RPS4, the LRR domain is important for protein stability but lacks the suppressor activity seen in RPS5 [49].
In many A. thaliana accessions in our dataset we found R genes encoding bipartite proteins, often represented by the loss of the LRR domain in comparison to Col-0. Such TN-encoding genes have been speculated to function as adapter proteins interacting with TNL proteins or with downstream signaling components [17]. For example, PBS1, an important player in the RPS5 defense response, was found to interact with a TN protein [50]. Whether CN and TN genes in general act in protein complexes recognizing pathogen effectors remains to be demonstrated. Plant R genes encoding bipartite proteins have also been speculated to be part of an evolutionary reservoir in plants, allowing the formation of new genes through duplication, translocation and fusion [12,51,52]. The fusion between the TN and BRX domains in RLM3 is unique to A. thaliana and A. lyrata, possibly dimerizing with other BRX domain-containing proteins, since homo- and heterodimerization capability between BRX domains of individual proteins has been shown [53]. Further, the transcription factor BRX, containing two BRX domains, was shown to control the expression of a gene important in brassinolide synthesis [54] and thereby modulate both plant root and shoot growth.
In our dataset we observed a great variation in the number of unique CNL and TNL R genes, ranging from 33 in E. salsugineum to 63 in B. rapa. Copy number differences in the R gene family between species are proposed to be driven by gene loss through pseudogenization or by expansion through duplication events and subsequent divergence [12]. The five species in our dataset represent two lineages: lineage I (Arabidopsis and Capsella) and lineage II (Brassica and Eutrema), which diverged approximately 43 Mya [36,55]. Given the close relationship between the five species, higher numbers of conserved R genes were expected, but no lineage-specific R gene repertoires were found. Comparative genomic analysis between A. thaliana and B. rapa has already established orthology between several NB-LRR genes [24]. However, in our study we found eleven additional sets, including orthologs of ADR1-L1, ADR1-L2, RPP1, RPP13 and ZAR1. Out of the 528 R genes analyzed, only two CNLs and three TNLs were conserved in the five species. One of these, ZAR1, is also present in many other species within the eudicots, mainly within the Rosid clade [22]. The Rosid clade diverged from the Caryophyllales and Asterids more than 110 Mya [56], suggesting an ancient origin of the ZAR1 gene. Recently it was shown that ZAR1 interacts with the pseudokinase ZED1 in mediating immunity to P. syringae [57]. This pseudokinase family is also common among flowering plants, and it could be speculated that pseudokinases and ZAR1 play a general role in basal plant defense responses not seen in the ETI response triggered by P. syringae in A. thaliana.
Conclusions
Here, we have revealed a large variation in the R gene repertoire of the A. thaliana accessions, highlighting both the fast-evolving nature of the R gene family and a potential bias in the use of a single genotype for genome comparisons. Recent advances in genome sequencing technologies enable re-sequencing of genotypes of interest for crop improvement at reasonable cost and rapid generation of molecular markers that co-segregate with traits of interest. An abundant supply of gene information from the rich genetic resources of Brassica species can therefore be foreseen, along with methods for enrichment of genes of interest. Using such strategies, the number of NB-LRR genes in the potato genome was increased from 438 to 755 [58], demonstrating new avenues and breakthroughs made possible by next-generation sequencing in the relatively short time that has passed since the sequencing of the first flowering plant.
Data sampling
The coding (CDS) and protein sequences of the A. thaliana Col-0 reference genome, 18 A. thaliana accessions, and the A. lyrata, C. rubella, B. rapa and E. salsugineum (previously Thellungiella halophila) genomes were downloaded from online databases [19,[59][60][61][62][63][64][65][66]. Proteins with a significant match according to the Pfam software [67] to the TIR domain (PF01582), the NB-ARC (NB) domain (PF00931), or the LRR domains (LRR1-5, 7-8) (PF00560, PF07723, PF07725, PF12799, PF13306, PF13504, PF13855) were selected. All proteins lacking the TIR domain were analyzed for the presence of the CC region with the COILS server using default settings and a confidence threshold >0.9 [68]. For the A. thaliana reference genome of Col-0 and the four species, genes encoding a TIR domain in combination with NB and LRR domains (TNL), or a CC region in combination with NB and LRR domains (CNL), were selected. In the case of different isoforms, the longest transcript of each gene was included in the dataset. All protein sequences were subjected to Pfam homology and COILS server searches to identify CNLs or TNLs as described above for the A. thaliana accessions.
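The domain-based selection described above can be summarized in a short sketch. The Pfam accessions are those listed in the text, but the function and data structures are illustrative, not the paper's actual pipeline (which used the Pfam software and the COILS server):

```python
# Sketch: classify R-gene candidates as TNL or CNL from domain annotations.
# Domain hits and CC predictions are assumed to have been produced
# beforehand (e.g. by a Pfam scan and the COILS server); the function
# signature here is a hypothetical convenience wrapper.

TIR = {"PF01582"}
NB = {"PF00931"}
LRR = {"PF00560", "PF07723", "PF07725", "PF12799",
       "PF13306", "PF13504", "PF13855"}

def classify(domains, has_coiled_coil):
    """Return 'TNL', 'CNL' or None for one protein.

    domains: set of Pfam accessions found in the protein.
    has_coiled_coil: True if COILS reported a CC region (confidence > 0.9).
    """
    has_tir = bool(domains & TIR)
    has_nb = bool(domains & NB)
    has_lrr = bool(domains & LRR)
    if has_tir and has_nb and has_lrr:
        return "TNL"
    # Per the text, CC is only assessed for proteins lacking a TIR domain.
    if not has_tir and has_coiled_coil and has_nb and has_lrr:
        return "CNL"
    return None

print(classify({"PF01582", "PF00931", "PF13855"}, False))  # TNL
print(classify({"PF00931", "PF12799"}, True))              # CNL
```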
The RESISTANCE TO LEPTOSPHAERIA MACULANS 1 (RLM1) and RESISTANCE TO LEPTOSPHAERIA MACULANS 3 (RLM3) loci were selected for detailed analysis. Genomic and CDS sequences spanning two genes upstream (At1g63710) and downstream (At1g64090) of the RLM1 locus [26] were retrieved from the TAIR10 database [16]. The CDS sequences of At1g63710 through At1g64090 in Col-0 were used to identify the corresponding chromosomal regions in A. lyrata, C. rubella, B. rapa, and E. salsugineum by BLAST search against the Phytozome database [60,69]. Similarly, the At4g16980-At4g17000 region around the RLM3 locus (At4g16990) [34] was selected and identified in A. lyrata, C. rubella, B. rapa, and E. salsugineum. The Pfam software was used to select genes encoding a combination of TIR and NB domains (TN) in Col-0, and the corresponding orthologs in the 18 A. thaliana accessions were subsequently identified. For the presence/absence (P/A) polymorphisms of the NB-LRR genes, the definition of [70] was used. The average ratio of non-synonymous to synonymous substitutions per site (Ka/Ks) for each gene was determined using the number of differences with the Nei-Gojobori distance method implemented in MEGA 5.2 [71].
Multiple sequence alignment and phylogenetic analysis
The NB domains in the CNL and TNL proteins identified in the A. lyrata, C. rubella, B. rapa and E. salsugineum genomes were aligned with ClustalW [72] using default settings and the alignment translated to nucleotides with the TranslatorX tool [73]. Poorly aligned sites were removed from the dataset using GBlocks 0.91b [74] with the following settings: −b1 = 282, −b2 = 283, −b4 = 5, −b5 = h, −b6 = y. Identical proteins were reduced to one representative. A neighbor-joining tree was constructed using PAUP* 4.0β10 [75] through Geneious version 7.0.4 [76] using the GTR+G+I model with a 0.1 proportion of invariable sites and 1,000 bootstrap replicates. Proteins with a bootstrap confidence ≥70 were selected as orthologous. To further analyze parts of the resulting tree, a maximum likelihood (ML) analysis was performed using the GTR+G+I model and 1,000 bootstrap replicates in MEGA 5.2 [71]. Proteins with a BREVIS RADIX (BRX) domain were identified in BLASTP homology searches using a hidden Markov model (HMM) of the BRX domain sequence (PF08381). The BRX domain sequences were aligned and translated to nucleotides with TranslatorX, and a ML tree was constructed in MEGA 5.2 using the GTR+G+I model and 1,000 bootstrap replicates.
Analysis of the RLM1 and RLM3 loci
Syntenic orthologs between A. thaliana Col-0, A. lyrata, C. rubella, B. rapa, and E. salsugineum were identified using the SynOrths v1.0 tool with default settings [77], by comparing all genes in the selected region between all pairs of species. Protein pairs with an E-value cutoff of <1e-9 were considered orthologous. All non-TNL proteins within the RLM1 region in the different species were assigned to orthologous groups using the OrthoMCL version 2.0 server [78], followed by a Pfam homology search to identify domain architecture. TNL proteins and the unannotated regions within the RLM1 locus in the different species were aligned using ClustalW, manually inspected and classified as highly similar (≥60% aa identity) or orthologous (≥80% aa identity). The evolutionary p-distance (the proportion of amino acid sites at which two sequences are different divided by the total number of sites, converted to a percentage) between the TNL genes in the RLM1 region of the 18 A. thaliana accessions [19] was calculated in comparison to Col-0 [79]. For the RLM3 locus, the region between At4g16980-At4g17000 in A. thaliana Col-0, A. lyrata, C. rubella, B. rapa and E. salsugineum was aligned using ClustalW with default settings and manually inspected.
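The p-distance used above is simple to compute directly from its definition in the text. The sketch below follows that definition; the function name and the decision to skip gap columns are our own assumptions:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing amino-acid sites between two aligned
    sequences, expressed as a percentage (gap columns are skipped)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    diffs = sum(1 for a, b in pairs if a != b)
    return 100.0 * diffs / len(pairs)

print(p_distance("MKLVT", "MKIVT"))  # 20.0
```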
To PCR amplify the RLM1A region in different species, DNA was extracted by dissolving crushed leaves of A.
Low-threshold, mid-infrared backward-wave parametric oscillator with periodically poled Rb:KTP
We report on the development of a nanosecond mirrorless OPO (MOPO) pumped at 1 μm. The gain medium of the OPO was periodically poled Rubidium-doped KTP with a grating period of Λ = 509 nm for first-order quasi-phase matching. For grating periods of this length, we demonstrate backward propagation of the signal field and forward propagation of the idler field. To the best of our knowledge, this is the first time such a counter-propagating geometry has been demonstrated in mirrorless OPOs. Pumping with a maximum energy of 6.48 mJ, the OPO yielded an overall conversion efficiency exceeding 53% with signal and idler energies of 1.96 mJ and 1.46 mJ, respectively. The generated signal and idler field spectra were measured to have narrowband linewidths on the order of 0.5 nm. We argue that such a MOPO is ideal for seeding applications and discuss further improvements and future work.
I. INTRODUCTION
Backward-wave optical parametric oscillators (BWOPOs) 1, like their electronic counterparts which were proposed much earlier 2,3, can sustain oscillation owing to a self-established positive-feedback mechanism. In a BWOPO such a mechanism relies on three-wave mixing (TWM) between counter-propagating signal and idler waves in the presence of a co-propagating pump beam. Such oscillators possess properties which are rather unusual for optical parametric oscillators (OPOs). First, owing to the fact that the oscillation is established by the distributed feedback and not by any external cavity, the pump intensity at threshold depends primarily on the length and nonlinearity of the nonlinear medium 1,4,5,6. Secondly, the parametric wave (signal or idler) which is generated in the direction opposite to that of the pump is inherently narrowband and largely insensitive to pump frequency variation. Energy conservation then ensures that this variation is inherited by the complementary parametric wave generated in the forward direction 7,8,9. Third, the frequencies of the parametric waves generated in a BWOPO are substantially less sensitive to variations of the nonlinear crystal temperature and pump angle compared to conventional OPOs 7,10. Such properties are conducive to achieving narrowband, precisely tunable near- and mid-infrared wavelength generation with scalable output energy in a simple and robust arrangement. This would be beneficial in a number of applications, including sources of nonclassical light 11, remote sensing and differential absorption LIDARs, and others where seeded or doubly-resonant OPOs are currently employed 12,13,14.
BWOPO oscillation in the near- and mid-infrared can be realized if momentum conservation in counter-propagating TWM is satisfied. With the available second-order nonlinear materials this can be achieved only in quasi-phase matched (QPM) structures 15 with sub-µm periodicity. So far, such structures have been demonstrated by employing periodic poling in KTiOPO4 (KTP) isomorphs 7, owing to the beneficial mm2 crystalline structure and the substantial anisotropy of the ferroelectric domain growth during the poling process. The first demonstrations of BWOPOs required pump intensities substantially higher than 1 GW/cm², which mandated pumping with picosecond pulses 7,8,10. It would be impractical to operate a BWOPO with such structures in the nanosecond pulse regime due to competition from stimulated Raman scattering processes as well as the close proximity to the optical damage threshold, even though the optical damage threshold in this material is rather high 16,17.
In this work we show that improved structuring methods now allow fabrication of periodically poled Rb-doped KTP (PPRKTP) structures with sub-µm periodicity, in which the BWOPO thresholds are similar to those regularly obtained in low-threshold nanosecond PPKTP OPO devices using the usual co-propagating TWM interaction. These advances allowed reliable operation of a BWOPO pumped by 10 ns Q-switched pulses, generating narrowband pulses with an output energy of 3.4 mJ and a conversion efficiency exceeding 53%. Moreover, as we show here, such performance can now be achieved in PPRKTP structures with a QPM periodicity as small as 509 nm.
II. BWOPO PHASE MATCHING
BWOPO employs counter-propagating TWM, which has to satisfy the momentum conservation condition k_p − k_G = ±k_s ∓ k_i, where k_m (m = p, s, i) denote the wave vectors of the pump, signal and idler, respectively, and k_G = 2π/Λ is the wave vector of the QPM structure with periodicity Λ. Here we use the standard convention for the signal and idler frequencies, ω_s ≥ ω_i. The upper signs in the momentum conservation condition correspond to the case where the idler wave is generated in the opposite direction to the pump, while the lower signs to the case where the signal is generated backwards. The calculated dependence of the signal wavelength on Λ in these two cases is shown in FIG. 1 for a pump wavelength of 1.064 µm. Here we employed Sellmeier expansions for KTP 18, which are suitable for RKTP with Rb concentrations below 1%, the doping concentration of the crystals used in this work. As can be seen from FIG. 1, a signal at the wavelength of 1.85 µm can be generated in the direction parallel or antiparallel to the pump in PPRKTP structures with a period of 692 nm or 500 nm, respectively. So far, all demonstrations of the BWOPO have employed the longer periods, where the signal is generated parallel to the pump. However, in this geometry the BWOPO is free to start cascaded oscillations in which the signal of the first process plays the role of the pump 19. In some applications this is not desirable, as these cascaded oscillations would generate additional spectral lines and limit the efficiency of the first process. Such cascading does not occur in a BWOPO with a backward-generated signal. This asymmetry stems from the fundamental BWOPO property that the frequency of the parametric wave generated in the opposite direction to the pump is mostly determined by the QPM grating and does not vary substantially when the pump frequency is changed. From the momentum and energy conservation conditions it then directly follows that, in a BWOPO with backward signal generation, the first cascaded process would
require generation of an idler wave with negative frequency. Obviously, this is unphysical, and therefore cascading in such devices does not happen.
However, as evident from FIG. 1, achieving the BWOPO regime with a backward-generated signal requires substantially shorter QPM periodicity. Specifically, for a given pump wavelength λ_p, the periodicity of the QPM structure must satisfy the inequality Λ < λ_p/(n_p + n_s − n_i), where n_m denote the corresponding refractive indices. RKTP has so far proved to be the most suitable ferroelectric material for fabricating such sub-µm-periodicity structures over the volumes required for low-threshold, nanosecond, millijoule-level BWOPOs. Low Rb-doping (~0.3%) reduces the ionic conductivity of pure KTP by several orders of magnitude 20 without substantially modifying its linear and nonlinear properties. Moreover, the doping greatly reduces the color-center accumulation effects usually observed in undoped KTP under exposure to high-intensity light in the blue spectral region 21.
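The two conservation conditions can be combined into a small numerical sketch. The refractive-index values below are illustrative placeholders for z-polarized KTP (the paper uses full Sellmeier expansions, not reproduced here), so the computed period is only indicative:

```python
# Sketch: QPM period needed for a backward-signal BWOPO, from energy and
# momentum conservation. Refractive indices are assumed constants, not
# from the paper's Sellmeier fits.

def idler_wavelength(lam_p, lam_s):
    """Energy conservation: 1/lam_p = 1/lam_s + 1/lam_i (wavelengths in um)."""
    return 1.0 / (1.0 / lam_p - 1.0 / lam_s)

def qpm_period_backward_signal(lam_p, lam_s, n_p, n_s, n_i):
    """Momentum conservation with the signal counter-propagating:
    k_p - k_G = -k_s + k_i  =>  1/Lambda = n_p/lam_p + n_s/lam_s - n_i/lam_i."""
    lam_i = idler_wavelength(lam_p, lam_s)
    return 1.0 / (n_p / lam_p + n_s / lam_s - n_i / lam_i)

lam_p, lam_s = 1.064, 1.856            # um, values from the paper
n_p, n_s, n_i = 1.830, 1.789, 1.756    # assumed index values, not from the paper
lam_i = idler_wavelength(lam_p, lam_s)
L = qpm_period_backward_signal(lam_p, lam_s, n_p, n_s, n_i)
print(f"idler = {lam_i:.3f} um, QPM period = {1e3 * L:.0f} nm")
```

With these assumed indices the result lands near the ~509 nm period used in the paper, which is the point of the exercise rather than a precise reproduction.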
III. EXPERIMENTAL SETUP & RESULTS
For the BWOPO pumped at 1.064 µm and generating a backward signal at 1.856 µm we chose a PPRKTP periodicity of 509 nm. The fabrication process starts with interferometric UV-laser lithography together with lift-off in order to define an Al surface mask. A coercive-field grating is then created in the crystal by performing ion exchange through the Al mask. Afterwards, the metal is removed and periodic poling is achieved by applying 5 ms-long pulses of an electric field of 6.2 kV/mm. The periodic modulation of the coercive field in the volume close to the crystal surface is crucial, since it alleviates the fringing-field problem associated with periodic metal electrodes 23. The amount of pump energy steered towards the MOPO was varied with a half-wave plate and thin-film polarizer combination. The pump beam had an elliptical Gaussian spatial profile with M² values of 3.2 and 3.3 in the x and y directions, respectively. It was guided through a CaF2 mirror which was reflective at the signal wavelength. The pump beam was focused by a spherical CaF2 lens with a focal length of f = 250 mm. The resulting beam radii in the crystal were measured to be w0x = 298 µm and w0y = 297 µm (1/e²) with the travelling knife-edge method. The crystal was positioned with its center coinciding with the focus of the beam. The crystal was mounted on a holder with a Peltier element for temperature stabilization. The Peltier temperature was set to room temperature, around 21 °C. Lastly, a mirror highly reflective at the idler wavelength was placed after the crystal to separate the idler from the depleted pump.
The BWOPO started oscillating when the pump energy inside the crystal reached 1.5 mJ. This corresponds to a threshold intensity of 83 MW/cm². This threshold intensity is similar to those typically achieved in conventional OPOs employing PPKTP as the gain material 24. From the threshold intensity we can estimate 6 that the effective nonlinearity of the 7 mm-long structure was 7.5 pm/V, not far from the maximum value of 2d33/π = 9.8 pm/V 25. This is a good indication of the high quality of the QPM structure, considering that the crystal contains approximately 28000 periodically inverted domains, each nominally only 250 nm long. With such a low threshold intensity the BWOPO could readily be pumped up to an energy of about 6.5 mJ before reaching an energy fluence of 5 J/cm², which is half the optical damage threshold 17. The BWOPO output energy, efficiency and pump-depletion characteristics are shown in FIG. 4(a). Pumping with a maximum input energy of 6.48 mJ, output energies of 1.96 mJ and 1.46 mJ were reached for the signal and the idler, respectively. At the maximum pump energy, the total conversion of the device was measured to be 53%. Correspondingly, the pump depletion was measured to be 53.8%.
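As a rough plausibility check of the quoted threshold, one can estimate the peak on-axis intensity of an elliptical Gaussian beam from the threshold pulse energy. Approximating the peak power as pulse energy divided by pulse duration is our assumption (it ignores the temporal pulse shape), so the result agrees with the quoted 83 MW/cm² only in order of magnitude:

```python
import math

# Order-of-magnitude check of the threshold intensity.
# Assumptions (not from the paper's analysis): peak power = E/tau, and
# peak on-axis intensity of an elliptical Gaussian beam
# I0 = 2P / (pi * w0x * w0y).

E = 1.5e-3                   # threshold pump energy in the crystal, J
tau = 10e-9                  # Q-switched pump pulse duration, s (~10 ns)
w0x, w0y = 298e-6, 297e-6    # measured 1/e^2 beam radii, m

P_peak = E / tau                            # W
I0 = 2.0 * P_peak / (math.pi * w0x * w0y)   # W/m^2
print(f"peak intensity ~ {I0 * 1e-10:.0f} MW/cm^2")
```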
The temporal traces of the pump, the depleted pump and the forward-generated BWOPO idler at 2.495 µm, measured at an input pump energy of 6 mJ, are shown in FIG. 4(b). The traces were measured with a 2 GHz analog-bandwidth oscilloscope. For the pump measurement we employed a Si p-i-n diode with a rise time of 1 ns (Thorlabs), while the idler pulse was measured with a HgCdTe photoelectromagnetic detector (Vigo System) with a rise time below 1 ns. At this pump energy, the FWHM length of the BWOPO pulse is 9 ns. Here the BWOPO is operating 4 times above threshold. However, neither the temporal traces nor the efficiency graph in FIG. 4(a) show any signs of back-conversion. In standard OPOs employing co-propagating TWM, back-conversion is usually quite prominent at these pump levels, even in singly-resonant cavities. In a BWOPO the back-conversion process is strongly limited due to the inherent property of counter-propagating TWM which ensures that the maximum intensities of the signal and idler are reached at opposite ends of the nonlinear crystal 4,5. Spatial intensity profiles of the pump and the BWOPO beams were measured with the aid of a pyroelectric camera (Pyrocam III). In FIG. 7, we show the far-field spatial profiles of the pump and the idler at two different pump energies. Cut-on filters were used when measuring the idler and signal beams in order to prevent any residual pump-light exposure. The idler beam profile was similar to that of the input pump. In general, several factors can affect the spatial intensity distribution in parametric devices, e.g., the intensity distribution of the pump, its spatial phase distribution, back-conversion processes, spatial pump depletion and the homogeneity of the QPM structure. However, in our case, considering that our injection-seeded pump gives a beam with relatively high spatial coherence, that the QPM structure was homogeneous, and the virtual absence of back-conversion processes, the BWOPO spatial beam profile should mainly be determined by the pump intensity distribution and the spatial pump depletion. This is also the case for the signal beam profile, which showed a structure similar to that of the pump beam.
IV. CONCLUSION
In conclusion, in this work we demonstrated a BWOPO in PPRKTP where the higher-frequency wave, the signal, is generated in the direction opposite to that of the pump. A BWOPO with backward signal generation is beneficial if cascaded parametric oscillation processes need to be avoided. For pumping at 1.064 µm, achieving this oscillation regime required QPM periodicities shorter than 600 nm. A PPRKTP structure with a periodicity of 509 nm and a length of 7 mm showed an effective nonlinearity of 7.4 pm/V, which allowed reaching BWOPO oscillation thresholds comparable to those typically achieved in co-propagating OPOs using PPRKTP nonlinear crystals. Such low thresholds in turn make it possible to pump the BWOPO with injection-seeded Q-switched Nd:YAG laser sources for the generation of transform-limited pulses in the near- and mid-infrared. Owing to the low threshold, the BWOPO could reach an efficiency exceeding 53% at a pump energy fluence of half the optical damage threshold. Counter-propagating TWM in the BWOPO strongly suppresses back-conversion and multi-step χ(2):χ(2) processes, therefore preventing spectral broadening and deterioration of the spatial beam distribution, and allowing higher conversion efficiencies. With a total output of 3.42 mJ and a simple configuration, the BWOPO can be used for seeding narrowband high-energy optical parametric amplifiers, e.g., in differential absorption LIDARs. It should be noted that precise tunability over a range of hundreds of GHz can readily be achieved in the BWOPO 10 by simple angular rotation of the crystal.
The authors would like to acknowledge VR and the Swedish Foundation for Strategic Research for generous funding.
FIG. 1 .
FIG. 1. Dependence of the BWOPO signal wavelength on the QPM period Λ in KTP for a 1.064 µm pump. Red line: signal generated in the direction of the pump; black line: signal generated in the opposite direction. The signal and idler are always collinear and counter-propagating. The vertical line marks the signal wavelength generated in this work.
The procedure is described in more detail in Refs. 19, 22 and 23. The fabricated crystal had a homogeneous poled volume of 7 mm × 3 mm × 1 mm as measured along the crystal a, b, c axes, respectively.
FIG. 2 .
FIG. 2. Atomic force microscopy scans showing the etched relief of the domain structure in the (a) patterned face, (b) opposite polar face.
Energy spectrum of a doubly orbitally degenerate model with non-equivalent
In the present paper we investigate a doubly orbitally degenerate narrow-band model with correlated hopping. The peculiarity of the model is that it takes into account the matrix element of the electron-electron interaction which describes intersite hopping of electrons. In particular, this leads to a concentration dependence of the effective hopping integral. The cases of strong and weak Hund's coupling are considered. By means of a generalized mean-field approximation the single-particle Green function and quasiparticle energy spectrum are calculated. The metal-insulator transition is studied in the model at different integer values of the electron concentration. Using the obtained energy spectrum, we find criteria for the metal-insulator transition.
Both theoretical analysis [1][2][3] and available experimental data [4] point out that the Hubbard model [5] should be generalized by taking into account orbital degeneracy and correlated hopping. In the present paper we study the metal-insulator transition in the recently proposed [6] doubly orbitally degenerate narrow-band model with correlated hopping. The peculiarity of the model is the electron-hole asymmetry and the dependence of the hopping integral on the average number of electrons per site; thus the model shows much richer behaviour than, for example, the Hubbard model with double orbital degeneracy. The model Hamiltonian is written in terms of the following quantities: µ is the chemical potential; a†_iγσ, a_iγσ are the creation and destruction operators of an electron of spin σ (σ = ↑, ↓; σ̄ denotes the spin projection opposite to σ) on site i and in orbital γ (γ = α, β denotes the two possible orbital states); n_iγσ = a†_iγσ a_iγσ is the number operator of electrons of spin σ in orbital γ on site i, with n_iγ = n_iγ↑ + n_iγ↓; t_ij is the hopping integral of an electron from the γ-orbital of site j to the γ-orbital of site i (we neglect electron hoppings between the α- and β-orbitals); t′_ij (t″_ij) includes the influence of an electron on the γ (γ̄)-orbital of site i or j on the hopping process; the prime at the second sum in equation (1) signifies that i ≠ j; U is the intra-atomic Coulomb repulsion of two electrons of opposite spins on the same orbital (assumed equal for the α- and β-orbitals); U′ is the intra-atomic Coulomb repulsion of two electrons of opposite spins on different orbitals; J is the intra-atomic exchange interaction energy, which stabilizes the Hund's states forming the atomic magnetic moments; and the effective hopping integral t_ij(n) = t_ij + nT_1(ij) is concentration-dependent due to the correlated hopping T_1(ij).
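The concentration dependence of the effective hopping, t_ij(n) = t_ij + nT_1(ij), is the model's key peculiarity and can be illustrated with a trivial numerical sketch (all parameter values below are illustrative, not taken from the paper):

```python
# Sketch: effective hopping and the half bandwidth w = z*|t(n)| as a
# function of electron concentration n, for assumed bare hopping t,
# correlated-hopping amplitude T1 and coordination number z.

def effective_hopping(t, T1, n):
    """t_ij(n) = t_ij + n * T1(ij)."""
    return t + n * T1

t, T1, z = -1.0, 0.2, 6   # illustrative values
for n in (1, 2, 3):
    tn = effective_hopping(t, T1, n)
    w = z * abs(tn)
    print(f"n = {n}: t(n) = {tn:+.2f}, w = {w:.2f}")
```

With these signs the band narrows as the concentration grows, illustrating how correlated hopping makes the subbands non-equivalent in width.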
The Hamiltonian (1) describes a model with non-equivalent subbands (the analogues of the Hubbard subbands). The non-equivalence of the subbands leads to different subband widths and different values of the density of states within the subbands. At the same time, the density of states within each subband is symmetrical. As a consequence, the chemical potential lies between the subbands at integer values of the electron concentration n = 1, 2, 3. In these cases, in the model described by the Hamiltonian (1), the metal-insulator transition (MIT) can occur.
1. Let us consider the case of strong intra-atomic Coulomb interaction, U′ ≫ t_ij, and strong Hund's coupling, U′ ≫ U′ − J (the values U′ and J being of the same order). These conditions allow us to neglect the site states with more than two electrons as well as the "non-Hund's" doubly occupied states (analogous conditions are used for the investigation of the magnetic properties of the Hubbard model with twofold orbital degeneracy in [7][8][9]). Thus, each lattice site can be in one of seven possible states: a hole (a site not occupied by an electron); a singly occupied site; or a Hund's doublon (a site with two electrons on different orbitals with the same spin).
Using the method of works [10][11][12][13] we obtain the energy gap (here we neglect the correlated hopping), where w = z|t|, z is the number of nearest neighbours of a site and c is the hole concentration. At T = 0 K, the MIT occurs when (U′ − J)/(2w) = 0.75. The energy gap width ∆E as a function of the parameters (U′ − J)/(2w) and (kT)/(2w) is presented in figure 1 and figure 2, respectively. With a change of the parameter (U′ − J)/(2w) the system undergoes a transition from an insulating to a metallic state (negative values of the energy gap width correspond to an overlap of the Hubbard subbands). In the model under consideration at T = 0 K, the insulator-metal transition at n = 1 occurs when (U′ − J)/(2w) = 0.75 (figure 1, the lower curve).
The transition from a metallic to an insulating state with increasing temperature at a given value of the parameter (U′ − J)/(2w) is also possible (figure 2). It can be explained by the fact that the energy gap width ∆E given by equation (2) increases with increasing temperature T, which is caused by the rise of the polar-state concentration at constant w and (U′ − J).
2. The exchange interaction splits some of the bands. If the exchange interaction is small compared to the Coulomb interaction, J ≪ U, then the splitting is small and leads only to a weak broadening of the bands. Since we calculate the width of the energy gap, we can take the effect of J into account by an appropriate shift of the band center, resulting from the inclusion of J in the chemical potential by means of the mean-field approximation (see, e.g., [6,14]).
To describe the MIT at electron concentration n, we can retain in the Hamiltonian only the site states with n − 1, n and n + 1 electrons (an analogous simplification has been used in [15,16]). In the vicinity of the transition point at the electron concentration n = 1, the concentrations of sites occupied by three and four electrons are small, and we can neglect these sites. For the calculation of the single-particle Green functions we use the generalized mean-field approximation [10]. After the transition to k-representation, we obtain the quasiparticle energy spectrum, with c, b, d being the concentrations of holes and of sites occupied by one and two electrons, respectively, connected by the relations c = 6d, b = 1/4 − 3d. At the transition point, when the concentrations of holes and doublons are equal to zero, the energies of the electrons within the subbands simplify; from equations (4) we obtain the criterion of the MIT. With an increase of the correlated hopping at a fixed value of the parameter U/2w, the energy gap width increases and the region of values of U/2w at which the system is in a metallic state decreases. In the partial case t′_k = t″_k = 0 we find U_c/2w = 1. Let us now consider the MIT at electron concentration n = 2. In the vicinity of the transition point in the case of two electrons per atom, the concentrations of holes and of sites occupied by four electrons are small. For small values of the intra-atomic exchange interaction (J ≪ U) we take J into account analogously to the case of n = 1. To calculate the single-particle Green functions we use the generalized mean-field approximation. After the transition to k-representation, we obtain the quasiparticle energy spectrum. At the transition point, when the concentrations of singly and triply occupied sites are equal to zero, the quasiparticle energy spectrum simplifies. Using the quasiparticle energy
spectrum (6), we find the energy gap width. At the point of the MIT the energy gap is equal to zero; from this condition we find the criterion of the MIT. With an increase of the correlated hopping at a fixed value of the parameter U/2w, the energy gap width increases faster than at n = 1, and the region of values of U/2w at which the system is in the metallic state decreases, analogously to the case n = 1. In the partial case of t′_k = t″_k = 0 (in this case t*_k = t_k) we find U_c/2w = 2√2/3. In a similar way, we consider the case of electron concentration n = 3. In the vicinity of the transition point in the case of three electrons per atom, the concentrations of holes and of sites occupied by one electron are small. Neglecting these sites, we can calculate the single-particle Green functions analogously to the above. We find the values of ε(k), ε̄(k), ζ(k), ζ̄(k) using the mean-field approximation. They are functions of t_k and of d, t, f, the concentrations of sites occupied by two, three and four electrons, respectively, connected by the relations f = 6d, t = 1/4 − 3d.
At the transition point, when the concentrations of holes and single electrons are equal to zero, the energies of the electrons within the subbands simplify. From equation (7) we obtain the criterion of the MIT at the electron concentration n = 3. With an increase of the correlated hopping at a fixed value of the parameter U/2w, the energy gap width increases faster than at n = 1 and n = 2, and the region of values of U/2w at which the system is in a metallic state decreases. In the partial case t′_k = t″_k = 0 (in this case t_k = t̄_k) we have U_c/2w = 1. This result coincides with the corresponding critical value at the electron concentration n = 1 due to the electron-hole symmetry of the model without correlated hopping.
The peculiarities of the expressions for the quasiparticle energy spectrum are their dependences on the concentration of polar states (holes and doublons at n = 1; singly and triply occupied sites at n = 2; doublons and sites occupied by four electrons at n = 3) and on the hopping integrals (and thus on external pressure). At given values of U and the hopping integrals (constant external pressure), the concentration dependence of ∆E permits the study of the MIT under the action of external effects. In particular, the ∆E(T) dependence can lead to a transition from a metallic to an insulating state with increasing temperature (see figure 4). Such a transition is observed, in particular, in the (V1−xCrx)2O3 compound [4,17] and the NiS2−xSex system [18,19]. A similar dependence of the energy gap width can be observed when the polar-state concentration changes under the action of the photoeffect or a magnetic field. A strong magnetic field can lead, for example, to a decrease of the polar-state concentration (see [20]), initiating a transition from a paramagnetic insulating state to a paramagnetic metallic state. An increase of the polar-state concentration under the action of light stimulates the metal-insulator transition, analogously to the influence of a temperature change. With an increase of the bandwidth (for example, under the action of external pressure or composition changes) an insulator-to-metal transition can occur.
If the correlated hopping is absent, in the case n = 2 the MIT occurs at a smaller value of U/2w than in the case n = 1 (figure 3). This result is in qualitative accordance with the results of work [14], in distinction from [16,21]. Using the critical values of the parameter U/(2w) at which the MIT occurs for different integer electron concentrations (see figure 3), we can interpret the fact that, in the series of disulphides MS2, the CoS2 (one electron within the e_g band, corresponding to n = 1) and CuS2 (three electrons within the e_g band, corresponding to n = 3) compounds are metals, whereas the NiS2 compound (two electrons within the e_g band, corresponding to n = 2) is an insulator. Indeed, for 0.94 ≲ U/2w ≲ 1 at the electron concentration n = 2 the system described by the present model is an insulator, whereas for the same values of the parameter U/2w at the electron concentrations n = 1, 3 the system is a metal (according to the calculations of [22], the ratios U/2w in these compounds have close values).
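The argument for the MS2 series can be restated as a tiny sketch using the zero-correlated-hopping critical values quoted in the text; the chosen U/2w value is illustrative, picked inside the window where only n = 2 is insulating:

```python
import math

# Critical values U_c/2w at zero correlated hopping, as given in the text:
# n = 1 and n = 3 give U_c/2w = 1 (electron-hole symmetry), while n = 2
# gives U_c/2w = 2*sqrt(2)/3 ~ 0.943.
U_CRIT = {1: 1.0, 2: 2.0 * math.sqrt(2.0) / 3.0, 3: 1.0}

def phase(u_over_2w, n):
    """'metal' if U/2w is below the critical value for filling n, else 'insulator'."""
    return "metal" if u_over_2w < U_CRIT[n] else "insulator"

# For 0.94 < U/2w < 1 the n = 2 system is insulating while n = 1, 3 are
# metallic, matching CoS2 (n = 1), NiS2 (n = 2) and CuS2 (n = 3).
u = 0.97  # illustrative value inside that window
for n, compound in [(1, "CoS2"), (2, "NiS2"), (3, "CuS2")]:
    print(compound, phase(u, n))
```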
We have found that in the case of strong Hund's coupling at n = 1, the metal-insulator transition occurs at a smaller value of the parameter, ((U − J)/2w)_c = 0.75, than in the case of weak Hund's coupling, ((U − J)/2w)_c = 1.
At nonzero values of the correlated hopping, the point of the MIT moves towards values of the parameter U/2w at which the system is a metal (figure 4). The non-equivalence of the cases n = 1 and n = 3 is a manifestation of the electron-hole asymmetry which is characteristic of models with correlated hopping.
Thus, both orbital degeneracy and correlated hopping are the factors favouring the transition of the system to an insulating state in the case of half-filling with the increase of intra-atomic Coulomb repulsion in comparison with the single-band Hubbard model.
… and b, d, where b is the concentration of sites occupied by one (or three) electrons and d is the concentration of doubly occupied sites, connected by the relation b = (1 − 8d)/6.
Figure 3. The electron concentration vs. interaction strength phase diagram showing the paramagnetic metal (PM) and paramagnetic insulator (PI) phases in the absence of correlated hopping.
Research and Development of Electrostatic Accelerometers for Space Science Missions at HUST
High-precision electrostatic accelerometers have achieved remarkable success in satellite Earth gravity field recovery missions. Ultralow-noise inertial sensors play important roles in space gravitational wave detection missions such as the Laser Interferometer Space Antenna (LISA) mission, and key technologies have been verified in the LISA Pathfinder mission. Meanwhile, at Huazhong University of Science and Technology (HUST, China), a space accelerometer and inertial sensor based on capacitive sensors and the electrostatic control technique have also been studied and developed independently for more than 16 years. In this paper, we review the operational principle, application, and requirements of the electrostatic accelerometer and inertial sensor in different space missions. The development and progress of a space electrostatic accelerometer at HUST, including ground investigation and space verification are presented.
Introduction
The history of high-precision space accelerometers with a small measurement range but high resolution dates back to the 1950s, when they were originally used to monitor the motion of satellites for the purpose of satellite control [1] in space-environment investigation missions. In the 1960s, the technique of controlling a satellite to follow a geodesic line in space under the guidance of a space inertial sensor was originally proposed at Stanford University [2]. The first drag-free satellite, TRIAD, with a target drag-free level of 10 −1 m/s 2 , was launched in 1972, and its most important experimental payload was an inertial sensor adopting an electrostatic scheme, designated the DISturbance COmpensation System (DISCOS) [3,4]. In the early 1970s, an electrostatic accelerometer designated Capteur Accelerometrique Triaxial Ultra Sensible (CACTUS), with a designed resolution of 10 −9 m/s 2 , was developed by the Office National d'Etudes et de Recherches Aérospatiales (ONERA, France) and was used to study drag forces acting on the satellite, such as residual gas drag and solar radiation pressure [5,6].
In early applications of the electrostatic accelerometer in space, a spherical proof mass (PM) was selected for its advantage of not requiring rotational control, so that servo control could be easily realized. However, for an electrostatic accelerometer with a spherical PM, the accuracy and linearity are very poor, the cross-talk between different axes is very significant, and it cannot be used to monitor the rotational motions of a satellite. Because of these disadvantages, ONERA began to consider cubic PMs in the development of accelerometers. A typical example is the electrostatic accelerometer Accelerometre Spatial Triaxial Electrostatique (ASTRE), applied to monitor the motions of a spacecraft [7]. Based on the successful development and application of ASTRE, some similar accelerometers with minor variations or adjustments were developed for different missions, such as the STAR, SuperSTAR, and GRADIO accelerometers employed in the satellite Earth gravity field recovery missions CHAllenging Minisatellite Payload (CHAMP), Gravity Recovery And Climate Experiment (GRACE), and Gravity field and steady-state Ocean Circulation Explorer (GOCE), respectively [8]. In the United States, some other electrostatic accelerometers have also been developed separately, such as the Miniature Electrostatic Accelerometer (MESA), the purpose of which was to monitor the motions of the Space Shuttle along three translational axes. The MESA was developed by Bell Aerosystems, which selected a thin-walled cylinder with a thin central flange as the proof mass [9]. In the Swarm mission, launched on 22 November 2013 to study the Earth's magnetic field, several onboard accelerometers were used to observe the non-gravitational accelerations derived from the thermosphere's density and wind [10].
Working in a different mode, the electrostatic accelerometer can also be used as a geodesic reference and be nominally called an inertial sensor in space gravitational wave detection missions such as LISA [11]. In order to verify the key technologies affecting the performance of the inertial sensor, a technology demonstration mission designated LISA Pathfinder was launched on 3 December 2015. The results of LISA Pathfinder indicate that the performance of the inertial sensor has already achieved the requirement of LISA [12]. An electrostatic accelerometer is also quite useful in fundamental physics, such as the MICROSatellite pour l'Observation du Principe d'Equivalence (MICROSCOPE) mission, which is aimed at testing the weak equivalence principle (WEP) in space down to an accuracy of 10 −15 . MICROSCOPE was proposed by ONERA and the Centre d'Etudes et de Recherches en Geodynamique et astrometrie (CERGA, France) and was launched on 25 April 2016. The electrostatic accelerometer is the most important payload for detecting the weak force due to the possible deviation of WEP in this mission [13].
Since 2000, a space electrostatic accelerometer has been under development at Huazhong University of Science and Technology (HUST), China, in order to facilitate space missions such as Test of the Inverse-Square law in Space (TISS) [14], Test of Equivalence Principle in space with Optical readout (TEPO) [15], space gravitational wave detection (TianQin) [16], and native satellite gravity measurement in China [17]. In this review, the basic principle of the accelerometer and the typical applications of a space accelerometer for different space missions are introduced and discussed. The details and progress of the development of the electrostatic accelerometer at HUST are then presented. Specifically: (1) the low-noise capacitive displacement sensor and electrostatic actuators that meet the requirements of the above-mentioned missions have been achieved; (2) the high-voltage levitation and fiber suspension facilities to investigate the function and performance of electrostatic accelerometers on the ground have been developed and constructed; and (3) the experimental results of a flight model tested in orbit for more than three years have been analyzed.
Principle of the Accelerometer
An accelerometer consists of at least a PM and a frame surrounding the PM, and its function is based on Newton's Second Law. As the PM and the frame are subjected to different forces, the relative displacement between them will vary with time.
For an open-loop mechanical accelerometer with a spring linkage, as illustrated in Figure 1, the PM is isolated from external forces in ideal conditions, the relative displacement is proportional to the acceleration of the frame with respect to the local inertial reference frame, and its sensitivity at very low frequencies is inversely proportional to the square of the natural frequency of the spring-mass oscillator system. In Figure 1, x and y represent the displacements of the PM and the frame with respect to the local inertial reference frame, respectively, and x_r = x − y represents the relative displacement between the PM and the frame. Neglecting the damping of the spring oscillator, the equation of motion of the PM is given by:

ẍ_r + ω_0^2 x_r = −ÿ  (1)

where ω_0 is the natural angular frequency of the spring-mass oscillator.
From Equation (1), we obtain that at low frequencies (ω << ω_0) the acceleration of the frame is determined by the deflection of the spring, x_r:

ÿ ≈ −ω_0^2 x_r

It is obvious that there are two ways to improve the intrinsic detection capability of this type of accelerometer: the first is to improve the resolution of the position transducer, and the second is to decrease the natural frequency of the oscillator.
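The low-frequency behaviour of this spring-mass accelerometer can be illustrated with a minimal numerical sketch; the natural frequency, drive frequency, and acceleration amplitude below are assumed for illustration only, not instrument values:

```python
import math

def relative_displacement(a_frame, omega_0, omega):
    """Steady-state amplitude of the relative displacement x_r for a frame
    acceleration of amplitude a_frame at angular frequency omega, from the
    undamped equation of motion (damping neglected, as in the text)."""
    return a_frame / abs(omega_0**2 - omega**2)

omega_0 = 2 * math.pi * 10.0   # assumed 10 Hz natural frequency
a = 1e-6                       # assumed frame acceleration amplitude, m/s^2

# Well below resonance the deflection is ~ a / omega_0**2, independent of omega:
x_low = relative_displacement(a, omega_0, 2 * math.pi * 0.01)

# Halving the natural frequency quadruples the low-frequency sensitivity:
x_soft = relative_displacement(a, omega_0 / 2, 2 * math.pi * 0.01)
print(x_soft / x_low)   # ~ 4
```

This quadratic gain in sensitivity from a softer oscillator is exactly why the mechanical spring is replaced by softer suspension schemes, as discussed next.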
Therefore, the mechanical spring linkage is usually replaced by softer, low-dissipation suspension schemes. Thus, high-precision accelerometers with pico-g or even better resolution based on the electrostatic control scheme [18] and the superconducting magnetic suspension scheme have been achieved [19]. In addition, some other schemes, such as optical and atomic techniques, are being considered [20]. The softer linkage can not only improve the sensitivity but also suppress the back-action effect due to relative motion fluctuation or displacement sensing noise. However, for such supersoft-linking-type accelerometers, the PM and the frame must in general be servo-controlled to remain motionless with respect to each other.
A schematic of the HUST closed-loop electrostatic accelerometer is shown in Figure 2. It consists of a sensor head, a displacement transducer, a controller, and an actuator. The sensor head consists of a PM and the surrounding electrode cage, where the six-degree-of-freedom (DoF) motions of the PM with respect to the electrode cage are measured by a six-channel capacitive displacement transducer, and the low-frequency feedback voltages calculated by the controller are then applied to the electrodes by drive-voltage amplifiers. Finally, the PM is controlled to remain motionless with respect to the cage. In this case, the feedback voltage indicates the differential forces acting on the PM and the satellite, so that we can obtain the gravitational gradient or the non-gravitational forces acting on the satellite, such as air drag and light pressure. Here, a DC bias voltage, V_b, and a high-frequency (i.e., 50-100 kHz) pumping voltage, V_p, are applied to the PM by a soft metal wire, which help to linearize the electrostatic actuator and drive the capacitance transform bridge, respectively.
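The closed-loop principle can be sketched as a toy discrete-time servo (this is not the HUST controller; the gains, time step, and input acceleration are assumed for illustration): a PD loop nulls the PM displacement, and at equilibrium the feedback acceleration equals the acceleration acting on the frame, which is how the feedback voltage becomes the measurement.

```python
def servo(a_applied, kp=4000.0, kd=400.0, dt=1e-3, steps=20000):
    """Toy PD servo: the feedback acceleration a_fb = kp*x + kd*v nulls
    the PM displacement x relative to the cage; returns the residual
    displacement and the final feedback acceleration."""
    x = v = a_fb = 0.0
    for _ in range(steps):
        a_fb = kp * x + kd * v          # feedback from the electrodes
        v += (a_applied - a_fb) * dt    # net acceleration on the PM
        x += v * dt
    return x, a_fb

x_res, a_meas = servo(a_applied=1e-7)
print(x_res)    # residual displacement driven toward a_applied / kp
print(a_meas)   # feedback acceleration tracks the applied acceleration
```

At the fixed point the velocity is zero and kp·x = a_applied, so the readout recovers the applied acceleration while the PM stays essentially motionless, as in the closed-loop scheme above.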
The measurement of the electrostatic accelerometer can be expressed as:

a_elec = a_sc + a_n,acc

where a_elec is the acceleration applied to the PM by the electrostatic suspension, which can be calculated by multiplying the measured feedback voltage V_fed by the calibrated transfer function of the electrostatic actuator, H_a; a_sc represents the acceleration of the spacecraft; and a_n,acc is the residual acceleration disturbance, given by:

a_n,acc = a_ext,n + a_thermal + (ω^2 + ω_e^2) x_n + H_a V_out,n

where a_ext,n represents the external disturbances acting on the PM induced by the space and spacecraft environment; a_thermal is the damping effect of the metal wire linked to the PM; x_n and V_out,n represent the displacement noise and readout voltage noise, respectively; ω is the natural angular frequency; and ω_e is the parasitic angular frequency associated with a negative stiffness due to the back-action of the capacitive transducer, which mainly depends on the bias voltage V_b, the capacitance C_0, the gap d_0, and the mass of the PM.
This indicates that the electrostatic accelerometer must work in a closed-loop configuration. The intrinsic noise of the electrostatic accelerometer itself is mainly limited by the displacement and readout voltage noises, but in actual applications the external disturbances must also be considered, such as electromagnetic and thermal environmental noises, the coupling influence from the spacecraft, and so on.
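Assuming the individual disturbance terms are statistically independent, they combine in quadrature into a residual-acceleration budget; the spectral densities and parameter values below are placeholders for illustration, not HUST or mission figures:

```python
import math

# Placeholder amplitude spectral densities and parameters (all assumed):
f = 1e-3                        # analysis frequency, Hz
omega = 2 * math.pi * f         # natural angular frequency term
omega_e = 2 * math.pi * 0.05    # parasitic (negative-stiffness) frequency magnitude
x_n = 1e-10                     # displacement sensing noise, m/Hz^0.5
v_out_n = 1e-6                  # readout voltage noise, V/Hz^0.5
H_a = 1e-4                      # actuator transfer function, (m/s^2)/V
a_ext = 1e-10                   # environmental disturbances, m/s^2/Hz^0.5
a_thermal = 5e-11               # wire-damping thermal noise, m/s^2/Hz^0.5

# Quadrature sum of the terms in the residual-acceleration expression:
terms = [a_ext,
         a_thermal,
         (omega**2 + omega_e**2) * x_n,
         H_a * v_out_n]
a_n_acc = math.sqrt(sum(t**2 for t in terms))
print(a_n_acc)   # total residual acceleration noise, m/s^2/Hz^0.5
```

With these assumed numbers the readout-voltage and external-disturbance terms dominate, which mirrors the statement above that the intrinsic noise is mainly limited by displacement and readout voltage noises.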
When the PM is servo-controlled to follow the spacecraft, this is called the acceleration measurement mode. It has been successfully used in the CHAMP, GRACE, and GOCE missions. When, instead, the PM operates close to the free-falling state and acts as the geodesic reference, the spacecraft can follow the PM by employing the thruster drag-free control system [3]. We call this scheme the geodesic reference mode, which is usually adopted in high-precision gravitational experiments in space to improve the microgravity level of a spacecraft. In this mode, the non-gravitational forces acting on the spacecraft, such as the atmospheric drag, the solar radiation, and so on, will be compensated by the thrusters; that is, the spacecraft will follow the PM, and the residual acceleration disturbance of the PM, a_n,gr, can be given by:

a_n,gr = a_ext,n + (ω^2 + ω_e^2)(x_n + x_df,n)

where x_df,n represents the control level of the spacecraft under drag-free control, which is influenced by the external forces acting on the spacecraft and the control-loop gain. This operation mode is mainly suitable for space gravitational wave detection missions such as LISA. In the LISA Pathfinder mission, the relative acceleration noise between two free-falling reference proof masses on one satellite is measured [12].

Application in the Measurement of the Earth's Gravity Field
In the acceleration measurement mode, the largest successful application of space electrostatic accelerometers is in global Earth gravity field recovery missions. The accelerometer, as a force probe, is used to measure the non-gravitational force acting on the satellite. For example, in order to recover the parameters of the global Earth gravity field, the Global Navigation Satellite System (GNSS, including GPS and BeiDou) and inter-satellite ranging (using microwave or laser measurement) are used to measure the position and time variation of the trajectories of the satellites, with the aim of deducing the total force acting on the satellites. High-precision accelerometers, such as STAR in the CHAMP mission and SuperSTAR in the GRACE mission, simultaneously measure the non-gravitational forces acting on the satellites. The difference between the total force and the non-gravitational force is the gravitational force, described by the parameters of the global gravity field.
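The recovery scheme above amounts to a per-axis subtraction of the measured non-gravitational acceleration from the orbit-derived total acceleration; the vectors here are purely illustrative numbers, not mission data:

```python
def gravitational_accel(a_total, a_nongrav):
    """Gravitational acceleration = total (from orbit determination)
    minus non-gravitational (from the onboard accelerometer), per axis."""
    return [t - n for t, n in zip(a_total, a_nongrav)]

a_total = [8.2, 1.2e-5, -3.0e-6]       # m/s^2, from GNSS/ranging (illustrative)
a_nongrav = [2.0e-7, -5.0e-8, 1.0e-8]  # m/s^2, from the accelerometer (illustrative)
a_grav = gravitational_accel(a_total, a_nongrav)
print(a_grav)
```

The gravity-field parameters are then fit to many such gravitational-acceleration samples along the orbit.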
The German CHAMP mission, launched in July 2000 at an initial altitude of 454 km, measured the global magnetic and gravity fields and the Earth's atmosphere until September 2010. The three-axis STAR accelerometer is integrated at the center of mass of the satellite, presents a measurement range of ±10 −4 m/s 2 , and exhibits a resolution of better than 3 × 10 −9 m/s 2 /Hz 1/2 for the highly sensitive axes within the measurement bandwidth from 0.1 mHz to 0.1 Hz [8]. The subsequent GRACE mission, consisting of two identical satellites separated by approximately 220 km on the same quasi-circular orbit, was launched on 17 March 2002 at an initial altitude of 500 km. Taking advantage of the CHAMP mission experience, the SuperSTAR accelerometer is similar to the STAR accelerometer but with a measurement noise level one order of magnitude better, leading to a noise level of 1 × 10 −10 m/s 2 /Hz 1/2 along the highly sensitive axes with a reduced range of ±5 × 10 −5 m/s 2 [21,22]. The GRACE Follow-on mission will use the same method to map gravitational fields; it is scheduled for launch in 2018. The two GRACE Follow-on satellites will use the same kind of microwave ranging system as GRACE, but they will simultaneously demonstrate laser ranging with approximately 20 times the resolution of microwave ranging. The accelerometer will realize a noise level better than 10 −10 m/s 2 /Hz 1/2 , and special designs are being considered to improve the thermal characteristics of the accelerometer [23].
The GOCE mission is the first mission to use the concept of satellite gravity gradiometry in space to obtain higher harmonics of the Earth's gravity mapping. The electrostatic gravity gradiometer (EGG) on the GOCE mission, constructed with three pairs of three-axis electrostatic accelerometers, was designed to measure the gradient components of the Earth's gravity field. Each pair of accelerometers is identical, separated by approximately 0.5 m. The GOCE satellite was launched in March 2009, and the in-orbit data shows that the six accelerometers are fully operational as drag compensation sensors as well as serving as scientific instruments. After being calibrated carefully through a series of methods, the gradiometer reached a 10-20 mE/Hz 1/2 accuracy at tens of mHz and an outstanding accelerometer in-orbit noise level of approximately (3-6) × 10 −12 m/s 2 /Hz 1/2 [24]. Even then, the major error sources come from the intrinsic noise of the electrostatic accelerometer and coupling from the satellite environment. Thus, in order to further improve the resolution, two schemes were designed by reducing the dynamic range and choosing a much heavier PM to suppress thermal noise limited by the discharging gold wire. A higher resolution, of approximately 7 × 10 −13 m/s 2 /Hz 1/2 , could then be achievable in future satellite gradiometry missions [25].
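The gradiometry principle used in GOCE can be sketched as a finite difference of two accelerometer readings over their baseline; the 0.5 m separation is from the text, while the differential acceleration value below is an assumed illustrative number of the right order for the Earth's vertical gradient (~3000 E):

```python
def gradient_eotvos(a1, a2, baseline_m):
    """Approximate gravity-gradient component from the differential
    acceleration of an accelerometer pair, in eotvos (1 E = 1e-9 /s^2)."""
    return (a1 - a2) / baseline_m / 1e-9

# Two accelerometers 0.5 m apart seeing ~1.5e-6 m/s^2 of differential
# acceleration correspond to a gradient of about 3000 E:
gamma = gradient_eotvos(2.0e-6, 0.5e-6, 0.5)
print(gamma)   # ~ 3000 E
```

The 10-20 mE/Hz^1/2 gradiometer accuracy quoted above therefore demands that each accelerometer resolve differential accelerations many orders of magnitude below this signal level.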
Regarding the Gravity Recovery and Interior Laboratory (GRAIL) mission [26], the purpose of which is to map the Moon's gravity field, the satellite gravity gradiometry method can also be considered. It can determine the Moon's gravity field with a higher resolution and obtain the medium- and short-wavelength component information with greater accuracy. A model with a high accuracy of 14 mGal and a geoid with an accuracy of 20.5 cm can be realized with a gradiometer accuracy level of approximately 30 mE/Hz 1/2 [27].
Application in Space Gravitational Wave Detection
In space gravitational wave detection missions, the inertial sensor works in "geodesic reference mode" and plays the role of a gravitational probe [28]. In order to detect gravitational waves, the PM of the inertial sensor acts as not only an object responding to the time-space variance induced by the passage of the gravitational waves but also as a free-falling reference with which to guide the control of the spacecraft with micro-Newton thrusters. According to the requirement of the LISA mission, the residual disturbance of the PM should be controlled below 3 × 10 −15 m/s 2 /Hz 1/2 in the measurement bandwidth from 0.1 mHz to 0.1 Hz [29]. The LISA mission is a joint European Space Agency/U.S. National Aeronautics and Space Administration (ESA/NASA) mission for detecting low-frequency gravitational waves in space, which have been studied since 1993 [30]. In response to the call of the ESA for L3 mission concepts, the LISA Mission consortium submitted the proposal for LISA on 13 January 2017. LISA consists of a triangular formation with three spacecraft in an Earth-trailing heliocentric orbit separated by 2.5 million km [31]. In each spacecraft, there are two inertial sensors with two PMs inside, which will provide the reference frame for the satellite and guide the drag-free control system to compensate for the non-gravitational force acting on the satellite using a thruster array. As gravitational waves pass the triangle, they will squeeze and stretch the space between the separations, and they can then be detected by delicate laser interferometers, which continuously monitor the tiny changes in the long separations at a level of tens of picometers.
LISA Pathfinder is a pioneer mission that was proposed in 1998 to test key technologies such as the inertial sensor, laser interferometer, micro-Newton thrusters, and drag-free control for the LISA mission [32]; it was launched on 3 December 2015. LISA Pathfinder carries two payloads, the European-provided LISA Technology Package (LTP) and the NASA-provided Disturbance Reduction System (DRS). In LISA Pathfinder, one laser arm is effectively reduced to approximately 38 cm inside a single spacecraft. The two cubic PMs both serve as mirrors for the laser interferometer, and one PM serves as an inertial reference for the drag-free control system of the spacecraft, which will be used for the LISA mission. The position and attitude of the PMs are controlled by a combination of the inertial sensors and spacecraft micro-thruster drag-free control. The in-orbit results show that the relative acceleration noise level is approximately 5 fm/s 2 /Hz 1/2 between 0.7 and 20 mHz [12], which is better than the expectation of LISA Pathfinder. With the improvement of the vacuum of the PMs and temperature stabilization, much better results should be reported quickly.
The TianQin mission is a new proposal for a spaceborne detector of gravitational waves in the mHz frequency range [16]. An illustration of the preliminary concept of the TianQin mission is shown in Figure 3, in which three identical spacecraft form a nearly equilateral triangle in geocentric orbits with a semi-major axis at the 10 5 km level. Each of the spacecraft will be equipped with two free-falling PMs inside. The key technologies rely on two components: the laser interferometer and the disturbance reduction system. The primary mission goal is to detect gravitational waves with anticipated properties from a single well-understood and easily accessible reference source, such as the ultracompact binary white dwarf RX J0806.3 + 152. All the aspects of the experiment are optimized using properties of a tentative reference source, and the present results show that the requirement for the residual acceleration is 10 −15 m/s 2 /Hz 1/2 at approximately 6 mHz. Some detailed designs of the mission, such as the scheme for the inertial sensing system of the disturbance reduction system, are still incomplete and will be fully demonstrated and confirmed as development proceeds.
As a key payload in spaceborne gravitational wave detectors, an extremely high requirement has arisen for the inertial sensor, which is definitely beyond the experience accrued from any existing missions. To develop and demonstrate a high-precision inertial sensor, a few delicate torsion pendulums have been constructed and developed [33][34][35], and a number of noise sources have been carefully studied, such as cross-coupling [36], thermal noise [37], and surface potential difference [38]. The qualification of the inertial sensor for such a noise level depends on its operation in space.
The Trento group has been engaged in developing the inertial sensor for spaceborne gravitational missions like LISA and LISA Pathfinder. Several highly sensitive torsion pendulums have been developed in order to estimate the upper limit of the noise and characterize noise sources experimentally in the laboratory [39], where a gold-coated PM was suspended by a tungsten or fused silica fiber with a torque sensitivity of approximately 1 fNm/Hz 1/2 at mHz frequencies [37]. The PM is hollow in order to maximize the sensitivity of the torsion pendulum to the torque noise arising from the surface effects of the PM. With the above-described weak force measurement facility, a few interesting effects have been carefully investigated, such as the electrostatic stiffness and the dielectric dissipation in the conductive surface. A few possible upgrades of the torsion pendulum are under study, with the goal of trying to meet the verification demand for the advanced inertial sensor for the LISA mission in the laboratory in the near future. A torsion pendulum has also been developed to investigate residual disturbances of the PM, such as patch effects.
A torsion pendulum has been built at the University of Washington (UW) to measure the surface-potential variations between two gold-coated surfaces, with a noise level of approximately 30 µV/Hz 1/2 at frequencies above 0.1 mHz [40]. In the interest of realizing further improvements, researchers at UW have used an ultraviolet LED to demonstrate both charging and discharging of the pendulum.
In addition, a few other projects have been proposed or have achieved testing of the gravitational law and have searched for new interactions using the inertial techniques, including Relativity Mission Gravity Probe B (GP-B) [41], Astrodynamical Space Test of Relativity using Optical Devices (ASTROD) [42], and DECi-hertz Interferometer Gravitational wave Observatory (DECIGO) [43].
Application in the Test of the Equivalence Principle in Space
The equivalence principle (EP), as a basic hypothesis of general relativity, has been an attractive test object for experimental scientists since it was first put forward. To improve the level of EP tests using the space environment, the Satellite Test of the Equivalence Principle (STEP) mission was proposed in 1972 at Stanford University [44], followed by other similar missions, including Galileo Galilei (GG), MiniSTEP, and QuickSTEP, proposed by different organizations [45]. MICROSCOPE was proposed in 1999 by ONERA and launched on 25 April 2016. It is the first mission specifically designed to test the EP in space at the 10 −15 level, which is 2 orders of magnitude better than current ground-based experiments; this could allow us to rule out theories beyond general relativity that predict a WEP violation at approximately the 10 −15 level, or to point beyond general relativity if a WEP violation is detected. In the MICROSCOPE mission, the motions of the two PMs, which have different compositions, i.e., Pt and Ti, are monitored by capacitive transducers and controlled to be motionless by electrostatic forces [46]. The feedback electrostatic forces applied to the two PMs yield the differential acceleration along the sensitive axis that would correspond to a violation of the EP. The designed resolution of the differential T-SAGE accelerometers in MICROSCOPE is expected to be approximately 2 × 10 −12 m/s 2 /Hz 1/2 in the frequency band of 0.1 mHz to 0.03 Hz [47].
The TEPO project was proposed by HUST to test the EP at the level of 10 −17 for bodies of different composition; the technologies used in MICROSCOPE and LISA Pathfinder, such as the heterodyne laser interferometer, precision electrostatic accelerometers, and the ultraviolet (UV) charge management system, are expected to be adopted [15]. In the TEPO mission, the PMs are designed to be hollow concentric cylinders, the same seminal design as in the MICROSCOPE mission, with an outer titanium PM and an inner platinum PM. The relative motion of the two PMs along the sensitive axis, which is affected by a possible EP violation, is monitored by a laser heterodyne interferometer, and the PMs are then controlled to be motionless by electrostatic actuators, as shown in Figure 4. Instead of the gold wires employed in the MICROSCOPE mission, a UV discharge system, as developed and tested with the GP-B [41] and LISA Pathfinder [12] proof masses, is used to discharge the PMs based on the photoemission effect, which avoids the damping introduced by gold wires. Based on detailed analysis and theoretical calculations, the accuracy of the TEPO project using the best level of the technologies mentioned above is estimated, and the results show that the resolution of the differential acceleration could reach 1.9 × 10 −13 m/s 2 /Hz 1/2 at a frequency of 1 mHz; the EP could then be tested at the 8 × 10 −17 level with a 1-d integration.
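The quoted numbers are mutually consistent under a simple white-noise averaging model: the acceleration uncertainty after an integration time T is roughly the amplitude spectral density divided by sqrt(T), and the EP parameter η is that uncertainty divided by the common gravitational acceleration driving the signal. A sketch, assuming g ≈ 8 m/s² for a low orbit (a value not stated above):

```python
import math

asd = 1.9e-13  # differential acceleration noise, m/s^2/Hz^1/2 (from the text)
T = 86400.0    # 1-day integration, s
g = 8.0        # gravitational acceleration at orbit, m/s^2 (assumed)

sigma_a = asd / math.sqrt(T)  # ~6.5e-16 m/s^2 after one day of averaging
eta = sigma_a / g             # ~8e-17, matching the quoted EP test level
```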
Application in the Test of the Inverse-Square-Law in Space
The Test of the Inverse-Square-Law in Space (TISS) project was proposed in 2006 to test the Newtonian gravitational law and to search for new interactions in the sub-millimeter range using an electrostatic accelerometer [14]. A schematic of the TISS project is shown in Figure 5. The proof mass is attached to the middle of the frame, and the source mass is fixed on a high-precision positioning device such as a piezoelectric transducer (PZT) platform. Six-degree-of-freedom capacitive sensors are used to detect the distance between the source mass and the PM, and feedback voltages are then applied to the capacitive plates, which keep the PM at its initial equilibrium position. The feedback voltages thus represent the gravitational force exerted on the PM by the source mass. When the distance varies, the feedback voltage varies as well, and changes in the Newtonian gravitational force can be calculated. Theoretical calculation showed that the strength factor α of the general Yukawa potential can be constrained below 10 5 at the μm-level range [14], provided that the resolution of the electrostatic accelerometer reaches 3 × 10 −10 m/s 2 /Hz 1/2 and the minimum distance between the source mass and the PM can be periodically driven from 20 to 10 μm with a period of approximately 10-100 s. In this case, the result would improve on current terrestrial experimental results by a factor of 5-10.
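The order of magnitude of the quoted α reach can be sketched with an idealized model: for a planar source slab of density ρ and thickness t at a gap d, the Yukawa correction to the Newtonian attraction gives an acceleration a_Y ≈ 2πGραλ(1 − e^(−t/λ))e^(−d/λ). The values below (source density, slab thickness, the 1-Hz measurement bandwidth) are illustrative assumptions, not parameters from the text:

```python
import math

G = 6.674e-11           # gravitational constant, m^3/(kg s^2)
rho = 19.3e3            # source-mass density, kg/m^3 (tungsten-like, assumed)
t = 1e-3                # slab thickness, m (assumed, >> lambda)
lam = 1e-6              # Yukawa range lambda = 1 um
d1, d2 = 10e-6, 20e-6   # modulated gap, from the text

# Change in Yukawa acceleration per unit alpha as the gap is modulated
da_per_alpha = (2 * math.pi * G * rho * lam * (1 - math.exp(-t / lam))
                * (math.exp(-d1 / lam) - math.exp(-d2 / lam)))

a_res = 3e-10           # accelerometer resolution, m/s^2/Hz^1/2 (from the text)
alpha_reach = a_res / da_per_alpha  # ~1e6 in a 1-Hz bandwidth
```

Averaging over many modulation cycles at the stated 10-100 s period reduces this single-bandwidth figure further, toward the quoted 10 5 level.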
Progress of Electrostatic Accelerometer Development at HUST
To advance space gravitational experiments such as the TISS and TEPO projects and the satellite Earth's gravity recovery mission, our group at HUST began to study and develop high-precision space electrostatic accelerometers in 2000.
Sensor Head Manufacturing Technique
The sensor head of an electrostatic accelerometer usually consists of a PM and a surrounding electrode housing. The materials of the PM and the housing electrodes are typically titanium and ultra-low-expansion (ULE) glass, respectively. The gap between the PM and the electrode housing affects the dynamic range, as well as the measurement resolution, of the accelerometer, and there is a tradeoff between these two parameters. The gap is generally chosen to be hundreds of micrometers, which requires an extremely high-precision manufacturing technique to fabricate the sensor head. The processing procedure mainly includes ultrasonic machining, wire-cutting machining, polishing, and coating, along with other processes. So far, our group has independently mastered all of the processing technology for building a sensor head, in which the accuracy of the flatness is better than 1 µm, and the perpendicularity is better than 5 arcsec.
Low-Noise Capacitive Transducer and Readout System
An important technology of accelerometer fabrication is the development of an ultra-low-noise electronics unit. Taking an accelerometer with a design resolution of 2 × 10 −12 m/s 2 /Hz 1/2 in the frequency range of 5 mHz to 0.1 Hz, for example, the capacitive transducer resolution should reach 2 × 10 −7 pF/Hz 1/2 at 0.1 Hz, corresponding to approximately 4 pm/Hz 1/2 for a 300-µm gap design, while the value of ω e 2 is approximately 0.05 rad/s 2 , and the readout voltage noise should be controlled within 2 µV/Hz 1/2 at 5 mHz. In actual applications, however, external disturbances must also be considered, such as electromagnetic and thermal environmental noises and the coupling influence from the spacecraft. A precise capacitive transducer based on a differential transformer bridge and the Field-Programmable Gate Array (FPGA) technique has been carefully studied and developed [48]; currently, the noise level has reached 1.6 × 10 −7 pF/Hz 1/2 down to 1 mHz [49], limited by the thermal noise of the front-end electronics as shown in Figure 6. This satisfies the requirement of the displacement measurement for a 10 −12 m/s 2 /Hz 1/2 -level accelerometer and is also suitable for position measurement of the PM in the TianQin project.
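The displacement and acceleration figures above follow from a parallel-plate conversion: with gap d, the transducer gain is dC/dx ≈ C/d, the displacement noise is the capacitance noise divided by that gain, and the contribution to the acceleration noise is the electrostatic coupling ω e 2 times the displacement noise. A sketch using only the numbers quoted above (the implied electrode capacitance is a derived figure, not stated in the text):

```python
dC = 2e-19            # capacitance noise, F/Hz^1/2 (2e-7 pF/Hz^1/2, from the text)
d = 300e-6            # electrode gap, m (from the text)
x_n = 4e-12           # quoted displacement noise, m/Hz^1/2

gain = dC / x_n       # implied dC/dx = 5e-8 F/m
C_implied = gain * d  # ~15 pF electrode capacitance under the parallel-plate model

w_e2 = 0.05           # electrostatic coupling omega_e^2, rad/s^2 (from the text)
a_n = w_e2 * x_n      # 2e-13 m/s^2/Hz^1/2, well inside the 2e-12 budget
```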
In general, the output voltage noise for the electrostatic accelerometer is at a level of 10 µV/Hz 1/2 , which is limited by the quantization noise of a 16-bit digital-to-analog converter (DAC) and the thermal noise of a voltage-driven amplifier. To suppress the output noise, a direct voltage readout scheme is adopted using a high-precision analog-to-digital converter (ADC) (i.e., 20 bits or better) to measure the voltage applied on the control electrodes, which can suppress the electric and quantization noises of the DAC and voltage-driven amplifier due to the large open-loop gain in the frequency band of interest. A voltage readout noise of approximately 2 µV/Hz 1/2 was realized using this scheme [50]. Figure 7 shows that a noise level of approximately 0.6 µV/Hz 1/2 was realized using a 24-bit ADC in the ±2.5 V range.
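The ~10 µV/Hz 1/2 DAC-limited level and the benefit of a finer ADC are consistent with standard quantization-noise accounting: an ideal converter with step size q spreads a noise power q 2 /12 over the Nyquist band, giving an amplitude spectral density q/sqrt(6·fs). The ±10 V DAC range and the 100 Hz sample rate below are assumptions for illustration:

```python
import math

fs = 100.0          # sample rate, Hz (assumed)

q16 = 20.0 / 2**16  # 16-bit DAC step over an assumed +/-10 V range, ~305 uV
q24 = 5.0 / 2**24   # 24-bit ADC step over the +/-2.5 V range (from the text), ~0.3 uV

asd16 = q16 / math.sqrt(6 * fs)  # ~1.2e-5 V/Hz^1/2: the ~10 uV level quoted above
asd24 = q24 / math.sqrt(6 * fs)  # ~1.2e-8 V/Hz^1/2: far below the measured 0.6 uV,
                                 # so the 24-bit readout is electronics-limited
```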
High-Voltage Levitation Test
For high-precision inertial sensors, accurate performance verification on the ground is mainly limited by Earth's 1 g gravitational acceleration. There are two ways to overcome this influence. One is to apply a high voltage on the upper electrodes to levitate the PM [18]; the other is to suspend the PM using a dedicated fiber. By using a high voltage to levitate the PM, ONERA has succeeded in testing the performance of a series of space accelerometers, such as the STAR, SuperSTAR, and GRADIO accelerometers. It should be noted that direct performance verification of these accelerometers on the ground using the high-voltage levitation scheme is limited to a level of 10 −8 m/s 2 /Hz 1/2 due to the residual seismic noise of the test bench and to the coupling from the strong vertical electrostatic field.
The high-levitation-voltage method is proposed to test various six-DOF control strategies, and it is also suitable for testing the engineering and flight models of accelerometers for space missions. At HUST, a titanium-alloy PM weighing approximately 70 g with a vertical gap of approximately 50 μm was levitated by a simple high-voltage actuator with an output range of up to 900 V and a frequency bandwidth of 11 kHz, realized by an operational amplifier and metal-oxide-semiconductor field-effect-transistor (MOSFET) combination [51]. The translation noise of the accelerometer reached approximately 2 × 10 −8 m/s 2 /Hz 1/2 at 0.1 Hz, as shown in Figure 8, limited by seismic noise.
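The feasibility of levitating a 70-g PM at up to 900 V across a 50-µm gap can be checked with the ideal parallel-plate force F = ε0·A·V 2 /(2d 2 ): balancing gravity fixes the electrode area. The area is solved for here and is not a figure from the text:

```python
eps0 = 8.854e-12  # vacuum permittivity, F/m
m = 0.070         # PM mass, kg (from the text)
g = 9.81          # local gravity, m/s^2
d = 50e-6         # vertical gap, m (from the text)
V = 900.0         # maximum actuator voltage, V (from the text)

# Electrode area needed so that eps0*A*V^2/(2*d^2) = m*g at full voltage
A = 2 * m * g * d**2 / (eps0 * V**2)  # ~5 cm^2, a plausible electrode size
```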
Fiber Suspension Test
In addition to the high-voltage-levitation scheme, another scheme, namely the fiber suspension scheme, has been studied for a long period of time at HUST. A few electrostatically-controlled torsion pendulums, including one- and two-stage torsion pendulums, have been constructed to investigate electrostatically-controlled performance [34,36,53,54]. A one-stage electrostatically controlled torsion pendulum is a simple way to test an accelerometer with three DOFs, that is, two horizontal DOFs and a highly sensitive rotational DOF, as shown in Figure 9.
Sensors 2017, 17, 1943 11 of 18
A one-stage electrostatically controlled torsion balance consisting of a PM and a counterweight connected to a balance bar, as shown in Figure 10, has a higher sensitivity along the horizontal direction.
It is sensitive to the force actuated on the entire PM. Because of good suppression of seismic noise coupling along the sensitive axis in closed-loop control, it has been engaged to investigate the highly sensitive translational axis of the accelerometer.
In the electrostatically-controlled torsion pendulum scheme, the parasitic negative stiffness k_e induced by the capacitive transducer is much larger than that of the fiber, k_f, since the capacitive gaps are usually quite small (much less than 1 mm). In this case, the torsion pendulum is unstable and cannot work in the absence of the servo control. Therefore, this scheme can be used to simulate the closed-loop operation of the accelerometer in flight. From this point of view, the scheme is slightly different from that used by the Trento group, who aim to investigate the residual disturbances of the PM using a free-torsion pendulum, where k_e is usually much smaller than k_f and the torsion pendulum is self-stable even without control (the capacitive gaps are designed to be much larger than 1 mm). Another advantage of this scheme is that it helps to suppress the influence of the seismic noise between the fiber suspension and the electrode frame at low frequency due to its common-mode rejection [55]. Based on the applied forces necessary to keep a suspended PM centered in translation, the force noise of translationally-free torsion pendulums due to coupling to translational ground motion, such as those used in Refs. [37,38], is suppressed. However, our measurements allow a more representative test of the closed-loop accelerometer operation that is needed in many experiments.
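The contrast between the two regimes comes from the 1/d 3 scaling of the electrostatic negative stiffness: for a biased parallel-plate pair, |k_e| = ε0·A·V 2 /d 3 , while a fiber of shear modulus G_s, diameter d_f, and length L has a torsion constant πG_s·d_f 4 /(32L). All numerical values below (electrode area, bias voltage, lever arm, fiber dimensions) are illustrative assumptions, not parameters from the text:

```python
import math

eps0 = 8.854e-12
A, V, r = 4e-4, 5.0, 0.02  # electrode area m^2, bias V, lever arm m (all assumed)

def kappa_e(d):
    """Electrostatic torsional stiffness about the fiber for gap d (parallel-plate model)."""
    return (eps0 * A * V**2 / d**3) * r**2

# Tungsten fiber: shear modulus ~161 GPa, 50-um diameter, 1-m length (assumed)
G_s, d_f, L = 161e9, 50e-6, 1.0
kappa_f = math.pi * G_s * d_f**4 / (32 * L)  # ~1e-7 N*m/rad

k_small = kappa_e(0.3e-3)  # sub-mm gap: k_e >> k_f, pendulum unstable without servo
k_large = kappa_e(2e-3)    # multi-mm gap: k_e << k_f, pendulum self-stable
```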
An advanced torsion balance combining the advantages of a torsion pendulum and torsion balance, called a two-stage electrostatically-controlled torsion pendulum, has been developed to investigate the performance of high-precision accelerometers. This balance allows us to test the performance of both the translational and the rotational DOFs of the inertial sensor simultaneously and also helps to investigate the cross-talk effect between DOFs on the ground, which is considered to be one of the more challenging verifications. A schematic of the two-stage electrostatically controlled torsion pendulum and actual experimental setup in a vacuum chamber is shown in Figure 11. With this fiber suspension scheme, a noise level of 10 −10 m/s 2 /Hz 1/2 has been directly verified for the accelerometer, as shown in Figure 12.
To further suppress the effect of seismic noise when testing the performance of an accelerometer or inertial sensor on the ground with a two-stage electrostatically controlled torsion pendulum, we have proposed an approach in which the capacitive electrode cage is suspended by another pendulum. Theoretical analysis shows that the effects of the seismic noise can be further suppressed by more than 1 order of magnitude using the proposed approach [56]. By suspending the electrode cage, a preliminary experiment has verified that seismic noise coupling is further suppressed by roughly 1 order of magnitude, with a performance test noise level of 2 × 10 −11 m/s 2 /Hz 1/2 .
Meanwhile, we have also presented an electrostatically-controlled torsion pendulum with a scanning conducting probe to measure the charge distribution and its variation with better precision and higher resolution. The schematic of this novel scheme is shown in Figure 13a; this scheme combines the scanning capability of the Kelvin probe and the high precision of the torsion pendulum. Temporal variation of the surface potential can be measured at a level of 15 µV/Hz 1/2 at 0.03 Hz, and the surface-potential distribution can be obtained at a level of 330 µV at a 0.125-mm spatial resolution, as shown in Figure 13b [57].
A test bench with a low-frequency vibration-isolation system is currently being constructed [52] and is expected to help improve the on-the-ground noise verification capability by 1 or 2 orders in the near future.
In-Orbit Test
Although the accelerometer is tested on the ground by high-voltage levitation and torsion pendulum suspension, the ground working state differs from the space microgravity environment. Sufficient verification of the development technique of an accelerometer requires a spaceflight test. A flight model designated HSEA-I was developed at HUST, including a sensor box and an electronic control box, as shown in Figure 14. The accelerometer was launched aboard a technology experimental satellite in November 2013 and has been tested in orbit for more than three years to date [58].
The main objective of the HSEA-I flight experiment is to fully test the six-DOF control function of the electrostatic accelerometer over a long period of time in a space microgravity environment. The accelerometer was designed with a much higher measurement range to adapt to the anticipated microgravity level of the satellite. The tested intrinsic noise level on the ground is approximately 3 × 10 −8 m/s 2 /Hz 1/2 at approximately 0.1 Hz.
In-Orbit Test
Although the accelerometer is tested on the ground by high-voltage suspension and torsion pendulum suspension, the ground working state is different from the space microgravity environment. Sufficient verification of the development technique of an accelerometer requires a spaceflight test. A flight model designated HSEA-I was developed at HUST, including a sensor box and an electronic control box, as shown in Figure 14. The accelerometer was launched aboard a technology experimental satellite in November 2013 and has been tested in orbit for more than three years to date [58].
The main objective of the HSEA-I flight experiment is to fully test the six-DOF control function of the electrostatic accelerometer over a long period of time in a space microgravity environment. The accelerometer was designed with a much higher measurement range to adapt to the anticipated microgravity level of the satellite. The tested intrinsic noise level on the ground is approximately 3 × 10 −8 m/s 2 /Hz 1/2 at approximately 0.1 Hz. A test bench with a low-frequency vibration-isolation system is currently being constructed [52] and is expected to help improve the on-the-ground noise verification capability by 1 or 2 orders in the near future.
In-Orbit Test
Although the accelerometer is tested on the ground by high-voltage suspension and torsion pendulum suspension, the ground working state is different from the space microgravity environment. Sufficient verification of the development technique of an accelerometer requires a spaceflight test. A flight model designated HSEA-I was developed at HUST, including a sensor box and an electronic control box, as shown in Figure 14. The accelerometer was launched aboard a technology experimental satellite in November 2013 and has been tested in orbit for more than three years to date [58].
The main objective of the HSEA-I flight experiment is to fully test the six-DOF control function of the electrostatic accelerometer over a long period of time in a space microgravity environment. The accelerometer was designed with a much higher measurement range to adapt to the anticipated microgravity level of the satellite. The tested intrinsic noise level on the ground is approximately 3 × 10 −8 m/s 2 /Hz 1/2 at approximately 0.1 Hz. The in-orbit data show that the six-DOF motions of the PM are always controlled within approximately 10 nm/Hz 1/2 at 1 Hz. Meanwhile, we use the in-orbit data to estimate the relative distance between the center-of-mass (CoM) of the accelerometer and the satellite. The basic method is to compare the output of the accelerometer with the gyroscope data during attitude maneuvering, where the accelerometer is influenced mainly by the centrifugal acceleration and the linear acceleration induced from angular motion. A least square estimation method is used to estimate the three coordinate components of the CoM, and a position-estimation accuracy of approximately 6 mm is achieved [59]. During the in-orbit test, the noise level of the sensitive axis of the accelerometer, in the normal direction of the satellite's orbital plane, is typically shown as in Figure 15. The noise level is approximately 1 order higher than the ground level. From a simulation analysis, the increase of the noise level is mainly the spectrum leakage effect from the satellite high-frequency accelerations due to the nonlinearity and the decimation aliasing effect of the accelerometer. Until now, the accelerometer has worked almost with the same noise level in space. In addition, it has also successfully recorded the structural vibrations and attitude maneuvering of the satellite. A new bias calibration method has also been built, and the HSEA-I bias is being evaluated using the in-orbit data [58]. 
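The CoM estimation step described above can be illustrated with a small numerical sketch. This is not the flight-data pipeline of [59]; it only shows the core idea that the acceleration sensed at an offset r from the CoM during attitude maneuvering is linear in r (Euler term ω̇ × r plus centrifugal term ω × (ω × r)), so r can be recovered by least squares. All numbers below are hypothetical.

```python
import numpy as np

def skew(w):
    """Matrix [w]_x such that skew(w) @ r == np.cross(w, r)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def design_block(omega, omega_dot):
    # a(t) = omega_dot x r + omega x (omega x r)  =>  a(t) = M(t) @ r
    return skew(omega_dot) + skew(omega) @ skew(omega)

def estimate_com_offset(omegas, omega_dots, accels):
    """Stack the 3x3 blocks over time and solve for r by least squares."""
    A = np.vstack([design_block(w, wd) for w, wd in zip(omegas, omega_dots)])
    b = np.concatenate(accels)
    r_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r_hat

# synthetic maneuver: hypothetical gyroscope rates and a 'true' CoM offset
rng = np.random.default_rng(0)
r_true = np.array([0.004, -0.002, 0.006])            # metres
omegas = rng.normal(0.0, 0.05, size=(200, 3))        # rad/s
omega_dots = rng.normal(0.0, 0.01, size=(200, 3))    # rad/s^2
accels = [design_block(w, wd) @ r_true + rng.normal(0.0, 1e-6, 3)
          for w, wd in zip(omegas, omega_dots)]

r_hat = estimate_com_offset(omegas, omega_dots, accels)
print(r_hat)   # close to r_true
```

In the actual experiment the accelerometer output also contains bias and non-gravitational force terms, which would enter the regression as extra columns; they are omitted here for brevity.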
Based on the in-flight test results, the accelerometer development technology has been verified, and the overall performance of the space electrostatic accelerometer in orbit is even better than expected.
An improved high-precision electrostatic accelerometer was designed and developed for the TISS project beginning in 2014, and it is being tested in orbit on China's first cargo spaceship, Tianzhou-1, which was launched on 20 April 2017. The electrostatic accelerometer has worked well in space until now, and we hope to make further studies of its performance in the future using long-term orbital data.
Discussion and Conclusions
In this paper, we have reviewed the electrostatic accelerometers developed by our group at HUST. The key technologies included the sensor-head manufacturing technique and realization of the high-precision capacitive position transducer and the low-noise readout circuits. The ground investigation and verification facility, including the high-voltage levitation system and several complicated torsion pendulum systems, have been set up. In particular, the performance of the 10^−10 m/s^2/Hz^1/2 level electrostatic accelerometer can be directly validated based on the two-stage electrostatically-controlled torsion pendulum. Above all, two flight models of electrostatic accelerometers are being successfully tested in space to verify the entire system throughout its ongoing development.
Currently, we have designed a novel digital controller based on disturbance observation and rejection using the well-studied embedded model control (EMC) methodology [60], which will be tested experimentally. This method can also be used to study drag-free control for different space missions. In addition, several passive and active isolation benches are being studied; their purpose is to further suppress seismic noise in the high-voltage-levitation testing method. Meanwhile, an improved electrostatically-controlled torsion pendulum is being set up to measure the charge distribution and the variation of the PM with different materials. In order to investigate the magnetic bulk effects of the inertial sensor to be used in space gravitational wave detection missions, a massive PM combined with a pendulum system has been set up.
The above-mentioned progress and plans are important for promoting Chinese space gravitational experiments, including the TISS, the EP, and space gravitational wave detection, among others. Moreover, the electrostatic accelerometer has been the key payload of several Chinese satellite gravity-measurement missions.
The Pre-Eminence of Theory Versus the European CVAR Perspective in Macroeconometric Modeling
The primary aim of the paper is to place current methodological discussions in macroeconometric modeling contrasting the 'theory first' versus the 'data first' perspectives in the context of a broader methodological framework, with a view to constructively appraising them. In particular, the paper focuses on Colander's argument in his paper 'Economists, Incentives, Judgement, and the European CVAR Approach to Macroeconometrics', contrasting two different perspectives in Europe and the US that currently dominate empirical macroeconometric modeling, and delves deeper into their methodological/philosophical underpinnings. It is argued that the key to establishing a constructive dialogue between them is provided by a better understanding of the role of data in modern statistical inference, and how that relates to the centuries-old issue of the realisticness of economic theories.
1 Introduction

Colander (2009) (this volume) compares and contrasts two alternative perspectives in empirical macroeconomics, and attempts to explain the extent of their influence on the discipline in terms of the incentive scheme perpetrated on the profession by US-dominated journals. In broad terms his argument is that the European perspective, based primarily on the 'general-to-specific' Cointegrated Vector AutoRegressive (CVAR) approach, has been largely ignored by US-dominated journals because it "places observation before theory and requires researcher judgment to be part of the analysis". In contrast, the editorial boards of these journals have manifested a strong preference for the 'theory first' perspective, currently dominated by 'Dynamic Stochastic General Equilibrium' (DSGE) models, where data play only a subordinate role in 'quantifying' these models. As a result, young researchers operating in a 'publish or perish' environment would naturally avoid the European perspective because it requires hard work and judicious judgment in data modeling without any obvious professional payoff. Instead, it is rational for empirical macroeconomists to opt for the US perspective, where one only needs to demonstrate technical dexterity in solving/approximating and calibrating DSGE models. Hence, the current dominance of the DSGE in empirical macro-modeling has very little to do with the superior attributes of that perspective on either substantive or empirical grounds.
Colander's incentive-based diagnosis, although broadly right-minded, does not go far enough to bring out the deeper methodological issues and the rationale underlying the two perspectives. For instance, his analysis does not explain why the US-dominated journals adopted the 'theory first' perspective in the first place, or why the European perspective places observation before theory, as he claims, knowing that such a perspective will not lead to publications in prestigious journals. Indeed, his 'theory first' vs. 'data first' dichotomy is overly simplistic and invariably misleading, because neither side would consider it an adequate characterization of their respective thesis.
The US perspective is better described as a 'Pre-Eminence of Theory' (PET) standpoint, where the data are assigned a subordinate role broadly described as 'quantifying theories presumed adequate'. In contrast, the European 'general-to-specific' CVAR perspective attempts to give data a more substantial role in the theory-data confrontation, and is more accurately described as endeavoring to accomplish the goals afforded by sound practices of frequentist statistical methods in learning from data. Colander's description of the European perspective as requiring 'researcher judgment' gives the misleading impression that he refers to subjective judgments and skills in statistical modeling. This is misleading because any judgment/skill/claim that can be appraised independently by other researchers is not subjective in the same sense as one's choice of a prior distribution reflecting personal beliefs that nobody can question.
A crucial component of Johansen's (2007) call for assessing the premises of inference has nothing subjective about it, and the judgment/skills one needs concern the proper implementation of the Fisher-Neyman-Pearson (F-N-P) model-based statistical induction; see Cox and Hinkley (1974). In particular, he raises the question of validating the statistical premises to secure the reliability of the resulting inferences.

Why does the pre-eminence of theory (PET) perspective currently dominate US empirical macroeconomic modeling? The short answer is that, arguably, 'it represents the status quo', with a long history in economics going back to Ricardo (1817). A case can be made that the PET perspective has dominated economic modeling for the last two centuries; see Spanos (2009a). The conventional wisdom underlying this perspective is that one builds simple idealized models which capture certain key aspects of the phenomenon of interest, and uses such models to gain insight concerning alternative economic policies. The role of the data is only subordinate in the sense that it can help to instantiate such models by quantifying them. Mill (1844) articulated an early temperate form of this perspective by arguing that the causal mechanisms underlying economic phenomena are too complicated (they involve too many contributing factors) to be disentangled using observational data. This is in contrast to physical phenomena, whose underlying causal mechanisms are not as complicated (they involve only a few dominating factors), so that the use of experimental data can help to untangle them by 'controlling' the 'disturbing' factors. Hence, economic theories can only establish general tendencies, not implications precise enough for their validity to be assessed using observational data.
These tendencies are framed in terms of the primary causal contributing factors, with the rest of the numerous (potential) disturbing factors relegated to ceteris paribus clauses whose appropriateness cannot, in general, be assessed using observational data. This means that empirical evidence contrary to the implications of a theory can always be explained away as due to counteracting disturbing factors. Hence, Mill (1844) rendered theory testing via observational data impossible, and attributed to the data the auxiliary role of investigating the ceteris paribus clauses in order to shed light on the disturbing factors which prevent the establishment of the tendencies predicted by the theory in question. Marshall (1891) largely retained Mill's methodological stance concerning the predominance of theory over data in economic theorizing, despite paying lip service to the importance of data in economic modeling. Robbins (1935) reverted to Cairnes' (1888) more extreme version, which pronounced data more or less irrelevant for appraising the truth of deductively established propositions. Indeed, both of them went as far as to claim that the deductive nature of economic theories bestows upon them a status superior even to that of physical theories, because it is ultimately based on 'self-evident truths' derived by 'introspection'; according to Robbins (1935), p. 105: "In Economics, . . . , the ultimate constituents of our fundamental generalizations are known to us by immediate acquaintance. In the natural sciences they are known only inferentially. There is much less reason to doubt the counterpart in reality of the assumption of individual preferences than that of the assumption of the electron."
Robbins was well aware of the developments in statistics during the early 20th century, but dismissed their pertinence to theory appraisal in economics on the basis of the argument that such techniques are only applicable to data which can be considered 'random samples' from a static population. Unfortunately, this argument, stemming from sheer ignorance concerning the applicability and relevance of modern statistical methods, lingers on to this day (see Mirowski, 1994). Robbins was not just dismissive of any attempts to use data for theory appraisal; he jested at early attempts to quantify demand curves using an example of a 'Dr Blank' investigating the demand for herrings; see ibid., p. 107.
In modern times, echoes of that extreme version of the PET perspective can be found in Kydland and Prescott (1991): "The issue of how confident we are in the econometric answer is a subtle one which cannot be resolved by computing some measure of how well the model economy mimics historical data. The degree of confidence in the answer depends on the confidence that is placed in the economic theory being used." (ibid., p. 171) Indeed, the theory being appraised should be the final arbiter: "The model economy which better fits the data is not the one used. Rather currently established theory dictates which one is used." (ibid., p. 174). The great puzzle is that Kydland and Prescott never tell us how the 'currently established theory' was instituted, and whether anything could ever count against it.
During the 19th and 20th centuries one can find much less extreme versions of the PET perspective, where data are assigned, in principle, a less subordinate role in theory appraisal. Indeed, there is no shortage of eminent economists paying lip service to the role of the data in economic modeling, but there is a crucial disconnect between the rhetoric and the practice; with enough perseverance one can find remarks, even by the most extreme adherents to the PET standpoint, that allude to the 'important' role of the data in economic theorizing! What was missing from economic modeling was an appropriate modeling framework within which the theory-data confrontation could be properly conducted without compromising the credibility of either source of information. This lack of an appropriate framework is most apparent in the extensive literature initiated by Friedman (1953) concerning the realisticness of economic theories, as well as the notable methodological exchanges between Keynes and Tinbergen and between Koopmans and Vining; see Spanos (2006a).
The primary difference between the 19th and the later part of the 20th century is that the developments in statistical inference, associated with the Fisher-Neyman-Pearson (F-N-P) model-based approach that culminated in the 1930s, helped to shed illuminating light on the role of data in empirical modeling in ways which were unknown to Mill or Marshall. Unfortunately for economics, some of the key elements of the F-N-P statistical perspective, including the importance of statistical model validation, never made it into modern econometrics, primarily because the Cowles Commission literature solidified the PET perspective in econometric modeling; see Spanos (2006a).
A strong case can be made (see Spanos, 2009a) that the numerous attempts to redress the balance and give data a more substantial role in theory testing were frustrated by several challenging methodological/philosophical problems bedeviling empirical modeling in economics since Ricardo (1817), the most crucial being:
(MP1) the huge gap between economic theories and the available observational data,
(MP2) the issue of assessing when a model 'accounts for the regularities in the data',
(MP3) relating statistical inferences to substantive claims, hypotheses or theories.
These same problems are currently entangling the discussion between these two perspectives, rendering any dialogue between them almost impossible. Due primarily to problem (MP1), early attempts to give data a more substantive role focused on data-driven models, implicitly assuming that their theoretical concepts and the available data largely coincide, and relying on goodness-of-fit measures, like the R², to assess (MP2). These attempts had disastrous consequences for empirical modeling in economics, because they inadvertently contributed to the fortification of the PET perspective for a variety of reasons.
(C1) Unreliability. Data-driven correlation, linear regression, factor analysis and principal component analysis, relying on goodness-of-fit, have been notoriously unreliable when applied to observational data, especially in the social sciences.
(C2) Statistical spuriousness. The arbitrariness of goodness-of-fit measures created a strong impression that one can 'forge' significant correlations (or regression coefficients) at will, if one is prepared to persevere long enough in 'mining' the data. This (mistaken) impression is almost universal among philosophers and social scientists, including economists.
(C3) Misplaced role for substantive information. The impression in (C2) has led to the widely held (but erroneous) belief that substantive subject-matter (theory) information provides the only safeguard against statistical spuriousness.
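The worry in (C2) above is easy to demonstrate by simulation. The following sketch (a generic illustration, not an example from the literature cited here) searches 50 pure-noise regressors for a 'significant' correlation with a pure-noise target at a nominal 5% level; most such searches succeed, even though every true correlation is zero.

```python
import numpy as np

def mining_finds_significance(rng, n=100, k=50):
    """One 'data mining' pass over k noise regressors for a noise target.

    Returns True if at least one regressor passes a nominal 5% two-sided
    significance test, despite all true correlations being zero.
    """
    y = rng.normal(size=n)
    X = rng.normal(size=(n, k))
    # sample correlation of each column of X with y
    r = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())
    z = np.arctanh(r) * np.sqrt(n - 3)   # Fisher z: ~N(0,1) under the null
    return bool(np.any(np.abs(z) > 1.96))

rng = np.random.default_rng(0)
reps = 200
hits = sum(mining_finds_significance(rng) for _ in range(reps))
share = hits / reps
print(f"searches yielding a spurious 'discovery': {share:.2f}")
# with 50 candidates, roughly 1 - 0.95**50 (about 0.92) of searches succeed
```

The point is not that correlation itself is unreliable, but that an unreported search over many candidates destroys the nominal error probabilities of the final reported test.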
Exploiting the confusions created by (C1)-(C3), the PET perspective consolidated its dominance over economic modeling and persistently charged any alternative perspective that took the data seriously, including the European CVAR approach, with being yet another form of 'measurement without theory', 'data-mining', 'hunting for statistical significance' and the like.
Admonitions and rebukes concerning the devastating effects of invoking invalid assumptions, by Campos et al. (2005), Johansen (2007) and Juselius and Franchi (2007), do not resonate well with the advocates of the PET perspective because they sound like a sermon they have heard many times before. To them these admonitions sound like a well-rehearsed complaint concerning the unrealisticness of their structural models. Indeed, numerous critics of the PET perspective have articulated the unrealisticness argument over and over again during the last two centuries, beginning with Malthus (1836), who criticized the Ricardian method as based on 'premature generalization' which occasions "an unwillingness to bring their theories to the test of experience." (ibid., p. 8).
Nevertheless, modern advocates of the PET perspective, often invoking the authority of Friedman (1953), counter that such unrealisticness is inevitable, since all models are idealizations and not faithful descriptions of reality. The abstraction/idealization argument is right-headed and perfectly legitimate at the level of the theory, but adherents of the PET perspective do not seem to appreciate the fact that if their implicit inductive premises are invalid vis-a-vis the data, any inferences based on such premises will be highly misleading. Indeed, in light of (C1)-(C3), the PET advocates feel that they can ignore the statistical misspecification issue and argue instead that what matters is the extent to which such models 'shed light' on the phenomenon of interest and help in formulating effective economic policies.
What they do not seem to realize is that any assessment concerning the sign, magnitude and significance of estimated coefficients, however informal, constitutes an inference whose credibility is completely undermined when the estimated model is statistically misspecified; an insight from the F-N-P model-based statistical induction.
2 The European CVAR Perspective
The European CVAR perspective has its roots in the London School of Economics (LSE) 'general-to-specific' econometrics tradition (see Sargan, 1964, Hendry, 2000), and can be best understood as an attempt to redress the balance between theory and data by avoiding both extreme positions: theory-driven vs. data-driven modeling. Having reflected on this perspective for several years, I feel that the best way to describe this European perspective is in terms of a threefold objective (aims/aspirations):
(A1) to give data 'a voice of its own', independent of any economic theory,
(A2) to reliably constrain economic theorizing using the data, and
(A3) to avoid 'foisting' the theory onto the data at the outset, because doing so precludes any genuine theory testing.
In light of the huge gap between theory and data, objective (A3) renders the European CVAR perspective vulnerable to charges of 'data-mining', because any attempt to take the data seriously forces one to begin the modeling with a largely data-driven model like the Autoregressive Distributed Lag (ADL) and VAR models; see Hendry (1995). Indeed, the methodological problems (MP1)-(MP3) and the misleading impressions created by (C1)-(C3) have contributed significantly to a genuine lack of communication between the two sides, rendering any constructive dialogue between them almost impossible. For the PET advocates, the European CVAR approach is another form of data-based modeling which ignores the theory, despite declarations to the contrary, and is highly vulnerable to problems (C1)-(C3). Worse, the aims (A1)-(A3) make little sense, because for them theory is the only source of legitimate information for modeling purposes.
The key to unraveling the tangled arguments separating the two perspectives is provided by distinguishing between statistical adequacy and the realisticness of the structural model in question. A closer examination of the 'testing assumptions' criticism raised by the European CVAR approach (see Johansen, 2007, Juselius and Franchi, 2007) reveals that it has two separate components: one concerns the proper application of statistical inference, and the other has to do with the empirical adequacy of the structural model vis-a-vis the data in question. The first component is concerned with the validity of the probabilistic assumptions comprising the inductive premises for inference. It is only the second component that relates to the centuries-old realisticness criticism (see Maki, 2000). Hence, the advocates of the PET perspective cannot deflect or sidestep the statistical inadequacy criticism by invoking their arguments against the realisticness-of-a-theory criticism; the two issues are fundamentally different.
For a proper understanding of these two components and their respective roles, one needs a methodological framework where these and related issues are clearly brought out: a framework that can be used to elucidate the strengths and weaknesses of both perspectives and provide the basis for a constructive dialogue between them. The same framework should also offer suggestions on how one might be able to address the methodological problems (C1)-(C3) mentioned above, as well as accommodate the threefold objective (A1)-(A3) of the European perspective.
3 An All-Encompassing Methodological Framework

Spanos (1986), p. 17, proposed an all-encompassing methodological framework (Figure 1), devised to enable the modeler to bridge the gap between theory and data using a sequence of interconnected models, with a view to delineating and probing for the potential errors at different stages of modeling; see Mayo (1996) for a similar proposal.
The key to unraveling the testing-of-assumptions argument is provided by drawing a clear distinction between substantive and statistical assumptions, because their respective validity has very different implications for inference. The substantive assumptions pertain to the realisticness issue, but the statistical assumptions pertain to the (statistical) reliability of inference. This is because when any of the statistical assumptions are invalid for data Z₀, inferences based on the estimated model are often unreliable, because the nominal and actual error probabilities are likely to be different. The surest way to lead an inference astray is to apply a 0.05 significance test when the actual type I error is closer to 1.0; see Spanos and McGuirk (2001).
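The divergence between nominal and actual error probabilities can be made concrete with a simulation (a generic illustration, not taken from Spanos and McGuirk, 2001): a t-test of a zero mean, whose nominal 5% level presumes IID data, applied to positively autocorrelated AR(1) data under a true null.

```python
import numpy as np

def t_test_rejects(x):
    """Two-sided test of H0: mean = 0 at a nominal 5% level, assuming IID data."""
    n = len(x)
    t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    return abs(t) > 1.96              # large-sample 5% critical value

def actual_size(rho, n=200, reps=2000, seed=0):
    """Actual type I error of the nominal 5% test when the data are AR(1)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = rho * x[t - 1] + e[t]   # H0 is true: E[x_t] = 0
        rejections += t_test_rejects(x)
    return rejections / reps

print(actual_size(0.0))   # close to the nominal 0.05
print(actual_size(0.9))   # far above the nominal level
```

With strong positive autocorrelation the variance of the sample mean is many times larger than the IID formula assumes, so the 'nominal 5%' test rejects a true null most of the time; this is the sense in which invalid statistical premises make the reported error probabilities fictitious.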
The crucial problem in econometric modeling is that foisting the substantive information onto the data by estimating the structural model M_φ(z) directly is invariably an injudicious strategy, because statistical specification errors are likely to undermine the prospect of reliably evaluating the relevant errors for the primary inferences. When modeling with observational data, the estimated structural model is often both statistically and substantively inadequate, and one has no way to delineate the two: is the theory wrong, or are the (implicit) inductive premises invalid for data Z₀? To avert this impenetrable quandary, the modeling framework in Figure 1 distinguishes, ab initio, between statistical and substantive information, and then allows for bridging the gap between them by a sequence of interconnecting models which enable one to delineate and probe for the potential errors at different stages of modeling. From the theory side, the substantive information is initially encapsulated by a theory model and then modified into a structural one, M_φ(z), to render it estimable with data Z₀. From the data side, the statistical information is distilled by a statistical model M_θ(z) whose parameterization is chosen with a view to rendering M_φ(z) a reparametrization/restriction thereof. Distinguishing between substantive and statistical assumptions is not as straightforward as it might seem at first sight. The problem can be seen in Ireland (2004), where the assumptions invoked: (1) all structural parameters are constant over time, (2) total factor productivity is driving the system, (4) log output, consumption, and capital are trend-stationary, (5) labor is stationary, (6) labor-augmented technological progress follows a linear trend which influences the other variables identically, (7) the observable variables follow a VAR(1) process, (8) the errors are NIID, constitute a mixture of substantive and statistical assumptions; see Juselius and Franchi (2007).
The initial separation depends on having a clear-cut distinction between a structural model Mφ(z) and a statistical model Mθ(z), where the former is viewed as an estimable form of a theory model (hence, built on substantive information) in view of the available data Z₀, and the latter as a purely probabilistic construal whose structure depends solely on the statistical information contained in the data Z₀ := (zₜ, t = 1, 2, ..., n); see Spanos (1986). The latter is accomplished by viewing the statistical model as a particular parameterization of a generic vector stochastic process {Zₜ, t ∈ N} whose probabilistic structure is chosen so as to render data Z₀ a 'truly typical realization' of this process. The particular parameterization of {Zₜ, t ∈ N} is selected so as to enable one to embed the structural model in its context.
Table 1 - Normal Vector Autoregressive (VAR(1)) Model

Statistical GM: Zₜ = a₀ + A₁Zₜ₋₁ + uₜ, t ∈ N.

Example. In the case where the process {Zₜ, t ∈ N} is Normal, Markov and Stationary, one can show that it can be parameterized in the form of the VAR(1) model, as specified in Table 1; see Spanos (1995).
However, depending on the structural model in question, one could choose another parameterization of the same process, represented by the Dynamic Linear Regression model whose statistical GM, for Zₜ := (Xₜ, yₜ), takes the form: yₜ = β₀ + B₀Xₜ + B₁yₜ₋₁ + B₂Xₜ₋₁ + εₜ, and its parameters ψ := (β₀, B₀, B₁, B₂, V) constitute a reparameterization of θ := (a₀, A₁, Ω) in the sense that ψ = H(θ); see Spanos (1986). It turns out that the sequence of models, theory, structural (estimable) and statistical, provides a way to foreground as well as address problem (MP1) raised above. The separation is particularly crucial because statistical adequacy [the validity of the statistical assumptions vis-a-vis data Z₀] is a sufficient condition for the reliability of inference. Indeed, one cannot even pose questions of substantive adequacy [does the structural model capture the key features of the phenomenon of interest?] unless statistical adequacy has been secured first. This is because statistical adequacy ensures that the relevant error probabilities are ascertainable, since the actual ones closely approximate the nominal ones; see Spanos (2006a).
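To make the VAR(1) parameterization concrete, the sketch below (illustrative, not from the paper; the parameter values are assumed) simulates a bivariate process Zₜ = a₀ + A₁Zₜ₋₁ + uₜ with NIID errors and recovers (a₀, A₁) by least squares, which is how such a statistical model is typically quantified before its assumptions are probed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed "true" VAR(1) parameters: Z_t = a0 + A1 @ Z_{t-1} + u_t, u_t ~ NIID(0, I)
a0 = np.array([0.5, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])  # eigenvalues (0.6, 0.3) inside unit circle -> stationary

n = 5000
Z = np.zeros((n, 2))
for t in range(1, n):
    Z[t] = a0 + A1 @ Z[t - 1] + rng.standard_normal(2)

# Least-squares estimation: regress Z_t on [1, Z_{t-1}]
X = np.column_stack([np.ones(n - 1), Z[:-1]])  # regressors with intercept
Y = Z[1:]                                      # dependent observations
B, *_ = np.linalg.lstsq(X, Y, rcond=None)      # B has shape (3, 2)

a0_hat, A1_hat = B[0], B[1:].T                 # row 0 -> a0, remaining rows -> A1^T
print(np.round(a0_hat, 2))
print(np.round(A1_hat, 2))
```

With a long stationary sample the estimates land close to the assumed parameters; misspecification testing of the NIID and stationarity assumptions would then follow.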
The notion of statistical adequacy replaces goodness-of-fit as the criterion for assessing whether a fitted model 'accounts for the regularities in the data', addressing problem (MP2), and at the same time shedding ample light on the problems (C1)-(C3) misleadingly invoked by the PET advocates; see Spanos (2009a). Statistical adequacy is achieved by applying thorough misspecification testing to probe effectively the different ways the model assumptions (e.g. [1]-[5] in Table 1) might be misspecified; see Spanos (2000). Although the effectiveness of misspecification testing requires judicious use of graphical techniques, there is nothing subjective about the judgment needed to validate a statistical model; see Mayo and Spanos (2004).
The crucial issue here is that statistical adequacy is separate from any issues pertaining to the realisticness or the substantive adequacy of the structural model in question. In particular, statistical misspecification cannot be fended off using locutions like: "All models are misspecified, to a greater or lesser extent, because they are by necessity mere approximations, and slight departures from assumptions will only lead to minor deviations from the optimal inferences." Such locutions are highly misleading because even seemingly minor misspecifications can yield major discrepancies between actual and nominal error probabilities; Spanos (2005). A statistically adequate model Mθ(z) provides a sound basis for appraising the relevant structural model Mφ(z), where the two are related via an implicit function G(φ, θ) = 0, where φ ∈ Φ and θ ∈ Θ denote the structural and statistical parameters, respectively. This provides a link between Mθ(z) and the phenomenon of interest via Mφ(z), invariably known as identification: does G(φ, θ) = 0 define φ uniquely in terms of θ? Often there are more statistical than structural parameters, and that enables one to test the overidentifying restrictions:

H₀: G(φ, θ) = 0 vs. H₁: G(φ, θ) ≠ 0. (1)

Rejection of the null provides evidence against the empirical adequacy of the structural model vis-a-vis data Z₀. This view of identification differs from the traditional textbook notion (see Kennedy, 2008) in so far as it requires that the underlying Mθ(z) (the reduced form) be validated vis-a-vis data Z₀ to secure the trustworthiness of the link between Mφ(z) and the phenomenon of interest; Spanos (1990).
Appraising the overidentifying restrictions in (1) requires one to go beyond statistical significance to assess the substantive significance in order to adequately address problem (MP3) above by circumventing the fallacies of acceptance and rejection. This comes in the form of a post-data evaluation of inference to determine the discrepancy from the null warranted by data Z₀ using severe testing reasoning; see Mayo and Spanos (2006), Spanos (2006b). Indeed, the modeling framework in Figure 1 can be used to address all three methodological/philosophical problems (MP1)-(MP3).
Viewed in the context of Figure 1, the PET perspective often ignores the right-hand side: the statistical analysis steps leading to a statistically adequate model. Quantifying the structural model Mφ(z) directly usually results in an estimated (or calibrated) model which is both statistically and substantively inadequate, but without any way to separate or eliminate the different sources of error arising at the different stages of modeling: theory, structural and statistical models. Hence, any inference based on such quantified structural models will be invariably misleading. As a methodology of learning from data, it does not live up to standards of scientific objectivity that require its theories to be thoroughly tested against data; see Hoover (2006), Spanos (2009a).
A crucial consequence of distinguishing between statistical and substantive information, ab initio, is that the framework in Figure 1 encourages the empirical discovery process. One does not need to have a full-blown structural model like the DSGE to begin the empirical modeling process, as the Cowles Commission approach would have us believe. One can begin with low-level theories (however vague) that identify certain potentially relevant variables Zₜ, and then use a statistically adequate model Mθ(z) to reliably constrain economic theorizing with a view to developing more adequate structural models for the phenomenon of interest. Without underestimating the difficulties associated with the empirical discovery process, this creates the common ground for reconciling the two perspectives.
The European CVAR perspective arguably ignores the left-hand side of Figure 1 by relying on some low-level theory to begin the modeling process. Once the data Z₀ have been chosen on the basis of a theory or theories, one can proceed to specify a statistical model, like a VAR (Table 1), in terms of the probabilistic structure of the underlying stochastic process {Zₜ, t ∈ N}. This enables one to carry out the statistical analysis without any reference to the structural model until a statistically adequate model is reached. At that stage one can proceed to impose data-induced restrictions, like the ones implied by cointegration, and attempt to relate the restricted model to certain low-level theories associated with the long-run steady-state and/or equilibrium-correction states; see Hendry (1995), Johansen (1996), Juselius (2006). This leaves the European CVAR perspective vulnerable to the charge that their use of substantive information is rather superficial because the data-induced restrictions are only tangentially connected to economic theory. In their defense, advocates of the European perspective are likely to offer a plethora of evidence that the PET strategy gives rise to structural models, like Ireland's (2004) DSGE model, which are invariably empirically incongruous; see Juselius and Franchi (2007), Hoover et al. (2008).
Can the Two Perspectives Be Reconciled?
Viewing both perspectives in the context of the modeling framework in Figure 1, the advocates of the European CVAR perspective need to go the extra mile to bridge the gap between theory and data by developing structural models beyond the ones associated with data-induced restrictions. On the other hand, the adherents to the PET perspective need to develop structural models that account for the statistical regularities in the data. Statistically adequate models can be used to give data a voice of its own, to reliably constrain economic theorizing, and, one hopes, help direct the search toward more adequate structural models.
Taking Ireland's (2004) DSGE model as an example, one needs to derive explicitly the implicit reduced form and state its probabilistic assumptions (analogous to assumptions [1]-[5] in Table 1) by viewing it as a statistical model: a parameterization of the probabilistic structure of the process {Zₜ, t ∈ N} underlying data Z₀. Thorough misspecification testing will determine whether the latter is statistically adequate or not. Based on past experience, it is highly unlikely that such a model will turn out to be statistically adequate; see Juselius and Franchi (2007), Hoover et al. (2008). This, by itself, provides empirical evidence against the structural model as it stands, and a respecification aiming to account for the statistical regularities in data Z₀ is called for.
It is important to stress that respecification in this context does not refer to the 'error-fixing' widely used in traditional textbook econometrics, but to postulating a more appropriate probabilistic structure for {Zₜ, t ∈ N} that would render data Z₀ a typical realization thereof. This is because the traditional 'error-fixing' strategies, such as error-autocorrelation correction and heteroskedasticity/autocorrelation-consistent standard errors (see Kennedy, 2008), often render statistical unreliability worse, not better; see Spanos and McGuirk (2001), Spanos (2006a).
Assuming one can find such a respecified statistical model, it can provide the basis for improving the original structural model using modifications that take into account the statistical regularities as described by the statistically adequate model. In a sense, the latter demarcates 'what there is to be explained' by potential structural models that aspire to be empirically adequate. This process might require several iterations before such a model is reached.
Conclusion
Real progress in learning from data about economic phenomena of interest can be expected when economic modelers face squarely the formidable difficulties in addressing all three methodological problems (MP1)-(MP3) mentioned above. The main message from the above discussion is that these challenging problems can be addressed in the context of the modeling framework shown schematically in Figure 1. The key is provided by recognizing that, although both substantive and statistical information play crucial roles in learning from data, their respective roles in empirical modeling need to be delineated and properly reconciled. The proposed reconciliation is achieved in the broader context of bridging the gap between theory and data using a sequence of interconnecting models (Figure 1). This framework creates common ground for a constructive dialogue between economic theorists and econometricians that could give rise to 'learning from data' about economic phenomena of interest.
What are the prospects that such a constructive dialogue will begin any time soon? Despite the gloomy picture painted above, I remain optimistic that the new generation of econometricians will eventually grow out of esteeming technical dexterity and begin to reflect on the serious methodological issues undermining the trustworthiness of the evidence produced by the prevailing econometric modeling practice. The primary motive for this change is likely to be that, as things stand, the prospect of econometric modeling losing its credibility as a serious scientific field vis-a-vis other scientists as well as policy makers looms large; see Spanos (2008).
The productivity limit of manufacturing blood cell therapy in scalable stirred bioreactors
Abstract Manufacture of red blood cells (RBCs) from progenitors has been proposed as a method to reduce reliance on donors. Such a process would need to be extremely efficient for economic viability, given a relatively low-value product and a high cell dose (2 × 10¹² cells). Therefore, the aim of these studies was to define the productivity of an industry-standard stirred-tank bioreactor and determine the engineering limitations of commercial red blood cell production. Cord blood derived CD34+ cells were cultured under erythroid differentiation conditions in a stirred micro-bioreactor (Ambr™). Enucleated cells of 80% purity could be created under optimal physical conditions: pH 7.5, 50% oxygen, without gas-sparging (which damaged cells) and with mechanical agitation (which directly increased enucleation). O₂ consumption was low (~5 × 10⁻⁸ μg/cell.h), theoretically enabling erythroblast densities in excess of 5 × 10⁸/ml in commercial bioreactors and sub-10 l/unit production volumes. The bioreactor process achieved a 24% and 42% reduction in media volume and culture time, respectively, relative to unoptimized flask processing. However, media exchange limited productivity to 1 unit of erythroblasts per 500 l of media. Systematic replacement of media constituents, as well as screening for inhibitory levels of ammonia, lactate and key cytokines, did not identify a reason for this limitation. We conclude that the properties of erythroblasts are such that the conventional constraints on cell manufacturing efficiency, such as mass transfer and metabolic demand, should not prevent high-intensity production; furthermore, this could be achieved in industry-standard equipment. However, identification and removal of an inhibitory mediator is required to enable these economies to be realized. Copyright © 2016 The Authors Journal of Tissue Engineering and Regenerative Medicine Published by John Wiley & Sons Ltd.
Introduction
Blood transfusions are one of the most common clinical interventions worldwide, with ~21 million donated blood components transfused each year in the USA alone. Increasing demand due to aging populations, challenges of adventitious agent screening, or the requirement for specific immuno-phenotypes has created a growing search for alternative sources to public donation. New uses for red blood cells (RBCs), such as targeted drug delivery, may increase this demand further (Bourgeaux et al., 2016). There is evidence that transfusion of homogeneously young RBCs may have clinical benefit by decreasing the transfusion frequency of chronically transfused patients (Bosman, 2013; Luten et al., 2008). One proposed solution to these issues is the manufacture of RBCs from stem or progenitor cells, potentially providing an unlimited supply of cells in an optimal age distribution (Zeuner et al., 2012).
Anucleate RBCs have successfully been produced in vitro from a variety of cell sources, including haematopoietic stem cells such as cord blood CD34+ cells, adult mobilised peripheral blood, and bone marrow CD34+ cells (Neildez-Nguyen et al., 2002; Giarratana et al., 2005; Miharada et al., 2006; Fujimi et al., 2008; Giarratana et al., 2011). Recently, approaches using human pluripotent cells, both induced and embryonic, have also been reported, although challenges with control of appropriate lineage and development of the adult phenotype remain (Qiu et al., 2008; Lu et al., 2008; Lapillonne et al., 2010; Dias et al., 2011; Chang et al., 2011; Kobari et al., 2012). Due to the exceptionally high numbers of erythroblast-stage cells required to be maintained in viable culture in any candidate production process, common late-stage manufacturing challenges exist irrespective of the initial cell source.
Challenges associated with the scale-up of any cell culture bioprocess include maintaining consistency, quality and quantity of the cell product whilst minimizing the cost of production (Rousseau et al., 2014; Timmins and Nielsen, 2009). This is particularly fraught in RBC production due to the requirement for relatively extreme process intensification whilst avoiding detrimental effects on cells, and where there is little understanding of the sensitivities of each stage of the progressively maturing erythroid phenotype to common bioprocess operations. In particular, robust erythroblast enucleation to produce reticulocytes and then fully mature RBCs has been problematic in vitro and the mechanisms remain to be fully elucidated (Kingsley et al., 2004; Lee et al., 2004). With respect to cost of production, RBCs are an example of a high-dose product where cost-of-goods reduction is a priority for commercial viability. It has been estimated that one unit of cultured RBCs would cost $8000-15,000 to produce using current processes, compared to $200-230 for one unit of donated blood (Zeuner et al., 2012). The primary reason for this high cost is the expensive media components required for in vitro differentiation and maturation, multiplied by large culture volumes. This has led to calls for research to identify and address the fundamental barriers to efficient production of erythroid cells (Rousseau et al., 2014).
Cost-effective production of RBCs will require high-density cell culture. Conventional culture densities are considered high at 1 × 10⁷ cells/ml, yet this would still require a 200-l final volume to produce a single unit of 2 × 10¹² cells. To achieve a final harvest of 2 × 10¹² cells in a 5-l volume will require a density of 4 × 10⁸ cells/ml. Neither of these volumes accounts for the production chain to reach the final cell number, or the production overage required for cell impurity or cell losses in downstream processing. Clearly there is a need to understand the productivity of RBC manufacture at scale, and the nature of the limitations, to enable the manufactured blood field to move forward. In order to address this, we have used a model system of differentiation of CB CD34+ cells to RBCs in a ml-scale stirred-tank bioreactor system.
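The volume arithmetic above can be checked directly (a trivial sketch; the dose and volume figures are taken from the text):

```python
# One transfusion unit of cultured RBCs (from the text)
dose_cells = 2e12

# Conventional "high" culture density
density_conventional = 1e7                     # cells/ml
volume_l = dose_cells / density_conventional / 1000
print(volume_l)                                # 200 l final volume per unit

# Density needed to fit one unit into a 5-l final volume
target_volume_ml = 5 * 1000
density_required = dose_cells / target_volume_ml
print(density_required)                        # 4e8 cells/ml
```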
It has previously been shown that CB CD34+ cells can proliferate and differentiate to erythroid cells in a scaled-down version of industry-standard production equipment, the stirred micro-bioreactor system Ambr™ (Glen et al., 2013; Hsu et al., 2012; Ratcliffe et al., 2012). In the present study, the intensification limits (bioreactor operation, gas transfer, media usage) of cells in such standard equipment were explored to determine current productivity and limiting mechanisms with respect to the key criteria of cost of goods (system volume, media volume per cell and process time per cell) and quality (enucleated cells). This is important to allow the field to take an informed approach to addressing the engineering and scientific challenges that need to be overcome to generate an economically viable product.
Materials and methods
Unless otherwise stated, reagents were purchased from Sigma-Aldrich (Dorset, UK).
Cell count and viability
Online cell counting and viability was measured using a Vi-Cell XR (Beckman Coulter, USA). Population doublings (PD) were calculated as PD = log₂(CN/CNᵢ), where CNᵢ = start cell number and CN = end cell number.
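The PD calculation is the standard log-ratio of cell counts; a minimal sketch (illustrative counts assumed):

```python
from math import log2

def population_doublings(cn_start: float, cn_end: float) -> float:
    """PD = log2(CN_end / CN_start): number of times the population doubled."""
    return log2(cn_end / cn_start)

# e.g. growth from 1e6 to 8e6 cells is three doublings
print(population_doublings(1e6, 8e6))
```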
Assessment of cell morphology
Cells (1-4 × 10 5 ) were centrifuged at 300 × g av for 6 min at RT, supernatant removed, resuspended in 200 μl of medium and centrifuged onto poly-lysine coated microscope slides (Sigma 3-16 PK centrifuge with a cytology rotor) at 60 × g av for 4 min at RT. Slides were left to air dry overnight, stained using Leishman's stain (VWR International, Radnor, PA, USA) and mounted with mounting medium and a glass coverslip. Slides were examined by bright field microscopy using an Eclipse Ti (Nikon, Tokyo, Japan) at 40× magnification.
High-performance liquid chromatography for haemoglobin expression
High-performance liquid chromatography (HPLC) globin chain separation was performed using a protocol modified from Lapillonne et al. (2010). Cells (10⁶) were centrifuged at 300 × g av for 6 min at RT, lysed in 50 μl water, and stored at -80°C. On thaw, samples were centrifuged at 13,000 × g av at 4°C for 10 min and the lysates collected. Supernatant (10 μl) was injected onto a 1.0 × 250 mm C4 column (Phenomenex, Macclesfield, UK) with a 42% to 56% linear gradient between mixtures of 0.1% trifluoroacetic acid in water (Buffer A) and 0.1% trifluoroacetic acid in acetonitrile (Buffer B) at a flow rate of 0.05 ml/min for 50 min (Dionex HPLC Ultimate 3000 system; Thermo Fisher Scientific, Camberley, UK). The column temperature was 50°C and the UV detector set at 220 nm.
O 2 consumption rate
Erythroblasts were taken at a series of time-points and O₂ consumption assessed using an O₂-sensitive phosphorescent probe mixed with cells at 1 × 10⁷/ml in a 96-well plate format as per the manufacturer's instructions (Cayman Chemical, Ann Arbor, MI, USA). A FLUOstar Omega plate reader (BMG Labtech, Ortenberg, Germany) recorded ratiometric time-resolved fluorescence (excitation = 380 ± 20 nm / emission = 650 ± 50 nm) and O₂ consumption was calculated based on a 0.9% solubility of O₂ in saline solution at 37°C under 1 atmosphere of pressure (6.7 mg O₂/l). Maximum supportable cell density in commercial scalable systems was calculated as: maximum cell density = kLa × (C* − C) / R, where kLa = reported mass transfer coefficient of the system (/h), C* = saturation O₂ concentration (6.7 mg/l), C = maintenance O₂ concentration (3.35 mg/l), and R = per-cell O₂ consumption rate.
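At steady state, oxygen supply kLa·(C* − C) must match demand (cell density × R), which gives the density formula above. A minimal check follows; the kLa value is an assumed illustrative figure, while C*, C and the safety-margin consumption rate (in μg/cell.h, consistent with the OUR figures quoted in the Results) are from the text:

```python
def max_cell_density(kla_per_h, c_star=6.7, c_maint=3.35, r_ug_per_cell_h=2.3e-7):
    """Maximum supportable cell density (cells/ml).

    kla_per_h: mass transfer coefficient (1/h); c_star, c_maint: saturation and
    maintenance dissolved O2 (mg/l, i.e. ug/ml); r_ug_per_cell_h: per-cell O2
    uptake including the 4-fold safety margin used in the text.
    """
    # ug O2 delivered per ml per hour, divided by ug consumed per cell per hour
    return kla_per_h * (c_star - c_maint) / r_ug_per_cell_h

# e.g. a stirred tank with kLa = 40/h (assumed value)
print(f"{max_cell_density(40):.2e} cells/ml")  # above the 5e8/ml target density
```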
System medium per cell volumetric productivity analysis
Erythroblasts were taken at day 7 of culture and volumetric productivity calculated for cultures seeded in fresh media at 3 × 10⁶/ml, 5 × 10⁶/ml, and 5 × 10⁶/ml with 30% of medium replaced after 5 h:

volumetric medium productivity (volume/cell) = media volume used / (I·e^(rt) − I),

where I = initial cell number, r = growth rate constant (h⁻¹), and t = time at which growth becomes inhibited. The uninhibited growth rate (r) was estimated from an exponential fit to the first 12 points of each high-resolution (0.75-h counts) growth curve; the time of growth inhibition (t) was determined as the point at which cell numbers deviated from the extrapolation of this uninhibited model.
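The productivity metric above can be sketched as follows (illustrative 1-ml cultures; the growth rate and inhibition times are the values reported in the Results):

```python
from math import exp

def media_volume_per_cell(media_volume_ml, initial_cells, r_per_h, t_inhibit_h):
    """Media used divided by net new cells produced while growth is uninhibited:
    volume/cell = V / (I*exp(r*t) - I)."""
    new_cells = initial_cells * (exp(r_per_h * t_inhibit_h) - 1)
    return media_volume_ml / new_cells

# 1 ml seeded at 3e6/ml vs 5e6/ml, r = 0.05/h, inhibition at 15.2 h and 12.4 h
low = media_volume_per_cell(1.0, 3e6, 0.05, 15.2)
high = media_volume_per_cell(1.0, 5e6, 0.05, 12.4)
print(low, high)  # higher seeding density uses less media per cell produced
```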
Media exhaustion studies
Erythroblasts were taken at day 7 of culture, centrifuged at 300 × g av for 6 min at RT, and resuspended in fresh culture medium at 3 × 10⁶/ml. Medium and cells were sampled hourly. Controls were cultured without intervention; experimental supplemented concentrations at 10 h were 2250 mg/l glucose, 292 mg/l glutamine, 1.5% AB serum, 5 ng/ml stem cell factor, 0.5 ng/ml interleukin-3 and 1.5 U/ml erythropoietin, alone or in combination as specified in the results. Amino acids (MEM Amino Acids 50× solution, M5550), vitamins (BME Vitamins 100× solution, B6891) and phosphate (sodium phosphate monobasic, S5011) were supplemented at initial concentrations. Ammonium hydroxide and lactic acid were added to erythroblast cultures at 3 × 10⁵/ml to assess the effect on cell growth (1.3 mM and 8 mM ammonia; 5 mM and 28 mM lactate). Metabolites and nutrients were measured (or verified) using a Cedex Bio HT Bioprocess Analyser (Roche, Switzerland).
Statistics and calculations
Statistical comparisons and design-of-experiments layouts were produced using Minitab™ software. ANOVA was used to establish P-values, and Tukey's test where pairwise comparisons are stated. A minimum of n = 3 was used to power statistical comparisons. Growth rate in the presence of inhibitors was calculated from an exponential fit to a six-point data series over 18 h. Growth response to supplements was calculated by the rate of deviation from extrapolated uninhibited exponential growth. Where percent enucleation is reported, it is reported so as to coincide with the peak system proliferation, avoiding the misleadingly high percentage enucleation figures that occur as cell numbers decline.
Results

Erythroblast bioreactor compatibility and cell density intensification limitations
Three cell-type specific attributes, in combination with the mass transfer characteristics of a bioreactor, determine the cell density that can be supported in a culture system: tolerance to bioreactor operation (and therefore achievable mass transfer), required dissolved O₂ level, and O₂ uptake rate (OUR). Given the importance of culture intensification to RBC manufacture, each of these was determined for erythroblast culture.
Tolerance of erythroblast culture to bioreactor agitation and gassing

Mechanical agitation and gas sparging of a cell culture improve mass transfer and therefore O₂ availability to cells. However, the consequent mechanical stress can reduce cell viability or alter phenotype; in the case of erythroid lineage cells, impeller tip speeds of >210 mm/s have been reported to be damaging (Chisti, 2001), and gassing can damage cells during bubble rupture. Further, gas damage can be exacerbated by mechanical agitation due to bubble break-up and an increased bubble-to-cell surface interface (Chisti, 2000). To test these operational factors, stir speeds of 300 revolutions/min (RPM; 157 mm/s) and 450 RPM (236 mm/s), in combination with O₂ delivery via sparging through the medium or the reactor headspace, were investigated for effects on cell proliferation and erythroblast maturation.
Sparged and stirred bioreactors substantially reduced erythroblast proliferation relative to static culture. This effect was increased at higher tip speed, with static culture total PDs (TPD) of 15.3 decreasing to 9.9 and 6.0 at 300 and 450 RPM, respectively (p ≤ 0.05). In the absence of sparging, cell proliferation in the bioreactor was improved, but still reduced relative to static culture (p ≤ 0.05). However, there was no significant difference between the different tip speeds (300 RPM, TPD = 12.0; 450 RPM, TPD = 11.9), or any measured reduction in viability, indicating that mechanical damage was unlikely to be the reason for this remaining proliferative deficit in nonsparged bioreactors (Figure 1A). Addition of the nonionic surfactant Pluronic F-68 (PF-68) was investigated to mitigate sparging-induced damage; PF-68 restored sparged bioreactor cell growth to the level of nonsparged controls, increasing the tolerable O₂ input rate and therefore the potential cell density (Chisti, 2000; Tharmalingam et al., 2008) (Figure 1A). Enucleated RBC production under each condition was evaluated by a flow cytometry assay of CD235a+/DRAQ5− cells (Figure 1B). Although protective of growth, PF-68 had a negative impact on the percentage of enucleated cells at the end of the process. This negative effect persisted when PF-68 was removed from the cultures at Day 7 (nonsparged control enucleation = 68%, sparged + PF-68 = 43%, sparged + PF-68 until Day 7 = 44%; p ≤ 0.05). In the absence of sparging, a higher tip speed generated substantially more enucleated product (Figure 1C). Transfer of cells from static culture to stirred culture after 19 days resulted in a rapid increase in enucleated cells, demonstrating that this was a direct effect of stirring on enucleation (Figure 1D).

Figure 1. Erythroid cell proliferation and differentiation is affected by bioreactor operational factors that determine system mass transfer. Mechanical agitation, gas sparging and cell-protective PF-68 were tested for effect on growth and maturation. (A) Growth curves show gas sparging (S) with stirring (300 or 450 RPM) greatly reduced cell proliferation; stirring exacerbated the negative effect of sparging but was not detrimental alone (NS). PF-68 supplementation to sparged bioreactors (S, P) protected proliferation from mechanical damage and was equivalent to nonsparged bioreactors (NS). (B) Example flow cytometry plot of CD235a vs. the nuclear stain DRAQ5 shows clear identification of the enucleated population (box). (C) In a nonsparged system, higher mechanical agitation supported a higher enucleation rate after 18 days. (D) Mechanical agitation was shown to have a direct effect on enucleation by transfer of cells from static to bioreactor culture after 19 days; the parallel curves indicate this accelerated enucleation is not associated with increased enucleated cell fragility.
Effect of dissolved O₂ and pH level on erythroblast culture
The second erythroblast attribute necessary to determine maximum potential cell density is the dissolved O₂ concentration. Both O₂ and pH are reported to affect erythroid differentiation (Endo et al., 1994; McAdams et al., 1998; Sarakul et al., 2013); a matrix of pH and O₂ conditions was investigated in the bioreactor system to determine the relative magnitude and independence of their effects.
Lower dissolved O₂ greatly increased the percentage of enucleated cells (Figure 2A-C). At 25% O₂ there were 78% enucleated cells, significantly higher than the 37% enucleated cells observed at 100% O₂ (p ≤ 0.01). pH did not appear to be a significant factor affecting enucleation; however, pairwise comparison showed the difference between pH 7.3 and 7.5 to be close to significance (p = 0.14); this is in agreement with the advantage of elevated pH reported previously and our observation of the persistence of non-CD235a-expressing cells at pH 7.3 (data not shown). A rise in the percentage of enucleated cells occurred with increased pH at intermediate O₂ levels, indicating that sensitivity to the pH effect may be greater if dissolved O₂ is not optimized (Figure 2D). pH and O₂ had no significant effect on total cell proliferation or time to maximum product yield, with the TPD ranging from 12.0 to 12.6 in all cultures and the maximum product yield achieved between 17 and 20 days.
Comparison of the bioreactor produced cells to a static culture system
The established bioreactor process (pH 7.5/50% O₂/450 RPM/nonsparged) was compared to the control static culture system. After 21 days in culture, a large number of mature enucleated cells were observed in both systems, with a similar appearance to the adult donor RBC control (Figure 3A). The mature RBCs cultured in vitro were also similar in size to adult RBCs (static = 8.8 μm, bioreactor = 8.3 μm, adult donor control RBC = 8.5 μm; Figure 3B). The percentage of enucleated cells was higher in bioreactor cultures (78 ± 4%) compared to static (54 ± 4%; p ≤ 0.05; Figure 3C), illustrating that increased homogeneity of the enucleated cell product is achieved in the bioreactor system. Analysis of haemoglobin expression showed broad equivalence between static and bioreactor systems, and comparability to other reports from cord cells (Jin et al., 2014), including significant expression of β-globins (Figure 3D). The approximately 3 TPD deficit in proliferation in bioreactor culture relative to static culture was confirmed as previously observed (Figure 3E).

Figure 2. (A) 25% and 50% O₂ form a statistically distinct group from higher O₂ levels (p ≥ 0.05). pH is not a statistically significant factor (pairwise comparison indicates the difference between pH 7.3 and 7.5 is close to significance, p = 0.14). (B) Cell morphology clearly shows higher enucleation levels at lower O₂ (Day 19 cells cytocentrifuged, stained with Leishman's dye, observed with a Nikon Eclipse Ti microscope with a 40× objective). (C) The level of enucleation is higher throughout the culture process at low O₂, not just at final harvest. (D) An interaction chart for pH and O₂ suggests a rise in percent enucleation with increased pH may be more significant when O₂ is at an intermediate level.
Determining the specific O₂ uptake rate of erythroblasts
The maximum supportable cell density is determined by the rate of O₂ transfer into the medium in the established bioreactor process relative to the cells' OUR (Xing et al., 2009). Cell OUR was monitored throughout the CD34+ to RBC differentiation process. Maximal OUR occurred at Day 6 in both static and bioreactor culture (static = 5.10 × 10⁻⁸ μg O₂/cell.h and bioreactor = 6.34 × 10⁻⁸ μg O₂/cell.h; Figure 4). After this point the OUR of cells in the bioreactor declined, reaching 1.69 × 10⁻⁸ μg O₂/cell.h by Day 19. Cells in static culture had a more variable OUR following Day 6, but this still decreased to 9.11 × 10⁻⁹ μg O₂/cell.h by Day 19. The known mass transfer characteristics of commercial-scale culture systems (Junker, 2004; Klockner et al., 2013; Mikola et al., 2007; Nienow et al., 2013) allow calculation of the density of erythroblasts supportable in the absence of other culture limitations, and the compatibility of those systems with the constraints on bioreactor operation to increase mass transfer identified above (Table 1). Calculations are based on a consumption rate of 2.3 × 10⁻⁷ μg O₂/cell.h to allow a significant (4-fold) safety margin, and indicate that cell densities in excess of 5 × 10⁸/ml (the target density to allow a sub-10 l system volume per unit of 2 × 10¹² cells) should be supportable in various commercially available bioreactors.
Erythroblast medium volumetric productivity limit
Given that O2 availability was not the primary bioreactor limitation at current culture densities, the culture medium utilisation of the system was assessed. Erythroblasts from Day 6 were placed into fresh medium in bioreactors at different densities (3 × 10⁶/ml, 5 × 10⁶/ml) and with an alternate media exchange strategy (5 × 10⁶/ml with 30% exchange after 5 h) to construct high-resolution growth curves. Exponential growth models of the first 9 h (12 data points) were all equivalent for growth rate (0.05 1/h) and an excellent fit (R² > 96% in all cases), indicating no significant impact of initial cell density or partial media exchange on growth rate (Figure 5A). Deviation of the data from the extrapolated model identified when growth inhibition occurred: 15.2 h (3 × 10⁶/ml), 12.4 h (5 × 10⁶/ml), 16.1 h (5 × 10⁶/ml with 30% media change after 5 h; Figure 5B). The medium replacement rate per cell produced required to keep erythroblasts in uninhibited growth was strategy dependent, suggesting increased productivity from higher-density culture (Table 2); this bioreactor protocol would require a lower media volume per unit produced (495 l/unit) compared to the original static laboratory protocol (662 l/unit; Table 2). Further, the bioreactor protocol's maintenance of a ~13.9 h cell doubling time means it will require only 58% of the manufacturing facility time relative to the control static process (~24 h doubling) for any given output, with substantial cost implications.
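As a quick check of the quoted figures, the doubling time implied by the fitted growth rate, and the facility-time ratio against the static process, follow directly; this sketch uses only numbers stated in the text.

```python
import math

mu = 0.05                         # 1/h, fitted exponential growth rate (from text)
t_double = math.log(2) / mu       # implied doubling time, h
facility_ratio = t_double / 24.0  # vs. the ~24 h doubling of the static process

print(f"doubling time ~ {t_double:.1f} h")                # ~13.9 h, as quoted
print(f"facility time ~ {facility_ratio:.0%} of static")  # ~58%, as quoted
```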
Screening of factors limiting medium volumetric productivity
Five hundred litres of media per unit of RBCs is still at least an order of magnitude short of economic levels of intensification. Inhibition of cell growth by depletion of nutrients was tested by supplementation strategies covering key media component groups including glucose, glutamine, serum, cytokines (EPO, SCF and IL-3), amino acids, vitamins, and phosphate. However, this had no effect on the point at which growth inhibition occurred (Figure 6A, B). Further, only a low proportion of available glucose was depleted over the uninhibited growth period (Figure 6D); other key nutrients including iron, glutamine, and glutamate also showed negligible consumption rates over the period prior to growth inhibition (data not shown).
The alternative to medium depletion is production of an inhibitory factor such as lactate or ammonia (Hassell et al., 1991); addition of exogenous supplements of either significantly inhibited growth rate in a linear fashion (p ≤ 0.05; Figure 6C). The effect of each factor was dependent on the level of the other, with high lactate levels reducing the inhibitory effect of ammonia. However, to cause the observed inhibition of growth, ammonia/lactate combinations in excess of 4 mM/15 mM respectively would be necessary; accumulated concentrations of endogenously produced ammonia (ND, i.e. <0.3 mM) and lactate (~6 mM) at the point of growth inhibition were much lower (Figure 6D). Additionally, after a brief initial higher period, the molar ratio of lactate produced to glucose used remained constant (Figure 6D). Finally, cultures were screened for the production of potentially inhibitory cytokines; TGF-β, interferon-γ and tumour necrosis factor-α are prime candidates reported to inhibit erythroid growth; IL-1-β, IL-2, IL-4, IL-6 and IL-10 were also measured as potential feedback influences. TGF-β1 was the only factor secreted at a relatively high (ng/ml) level (Figure 6E). Dosing of exogenous TGF-β1 did decrease specific cell proliferative rate, but only by 9% at 10 ng/ml, a higher dose level and lower inhibition than that observed in culture (Figure 6F). Although a substantive effect on proliferative rate was not observed, TGF-β1 did accelerate erythroblast maturation: 1 ng/ml resulted in a faster increase in CD235a expression, earlier enucleation, and 30% reduced total proliferative capacity of cells, suggesting the cytokine may be responsible for the lower proliferation/higher enucleation in the bioreactor. However, the cell-specific production rate of TGF-β1 was equivalent in the static and the bioreactor system, showing a rapid decline in both systems over the first 5 h, after which it remained relatively stable (Figure 6G).
Of further note, the TGF-β1 was in its inactive form (bound to latency-associated peptide) in both the static and the bioreactor culture system (active form ≤ limit of detection of 1 pg/ml, i.e. ≤0.1% of total).
Discussion
RBCs as a manufactured product will not become economically viable unless fundamental barriers to cell culture efficiency are identified and addressed. The work here has shown that the barriers conventionally associated with high intensity cell production are not the primary limitations for the field; on the contrary, erythroblast metabolic characteristics indicate that gas mass transfer requirements, nutrient use and metabolite resistance will allow high intensity production in current industry standard bioreactor systems. Further, certain system attributes, such as mechanical stress, can be advantageously controlled to increase product purity. This understanding is necessary to inform future research that will progress the manufactured RBC field. Any adoption of non-industry-standard bioreactors, or new bioreactor design, should be based upon the specific requirements of the intensified process. Defining production limits in current commercial bioreactor systems is a key starting point; such systems lower the risks and barriers to entry for product developers due to regulatory and industrial experience.
Most cell cultures are limited in absolute density by O2 transfer into the system, and this will determine the minimum volumetric footprint for the manufacturing bioreactor. The low specific OUR of the erythroblasts is at least an order of magnitude beneath those reported for common cell lines (Ruffieux et al., 1998; Goudar et al., 2011). Even given the operational constraints on actively gassing and agitating the culture media, this enables potentially very high intensity production. The frequency with which media needs to be exchanged to maintain uninhibited exponential growth is therefore the primary economic constraint. This does not necessarily force a large volume for the manufacturing bioreactor, but determines the total volume of medium used in a given production run. Allowing cells to drop significantly beneath uninhibited exponential growth is grossly time, and consequently cost, inefficient due to the compounding nature of cell doubling. The observed uninhibited growth rate potential is encouraging; a 13-h erythroblast doubling time enables a 4-order of magnitude increase in cell number in a week. However, the calculated rate of media exchange required to achieve this, with many minimally depleted factors wasted and common metabolites beneath toxic levels, is economically prohibitive. A depleted medium factor or a secreted inhibitor could exhibit the same growth-limiting behaviour observed.
Figure 5. Volumetric productivity of the system is dependent on media exchange strategy. (A) Cells were cultured starting at 3 × 10⁶/ml, 5 × 10⁶/ml, and 5 × 10⁶/ml including a 30% volume exchange after 5 h. Cells initially proliferated at a constant and equivalent rate under all conditions, after which growth became inhibited. (B) The initial deviation of the cell numbers from the extrapolated exponential growth is approximately linear (R² > 95%), and can be used to approximate the time point at which growth became inhibited.
However, we have stronger evidence for the latter given the range of supplementation strategies that do not promote further cell growth. Further, the maintenance of a constant ratio of glucose consumption to lactate production suggests this is not a metabolic limitation; such limits would be likely to disrupt the ratio (Zagari et al., 2013). TGF-β1 was present at high levels and, as previously reported (Buscemi et al., 2011), accelerated erythroblast maturation in a manner similar to that observed in the bioreactor when exogenously dosed into static culture. The equivalent concentration and inactivity of the endogenous cytokine in both culture systems initially suggested it was an unlikely candidate for either growth rate inhibition or total reduced proliferation in the bioreactor. However, mechanical forces as low as 40 pN can transiently activate TGF-β1 from its latent form; it is therefore reasonably probable that there is a bioreactor-specific effect whilst stirring is applied, causing accelerated maturation (Buscemi et al., 2011). Alternatively, or additionally, mechanical forces have been reported to have direct integrin-mediated signalling effects that can influence cell maturation or inhibitory factor potency (Schwartz, 2010). Although this could not explain the inhibition of proliferative rate (given the lack of substantive effect of TGF-β1 dosing into the bioreactor on proliferation rate), other unidentified inhibitory mediators are likely to be secreted. Mechanical agitation has been reported to increase cytokine release and signalling in a number of other cell types, so there is evidence that such factors could be present at higher levels, or more potent, in a stirred bioreactor (Kurazumi et al., 2011).
Figure 6. Depletion of medium factors or production of common metabolites and cytokines are not responsible for volumetric productivity limits. (A) As previously, the deviation of cell growth from the initial exponential rate can be plotted as exponential model residuals vs. time. Supplementation after 10 h with amino acids, vitamins, or amino acids, vitamins and phosphate does not change the point at which growth becomes inhibited. (B) A wider range of supplementation strategies was tested, including combinations of cytokines, serum, glucose and glutamine. The percent reduction in cells at 24 h compared to that predicted by the exponential model for each strategy is shown, indicating no support of additional cell growth relative to control for any supplementation strategy. (C) Ammonia and lactate both inhibit cell growth (p ≤ 0.05). An increase in lactate concentration reduces the inhibitory effect of ammonia at high levels of the latter. (D) Lactate accumulates linearly with increased cell·time. However, at the point of growth inhibition (red dashed line) the level is not inhibitory with reference to (C). Further, glucose and lactate specific rates do not show any notable change as growth becomes inhibited. (E) A screen of cytokines present in media after cell growth inhibition indicated TGF-β1 as a primary candidate for feedback growth inhibition. (F) TGF-β1 is shown to be slightly inhibitory to erythroblast cell growth, with a maximum 9% reduction in specific growth rate over 40 h of culture at 10 ng/ml TGF-β1. (G) TGF-β1 is produced at the same specific rate in static and bioreactor cultures.
A further limit to RBC production in vitro is the red cell yield per starting progenitor cell; the nature of the limit is either availability or cost of the required starting cells. The contribution of the starting cells to the cost of a final RBC product depends on the proliferative capacity of the cells during differentiation: every order of magnitude in cell expansion (approximately 3.3 population doublings) achieved between starting cells and final product reduces the requirement for (and hence the impact of the cost of) the starting cells by an order of magnitude on a per-product basis. Conversely, the impact on cost of the final product for production of a given cell phenotype becomes exponentially larger as the cells proliferate towards terminal differentiation, i.e. 2 × 10¹² terminally mature orthochromatic erythroblasts are required to make each unit of enucleated blood, but only ~2 × 10⁸ cells of the progenitor phenotype from ~14 PDs earlier in the process. This is important as differentiating cells have a changing profile of metabolism and other attributes that impact manufacturing productivity cost; in the case of red cells the potential to intensify would be anticipated to increase as the cells mature. The different approaches currently taken to overcome the availability limitation of primary cells (such as UCB, pluripotent, adult stem cell, or engineered progenitor sources) will have different production costs that will be a function of the cost of input cells and the subsequent proliferative capacity and intensification profile during differentiation; very recent progress to address both adult (vs. embryonic) maturation (Fujita et al., 2016) and yield (Giani et al., 2016) from renewable sources such as pluripotent cells has been promising.
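The doubling arithmetic in this paragraph can be made explicit; this sketch only restates figures given in the text.

```python
import math

final_cells = 2e12   # orthochromatic erythroblasts per unit of blood (from text)
start_cells = 2e8    # progenitor-phenotype cells required (from text)

expansion = final_cells / start_cells   # 1e4, i.e. 4 orders of magnitude
pds = math.log2(expansion)              # population doublings needed
pds_per_decade = math.log2(10)          # ~3.3 PDs per order of magnitude

print(f"{expansion:.0e}-fold expansion = {pds:.1f} PDs "
      f"({pds_per_decade:.2f} PDs per order of magnitude)")
```

log2(10⁴) ≈ 13.3 doublings, in line with the ~14 PDs quoted once rounding and culture losses are allowed for.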
Our work has focused on erythroblast intensification because it will be a key determinant of process cost and practicality irrespective of the progenitor starting cell population due to both the exceptionally high number of these cells required in culture per unit of product and their proliferative capacity (Mercier Ythier, 2015). The data discussed here are therefore limiting and relevant for any candidate red cell manufacture process.
We conclude that there are no conventional barriers (shear stress sensitivity, O2 demand, or metabolic demand) that would prevent established bioreactor systems from producing blood at productivities under 100 l/unit, and possibly significantly higher. Further, the effect of combined control of pH, oxygen, and mechanical agitation will greatly increase the efficiency of final product harvest; in particular mechanical agitation, by rapidly increasing the proportion of enucleated cells, will enable peak enucleation to be engineered closer to peak culture system proliferation. This is absolutely key to reducing wastage of earlier enucleating cells, and to preventing challenging downstream processing of a low-purity enucleated product. However, the sensitivity of the cells to the bioprocess conditions adds risk and complexity as well as opportunity: mechanical stress may simultaneously increase enucleation whilst reducing total proliferative capacity, and conventional biologics production strategies such as the addition of cell membrane protective agents appear to improve proliferation but reduce enucleation (presumably because membrane mechanics are critical for enucleation). To realize the potential efficiencies of production at suitably low risk, process scaling and intensification must be characterized for effects on all key elements of cell quality, and effort must be focused on identifying and mitigating the factor(s) that inhibit growth rate (and hence media efficiency).
Key points
• Enucleated red cells can be produced to high purity in industry standard stirred tank bioreactors at 500 l per unit of cells
• Mass transfer and common metabolites are not primary limitations, indicating potential for substantially higher efficiency
Conflict of interest disclosures
No authors have any conflict of interest.
Relative Humidity Sensor Based on Double Sagnac Interferometers and Vernier Effect
A double Sagnac interferometer (DSI) vernier-effect sensor for relative humidity (RH) is proposed and experimentally implemented, in which two single Sagnac interferometers (SI1 and SI2) are cascaded in parallel to construct the sensor. Owing to the approximately equal free spectral ranges (FSRs) of SI1 and SI2, the proposed sensor can produce fundamental and harmonic vernier effects. Theoretical analysis shows that the RH sensitivity of the sensor is related only to the lengths of the polarization-maintaining fibers (PMFs). The experimental results show that, in the range of 30∼80%RH, the RH sensitivity of the spectral envelope can reach −0.48 nm/%RH, 60 times higher than that measured with a single Sagnac interferometer sensor (8 pm/%RH). The experimental results are consistent with the theoretical analysis. The proposed sensor has the advantages of high sensitivity, good repeatability, simple fabrication, and compact size, and it has potential applications in biopharmaceuticals, environmental monitoring, food processing, and microbial sensing.
I. INTRODUCTION
Relative humidity (RH) refers to the ratio of the actual water content of the air to the theoretical maximum water content; its measurement and monitoring are particularly important in daily life and industrial production, for example in the spread of COVID-19, the preservation time of agricultural and sideline products, and the service life of precision instruments [1], [2], [3], [4]. RH sensors have developed rapidly over the past twenty years, but conventional RH sensors are mainly based on electrical conduction technology, which suffers from bulky size and difficult signal processing, making it hard to couple such sensors to automatic control systems. Optical fiber sensors can make up for this defect because they offer high-temperature resistance, a large dynamic range, strong adaptability, and immunity to electromagnetic interference [5]. Therefore, various fiber RH sensors with different structures have been developed and reported. Among them, interferometric fiber-optic sensors have developed fastest in the past three years. There are four common interferometric sensors: Mach-Zehnder interferometers (MZIs) [6], [7], [8], Fabry-Perot interferometers (FPIs) [9], [10], [11], Sagnac interferometers (SIs) [12], [13], [14], and Michelson interferometers (MIs) [15], [16]. In 2022, Xin Ding's group [8] proposed a balloon-like fiber interferometer with a GO nanomaterial coating for humidity detection, with a sensitivity of 0.449 nm/%RH in the measurement range below 50%RH; the sensitivity is modest and the sensor structure is difficult to manufacture.
In 2022, Hailin Chen and colleagues [10] proposed a parallel optical fiber FPI vernier-effect sensor for simultaneous high-sensitivity measurement of RH and temperature, with sensitivities of up to −11.388 nm/%RH and 18.436 nm/°C, respectively. In 2021, Yuanfang Zhao's group [12] proposed a sucrose concentration sensor utilizing a fiber Sagnac interferometer with no-core fiber (SI-NCF) based on the vernier effect. In these vernier-effect-based sensors, the spectrum forms a large envelope, and the sensitivity can be amplified by tracking the movement of the envelope.
In our previous research, we demonstrated fiber sensor configurations of cascaded hybrid-type fiber interferometers [17], [18], [19]. The vernier effect of the hybrid interferometer significantly improved temperature measurement, so it was used to fabricate temperature sensors based on cascaded interferometers, and high temperature sensitivity was obtained. In those works, the hybrid-type fiber interferometer consisting of two different interferometers was used for temperature sensing or RH demonstrations. In this paper, a DSI for RH measurement based on the vernier effect is proposed and demonstrated for the first time. The sensitivity of the RH sensor is improved by the vernier effect generated between the two Sagnac interferometers with different lengths of PMF. The proposed sensor has the advantages of stable performance, simple production, and high repeatability.
II. SENSOR FABRICATION AND PRINCIPLE
In optical fiber sensing, the cascade of two interferometers can produce periodic vernier interference fringes, which change sharply with the physical parameters (cavity length, refractive index, fringe finesse) of the cascaded interferometer. Because the interference fringes change periodically, sensing can be realized by demodulating the envelope of the interference fringes. The schematic diagram of the proposed vernier-effect RH sensor with a DSI cascaded in parallel is shown in Fig. 1; it is composed of two sections of PMF, two polarization controllers (PCs), two three-port OCs, and a four-port OC.
According to coupled-mode theory, the transmission characteristic matrix of an OC is determined by its coupling ratio k. When the incident light enters the PMF, there is a certain angle between the PMF and the ordinary single-mode fiber (SMF). Let the angle between the PMF and the X axis of the fixed experimental coordinate system be θ1; this defines the coordinate rotation matrix. The transmission matrix of the PMF is determined by the phase φ = πΔnL_PMF/λ, where L_PMF is the length of the PMF and Δn is the effective refractive index difference between the fast and slow axes of the PMF. Because the phase difference introduced by propagating through the PMF clockwise and counterclockwise is the same, the transmission matrix of the reverse light through the PMF is identical.
Assume the incident light field has components E_x and E_y along the x and y axes, respectively.
The light entering OC1 splits into clockwise and counterclockwise fields that traverse the Sagnac loop and return to ports 4 and 3 of OC1, where the two beams recombine. When k1 = k2 = k3 = 0.5, the transmission function of the transmission spectrum reduces to formula (11). From formula (11), the DSI's transmission spectrum is governed by φ1 = πΔn1·L_PMF1/λ and φ2 = πΔn2·L_PMF2/λ; as RH increases, Δn changes accordingly, which results in a change of the output transmission spectrum.
The final interference spectrum is a superposition of the two individual SIs; FSR1 and FSR2 denote the FSRs of SI1 (formed by cavity 1) and SI2 (formed by cavity 2), respectively, and are given by formula (12). As shown in Fig. 1, the two interferometers are in parallel, and the FSR of the envelope of the output waveform is given by formula (13). By controlling the lengths of the PMFs in the cascaded DSI, the optical path lengths of the two SIs are made different, producing the fundamental vernier effect. Gomes et al. introduced the concept of a harmonic vernier effect to surpass the limits of the traditional vernier effect [20]. Compared to the traditional vernier effect, the harmonic vernier effect allows a considerable sensitivity improvement and more flexible control of the sensitivity magnification factor. Assuming all other conditions are the same in the DSI, the optical path length of one SI is set to L2 + iL1, where i is the order of harmonic generation. The FSR of the envelope for the harmonic vernier effect is given by formula (14). It follows from formulas (13) and (14) that the envelope FSR is determined by L1 and L2.
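Numerically, the standard Sagnac FSR expression FSR = λ²/(Δn·L_PMF) together with the parallel-cascade envelope relation FSR_envelope = FSR1·FSR2/|FSR1 − FSR2| (the fundamental, i = 0, case of formula (13)) gives the scale of the magnification. The sketch below uses the PMF lengths and birefringence quoted later in the experiments; the exact result depends on the wavelength assumed, so small deviations from the quoted theoretical value of 4.16 nm are expected.

```python
# Illustrative FSR/envelope calculation for the fundamental vernier effect.
# Lengths and birefringence are from the text; the wavelength is an assumption.
lam = 1550e-9           # m, operating wavelength (assumed band centre)
dn = 3.85e-4            # PMF birefringence (from the text)
L1, L2 = 15.05, 13.62   # m, PMF lengths for the fundamental vernier effect

fsr1 = lam**2 / (dn * L1)                   # FSR of SI1
fsr2 = lam**2 / (dn * L2)                   # FSR of SI2
fsr_env = fsr1 * fsr2 / abs(fsr1 - fsr2)    # envelope FSR, fundamental vernier

print(f"FSR1 = {fsr1 * 1e9:.2f} nm, FSR2 = {fsr2 * 1e9:.2f} nm, "
      f"envelope = {fsr_env * 1e9:.2f} nm")
```

The envelope comes out roughly an order of magnitude larger than either individual FSR, which is the source of the sensitivity amplification.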
Compared with a single SI sensor, the envelope shift of the cascaded configuration is magnified by a factor M_i. The sign of the denominator FSR1 − (i + 1)FSR2 determines the direction in which the wavelength moves: when FSR1 − (i + 1)FSR2 is positive, the slope of the sensitivity curve of FSR2 with the vernier effect is constant; when it is negative, the slope of the sensitivity curve changes.
III. EXPERIMENT AND DISCUSSION
The RH sensing measurement system is depicted in Fig. 2. A programmable constant temperature and humidity test chamber (QING SHENG, temi880, China) is used for relative humidity measurement. An amplified spontaneous emission (ASE) source acts as the broadband light source (BBS), emitting light in the wavelength range from 1530 to 1650 nm. The light is transmitted to the sensor through the optical fiber coupler. The spectrum of the sensor is collected by an OSA (Yokogawa, AQ6370D) with the resolution set at 0.02 nm. When SI2 is tested alone for RH, the parallel SI1 is removed from coupler 2 and coupler 3. The splitting ratios of coupler 1, coupler 2, and coupler 3 are all 50:50. When SI1 and SI2 are operated in parallel for the RH experiment, because both SI1 and SI2 are sensitive to RH, the reference unit SI1 cannot be put into the test chamber if the vernier effect is to be realized and the RH sensitivity improved.
First, we put SI2 into the test chamber to ensure sufficient contact between the sensor and the humid air, and study the shift of the SI's spectrum in response to different RH values. During sensor calibration, we keep the temperature around 25 °C and the polarization state of the sensor stable and unchanged, and then change the RH value by adjusting the test chamber.
During the test, the length of the PMF is 7.05 m and the birefringence of the PMF is 3.85 × 10⁻⁴; according to formula (12), the FSR of SI2 is deduced to be 0.78 nm. The measured interference spectra of the SI at different RH values are depicted in Fig. 3(a). It can be noted that the resonance spectrum blueshifts with increasing RH. To evaluate the RH sensitivity of the SI sensor, one typical dip at 1546.8 nm is tracked, and the dip values at different RH are given in Fig. 3(b). When the RH increases from 30% to 80%, the wavelength shift is 0.4 nm. This result shows that the SI sensor can effectively discriminate different RH values, and the sensitivity of the single SI sensor is 8 pm/%RH.
To improve the single SI's sensitivity, the vernier effect is introduced. Fig. 4 exhibits the transmission spectrum of SI1 and SI2 cascaded in parallel with the fundamental vernier effect, where the lengths of PMF1 and PMF2 are 15.05 m and 13.62 m, respectively. Because the FSRs of the two interferometers are similar, the transmission spectrum of the cascaded configuration has a superimposed envelope. The extinction ratio (ER) of the output spectrum is about 20 dB. In addition, the FSR of the DSI sensor based on the fundamental vernier effect is 4.5 nm near the wavelength of 1550 nm, 5.76 times that of the single SI sensor; the theoretical value of the FSR near 1550 nm is 4.16 nm according to formula (13), showing good agreement between theory and experiment. The enlarged FSR shows that this method is promising for improving the sensor's sensitivity.
As depicted in formula (14), the reference SI1 has a length of L2 + iL1 and the sensing SI2 has a fixed length of L2, where i denotes the order of the harmonic and L1 is the detuning length. The first case, in Fig. 4, corresponds to the fundamental vernier effect, where i = 0. Fig. 5(a)-(c) shows three cases corresponding to the first three harmonics, where L1 is 6.6 m and L2 = 0.2 + iL1 (i = 1, 2, 3), respectively. Fig. 5(a) is the simulated spectrum of the first-order harmonic vernier effect: the upper envelope FSR is 30 nm and the inner envelope FSR is 60 nm, twice the upper envelope FSR. Fig. 5(b) is the simulated spectrum of the second-order harmonic vernier effect: the upper envelope FSR is 30 nm and the inner envelope FSR is 90 nm, three times the upper envelope FSR. Fig. 5(c) is the simulated spectrum of the third-order harmonic vernier effect. It can be seen that the higher the harmonic order, the more difficult it is to observe a complete harmonic spectral period, and hence the harder it is to detect the wavelength drift. The simulation results are consistent with the theoretical values, and the upper envelope is independent of the harmonic order. The inner envelope produced by the harmonic vernier effect is i + 1 times that of the upper envelope produced by the fundamental vernier effect, which means that when the RH changes slightly, the spectrum produced by the harmonic vernier effect has a larger wavelength shift, resulting in a significant improvement in RH sensitivity.
Firstly, the DSI sensor's humidity sensitivity is experimentally evaluated using the setup shown in Fig. 2: the environment temperature is controlled at 25 °C, the lengths of PMF1 and PMF2 are set at 15.05 m and 13.62 m, and PMF2 is placed in the chamber. The transmission spectrum of the proposed sensor based on the fundamental vernier effect at different RH values ranging from 30% to 80% is shown in Fig. 6(a). It can be seen from Fig. 6 that when the RH increases from 30% to 80%, the envelope dips blueshift by 16 nm, because the FSR of SI2 is smaller than that of SI1 [20]. The relationship between the envelope shift and the RH is shown in Fig. 6(b): over 30%∼80%RH the dip wavelength displacement is 16 nm, giving an RH sensitivity of −0.32 nm/%RH, 40 times higher than a single SI sensor without the vernier effect. As can be seen from Fig. 6(b), the linear correlation coefficient is 0.9998, showing good linearity. Secondly, the output performance of the DSI sensor based on the first harmonic vernier effect is further studied. Fig. 7 shows the variation of dip wavelength drift with RH: when RH ranges from 30% to 80%, the dip wavelength shifts toward shorter wavelengths, with a blueshift of 24 nm, giving an RH sensitivity of −0.48 nm/%RH, 60 times higher than a single SI sensor without the vernier effect. In summary, using the vernier effect, a cascaded DSI sensor achieves sensitivity amplification in a convenient and economical way.
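The quoted sensitivities and magnification factors follow directly from the envelope shifts over the 50%RH span; all numbers below come from the text.

```python
rh_span = 80 - 30            # %RH measurement span
single_si = 0.008            # nm/%RH, single SI sensor sensitivity (from text)

fund_shift = 16.0            # nm, envelope blueshift, fundamental vernier effect
harm_shift = 24.0            # nm, envelope blueshift, 1st-order harmonic effect

s_fund = fund_shift / rh_span    # 0.32 nm/%RH
s_harm = harm_shift / rh_span    # 0.48 nm/%RH

print(f"fundamental: {s_fund:.2f} nm/%RH ({s_fund / single_si:.0f}x single SI)")
print(f"harmonic:    {s_harm:.2f} nm/%RH ({s_harm / single_si:.0f}x single SI)")
```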
Finally, the performance of the proposed sensor is compared with existing sensors in Table I. As can be seen from Table I, two design schemes exist. One is to detect the RH directly by combining an optical interferometer and a fiber Bragg grating (FBG): Rui et al. [7] proposed an FBG cascaded with a balloon-like up-tapered MZI sensing structure to realize the RH test, with a sensitivity of 6.1 pm/%RH, and Yong et al. [9] cascaded an FBG and an FPI to measure RH with a sensitivity of 22.07 pm/%RH, which is relatively low. To improve the sensitivity, other schemes measure RH by using optical interferometer structures together with humidity-sensitive materials: Tong et al. [6] proposed a compact MZI and FPI structure assisted by GQDs-PVA to detect RH, with an RH sensitivity of −0.132 nm/%RH. In summary, compared with other sensors, the proposed sensor offers simple manufacture, high sensitivity, good cost performance, and competitiveness.
IV. CONCLUSION
In conclusion, we propose and demonstrate a cascaded DSI sensor based on the vernier effect for RH detection. We optimize the transmission spectrum of the SI by adjusting the PMF lengths. The experimental results show that when the PMF1 and PMF2 lengths are 15.05 m and 13.62 m, respectively, the sensitivity of the DSI sensor based on the fundamental vernier effect is −0.32 nm/%RH, 40 times higher than that of a single SI sensor. By tuning the lengths of PMF1 and PMF2 to 7.08 m and 15.08 m, the first-order harmonic vernier effect is generated, and the RH sensitivity of the DSI sensor reaches −0.48 nm/%RH. This method yields a novel RH sensor with the benefits of easy manufacture, good stability, and cost-effectiveness, with potential applications in RH detection in biochemical engineering, food processing, and industrial fields.
Measuring Indirect Radiation-Induced Perfusion Change in Fed Vasculature Using Dynamic Contrast CT
Recent functional lung imaging studies have presented evidence of an "indirect effect" on perfusion damage, where regions that are unirradiated or lowly irradiated but are supplied by highly irradiated regions exhibit perfusion damage post-radiation therapy (RT). The purpose of this work was to investigate this effect using a contrast-enhanced dynamic CT protocol to measure perfusion change in five novel swine subjects. A cohort of five Wisconsin Miniature Swine (WMS) were given a research course of 60 Gy in five fractions delivered locally to a vessel in the lung using an Accuray Radixact tomotherapy system with Synchrony motion tracking to increase delivery accuracy. Imaging was performed prior to delivering RT and 3 months post-RT to yield a 28–36 frame image series showing contrast flowing in and out of the vasculature. Using MIM software, contours were placed in six vessels on each animal to yield a contrast flow curve for each vessel. The contours were placed as follows: one at the point of max dose, one low-irradiated (5–20 Gy) branching from the max dose vessel, one low-irradiated (5–20 Gy) not branching from the max dose vessel, one unirradiated (<5 Gy) branching from the max dose vessel, one unirradiated (<5 Gy) not branching from the max dose vessel, and one in the contralateral lung. Seven measurements (baseline-to-baseline time and difference, slope up and down, max rise and value, and area under the curve) were acquired for each vessel's contrast flow curve in each subject. Paired Student t-tests showed statistically significant (p < 0.05) reductions in the area under the curve in the max dose contour and both fed contours, indicating an overall reduction in contrast in these regions. Additionally, there were statistically significant reductions observed when comparing pre- and post-RT in slope up and down in the max dose, low-dose fed, and no-dose fed contours but not the low-dose not-fed, no-dose not-fed, or contralateral contours.
These findings suggest an indirect damage effect where irradiation of the vasculature causes a reduction in perfusion in irradiated regions as well as regions fed by the irradiated vasculature.
Introduction
Lung cancer is one of the most commonly diagnosed cancers and is currently responsible for the highest percentage of cancer-related deaths. In 2022, 236,760 new cases of lung cancer and 130,180 deaths were projected (21.6% of total cancer-related deaths and the most deaths of any individual cancer) [1]. Of those diagnosed, between 37% and 57% of patients will receive radiation therapy (RT) as part of their treatment, depending on the stage of their cancer [1].
While external beam RT is an effective non-surgical therapy, there are risks of patients developing radiation-induced lung injuries (RILI). Approximately 33.5% of patients will develop RILI [2], which can cause significant respiratory symptoms in lung cancer patients with compromised baseline lung function. These toxicities can be introduced by damaging the lung pleura, vasculature, and airways. Exact mechanisms that cause this damage are currently not well understood, but two common resulting RILI that can develop are pneumonitis and fibrosis; these RILI can significantly decrease patient quality of life and can even be fatal [3].
One potential strategy to avoid RILI is to use functional avoidance RT treatment planning. The goal of functional avoidance radiation therapy is to mitigate radiation-induced normal tissue toxicities by selectively avoiding high-functioning regions of the lung. In current clinical trials, functional avoidance was achieved while maintaining adequate tumor coverage and satisfying clinical dose constraints to major organs [4][5][6][7]. In recent years, multiple studies have looked at using functional metrics to create risk assessments and have found that the inclusion of these metrics increased the predictive power of the toxicity predictions and radiation response output from these models [5,[8][9][10][11][12][13][14].
One challenge with performing functional avoidance is creating comprehensive predictive models for RILI. Previous works have investigated the radiation dose-response of pulmonary function and developed predictive models assessing local lung function [4,6,[15][16][17][18][19][20][21]. Lung function has been assessed using imaging modalities such as SPECT, which can provide both ventilation and perfusion information, or through the use of aerosols in CT, PET, and MRI [22]. However, for these models to become integrated into clinical practice, they must be executable in the current clinical workflow. For clinical integration, CT is an exceptional option due to its high spatial resolution and routine use in treatment planning. Some groups have begun developing predictive models derived from CT, but to date, functional-avoidance studies using CT have focused primarily on assessing only ventilation, neglecting perfusion and therefore providing an incomplete assessment of lung damage [7,18,23].
Our previous work quantified radiation-induced dose-response, correlating 4DCT-derived Hounsfield Unit (HU) changes with changes in contrast from dynamic contrast-enhanced CT [24]. However, that work focused on direct radiation damage without quantifying observations of indirect damage to regions supplied by damaged vasculature, because the irradiated region was in the base of the lung. Recent functional lung imaging work has presented evidence of an "indirect effect" in which regions that are unirradiated or irradiated with low dose (below dose thresholds known to cause damage) but supplied by highly irradiated vasculature are damaged post-RT [25,26]. However, this evidence was not commented on in those works. Additionally, that work was done using SPECT, which produces low-resolution images prone to artifacts and attenuation, making the nuances of the indirect effect difficult to localize [23,[25][26][27]. Wallat et al. presented the first CT-derived evidence of an indirect effect, focusing on post-RT ventilation change due to damage to the airways [28]. Vicente et al. later developed a functional avoidance technique that incorporated this indirect ventilation effect and found that the average predicted ventilation preservation was 14.5% higher than with conventional RT techniques and 11.5% higher than with functional avoidance algorithms that consider only local damage [29]. These results suggest that the indirect effect is crucial to consider, but they are all based on ventilation models only. As described in Ireland et al., perfusion change is also an important functional metric that has been shown to be more predictive of patient functional decline and outcome [30]. Therefore, analysis of the indirect perfusion change is needed, particularly using a method with adequate resolution to identify the source of the damage.
We hypothesize functional avoidance studies using 4DCT can model both ventilation and perfusion metrics to provide a comprehensive assessment of lung damage. In this study, we explore CT-based perfusion changes following RT, and how those changes extend beyond directly irradiated lung tissue. By establishing the first perfusion-based radiation response model incorporating indirect effects, post-RT predictive power can be improved to create superior functional avoidance treatment plans, leading to improvements in preserved function and patient outcomes.
Novel Swine Model
Swine are well suited for biomedical studies pertaining to the development/validation of diagnostic and therapeutic technologies. The genetic proximity of swine to humans combined with their overwhelming anatomical, physiological, and pathophysiological similarities make swine the ideal model for preclinical studies of novel technologies [31][32][33][34]. Additionally, swine experience expedited growth compared to humans. This feature can be leveraged in biomedical research because it results in expedited development of disease, healing, and toxicity, allowing for expedited data acquisition and development of novel technologies [33,34].
Historically, most studies looking at radiation response in the lung have used conventional swine breeds [33,35]. Due to rapid growth into adulthood, such swine can reach from 550 to over 650 pounds, which makes CT imaging difficult. To combat this challenge, many studies use young swine to execute the experiments of interest. However, these swine mimic a human child and experience rapid rates of healing, development, and cell regeneration, which is not an accurate model of the typical lung cancer patient we would treat clinically [35]. This work used a genetically modified swine breed that poses numerous benefits.
The Wisconsin Miniature Swine (WMS) possess several characteristics that make them an ideal model. WMS were created by selective crossbreeding of several swine breeds such that their weight, size, and physiology are similar to humans and their body composition can be easily manipulated [35]. As they can be easily maintained at human size for any length of time, they will remain the same size from intervention to necropsy. In addition, we were able to select swine that had lung volumes that were within the range of typical human subjects for this work.
Swine Setup
Five WMS (14.4 ± 1.7 months old) were analyzed. The WMS were sedated to eliminate motion artifacts and mechanically ventilated to a consistent tidal volume of 1 L and a respiratory rate of 15 breaths per minute, matching the average tidal volume and respiratory rate of human subjects. The swine were imaged prior to treatment and 3 months following treatment. After the 3-month post-RT imaging session, the swine were euthanized and the lungs were extracted for future histopathology analysis. All details regarding animal care and drugs administered can be found in the Supplementary Materials. The animal care practices and all experimental procedures were approved by the University of Wisconsin Institutional Animal Care and Use Committee (IACUC). The drugs and methods of anesthesia and euthanasia were approved in compliance with American Veterinary Medical Association (AVMA) guidelines for anesthesia and euthanasia of swine. Both committees assured that all procedures were in compliance with the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines.
Swine Treatment
Each WMS received a research course of 60 Gy to 95% of the planning target volume (PTV) in five fractions, as approved by the IACUC. The PTV was centered on a vessel and airway in the right upper lobe of the subject, and the left lung was left unirradiated (max point dose < 5 Gy). The PTV location was chosen to allow irradiation of a vessel/airway pair as cranial as possible while keeping the max point dose in the contralateral lung below 5 Gy and limiting the dose to the heart to less than 1 mL receiving 10 Gy. The prescription of 60 Gy was chosen to match the typical prescription for a human lung SBRT patient in our clinic. Figure 1 shows a representative dose distribution delivered to the subjects. Treatments were delivered on the Radixact® linear accelerator with the Synchrony® motion tracking system (Accuray Incorporated, Sunnyvale, CA, USA) in order to maximize dose conformity and reduce the uncertainty of dose delivery due to respiratory motion. Radixact® is a helical tomotherapy radiation therapy delivery system capable of delivering conformal intensity-modulated radiation therapy (IMRT). It contains an intrafraction motion management system called Synchrony®, which has been adapted from CyberKnife Synchrony [36]. On the system, an X-ray tube and flat-panel kV imager are offset 90° from the megavoltage (MV) imager and beam. The kV imaging subsystem is used to periodically localize the target during treatment. For monitoring respiratory motion, light-emitting diodes (LEDs) were placed on the swine's chests and identified with a camera mounted to the treatment table to provide the phase of respiration. The target can then be localized without fiducials implanted near the target using a motion correlation model. Further details of the model are described in Schnarr et al. [36].
Treatment fractions were delivered following a standard clinical SBRT schedule, with a day between deliveries on weekdays and 2 days over the weekend. Subjects were mechanically ventilated at 15 breaths per minute during treatments.
Dynamic Contrast CT
All CT images were acquired on a Siemens SOMATOM Definition Edge CT scanner. Each swine underwent two imaging sessions (one session before receiving radiation and one 3 months post-RT). In each session, a contrast-enhanced dynamic 4DCT image was obtained.
The dynamic 4DCT images were acquired over the central 15 cm of the lung as 80 mL of iodine contrast (Omnipaque 300) was injected at a rate of 5 mL/s. The acquisition consisted of repeated scanning of the same volume at 1.5 s intervals until the contrast had washed out of the lung. In total, the dynamic 4DCT images contain between 28 and 36 frames. Acquisition began before contrast was injected to collect baseline images. After the acquisition of baseline, contrast injection began and acquisitions continued until the contrast had washed out of the lung vasculature. We believe this scanning protocol is a better indication of perfusion than the standard blood volume dual energy scan because it scans the same volume over a period of time as contrast flows in and out of the vasculature as opposed to capturing a snapshot at one time point of where the contrast is in the lung. An example of wash-in/wash-out kinetics captured by these scans is shown in Figure 2.
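As an illustrative sketch of how a wash-in/wash-out curve like the one in Figure 2 might be summarized, the snippet below finds when a vessel's mean-HU curve leaves and returns to baseline and its peak rise. This is a hypothetical Python helper, not the published pipeline: the function name, 3-frame baseline window, and 10 HU threshold are assumptions.

```python
# Hypothetical sketch; names, baseline window, and threshold are assumptions.
# Input: mean-HU samples of one vessel contour at the fixed 1.5 s frame interval.

FRAME_INTERVAL_S = 1.5  # repeated scans of the same volume every 1.5 s

def wash_in_out_metrics(hu, n_baseline=3, rise_threshold=10.0):
    """Return (baseline_to_baseline_time_s, max_rise_hu) for one contrast curve."""
    baseline = sum(hu[:n_baseline]) / n_baseline
    # frames where the curve is measurably above baseline (contrast present)
    above = [i for i, v in enumerate(hu) if v - baseline > rise_threshold]
    if not above:  # no measurable contrast flow (observed post-RT in one subject)
        return None, 0.0
    b2b_time = (above[-1] - above[0] + 1) * FRAME_INTERVAL_S
    max_rise = max(hu) - baseline
    return b2b_time, max_rise

curve = [50, 51, 50, 80, 140, 180, 160, 120, 90, 70, 55, 52]
print(wash_in_out_metrics(curve))
```

A real analysis would operate on contour-mean HU values propagated across all 28–36 frames rather than a hand-typed list.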
Regional Perfusion Analysis
Pre- and post-RT dynamic perfusion CTs were deformably registered to each other for all subjects using a B-spline deformable image registration algorithm to allow for longitudinal voxel-wise comparisons [37,38]. For each swine, six contours were analyzed, with seven measurements obtained using the pre- and post-RT contrast curves. Contours were placed on the pre-RT dynamic perfusion CT using the dose distribution in MIM Software (Cleveland, OH, USA). Contours were then deformably propagated to each frame of the dynamic perfusion CT in MIM and transferred to the previously registered post-RT scan. Figure 3 and Table 1 show and describe the locations of the measurements, and Figure 4 shows the measurements that were taken at each point. There was one swine for which the "No-Dose Fed" contour could not be analyzed due to a mistake in image acquisition: the appropriate field of view for the dynamic contrast scan was not selected, so there was not enough overlapping anatomy imaged pre- and post-RT to create a contour meeting that classification. All other contours were able to be created on this subject. Additionally, there was one swine for which the low-dose fed, no-dose fed, low-dose not-fed, and no-dose not-fed contours all showed no contrast flow post-RT. The max dose and contralateral contours, as well as other vasculature in the ipsilateral lung, did show flow, so we do not believe it was an error in acquisition. This subject is discussed further in the discussion; due to this effect, the baseline-to-baseline time value could not be calculated for this subject, as there was no clear starting or ending baseline. The other four swine had all contours analyzed. The percent change pre- to post-RT was then calculated using Equation (1):

percent change = (post-RT value − pre-RT value)/(pre-RT value) × 100 (1)

Paired two-tailed Student t-tests were used to compare the pre- and post-RT values of each measurement across the five swine.
Table 1. Names and descriptions of the six analyzed contours.
Max Dose: placed at the point of maximum dose.
Low-Dose Fed: low-irradiated (5–20 Gy), branching from the max dose vessel.
Low-Dose Not-Fed: low-irradiated (5–20 Gy), not branching from the max dose vessel.
No-Dose Fed: unirradiated (<5 Gy), branching from the max dose vessel.
No-Dose Not-Fed: unirradiated (<5 Gy), not branching from the max dose vessel.
Contralateral: placed in the contralateral lung.
Calculation of Slopes
As seen in Figure 2B, the contrast curves exhibit a region where the contrast is flowing into the vasculature and a region where it is flowing out. Each of these regions has a section that can be approximated as linear. Within these regions, the slopes were calculated using the change in HU from the start to the end of the linear region and the time that elapsed between those acquisitions (recall that acquisitions are acquired at a fixed time interval, so the elapsed time can be derived from the number of acquisitions between these points). Thus, the slope can be calculated as

slope = (HU_end − HU_start)/(t_end − t_start) (2)
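A minimal Python sketch of this slope calculation (a hypothetical helper, not the paper's code; it assumes the 1.5 s frame interval stated in the acquisition protocol):

```python
FRAME_INTERVAL_S = 1.5  # fixed acquisition interval of the dynamic series

def contrast_slope(hu, i_start, i_end):
    """Slope (HU/s) over an approximately linear region of the contrast curve:
    change in HU divided by the elapsed time between the two acquisitions."""
    return (hu[i_end] - hu[i_start]) / ((i_end - i_start) * FRAME_INTERVAL_S)

curve = [50, 52, 90, 130, 170, 180, 150, 110, 80, 60]
print(contrast_slope(curve, 1, 5))  # wash-in slope (positive)
print(contrast_slope(curve, 5, 9))  # wash-out slope (negative)
```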
Area under the Curve
The area under each contrast curve was calculated using the built-in trapz function in MATLAB 2020a (MathWorks, Natick, MA, USA). This function performs numerical integration via the trapezoidal method, which approximates the integral over an interval by breaking the area down into trapezoids with easily computable areas [39].
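A Python stand-in for this computation (the paper used MATLAB's built-in trapz; this re-implements the composite trapezoidal rule for illustration, with the curve values below being made-up numbers):

```python
def trapz(y, dx=1.0):
    """Composite trapezoidal rule over uniformly spaced samples,
    mirroring MATLAB's trapz(y) scaled by the sample spacing dx."""
    return sum((y[i] + y[i + 1]) / 2.0 * dx for i in range(len(y) - 1))

# area under a contrast curve sampled every 1.5 s (baseline subtracted):
curve = [0.0, 30.0, 80.0, 60.0, 20.0, 0.0]
print(trapz(curve, dx=1.5))
```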
Results
Imaging Results
Figure 5 shows the pre- and post-RT contrast curves for each of the six vessel locations of interest for a representative subject that highlights the differences seen in the curves post-RT. The percent changes in each of the measurements shown in Figure 4 are summarized in Table 2. The average percent change and standard deviation values are the average and standard deviation of the percent changes across the five (or four, in the case of the no-dose fed contour) swine. Each entry is in the form "Average (Standard Deviation)", where statistically significant values (at the alpha = 0.05 level) are denoted by a * and shown in red.
Table 2. Percent changes in each measurement for each of the 6 contours analyzed.
The main result of this work is highlighted in Figure 6. Statistically significant (p < 0.05) reductions in the area under the curve were observed in the max dose contour as well as in both fed contours. These reductions in area under the curve represent an overall reduction in contrast observed in the vessel and a change in the numerator of Equation (2). This response to radiation has been reported previously in irradiated vasculature [24], where it was observed that in vasculature irradiated above 25 Gy there appeared to be leakage of contrast from the vasculature into the non-vessel parenchyma. The observation of reduced post-RT area under the curve in fed contours but not in not-fed contours suggests an indirect damage effect on the low-irradiated or unirradiated tissues: leakage in the highly irradiated vessel prevents contrast (and, more importantly, blood) from reaching the branching vessels.
Statistically significant (p < 0.05) reductions in all seven metrics were observed in the max dose contours. In the low-dose fed contours, there were statistically significant reductions in max rise, slope up, and slope down. The no-dose fed contours also saw statistically significant reductions in slope up, slope down and baseline-to-baseline difference. No contours other than the max dose showed statistically significant changes in baseline-to-baseline time or max value. The reductions in slope up (into the vasculature) and down (out of the vasculature) with no changes in baseline-to-baseline time suggest that the contrast is flowing in and out of the vessel more slowly, further suggesting that the contrast leaked out of the vessel prior to it being able to reach these vessels.
The low-dose not-fed and no-dose not-fed contours did not see a statistically significant reduction in any slope metric, nor in the max rise, max value, area, or baseline-to-baseline time. The no-dose not-fed contour did see a statistically significant reduction in the baseline-to-baseline difference metric. However, we attribute this to a skewed reading from one of the subjects, which had particularly small vasculature available for this contour to be placed in and likely suffered from partial volume averaging (explained further in Section 4.3).
Without this flow of oxygenated blood, we can infer that perfusion is reduced in these regions as well, since the capillary network branches off these vessels. While this is the first study to our knowledge to report this indirect damage, it has been observed previously. Both Farr et al.'s and Thomas et al.'s work with SPECT-CT images of the lung post-RT also showed this effect. While not commented on directly in their work, it is apparent in their results, as shown in Figures 7 and 8 [25,26]. In both figures, it can be observed that the heat maps of SPECT perfusion show reductions post-RT in lowly irradiated areas inferior to/supplied by the spot of maximum dose. However, these reductions are not present, or are significantly less severe, in regions lowly irradiated but superior to/not supplied by the maximally dosed region. Furthermore, there are no significant reductions observed in the contralateral lungs. This work was done with SPECT, so the resolution is poor and it is impossible to isolate the feeding vasculature relationship as we are able to do with our methods. However, it does show evidence of an indirect effect matching the effect described in this work, which was derived via dynamic contrast CT. SPECT is currently considered the gold standard for perfusion imaging, so this confirmation of observations further supports that our CT-derived method is a potentially adequate surrogate for perfusion measures.
Figure 7. SPECT/CT of a patient with tumor in the right lung before radiotherapy (A), planning CT with dose to the gross tumor volume in color wash, SPECT-defined functional lung outlined in yellow (B), SPECT/CT 3 months post-RT (C). A dotted cyan oval indicates a region that received low dose but did not experience perfusion decline. A magenta oval is drawn in another region that received low dose but did experience perfusion decline post-RT and is fed by an irradiated region. Reprinted with permission [25].
Figure 8. Observed radiation dose-response on longitudinal perfusion SPECT/CT. (Upper row) Pre-treatment lung perfusion SPECT co-registered to planning CT and radiation isodose lines (rainbow overlay). (Lower row) Three-month post-treatment perfusion SPECT co-registered to planning CT and radiation isodose lines (rainbow overlay). SPECT window/level were normalized to out-of-field integral uptake. Regions within the treatment field show reductions in uptake that are correlated with radiation dose magnitude and spatial distribution. A dotted cyan oval indicates a region that received low dose but did not experience perfusion decline. A magenta oval is drawn in another region that received low dose but did experience perfusion decline post-RT and is fed by an irradiated region. Reprinted with permission [26].
Contralateral Lung
No metric saw statistically significant changes in the contralateral lung contours. Changes in max rise, max value, baseline-to-baseline time, and difference in baselines were all below 6%. The average changes in the slopes had larger magnitudes but were not statistically significant and were skewed by one subject who had a shorter baseline-to-baseline time, which thus caused the slope to increase (a change in the time variable from Equation (2)).
Use of a Novel Swine Model
The novelty of the swine model used in this work allows for more direct translation to human studies than previous animal models. Previous work has already established a strong correlation between WMS and human radiation-induced lung density changes [24]. The WMS at the size used in this work had lungs that matched adult human lungs, and were swine in their early adulthood (14.4 ± 1.7 months old). Swine reach sexual maturity at approximately 5 months of age, so a 14-month-old WMS is roughly equivalent to a human in their late twenties or very early thirties, which more accurately mimics a patient we would expect to treat. Using a conventional breed of swine, as previous studies have done, would have required swine approximately 3 months of age in order to match the size of human lungs; given that swine reach sexual maturity at 5 months of age, a 3-month-old swine is equivalent to a pre-pubescent human child of 6–8 years. A conventional swine at this age has a rate of development in which tissue remodeling and size changes are very rapid, so its ability to heal and its response to radiation damage (i.e., pathophysiology) would not mimic those of a human adult. The WMS allowed us to more closely model the pathophysiology observed in a human adult.
Additionally, using swine was ideal due to the ability to provide a more controlled experiment than would be possible in humans. It has been established previously that regional perfusion can vary with tidal volume [40]. Having the ability to fix the tidal volume of the swine's lungs at breath hold allowed for all image acquisitions and measurements to be consistent and helped minimize confounding variables in the measurements and isolate the change in perfusion.
Benefits of Dynamic Perfusion CT
The current "gold standard" of perfusion imaging is SPECT, such as the work done and displayed in Figures 7 and 8 [25,26]. However, SPECT imaging struggles with spatial and temporal resolution due to patient respiratory motion as well as the intrinsic resolution of the detectors. Performing measurements using CT and at breath hold as done in these subjects helps mitigate these concerns.
Previously reported CT-derived perfusion data have used pulmonary blood volume (PBV) scans. The novel contrast-enhanced CT protocol used in this work offers the added benefit of full kinetic analysis, which PBV scans do not provide, as well as an improvement in the amount of vasculature captured. A PBV scan only provides a snapshot in time at what is believed to be the moment when the contrast concentration in the vasculature is highest. Different vessels, however, reach this maximum concentration at different points in time due to the delay for contrast to flow into the smaller vessels. This effect is illustrated in Figure 9; notice the different colored contours placed in different vessels of the CT and the corresponding curves in the plot on the right. It is clear that no single timepoint adequately captures all vasculature. With this method, however, we perform repeated scanning and can thus analyze the kinetics of each vessel individually, providing a more comprehensive assessment of the lung.
Limitations of the Study
There are a few features that contributed toward some of the larger variations in measurements in some of the metrics analyzed and are limitations of the study conducted.
In addition to the features discussed below, a main limitation of this study is that the sample size is small. While results are convincing, this should be repeated in a larger population to reduce variation due to single-subject variability.
Partial Volume Effect and Registration Error
Naturally, the vessels that branch from the primary vessels become smaller in diameter as they continue to branch. For all subjects, the low- and no-dose vasculature (both fed and not-fed) were much smaller than the max dose vessel. This made the contours created in these vessels smaller, so fewer voxels were averaged in calculating the average HU. We did our best to place contours only in the lumen of the vessel, but during registration of these contours between frames as well as between time points, it is possible that some partial volume effect took place in which part of the vessel wall was contained in the contour, lowering the average HU of the contour and affecting the accuracy of some of the HU-based measurements (max value, max rise, slopes, and baseline-to-baseline difference). However, since the same vessel was chosen pre- and post-RT for a given subject, the area under the curve metric should be robust to this effect, and since the area under the curve results echoed those of the metrics sensitive to this effect, we remain confident that the overall trends seen in the animals are insensitive to this noise.
This effect can be seen moderately in Figure 5, where it can be observed that some curves are not smooth but rather more jagged. The partial volume effect particularly manifested in one subject where the second baseline value was recorded as lower than the initial baseline value in the no-dose fed contour. Physiologically, this cannot be explained as the original baseline values were taken prior to any contrast injection, and the injection would not cause a reduction in HU. Additionally, this was observed in the pre-RT scan in only one contour, so it cannot be attributed to any radiation effect. For this subject, we attribute this observation to partial volume effect and registration error of the small vasculature between frames of the dynamic 4DCT.
No Flow of Contrast in One Swine
One swine did not show any contrast flow during the post-RT scan in the low-dose fed and no-dose fed contours (they did show contrast flow through the max dose contour as well as contralateral contour). Thus, the HU values of the "curve" remained very close to baseline. This caused the max rise, area, and baseline-to-baseline difference values for that subject to be near 0 and the baseline-to-baseline time to be not computable, which contributed to larger variation in those measurements (especially no-dose fed, which had a lower sample size of 4 instead of 5). In that subject, the max dose contour's metrics also showed larger reductions than the other subjects (66% reduction in the max rise compared to the 41% average and 81% reduction in area compared to the 56% average). These results suggest that this subject saw a more severe response where more contrast leaked out of the highly irradiated vessels than the others, causing no measurable contrast to reach these branches. It is also worth noting that the contralateral lung's percent change in area under the curve for this subject was positive (38%), suggesting a compensatory effect in this subject.
If we were to exclude this subject from analysis, some of the values in Table 2 would change in the contours that saw no flow. The area under the curve metric, our most comprehensive metric, would change in the low-dose fed contour (−65 ± 24% to −58 ± 23%), no-dose fed contour (−55 ± 27% to −42 ± 3%), low-dose not-fed contour (−36 ± 54% to −18 ± 42%), and no-dose not-fed contour (−24 ± 39% to −7 ± 9%). Even with the exclusion of this subject, only the fed contours show statistically significant changes. It is unclear exactly why this particular swine showed this response where the others did not, and it would be useful to conduct further studies with more subjects to determine whether other subjects exhibit this response and, if so, what correlations between those subjects can be seen to predict this severe response.
Comment on "Not-Fed" Contours
The reductions reported in Table 2 show that, while not statistically significant, the average percent reductions in area under the curve in the not-fed contours were not 0. These contours also had wide variability in results. In addition to the partial volume effects being more prominent in these vessels' contours, we believe a variable inflammatory effect is being observed that affects these vessels. Multiple studies have shown that radiation can induce an inflammatory effect, but the severity, onset, and time span of this effect vary by patient [3]. Previous work has also shown that the 3-month timepoint in the swine is equivalent to somewhere between the 6- and 12-month response seen in a human [24]. This timepoint is known to have transient effects present, where inflammation may still be resolving in some subjects yet fully resolved in others. With inflammation, shifting of the vasculature can occur in the lung parenchyma, including some deformation and constriction of the smaller vasculature. Therefore, if a given subject experienced a more severe inflammatory effect, the vessels may experience a reduction in blood flow until the inflammation subsides. However, since this is variable by subject, some will show no change to this vasculature since the vessel itself was not damaged.
Overestimation of Perfusion Reductions
Due to features of the irradiation scheme chosen in this work, it is possible that the values reported in this study overestimate the changes in perfusion that would be experienced in a typical patient. The first feature of note is the fractionation scheme. In this work, 5 × 12 Gy was chosen, but there are other prescriptions that human lung SBRT patients could receive that may yield differing results due to the radiobiological changes that would occur from a change in fractionation. The second feature is the fact that we irradiated directly on a vessel. In general, vessels are not targeted, since it is the central lesion that is being treated. However, in many cases, vessels do end up receiving high doses in the resulting dose distributions. Additionally, current dose toxicity reports such as RTOG 0813 only report dose constraints and toxicity results for the great vessels [41], so vasculatures of the size irradiated in our work are not currently standard to consider when developing treatment plans. However, this work clearly shows there is a consequence to irradiating these vessels, particularly if those vessels feed other regions of the lung. The purpose of this work was to characterize the penalty of irradiating these smaller vasculatures, both those that feed other large regions and those that do not, in order to understand the consequences of each. This information could potentially aid decisions about which vasculature is irradiated, when such manipulation is possible. Therefore, while our results may be overestimations depending on the exact clinical scenario, they provide an upper bound for the potential damage that could be caused.
Clinical Impact
This work quantifies an anatomical response to radiation dose in an animal model that has been previously established as a surrogate for human response [24]. Additionally, we used a novel contrast-enhanced CT protocol that allowed for a full kinetic analysis of each vessel. This is an improvement on previous conventional pulmonary blood volume techniques, as those techniques only provide a snapshot in time, and different vasculatures will reach their maximum concentration of contrast at different points in time due to the time delay that occurs for contrast to flow into the smaller vessels. Other groups have demonstrated radiation-induced changes in perfusion using SPECT [16], and their results have suggested an indirect effect in which low-irradiated or unirradiated vasculature fed by highly irradiated vasculature experiences a reduction in perfusion [25,26]. However, these studies did not comment on this effect and were not performed with a method that had the spatial and temporal resolution to quantify or pinpoint its cause. Our work uses a novel CT-based technique that can isolate the reduction to the vasculature involved. Knowledge of this response and the damage that is caused to the fed regions of the lung could help guide treatment planning decisions to avoid major vasculature. Previous work using this animal model connected the changes observed in contrast to metrics derived on non-contrast 4DCT [24]. The ability to infer these changes from 4DCT would immensely aid translation to a clinical setting, since 4DCTs are already routinely collected for treatment planning and would not require the acquisition of additional scans. These results also present an opportunity to build models that predict the functional cost of irradiating major vessels and thereby allow for superior functional avoidance therapy.
After the 3-month post-RT scan, the swine lungs were extracted from the animal for future pathology studies. This work will provide further insight into the physiological response of these subjects and the damage done to the vasculature in each of the contours analyzed. Future work will also include a clinical trial analyzing the response of a larger sample of these novel swine, as well as an analysis of contrast-enhanced scanning in humans. This will enable faster development of predictive models that could be validated on existing human subject data from this trial. From there, clinical trials assessing the effectiveness of intervention mechanisms in human subjects may be initiated.
Conclusions
It has been previously established that radiation induces changes in pulmonary anatomy post-RT. However, it has not been fully established what the indirect effect is on anatomy fed by highly irradiated regions. This work measured a reduction in perfusion in irradiated vascular regions as well as in regions fed by the irradiated vasculature in five WMS. All work was done using a WMS model that has previously been established as a surrogate for analyzing radiation-induced changes in humans treated with SBRT. These measurements, combined with previous work, present a potential biomarker for analyzing functional changes in perfusion that can be derived from 4DCT, as opposed to requiring additional scans outside of clinical protocol. This would allow these metrics to be considered in functional avoidance therapy and could provide a significant benefit to patient outcome.

Acknowledgments: The authors would like to thank the following people: the students of the University of Wisconsin Veterinary school for their assistance with animal husbandry, Accuray for providing a Radixact system for research purposes and assisting with technical support on the system throughout the swine treatments, and Jessica Miller and Michael Lawless for their assistance in developing the Dynamic Contrast CT protocol.
Mapping and modeling the semantic space of math concepts
Mathematics is an underexplored domain of human cognition. While many studies have focused on subsets of math concepts such as numbers, fractions, or geometric shapes, few have ventured beyond these elementary domains. Here, we attempted to map out the full space of math concepts and to answer two specific questions: can distributed semantic models, such as GloVe, provide a satisfactory fit to human semantic judgments in mathematics? And how does this fit vary with education? We first analyzed all of the French and English Wikipedia pages with math contents, and used a semi-automatic procedure to extract the 1,000 most frequent math terms in both languages. In a second step, we collected extensive behavioral judgments of familiarity and semantic similarity between them. About half of the variance in human similarity judgments was explained by vector embeddings that attempt to capture latent semantic structures based on co-occurrence statistics. Participants' self-reported level of education modulated familiarity and similarity, allowing us to create a partial hierarchy among high-level math concepts. Our results converge on the proposal of a map of math space, organized as a database of math terms with information about their frequency, familiarity, grade of acquisition, and entanglement with other concepts.
Introduction
Mathematical cognition is a vast domain of human knowledge, essential to daily life as well as to scientific inquiry, and yet vastly underexplored compared to other domains of language or culture. Most cognitive studies tackle specific and narrow subdomains of elementary mathematics such as integers (Dehaene, 2011; Eger, 2016; Kutter, Bostroem, Elger, Mormann, & Nieder, 2018; Shepard, Kilpatric, & Cunningham, 1975), fractions (Behr, Lesh, Post, & Silver, 1983; Ni & Zhou, 2005; Siegler, Fazio, Bailey, & Zhou, 2013) or geometric shapes (Dillon, Huang, & Spelke, 2013; Izard, Pica, Spelke, & Dehaene, 2011; Sablé-Meyer, Ellis, Tenenbaum, & Dehaene, 2022), and only a few researchers have explored higher-level mathematical concepts in professional mathematicians (Amalric & Dehaene, 2016; Zeki, Romaya, Benincasa, & Atiyah, 2014). Without attempting an extensive review, it seems fair to say that between those two extremes lies a vast space of mathematical concepts that remains largely unexplored from the cognitive viewpoint, to such an extent that they are not even systematically listed. Although a few dictionaries of mathematical concepts are available (Clapham & Nicholson, 2009; Dictionnaire des mathématiques, 2019), they are often technical, aimed at experts, and therefore not easy to use as a lexical source for cognitive research.
Here, our goal is to take a first step toward a systematic study of the basic vocabulary of math cognition and how it varies with education, ranging from primary school to university-level concepts. Inspired by the THINGS initiative (Hebart et al., 2019), in which researchers from different labs collectively gathered a large database on object recognition, including behavioral similarity judgements, fMRI, MEG and EEG studies (in humans and non-human primates), as well as modeling by deep neural networks, we started by creating a dataset of 1000 mathematical words that could then be used for future studies using behavioral and brain imaging methods. Importantly, we used an unbiased computational method, GloVe (Pennington, Socher, & Manning, 2014), to analyze a large mathematical corpus and extract the most frequent words for mathematical concepts, with the goal of obtaining an objective picture of the space of math concepts.
From this first step, we obtained a lexicon of the most frequent math concepts, their frequency, their tentative age of acquisition, and a vector-based representation of their putative semantics, based on distributional co-occurrence statistics. We then aimed to provide a first test of the cognitive validity of these measures. To this aim, we collected ratings of semantic similarity, which are often used as a marker of the organization of mental representations. This idea dates back to the seminal work of Shepard and Chipman (1970). Using the principle of second-order isomorphism, which states that there is a relation between the similarities of the internal representations of two objects and the similarities of the corresponding external objects, one can construct psychological representations of the structure of a set of stimuli by collecting subjective similarity measures and analyzing them, for instance using multi-dimensional scaling (Shepard, 1980) or more sophisticated feature-reconstruction methods (Hebart, Zheng, Pereira, & Baker, 2020). This approach was first applied to numbers by Shepard et al. (1975), who presented pairs of numbers to participants in various notations (e.g. Arabic numerals, number words, dot patterns) and asked them to rate their conceptual similarity. They showed that the similarity ratings depended only on the judgment task, not on the notation, and that MDS enabled retrieving a conceptual space for the numbers, organized by interpretable semantic dimensions such as number magnitude and odd-even status.
Here, we extend this logic to math concepts. We asked people to provide similarity ratings between pairs of math words, and compared these similarities with those predicted by semantic embeddings learnt by the GloVe algorithm on a large math corpus. Although it may seem obvious that math concepts should behave similarly to other concepts, brain imaging indicates that math sentences activate a network of brain areas entirely different from classical language regions (Amalric & Dehaene, 2016, 2018, 2019), thus leaving open the possibility that a different logic or language may be needed to account for the mental organization of math concepts (Dehaene, Al Roumi, Lakretz, Planton, & Sablé-Meyer, 2022a).
A third goal of our study was to evaluate the impact of math education on such similarity ratings and their putative underlying vector representations. The mental representation of math concepts changes dramatically with education (Carey, 1988, 2009; Dehaene, 2011; Siegler & Opfer, 2003). Using brain imaging, Amalric and Dehaene (2016) showed a drastic enhancement of brain activity in a large-scale math-responsive network in professional mathematicians compared to other adults with more limited math education. Longitudinal, cross-sectional, and cross-cultural comparisons indicate that even the representation of simple integers exhibits massive change in the course of education, from counting to the number line (Halberda & Feigenson, 2008; Opfer & Siegler, 2007; Piazza, De Feo, Panzeri, & Dehaene, 2018; Piazza, Pica, Izard, Spelke, & Dehaene, 2013; Pica, Lemer, Izard, & Dehaene, 2004). Here, we tested the possibility that similarity ratings would also show a sensitivity to education and could therefore serve as a sensitive marker of the expansion of the conceptual space of mathematics.
In summary, we aimed to provide a 1000-word vocabulary of math concepts covering all levels and domains. Anticipating the results, we showed that similarity ratings collected during an online experiment are well captured by GloVe embeddings of the concepts of the vocabulary, and that the quality of the fit increases with participants' education. In addition, we showed that the spatial layout of the embeddings makes sense and can be used to make predictions about how humans organize their mental map of math concepts.
Creation of a vocabulary of 1000 words in French and English
We first created a math vocabulary of 1000 French words by manually reviewing the 39,345 most frequent words in the French Wikipedia math articles. Then, using GloVe (Pennington et al., 2014), we obtained vector embeddings for the words of this vocabulary. Three distinct embeddings were obtained from three different corpora: the above math corpus (math embedding); a non-math corpus consisting of all non-math pages of the French Wikipedia (non-math embedding); and their concatenation (global embedding) (Fig. 1A). Our logic was that many words have both math and non-math meanings. For a given word, we hypothesized that its math embedding would primarily carry information about its math meaning (e.g. "ring" in the context of commutative algebra), while its non-math embedding would carry information about its meaning in everyday language (e.g. "ring" as in engagement ring). We also estimated, for each word of the vocabulary, its school grade of acquisition (hereafter referred to as "word grade") by examining in which class the corresponding concept was introduced according to the national French curriculum. Finally, we used the above corpora to obtain estimates of each word's log frequency per million in the math and non-math corpora. A summary of the main characteristics of the vocabulary is provided in Table 1. The vocabulary was also translated into English, for which the same measures were obtained. The math vocabulary is available online from https://osf.io/dxg2w.
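As an illustration, a log-frequency-per-million measure of this kind can be computed from raw token counts as in the following sketch (the function name and the toy corpus are ours, not part of the study's pipeline):

```python
from collections import Counter
import math

def log_freq_per_million(tokens, words):
    """Log10 frequency per million tokens for each target word.

    `tokens` is a tokenized corpus (list of strings); `words` are the
    vocabulary entries to score. Words absent from the corpus get None.
    """
    counts = Counter(tokens)
    total = len(tokens)
    out = {}
    for w in words:
        c = counts[w]
        out[w] = math.log10(c / total * 1_000_000) if c else None
    return out

# Toy corpus: 24 tokens, of which 6 are "ring".
corpus = ("the ring of integers is a commutative ring " * 3).split()
print(log_freq_per_million(corpus, ["ring", "field"]))
```

In practice such counts would be taken over the full math and non-math Wikipedia corpora, yielding one frequency estimate per corpus for each vocabulary word.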
Behavioral data collection
We ran a massive online experiment (n = 1230 participants) to collect familiarity ratings for a subset of 429 words of the vocabulary and similarity ratings for 3756 pairs of these words (Fig. 1B). The selected words were nouns, numbers or symbols which have a clear and precise mathematical meaning (see Methods). Our goal was to probe whether (1) the familiarity ratings from humans with different levels of education could be predicted by our estimated word grade and frequency in the math corpus; and (2) the similarity of GloVe embeddings is a good predictor of human similarity ratings (Fig. 1C). All participants gave an estimate of their math education level, which ranged from primary school (n = 4) to PhD (n = 133) (see Methods). This experiment was also run in English for the English vocabulary (n = 174 participants).
Before anything else, participants answered a short survey and provided their last grade of education in mathematics and a self-assessment of their current math skills on a scale from 1 to 10. We found a moderate relation between these two variables, both in French (Spearman's rS(1228) = 0.54, p < .001) and in English (Spearman's rS(172) = 0.37, p < .001). For this reason, in the rest of this work, we used participants' education as an indicator of their math level.
Familiarity ratings are predicted by the estimated grade of acquisition
Each participant first rated their familiarity with 50 words drawn randomly from the 429 selected for the experiment. Ratings were provided on a discrete 9-level Likert scale from 0 (totally unknown concept) to 8 (fully mastered concept) (see Fig. 1B for the full calibration labels).
The results, shown in Fig. 2, support two conclusions: (1) for a given participant's math education level, familiarity ratings decrease as word grade increases; and (2) for a given word grade, familiarity ratings increase as participant education increases.
We confirmed those conclusions statistically by entering all of the participants' familiarity ratings into a rank-order linear regression with participant education, word grade, word log frequency per million (in the math corpus), and their interactions as independent variables (see Methods). We found that these variables predicted familiarity ratings (R² = 0.35, F(7, 61792) = 4673, p < .001). All variables and their two-way interactions were significant predictors (p < .001), and the three-way interaction was also significant (p = .02). Out of all the predictors, word grade had the largest effect (β = −1.01, t(61792) = −116.42, p < .001): as seen in Fig. 2, familiarity ratings decreased monotonically as the predicted word grade increased. This effect was modulated by a main effect of participant education (β = 0.67, t(61792) = 77.90, p < .001), indicating that familiarity ratings increased with education, and by an interaction of education and word grade (β = 0.32, t(61792) = 36.33, p < .001), revealing that the effect of word grade was steeper in less educated participants. As can be seen in Fig. 2, familiarity ratings became much flatter, although still increasing with education, when participants' education fell above the postulated word grade. For instance, the familiarity rating of words assumed to be learned in 11-12th grade (green curve in Fig. 2) reached a plateau around 7 once participants' education reached or exceeded college level.
Finally, word frequency also had an effect on familiarity ratings (β = 0.33, t(61792) = 41.81, p < .001), but it was smaller than that of participant education and word grade. This effect indicated that, over and above word grade, the more frequent a word was in the math corpus, the more familiar participants declared themselves to be with it. An interaction with word grade (β = 0.36, t(61792) = 43.53, p < .001) indicated that the effect of frequency on familiarity ratings was higher for more advanced math words, which is unsurprising because among the advanced words, the least frequent tend to be niche words, applicable only to a narrower domain of math, and therefore less familiar on average. There was also a small but significant interaction of frequency with participant education, indicating that higher education attenuated the impact of frequency (β = −0.05, t(61792) = −5.74, p < .001). This suggests that, while familiarity with math concepts is driven by frequency for participants with little math education, higher-level participants gained enough experience to mitigate the effect of frequency when judging their familiarity with a given concept: for them, even rare terms can be highly familiar.
These observations were replicated in English (R² = 0.28, F(7, 8792) = 498.7, p < .001), except for the interaction of word frequency and participant education, which did not reach significance.
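A rank-order regression of this kind can be sketched on synthetic data as follows (the data-generating coefficients are invented purely for illustration; the actual analysis used the collected ratings):

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n = 500
education = rng.integers(1, 10, n).astype(float)  # participant education level
grade = rng.integers(1, 9, n).astype(float)       # word grade of acquisition
freq = rng.normal(1.0, 0.5, n)                    # log frequency per million
# Hypothetical generating process: familiarity rises with education and
# frequency, falls with word grade (coefficients are invented).
familiarity = 4 + 0.7 * education - 1.0 * grade + 0.3 * freq + rng.normal(0, 1, n)

def z(x):
    """Rank-transform then standardize, for a rank-order regression."""
    r = rankdata(x)
    return (r - r.mean()) / r.std()

E, G, F, y = z(education), z(grade), z(freq), z(familiarity)
# Design matrix with main effects, two-way, and three-way interactions.
X = np.column_stack([np.ones(n), E, G, F, E * G, E * F, G * F, E * G * F])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
names = ["intercept", "edu", "grade", "freq",
         "edu:grade", "edu:freq", "grade:freq", "3-way"]
print({k: round(float(v), 2) for k, v in zip(names, beta)})
```

On data generated this way, the fitted coefficients recover the signs of the simulated effects (positive for education, negative for word grade).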
Item response theory
Mean familiarity ratings provide only a coarse idea of how advanced a math concept is. Likewise, a self-reported education level provides only a coarse estimate of a person's math knowledge. Furthermore, those two parameters are linked: mean familiarity depends on the education level of the specific participants tested. We reasoned that item response theory (IRT; Cai, Choi, Hansen, & Harrell, 2016) could disentangle those parameters and provide a more refined estimation of them. IRT can jointly infer a latent ability for each participant, roughly capturing a person's math knowledge, as well as a difficulty and a discrimination parameter for each word, evaluating respectively the overall likelihood that a word is judged familiar, and the amount of variation in this familiarity rating as a function of participants' math knowledge.
Because the IRT package that we used (mirt in R) requires dichotomous outputs, we dichotomized the familiarity ratings into "unknown" (ratings 0 to 3) versus "known" (above 3). We manually reviewed all predictions made by the IRT algorithm and discarded the 20% of items for which the algorithm had extreme values of the discrimination or difficulty parameters that did not fit the data. Indeed, IRT cannot converge when human ratings are heavily skewed toward one value (either always known, or always unknown). Manual review of the data led us to keep only the first nine deciles of IRT discrimination parameters and the central 95% of IRT difficulty parameters (2.5th to 97.5th percentiles) for further analyses. After this curation phase, we were left with 289 words out of our original 363 for which the IRT algorithm converged. The resulting estimates are reported in the vocabulary database, and plots are provided in File S1.
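The dichotomization rule, and the two-parameter logistic (2PL) item response function that packages such as mirt fit, can be sketched as follows (the fitting itself was done in R; this Python sketch only illustrates the model, and the function names are ours):

```python
import numpy as np

def dichotomize(rating):
    """Collapse the 0-8 familiarity scale: ratings 0-3 -> unknown (0), 4-8 -> known (1)."""
    return int(rating > 3)

def irt_2pl(theta, a, b):
    """Two-parameter logistic item response function: probability that a
    participant with latent ability `theta` knows a word with difficulty `b`
    and discrimination `a`."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

ratings = [0, 2, 3, 4, 7, 8]
print([dichotomize(r) for r in ratings])  # -> [0, 0, 0, 1, 1, 1]
# A highly discriminating word (a = 4) separates abilities sharply around b = 0:
print(irt_2pl(np.array([-1.0, 0.0, 1.0]), a=4.0, b=0.0))
```

A word with a high discrimination parameter thus yields a steep response curve, so knowing whether a participant is familiar with it is very informative about their latent ability.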
For each word tested, we obtained two difficulty estimates, based either on mean familiarity rating or on IRT, and two discrimination estimates: the standard deviation (STD) of familiarity ratings across participants and the IRT discrimination estimate. We then compared the two approaches and correlated the descriptive parameters with those derived by the IRT algorithm. We found that the IRT difficulty parameter and the mean familiarity rating were strongly correlated across words (Spearman's rS(287) = 0.90, p < .001). The IRT difficulty parameter was also correlated with word grade (Spearman's rS(287) = 0.51, p < .001) and, to a lesser extent, with word frequency (Spearman's rS(287) = 0.14, p < .001). Likewise, the IRT-estimated participant latent ability and the reported participant education were highly correlated across participants (Spearman's rS(1228) = 0.61, p < .001). However, the IRT discrimination parameter was not well estimated by the STD of familiarity ratings (p = .57), suggesting that IRT indeed provided a finer-grained approach.
We used the IRT results to manually select a subset of 80 words covering various difficulty levels, whose discriminability was high, and which were therefore highly selective of a given participant's math knowledge (they are flagged in the database as "80 high-discriminability items"). In the future, we propose that collecting familiarity ratings on those words and analyzing them with IRT could serve as a quick assessment of a participant's math knowledge. Similarly, the word difficulty and discrimination parameters that we collected could be used to select, within our vocabulary, a subsample of words appropriate for participants of a given education range.
GloVe embeddings predict human math similarity ratings
We then turned to the similarity rating part of the experiment. Note that, to avoid asking for ratings of unknown words, each participant only rated, on a continuous Likert scale, the similarity of word pairs at or below their grade level. Thus, more educated participants were asked to rate a larger number of word pairs (20 pairs per grade level). Furthermore, for each word, in order to cover a large spectrum of similarities, we selected 3 pairs within each of four categories: (1) very similar pairs, as predicted by their GloVe embeddings; (2) moderately similar pairs, close to the mean similarity; (3) pairs of words with orthogonal embeddings; (4) pairs of words with opposite embeddings, i.e. negative cosine similarity (see Methods). This manipulation was adopted after piloting showed that a random choice of word pairs would have generated an excessive number of "unrelated" judgements, thus making the task uninteresting for participants. Furthermore, it allowed for a first test of GloVe similarity: a non-parametric Kruskal-Wallis test (H(3, n = 3756) = 1520.11, p < .001) followed by pairwise Dunn's tests with Bonferroni correction showed systematic pairwise differences in similarity ratings across those four categories (mean ratings = 3.2, 1.9, 1.2 and 1.0 respectively; all p's < .001). In particular, even the negative cosine pairs were rated as slightly, but significantly, less similar than orthogonal items. These observations were also replicated in English, except for the negative cosine versus orthogonal comparison (mean ratings = 2.9, 1.9, 1.5 and 1.4 respectively; non-parametric Kruskal-Wallis: H(3, n = 2421) = 531.93, p < .001; Dunn's test: all p's < .001 except for the negative cosine versus orthogonal categories).
The next question we asked was whether human similarity ratings could be continuously predicted by the similarity of their GloVe embeddings. Following Pereira et al. (2016), we tested both Euclidean distance and cosine similarity, which, for two embedding vectors $u$ and $v$, are respectively defined as

$$d(u, v) = \lVert u - v \rVert_2 = \sqrt{\textstyle\sum_i (u_i - v_i)^2}, \qquad \cos(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$$

In all following work, as a measure of fit, we computed Spearman's rank-order correlation rS between the similarity of 50-dimensional GloVe embeddings and the mean human similarity rating for each pair of words. Rank-order correlation seemed more appropriate, given that the Likert scale used to collect similarity ratings is not necessarily linear (see below).
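Both similarity measures, and the rank-order correlation used as a measure of fit, can be sketched as follows (the embeddings and "human" ratings here are synthetic stand-ins, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return float(np.linalg.norm(u - v))

def cosine_sim(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
emb = rng.normal(size=(20, 50))              # 50-d embeddings for 20 toy "words"
pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
cos = [cosine_sim(emb[i], emb[j]) for i, j in pairs]
# Synthetic "human" ratings: any monotone function of cosine similarity.
ratings = [5 / (1 + np.exp(-5 * c)) for c in cos]
rho, p = spearmanr(cos, ratings)
print(round(rho, 3))  # a perfectly monotone relation yields rho = 1.0
```

Because Spearman's correlation operates on ranks, it is insensitive to any monotone nonlinearity in the rating scale, which is exactly why it suits a possibly nonlinear Likert scale.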
When computing the correlation for word pairs aggregated over percentile bins of predicted similarity (Fig. 3A), we found a strong correlation for both the cosine similarity and the Euclidean distance (Euclidean: Spearman's rS(3754) = −0.56, p < .001; cosine: Spearman's rS(3754) = 0.66, p < .001). Note that the correlation was positive for the cosine similarity but negative for the Euclidean distance, as the latter is a measure of dissimilarity, so it should decrease as the similarity ratings increase. The correlation was slightly better for cosine, so it is the measure we used for subsequent analyses (ΔAIC = 734.31, p < .001). However, similarly to what was observed by Pereira et al. (2016), the difference between the cosine similarity and the Euclidean distance was small.
An interesting observation, visible in Fig. 3, is that human similarity ratings were not linearly related to GloVe cosine similarity. Indeed, the curves for French and English were both convex, meaning that, for small values of similarities predicted by GloVe (between 0 and 0.4), human ratings varied little with GloVe similarities (though still significantly: Spearman's rS(2061) = 0.37, p < .001), whereas for larger GloVe cosine similarities, they changed in a much steeper fashion.
We compared those GloVe fits with an estimate of the noise ceiling in our data (see Methods), in order to get an idea of how good those fits were relative to the explainable variance. To this end, we recomputed the correlations on the unaggregated, trial-by-trial data, without averaging the ratings for each pair of words across participants. This noise ceiling can be thought of as the fraction of variance in the data from one participant that could be accounted for by all the other participants (leave-one-out cross-validation). We found that (1) the single-trial similarity ratings were highly reliable, exhibiting an explainable variance (R²) or noise ceiling of 42.84% across participants; (2) the cosine similarity of 50-dimensional GloVe embeddings, on the other hand, led to a single-trial correlation of 20.28%, i.e. about half of the explainable variance. Thus, while GloVe provides a first-approximation model of math concepts, it still leaves much to be explained.
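A leave-one-out noise ceiling of this kind can be sketched as follows (synthetic raters stand in for participants, and `noise_ceiling_r2` is an illustrative name; the exact estimator used in the study may differ in detail):

```python
import numpy as np

def noise_ceiling_r2(ratings):
    """Leave-one-out noise ceiling.

    `ratings` is a (participants x items) matrix; for each participant, we
    correlate their ratings with the mean of everyone else's ratings, then
    report the mean squared correlation (explainable variance)."""
    n = ratings.shape[0]
    r2 = []
    for i in range(n):
        others = np.delete(ratings, i, axis=0).mean(axis=0)
        r = np.corrcoef(ratings[i], others)[0, 1]
        r2.append(r ** 2)
    return float(np.mean(r2))

rng = np.random.default_rng(2)
true_sim = rng.uniform(0, 5, 100)                    # latent item similarities
subjects = true_sim + rng.normal(0, 1.0, (30, 100))  # 30 noisy raters
print(round(noise_ceiling_r2(subjects), 2))
```

With a shared latent signal plus independent rating noise, the ceiling falls well below 1 even though every rater sees the same items, which is why a model capturing about half of that ceiling is a meaningful fit.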
Comparing different embeddings for math concepts
We examined the impact of the dimensionality of GloVe embeddings (Fig. 4A). The above analyses were based on 50-dimensional embeddings, which is GloVe's default, but we wondered whether increasing the dimensionality of the embeddings would also increase the correlation of their cosine similarities with human ratings.

Fig. 3. The perceived similarity between two math concepts is well predicted by the similarity of their GloVe 50-dimensional embeddings. The plot shows human similarity rating (on a scale from 0 to 5, y axis) as a function of the predicted similarity based on the cosine of the angle between their 50-dimensional vector embeddings derived from the math corpus (x axis), (A) in French (n = 3756); and (B) in English (n = 2421). Data were averaged over percentile bins. Marginal distributions of the x and y variables are also shown.
The results showed that, for French words, the Spearman correlation coefficient increased with the number of dimensions of GloVe embeddings up to approximately 50 dimensions, with a sharp initial increase for dimensionalities of 1 to ~15, and then reached a plateau for 50- to 500-dimensional embeddings. Therefore, the choice of 50-dimensional embeddings was felicitous. We observed the same trend for English words, although both the noise ceiling and the correlation coefficients were lower.
We also probed whether embeddings derived from the math corpus were better at predicting human similarity ratings than those derived from the non-math and global corpora. We therefore computed the rank correlation between the similarity ratings and the cosine similarity of the 50-dimensional embeddings derived from the math, non-math and global corpora. We indeed found that the math corpus was a better predictor, followed by the global and the non-math corpora. To get a finer-grained view of the differences between the corpora, we repeated the same analysis for each word grade of acquisition. We expected that (1) basic concepts might be modeled equally well by all corpora, but (2) as word grade increases, embeddings derived from the math corpus would become better predictors of human similarity ratings while correlations with embeddings derived from the other corpora would drop. These predictions were partly confirmed (Fig. 4B): the correlation of the cosine of embeddings derived from the non-math corpus with human ratings decreased as the word grade increased (Spearman's rS(6) = −0.83, p = .010), but the Spearman correlation between cosines from the global corpus and word grade was not significant. Furthermore, embeddings derived from the non-math and global corpora were consistently poorer predictors of human ratings than those derived from the math corpus (exact one-sided Wilcoxon signed-rank test: math vs non-math: W = 36, p = .004; math vs global: W = 36, p = .004), even for very basic math concepts studied in primary school. In addition, the correlation between the cosine of math embeddings and human ratings also decreased as the word grade increased (Spearman's rS(6) = −0.87, p = .007), probably because advanced concepts were only rated by a smaller number of highly educated participants, so the estimation of their human similarity rating might be noisy.
To counter this possibility, we also analyzed the quality of GloVe fits as a function of education level. We hypothesized that the math corpus would be a better predictor of the ratings of expert mathematicians than of people with a lower level of math education. Indeed, the math corpus comprised very advanced math content such as category theory or algebraic topology. Conversely, the non-math corpus should predict well the ratings of mathematically uneducated people, but not those of more advanced mathematicians, as the math words in the non-math corpus were either very low level (e.g. fractions or basic shapes) or advanced concepts which have several meanings outside the math domain (e.g. "ring" or "field"). We ran the same fine-grained analysis as for word grade (Fig. 4C) and found that these predictions were verified: all corpora predicted the ratings of non-educated participants equally well, and then diverged for educated participants. The correlation for the math corpus increased with participants' level of education (Spearman's rS(8) = 0.85, p = .002), while that for the non-math and global corpora decreased as participants' level of education increased (global: Spearman's rS(8) = −0.65, p = .043; non-math: Spearman's rS(8) = −0.88, p < .001).
Visualizations of the semantic space of math concepts
Given the relatively good fit of the GloVe vectors to human judgements, we close our work with an analysis of the geometry of those vectors, as a proxy for human representations of the semantic space of math concepts (in the future, this model could be compared to actual measurements using fMRI or MEG; Kriegeskorte, Mur, & Bandettini, 2008; Kriegeskorte et al., 2008).
We first tried to obtain a global visualization of the semantic space of math concepts. To this end, we created a 2D map following the procedure in Pereira et al. (2018). First, we separated the 1000 vectors into 18 clusters using spectral clustering (von Luxburg, 2007). Then we projected all vectors, along with the cluster centers, in two dimensions using t-SNE (van der Maaten & Hinton, 2008). Finally, we used a Voronoi tessellation around the projected centers to visualize the boundaries between the clusters. The number of clusters (18) was chosen out of a range from 2 to 100 using the elbow method. The resulting map is shown on Fig. 5. We were able to label all clusters with a tentative semantic description. Different regions were dedicated to analysis, algebra, geometry and, more tentatively, to arithmetic and to linear algebra. Thus, numbers had a dedicated cluster (Fig. 5B). Likewise, an entire area of the map was dedicated to proper nouns, which were separate from the rest of the concepts (for instance, "Burnside" was closer to "Weierstrass" than they were respectively to other group theory and analysis concepts, see Fig. 5B). However, some proper nouns, especially those that can be used as adjectives, were also sometimes integrated into the cluster related to their field (e.g. "Poisson" is located in a probability cluster, as shown on Fig. 5B).
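The three mapping steps can be sketched as follows. This is a minimal stand-in on placeholder vectors: plain k-means (via SciPy) replaces spectral clustering and a random linear projection replaces t-SNE, so that the sketch stays dependency-light; only the cluster-then-project-then-tessellate structure matches the text.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)

# Placeholder for the 1000 x 50 embedding matrix.
vectors = rng.normal(size=(200, 50))

# Step 1: cluster the vectors. The paper used spectral clustering with 18
# clusters (elbow method); k-means stands in here to stay scipy-only.
n_clusters = 18
centers, labels = kmeans2(vectors, n_clusters, minit="++")

# Step 2: project vectors and centers to 2D. The paper used t-SNE; a
# random linear projection stands in for it.
proj = rng.normal(size=(50, 2))
centers_2d = centers @ proj

# Step 3: Voronoi tessellation around the projected centers draws the
# boundaries between clusters on the 2D map.
vor = Voronoi(centers_2d)
print(len(vor.point_region))  # → 18, one cell per cluster center
```

On the real data, the t-SNE layout and the semantic labels in Fig. 5 would replace the random projection and the anonymous cluster indices.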
Interestingly, all terms relating to geometry occupied several clusters in a large and distinct sector of the map (bottom right-hand corner), very distant from those dedicated to numbers (top right-hand corner) or logic (left-hand corner) (Fig. 5A). This observation parallels Amalric and Dehaene's (2016, 2019) brain-imaging finding that, when judging the truth of math sentences, partially distinct cortical sites were found for sentences bearing on geometry relative to other domains such as algebra, arithmetic, or topology, within an otherwise highly integrated and overlapping cortical network for all math concepts. Similar to GloVe, the human brain may group together words that frequently cooccur, thus grouping together terms of geometry because they involve a partially distinct language of shapes with a strong visuospatial content (Amalric et al., 2017; Dehaene et al., 2022a; Sablé-Meyer et al., 2022).
It is worth noting that the map continued to make sense when we increased the number of clusters. Indeed, we reproduced this methodology with 100 clusters, and found that the above-mentioned clusters were refined in an interesting way. For instance, proper nouns were further classified into sub-clusters depending on the area of math that they relate to (e.g. "Fermat" and "Markov" fell in different clusters, as the former worked on number theory and the latter on probability). Similarly, the number cluster was split into smaller adjacent clusters distinguishing between small numbers, multiples of ten and powers of ten.
We then turned to more local visualizations of the semantic space to get a better idea of the organization of specific domains. The first thing we looked into was numbers. We projected the embeddings of numbers from 1 to 100 (only those whose French name consists of a single word, which excluded for instance 70 or 21) on their first principal component (PC1) (Fig. 6A). We found that this projection ordered numbers by their magnitude. Furthermore, larger numbers were grouped together, suggesting a logarithmic organization (correlation with log(n): Pearson's r(20) = 0.90, p < .001). Following earlier work indicating that high-dimensional embeddings can be projected onto oriented axes for properties such as size or ferocity (Grand et al., 2022), we also looked at the line joining the embeddings of "one" to "billion" and projected the embeddings of the other numbers on this axis. Indeed, this projection revealed a systematic organization of the other numbers based on their relative size (Fig. 6B). Again, numbers were roughly organized by magnitude, in a compressive manner. Overall, these findings indicate that GloVe recovers a compressive, quasi-logarithmic representation of numerical magnitude similar to the one that behavioral and neuroscience research has identified as lying at the core of intuitions of number in both humans and animals (Dehaene, 2003, 2011; Dehaene & Marques, 2002; Kutter et al., 2018; Nieder, 2021; Piazza, Izard, Pinel, Le Bihan, & Dehaene, 2004; Siegler & Opfer, 2003). The fact that even very large numbers such as "hundred", "thousand", "million" or "billion" were placed appropriately on the same alignment fits with developmental research indicating that 6-year-old children can already tell which of two such numbers is larger (Cheung & Ansari, 2023), and suggests that statistical word distributional properties may suffice to develop such an extension of magnitude knowledge to very large numbers.
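Both projections can be sketched with a few lines of NumPy. The embeddings below are synthetic toys with a planted log-magnitude signal (the real GloVe vectors, and the "one" to "billion" endpoints, are not reproduced here), so only the two projection procedures match the text.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Toy embeddings with a planted log-magnitude signal plus noise, standing
# in for the real GloVe vectors of number words.
numbers = np.array([1, 2, 3, 5, 10, 20, 50, 100])
emb = (np.outer(np.log(numbers), rng.normal(size=50))
       + 0.1 * rng.normal(size=(len(numbers), 50)))

# PC1 via SVD of the centered matrix, then project each number on it.
centered = emb - emb.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# The paper reports a quasi-logarithmic ordering along PC1.
r, _ = pearsonr(pc1, np.log(numbers))
print(abs(r) > 0.9)  # → True

# Oriented-axis variant (Grand et al., 2022): project every vector on the
# line joining the smallest and largest number (the paper used "one" and
# "billion"; this toy set stops at 100).
axis = emb[-1] - emb[0]
proj = (emb - emb[0]) @ axis / (axis @ axis)
print(proj[0], round(proj[-1], 6))  # endpoints map to 0 and 1
```

The planted signal makes the log correlation come out near 1 by construction; on the real vectors the reported value is r(20) = 0.90.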
Fig. 5. Global visualizations of the vector space for math semantics. We used the GloVe embeddings of the French math vocabulary to visualize the organization of the putative semantic space by projecting it in two dimensions using spectral clustering and t-SNE. (A) Each region was associated with a tentative label describing the words contained in the cluster. (B) We also show five words contained in different regions: numbers, proper nouns, matrix calculus, euclidean geometry and probabilities.

We then looked into the association between numbers and other interrelated math concepts. GloVe is known to capture the meaning specified by the juxtaposition of two words (e.g. the difference between "king" and "queen" is the same as that between "man" and "woman") (Mikolov, Chen, Corrado, & Dean, 2013). We first focused on the correspondence between numbers and shapes (e.g. "three" and "triangle", "four" and "square"; Fig. 6C). In Fig. 6C-D, we projected the embeddings on their first two principal components and drew a line to connect related concepts. The vectors for numbers and shapes were indeed systematically related (Fig. 6C): in GloVe embeddings, a similar vector is needed to go from "three" to "triangle" as to go from "four" to "square". In addition, PC1 made a clear distinction between numbers and shapes, while PC2 ordered numbers by their magnitude and shapes by their number of sides.
A similar analysis for numbers and fractions (e.g. "two" and "half", "three" and "third", etc.) revealed different results (Fig. 6D). Note that, contrary to English, the French words for the fractions 1/3 ("tiers") and 1/4 ("quart") differ from those for the ordinals 3rd ("troisième") and 4th ("quatrième"). On Fig. 6D, we see that this correspondence was not well captured by GloVe. PC1 still made a clear distinction between numbers and fractions, but along PC2, numbers were grouped together (still ordered by magnitude), while fractions were separated from each other. More work will be required to examine whether a systematic but nonlinear relationship between numbers and their corresponding fractions can be found using more sophisticated methods such as tensor decomposition (McCoy, Linzen, Dunbar, & Smolensky, 2019).
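The vector-offset test for the number-shape correspondence can be sketched as follows. The embeddings here are constructed toys in which the correspondence is linear by design (the real GloVe vectors only approximate this), so the sketch illustrates the test, not the empirical result.

```python
import numpy as np

rng = np.random.default_rng(3)

# Constructed embeddings: a shared "number" or "shape" component plus a
# magnitude component, mimicking the linear structure described above.
number_axis, shape_axis, magnitude = (rng.normal(size=50) for _ in range(3))
emb = {
    "three": number_axis + 3 * magnitude,
    "four": number_axis + 4 * magnitude,
    "triangle": shape_axis + 3 * magnitude,
    "square": shape_axis + 4 * magnitude,
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# If the correspondence is linear, the offset "three" -> "triangle"
# should match the offset "four" -> "square".
d1 = emb["triangle"] - emb["three"]
d2 = emb["square"] - emb["four"]
print(round(cosine(d1, d2), 3))  # → 1.0
```

For the number-fraction case reported above, this cosine between offsets would be the quantity that fails to approach 1.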
Finally, we wanted to probe whether higher-level math concepts were also accurately represented in the math semantic space computed by GloVe. We selected sixteen concepts from two distinct, advanced math fields, namely complex analysis and logic, and asked whether concepts from the same domain were consistently more similar than concepts from different domains. The correlation matrix between the sixteen concepts is shown on Fig. 6E. We found that the intra-domain GloVe similarity was indeed greater than the inter-domain similarity (intra: μ = 0.34, σ = 0.17; inter: μ = 0.13, σ = 0.14; t(118) = 7.34, p < .001).

Fig. 6. Partial visualizations of the vector space for math semantics. We used the GloVe embeddings of the French math vocabulary to visualize the organization of specific concepts in this putative semantic space. (A-B) Projection of the vector embeddings of numbers from 1 to 100 on their first principal component (PC1) (A), or on the line joining the two vectors for "one" and for "billion" (B). In panel A, the x axis is the log of the number magnitude in base 10, which shows a tight though non-linear correspondence with PC1. (C-D) Projection of the embeddings of two groups of related concepts on their first two principal components: numbers and shapes (C), and numbers and fractions (D). (E) Cosine similarity matrix for two domains of advanced math: complex analysis and logic (8 words each).
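The intra- versus inter-domain comparison for two 8-word fields can be sketched as follows; the two domains are simulated as noisy copies of two random base vectors, standing in for the real complex-analysis and logic embeddings.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

# Two simulated 8-word domains: noisy copies of two random base vectors,
# standing in for the complex-analysis and logic embeddings.
base_a, base_b = rng.normal(size=50), rng.normal(size=50)
domain_a = base_a + 0.8 * rng.normal(size=(8, 50))
domain_b = base_b + 0.8 * rng.normal(size=(8, 50))
vecs = np.vstack([domain_a, domain_b])
normed = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
sims = normed @ normed.T

# Intra-domain similarities (upper triangle of each diagonal block)
# versus inter-domain similarities (the off-diagonal block).
iu = np.triu_indices(8, k=1)
intra = np.concatenate([sims[:8, :8][iu], sims[8:, 8:][iu]])
inter = sims[:8, 8:].ravel()

# 56 intra + 64 inter values give the df = 118 of the reported t-test.
t, p = ttest_ind(intra, inter)
print(intra.mean() > inter.mean(), t > 0)  # → True True
```

The 56 + 64 = 120 similarity values account for the t(118) reported in the text.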
Discussion
We created a comprehensive 1000-word vocabulary of math concepts in French, covering basic and advanced levels, and provided vector embeddings for the words in this vocabulary. We then validated the embeddings by showing that they explain up to 47 % of the explainable variance in human similarity ratings. In addition, we showed that education affects both familiarity and similarity ratings: the higher the math education level, the more accurate the prediction of similarity ratings by GloVe embeddings extracted from Wikipedia math pages. We also showed that the spatial layout of the vectors makes sense and captures important regularities in number concepts, geometric words, and other higher-level concepts. Importantly, we provided a translation of the vocabulary into English and showed that the main results were replicated in both languages.
Possible use of the vocabulary and vectors
The purpose of this database is to enable a more systematic exploration of math cognition, beyond the elementary concepts of numbers, geometry and algebra on which the vast majority of current cognitive neuroscience research is concentrated. Our approach may help provide a standardized approach to this field, cover the mathematical domain in an unbiased manner, and increase the comparability between different studies. Indeed, it could be desirable that studies tackling the cognition of advanced math all share a common subset of stimuli. This would be useful not only for the purpose of reducing the bias inherent in the choice of stimuli, but also to join efforts to bring multiple converging data to bear on the same problem (e.g. behavioral, developmental, brain-imaging, intracranial recordings…), in a similar fashion to the THINGS initiative (Hebart et al., 2019). We believe that the dataset we propose would be useful in this regard, as we endeavored to make it exhaustive and unbiased.
Our dataset may also be used to devise benchmarks, test sets, and stimuli: for instance, we provide a reduced set of 80 high-discriminability items that may suffice to quickly determine a participant's level of math knowledge. Following a method similar to Pereira et al. (2018), one could use the embeddings we provide to ensure a full coverage of the entire math semantic space.
Limitations of this work
The first limitation of this work concerns the vocabulary itself. A subset of mathematical words was reviewed and manually selected. The threshold of 1000 words was arbitrary, and rare words could have been missed, although we tried to improve the vocabulary at different stages of the process. The grade level we assigned is necessarily only approximate and, since it was based on the French national curriculum, could be different in other countries. Furthermore, the very notion of a grade level may not be appropriate for ambiguous words (e.g. "order") or very frequent words (e.g. "line"), as the depth of their understanding and, indeed, their very meaning evolves with education. Furthermore, our focus on single words may not do full justice to the combinatorial nature of mathematical concepts. To take just one example, the concept of "vector space", being expressed by two words, was not represented in the present embeddings. In spite of these limits, the fact that we observed meaningful variations of both familiarity and similarity with grade and education indicates that the present work provides a useful first approximation to an exhaustive dictionary of the most frequent math concepts.
The second limitation concerns the embeddings. As explained above, GloVe vectors only accounted for 43 % of the noise ceiling of participants' similarity ratings. Further work will be needed to increase the percentage of explained variance. One possible solution could be to use deep neural networks instead of embeddings derived from single-word cooccurrence statistics. In recent years, Transformer models (Vaswani et al., 2017) have drawn a lot of attention and have been shown to be able to predict brain activations in fMRI studies (Caucheteux & King, 2022; Pasquiou et al., 2022; Schrimpf et al., 2021), although their capacity to capture even elementary mathematical knowledge remains highly debated, and specialized math models may be required (Anand et al., 2024; Peng, Yuan, Gao, & Tang, 2021). Further investigation is needed in this direction. Another option would be to obtain distributed semantic representations from a larger corpus. Indeed, math articles are only a small fraction of Wikipedia, and adding more content to the corpus (Bourbaki textbooks, for instance) may be beneficial to the estimation of the embeddings, at least for mathematically advanced participants. Language-of-thought approaches to mathematics (Dehaene, Al Roumi, Lakretz, Planton, & Sablé-Meyer, 2022b; Goodman, Tenenbaum, & Gerstenberg, 2014; Piantadosi, Tenenbaum, & Goodman, 2012; Sablé-Meyer et al., 2022), which focus on the hierarchical compositional nature of mathematical concepts, may also provide a more appropriate cognitive foundation for the construction of higher-level concepts in the course of education.
Regarding the visualizations of the semantic space of GloVe vectors, the 2-dimensional views that we obtained with t-SNE or PCA should only be taken as indicative, as the projections of a high-dimensional space may vary with the tool used, the subset of words under consideration, or even the random seed given to the projection algorithms. It should always be remembered that the full data lives in high dimensions. Finally, it must also be noted that our behavioral study of conceptual familiarity and similarity focused only on nouns. The psychological validity of the GloVe representations obtained for verbs, adjectives or proper names remains to be evaluated.
Future directions
We provided a comparison between human similarity ratings and GloVe embeddings. A natural continuation would be to compare the present embeddings and similarity ratings to vectors obtained from the human brain using brain imaging or intracranial recording techniques. We would then be able to leverage representational similarity analysis tools (Kriegeskorte, Mur, & Bandettini, 2008) to gain insight into the organization of math concepts in the brain, including their topographic organization on the cortical surface of individual subjects, going beyond existing group fMRI studies of mathematicians (Amalric & Dehaene, 2016, 2019). Ultimately, this could lead to the creation of a math-specific brain viewer, similar to that proposed by Huth et al. (2016).
Creation of the corpora
The corpora were extracted from HuggingFace's Wikipedia 20220301.fr dataset. In order to divide the dataset into a math and a non-math corpus, we parsed all the pages and located them in the wikipedia_fr_all_maxi_2022-04.zimdump. A bot then decided whether each page was a math page or not. To do so, it reached the bottom of the page and searched for an occurrence of one of the following strings: "Portail des mathématiques"; "Portail de la géométrie"; "Portail de l'analyse"; "Portail de l'algèbre"; "Portail des probabilités et de la statistique"; "Arithmétique et théorie des nombres"; "Portail de la logique"; "Portail de l'informatique théorique". These labels indicate that a page belongs to a portal of mathematics or theoretical computer science.
The math (resp. non-math) pages were aggregated to form the math (resp. non-math) corpus, and the two corpora were also merged to form the global corpus.
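The classification step can be sketched as follows. The portal labels are taken from the text above, while the `tail_chars` cutoff and the sample page texts are invented for illustration.

```python
# Sketch of the page-classification step: a page counts as a math page if
# the bottom of its text mentions one of the math-related portal labels.
# The tail_chars cutoff and the sample pages are invented for illustration.
PORTALS = [
    "Portail des mathématiques",
    "Portail de la géométrie",
    "Portail de l'analyse",
    "Portail de l'algèbre",
    "Portail des probabilités et de la statistique",
    "Arithmétique et théorie des nombres",
    "Portail de la logique",
    "Portail de l'informatique théorique",
]

def is_math_page(page_text, tail_chars=2000):
    """Search the bottom of the page for a math portal label."""
    return any(portal in page_text[-tail_chars:] for portal in PORTALS)

def split_corpus(pages):
    """Aggregate pages into the math and non-math corpora."""
    math = [p for p in pages if is_math_page(p)]
    non_math = [p for p in pages if not is_math_page(p)]
    return math, non_math

pages = ["Le nombre pi... Portail des mathématiques",
         "La Loire... Portail de la géographie"]
math, non_math = split_corpus(pages)
print(len(math), len(non_math))  # → 1 1
```

Redirections and disambiguation articles, which the text says were excluded, would be filtered out before this step.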
In total, the math corpus comprised 16,455 pages out of the 2,402,095 included in the dataset, and the non-math corpus comprised 2,236,840 pages (the remaining pages were redirections and disambiguation articles).
Extraction of the vocabulary and generation of the embeddings
In order to ensure that the vocabulary did not contain several occurrences of the same word (e.g. singular and plural for a noun, or infinitive and conjugated forms for a verb), we performed a lemmatization step on each corpus (math, non-math and global). Two lemmatization passes were carried out using the Python spaCy fr_core_news_md model. The lemmatized math corpus was then provided as an input to the GloVe pipeline (Pennington et al., 2014). The pipeline has three steps: it first creates a 50,000-word vocabulary sorted by decreasing frequency, then builds a cooccurrence matrix of the vocabulary words in the corpus, and finally runs GloVe on this matrix. Roughly, GloVe uses the cooccurrence matrix of a vocabulary in a corpus to derive semantic vectors for each word by taking into account ratios of cooccurrences. The window size for cooccurrence was set to 15 words. As reported in the main text, the number of dimensions of the embedding vectors was varied from 1 to 500 (all values between 1 and 50, then from 50 to 500 in increments of 50).
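The cooccurrence-matrix step of the pipeline can be sketched as follows. This is a simplified stand-in: it counts unweighted cooccurrences in a symmetric window (the GloVe tool itself additionally down-weights distant cooccurrences), and the tiny token list is invented.

```python
import numpy as np
from collections import Counter

def build_cooccurrence(tokens, vocab_size=50000, window=15):
    """Count cooccurrences in a symmetric window, GloVe-style:
    vocabulary sorted by decreasing frequency, then windowed pair counts."""
    counts = Counter(tokens)
    vocab = [w for w, _ in counts.most_common(vocab_size)]
    index = {w: i for i, w in enumerate(vocab)}
    cooc = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooc[index[w], index[tokens[j]]] += 1
    return vocab, cooc

tokens = "le nombre premier divise le produit des nombre premier".split()
vocab, cooc = build_cooccurrence(tokens, window=2)
print(bool(np.allclose(cooc, cooc.T)))  # → True
```

The symmetric window makes the matrix symmetric by construction; GloVe then factorizes this matrix (via its log-ratio objective) into the word vectors.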
The vocabulary output by GloVe on the math corpus actually contained only 39,345 words, because the GloVe pipeline discards words with fewer than five occurrences in the corpus. We then applied the same procedure to the non-math and global corpora, while constraining the vocabulary to match the output from the math corpus.
Words of the vocabulary were reviewed manually, in decreasing order of frequency, by a person with extensive math training. Only the first 1000 words which belonged to the math domain (whether elementary or advanced) were kept, thereby constituting the final thousand-word math vocabulary.
The vocabulary was then complemented with word frequency in the math and non-math corpora. We also estimated each word's school grade of acquisition ("word grade") by examining when the corresponding concept was introduced in the French national curricula (the following levels were distinguished: primary school, 6-7th grade, 8-9th grade, 10th grade, 11-12th grade, bachelor, licence, master), its grammatical category (number, symbol, noun, name, adverb, adjective, verb, or any combination of these when applicable) and meta-information such as whether the word is meta-mathematical (e.g. "theory" or "example") and whether it has several different math meanings (polysemy, e.g. "tangent").
Selection of the pairs
For the behavioral experiment, we only kept those numbers, nouns and symbols which were not flagged as too polysemic or meta-math, leaving us with 429 words. As these numbers would have yielded 91,378 different pairs, we did not attempt to measure their full similarity matrix. Furthermore, a random sampling of this large matrix would have yielded a vast majority of unrelated words, thus making the experiment quite monotonous for participants. Instead, to select target pairs for a given participant, we computed the distribution of cosine similarities between pairs of words with the same grade of acquisition, using 50-dimensional GloVe vectors obtained from the math corpus. For each of the 429 target words, we identified the three furthest words (cosine similarity close to −1), closest words (cosine similarity close to 1), most orthogonal words (cosine similarity close to 0) and words whose similarity was closest to the average similarity between words of the same grade (n = 14,790, μ = 0.24, σ = 0.20). With this procedure, we obtained a total of 3756 pairs that we used for the similarity rating experiment (see below, and Table S2).
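The pair-selection heuristic can be sketched as follows; the embeddings are random placeholders and the same-grade constraint is omitted, so only the closest/furthest/most-orthogonal/near-mean logic is illustrated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder embeddings for the 429 candidate words.
emb = rng.normal(size=(429, 50))
normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sims = normed @ normed.T
np.fill_diagonal(sims, np.nan)  # exclude self-similarity

mean_sim = np.nanmean(sims)

def select_pairs(word_idx, k=3):
    """For one target word, pick the k closest, k furthest, k most
    orthogonal words, and the k words nearest the average similarity."""
    s = sims[word_idx]
    order = np.argsort(s)          # NaN (the self-entry) sorts last
    closest = order[-k - 1:-1]
    furthest = order[:k]
    orthogonal = np.argsort(np.abs(s))[:k]
    near_mean = np.argsort(np.abs(s - mean_sim))[:k]
    return closest, furthest, orthogonal, near_mean

groups = select_pairs(0)
print([len(g) for g in groups])  # → [3, 3, 3, 3]
```

Repeating this for all 429 targets and deduplicating the resulting pairs would yield a pair list of the kind used in the experiment (3756 pairs after the grade constraint in the real study).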
Design of the online experiment
The experiment was coded in PHP (server side) and JavaScript (client side) using the jsPsych library.It was hosted on the secure server of NeuroSpin and no personal data was collected.
In the design of this experiment, pairs of words from the 6-7th and 8-9th grades, from the 10th and 11-12th grades, and from the licence and master levels were merged. This yielded five groups of grade of acquisition for the words of our vocabulary. The experiment consisted of three different parts. The first was a short demographic survey, in which we asked participants, among other things, their last grade of education in mathematics and a self-assessment of their current math skills on a scale from 1 to 10. In the second part, participants were asked to judge how familiar they were with 50 words chosen at random from our pool of 429 words. The familiarity ratings were made on a discrete scale from 0 to 8. To make the scale as objective as possible for participants, 0 was labeled as "totally unknown", 2 as "familiar word but unfamiliar concept", 4 as "vague idea", 6 as "familiar concept", and 8 as "fully mastered concept".
Finally, in the last part, participants were shown 20 pairs of words at each grade of education at or below theirs (e.g. participants who reported a high-school level of math education were shown 20 word pairs from primary school, 20 pairs from 6-9th grade and 20 pairs from 10-12th grade). The similarity judgements were made on a continuous scale from 0 to 5, and labels indicated that 0 meant "totally unrelated" and 5 "closely related". The similarity rating block was preceded by a training block consisting of 8 pairs of words from everyday life ("father - mother", "boat - car", "table - god", "toothbrush - television", "Gandhi - Hitler", "stool - crown", "clock - Julius Caesar", "carrot - asparagus") covering the whole scale of possible similarities, presented in a randomized order, and 1 pair from the sailing jargon ("jib - capstan"), always presented last. This training period ensured that the scale was calibrated in the same way for all participants. The order of presentation of the words within each pair was randomized across participants.
Analysis pipeline
All analyses were run in Python 3.11 using the numpy, statsmodels, scipy and scikit-learn libraries.The plots were obtained using the matplotlib and seaborn modules.
Rank linear regressions
Because many of our variables were neither quantitative nor linearly ordered, we first transformed them into ranks before entering them into regressions or general linear models (GLMs). We used the scipy.stats.rankdata function in Python, with the default option which assigns the mean rank to ties. We then used the statsmodels.regression.linear_model.OLS class to fit the GLMs, and manually added an intercept (as it is not added by default) using the statsmodels.tools.add_constant function.
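The rank-then-regress recipe can be sketched as follows. To keep the sketch dependency-light, plain `numpy.linalg.lstsq` stands in for the `statsmodels` OLS class mentioned above (the explicit intercept column plays the role of `add_constant`), and the data are simulated.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(6)

# Simulated ordinal data: ratings loosely driven by cosine similarity.
cosines = rng.uniform(-1, 1, size=200)
ratings = np.clip(2.5 + 2 * cosines + rng.normal(scale=0.5, size=200), 0, 5)

# Rank-transform both variables (mean rank for ties, rankdata's default),
# then fit ordinary least squares with an explicit intercept column.
y = rankdata(ratings)
x = rankdata(cosines)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R^2 of the rank regression.
resid = y - X @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta[1] > 0, 0 < r2 < 1)  # → True True
```

With a single rank-transformed predictor, the R^2 of this fit equals the squared Spearman correlation, which ties this recipe back to the rank correlations reported in the Results.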
Noise ceiling
In order to get an approximation of the amount of noise in the behavioral data, we computed a noise ceiling. The ceiling captures the average amount of variance in one participant's responses that can be explained by the average responses of the other participants.
To compute the noise ceiling, we performed a leave-one-out cross-validation. For each individual participant, we computed the correlation between their answers and the average answers of the other participants. We then computed the average R².
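The leave-one-out computation can be sketched as follows, on a simulated ratings matrix with no missing entries (the real data additionally required handling partial overlap between the stimuli seen by different participants).

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated ratings: 20 participants x 30 items, a shared signal plus
# participant-specific noise (a stand-in for the real similarity data).
signal = rng.uniform(0, 5, size=30)
ratings = signal + rng.normal(scale=0.5, size=(20, 30))

def noise_ceiling(ratings):
    """Leave-one-out: correlate each participant with the mean of the
    others, then average the squared correlations."""
    r2s = []
    for i in range(ratings.shape[0]):
        others = np.delete(ratings, i, axis=0).mean(axis=0)
        r = np.corrcoef(ratings[i], others)[0, 1]
        r2s.append(r ** 2)
    return float(np.mean(r2s))

ceiling = noise_ceiling(ratings)
print(0 < ceiling <= 1)  # → True
```

The resulting average R² is the dashed ceiling line of Fig. 4A against which the GloVe fits are compared.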
In the case of noise ceilings in subgroups (for instance, groups of participants with the same level of math education), there was sometimes not enough overlap between the stimuli seen by one particular participant and all the others. Therefore, when computing the noise ceiling in any given subgroup, we correlated the answers of each participant of
Fig. 1. Experimental design. (A) Creation of corpora and embedding. (B) Behavioral experiment with a subset of words: familiarity and similarity ratings. (C) Relation to research questions.

Fig. 2. Familiarity ratings as a function of participant education and word grade. Note that, for greater readability, the dependent variable (rating of word familiarity) is on the x axis. Colors represent different categories of words, sorted according to the grade at which they are introduced in the French curriculum.

Fig. 4. Factors affecting the correlation between human similarity ratings and the similarities predicted by GloVe embeddings. (A) Percentage of variance explained by a rank correlation of human similarity ratings against the cosine angle of GloVe math embeddings, as a function of the dimensionality of those embeddings. The dashed line shows the noise ceiling, estimated as the fraction of variance in the data from one participant that could be accounted for by all the other participants (leave-one-out cross-validation, see Methods). 50-dimensional embeddings were used in the rest of this work. (B) Variation in the percentage of explained variance depending on word grade of acquisition and the corpus used to compute the embeddings; (C) Same, depending on participant education level.

Table 1
Summary of the main characteristics of the French vocabulary.
Note: Frequency is expressed in Log10 units per million.
Mapping the Density of States Distribution of Organic Semiconductors by Employing Energy Resolved–Electrochemical Impedance Spectroscopy
Although the density of states (DOS) distribution of charge transporting states in an organic semiconductor is vital for device operation, its experimental assessment is not at all straightforward. In this work, the technique of energy resolved-electrochemical impedance spectroscopy (ER-EIS) is employed to determine the DOS distributions of valence (highest occupied molecular orbital (HOMO)) as well as electron (lowest unoccupied molecular orbital (LUMO)) states in several organic semiconductors in the form of neat and blended films. In all cases, the core of the inferred DOS distributions is Gaussian, sometimes carrying a low-energy tail. A comparison of the HOMO and LUMO DOS of P3HT inferred from ER-EIS and photoemission (PE) or inverse PE (IPE) spectroscopy indicates that the PE/IPE spectra are by a factor of 2-3 broader than the ER-EIS spectra, implying that they overestimate the width of the distributions. A comparison of neat films of MeLPPP and SF-PDI2 or PC(61)BM with the corresponding blends reveals an increased width of the DOS in the blends. The results demonstrate that this technique not only allows mapping the DOS distributions over five orders of magnitude and over a wide energy window of 7 eV, but can also delineate the changes that occur upon blending.
DOI: 10.1002/adfm.202007738
Introduction

Organic semiconductors (OSs) are the key active element in today's photocopiers and are gaining increasing importance in optoelectronic devices such as organic light-emitting diodes, [1-3] organic solar cells (OSCs) [4-6] and organic field-effect transistors. [7] For practical reasons they are amorphous or polycrystalline films. This implies structural disorder. As a consequence, charge transport in OSs is slowed down as compared to that in counterpart perfect molecular crystals. It is well established that charge carriers move via incoherent hopping within a density of states (DOS) distribution. [8-11] The broader the DOS is, the lower is the charge carrier mobility and the higher is the associated activation energy, as well as the time after which a dynamic process equilibrates, such as the motion of a sheet of charge carriers injected from an electrode. Mapping the DOS to determine the energies of the electron and hole transporting states is therefore crucial for material characterization. [12,13]

Disorder in OSs is manifested in the broadening of the absorption and photoluminescence spectra of OS films. It reflects (i) the local variation of the van der Waals coupling of a singlet or triplet state to the polarizable environment, [8] (ii) structural variations of a chromophore, for example, the variation of the effective conjugation length of a conjugated polymer, [14] and (iii) dynamic effects such as thermally activated rotational or vibrational motion within the chromophore. [15,16] Such contributions toward energetic disorder will also affect the DOS of valence and conduction states that control hole and electron motion. Moreover, DOS distributions are likely to change when going from neat to blended films. A further source of broadening of the tail of an absorption of a semiconductor arises when absorption occurs not from the 0-0 vibrational state but, thermally activated, from phonon-coupled states. This gives rise to exponential Urbach tails of the absorption spectra. [17,18] However, since the energies of Urbach tails are typically around a few meV only, this effect is relevant only in fairly ordered semiconductors such as molecular or inorganic crystals, or hybrid materials such as perovskites, [19] and would otherwise be buried under static and dynamic level broadening.
Unfortunately, the DOS of valence and conduction states in OSs is not amenable to direct optical absorption as is the case in inorganic semiconductors. The reason is that, owing to the weak electronic coupling in OSs, photoexcitation creates excitons rather than charge carriers, so that the direct transition from the valence (HOMO) DOS to the conduction (LUMO) DOS is not observed. A simple way to estimate the degree of disorder in OSs is to measure the temperature dependence of hole and electron transport. However, this yields only upper values for the widths of the relevant DOSs, because both dynamic disorder and structural relaxation may contribute to broadening, beyond the already existing static distribution in the DOS. [10,11] Moreover, it does not provide information on the detailed structure of the DOS. An alternative method toward DOS mapping is photoemission (PE) [20] and inverse PE (IPE). [21,22] It is experimentally demanding and data analysis may be complicated because electrons emitted from different layers of the OS have slightly different energies, since the polarization energies of molecules next to vacuum are diminished. [23,24] In this work we apply the technique of energy resolved-electrochemical impedance spectroscopy (ER-EIS) to determine the DOS of selected OSs. ER-EIS is a novel electrochemical impedance technique bordering on voltammetry. [25,26] Usually, the organic semiconductor probed by voltammetry is dissolved in solution. In an ER-EIS measurement, in contrast, the OS is deposited as a film, where solid-state effects prevail (Figure 1). Thus, the molecules are probed in the local environment of the film, so that the ER-EIS spectrum is a direct reflection of the DOS of the hole (HOMO) and electron (LUMO) transporting states. In the current work we will use MeLPPP and P3HT as donor-type conjugated polymers and PCBM and SF-PDI2 as representative electron acceptors, in the form of neat films as well as in blends.
We will demonstrate that the ER-EIS technique not only allows mapping the DOS functions over as many as five decades, but also delineating changes which occur upon blending. We find that the examined DOS distributions are usually of Gaussian character, but in the case of PCBM there is an exponential tail.
The ER-EIS Experiment
EREIS is a spectroscopic method to map the electronic struc ture of an organic solid in the contact with an electrolyte via a redoxreaction. [27][28][29][30] It evolved out of the electrochemical impedance spectroscopy and advanced general voltammetric techniques such as square wave voltammetry [31][32][33] or other reported voltammetric modulation approaches. [31,34] We briefly outline its operational principle and refer to refs. [26,28] for fur ther details. For the measurement, a 3electrode electrochem ical cell is used. Thus, a thin film, typically about 100 nm, of the OS is deposited by spincoating onto a conducting substrate, in our case indiumtinoxide (ITO) covered glass or doped Si. The OS film is covered by a liquid electrolyte that is contained by an inert, insulating frame. An Ag/AgCl reference and a Pt auxiliary wire electrode are inserted into the electrolyte, while the ITO or Si serves as working electrode. A DC voltage ramp between reference and working electrode is swept, modulated by an AC voltage of suitably chosen frequency, and the resulting cur rent is recorded. The measured impedance Z meas is a result of the Helmholtz layer that forms at the electrolyte/OS interface when a voltage is applied. Reversible charge transfer from ions of the electrolyte to countercharges in the OS in the vicinity of the interface will occur once the applied voltage compensates the difference between the energies of the relevant states, and this gives rise to the real component of Z meas . Under steady state conditions this interfacial recombination current has to be balanced by a current flowing through the OS that carries an exit contact, in our case the ITO or doped Si electrode. A crucial condition is that the rate limiting step must be charge exchange at the electrolyte/OS interface rather than the charge transport toward to the exit contact. 
This implies that the voltage drop across the OS bulk must be negligible relative to the voltage required to drive the injection at the interface. In turn, this requires a percolating network of transport states and implies charge transport via drift and diffusion under space-charge-limited (SCL) conditions.
The setup for an ER-EIS measurement may appear similar to that of an electrochemical field-effect transistor. [35] In both cases, the organic semiconductor film is covered by an electrolyte with an Ag/AgCl reference electrode and a Pt auxiliary electrode inserted into it. However, there are several differences between the two methods. In the electrochemically gated field-effect transistor, the recorded signal is typically the direct current flow from source to drain electrode while a potential is applied at the gate electrode. In these measurements, counter-ions that diffuse into the semiconductor from the electrolyte play a role. The electrochemical impedance measurement, in contrast, is conducted using an alternating current signal so as to prevent significant counter-ion effects. Further measures to avoid intercalation of ions include measuring the whole spectrum in two steps, always in a new electrochemical cell, with one measurement being used to obtain the branch for hole transporting states and one for the electron transporting states, as detailed in refs. [26][27][28].
Experimentally, the charge transfer resistance at the electrolyte/OS interface is measured by superimposing a periodic perturbation on the applied potential. Upon scanning the applied voltage one can assess the energetic position, and thus the DOS function g(E_F), via Equations (2) and (3), where R_ct is the differential charge transfer resistance and C_sc is the differential space-charge capacity in the bulk of the polymer. [28] L is the sample thickness, S is the area of the working electrode, and [A] is the concentration of the OS redox species dissolved in the solvent. V_f is the volume fraction of the respective charge carriers near the surface, with an effective thickness of the acceptor/donor layer next to the interface of typically 1-2 nm. [36] k_et(E) is the electron transfer rate. [37] The use of the charge-transfer resistance data R_ct(E_F) (Equation (2)) and the space-charge capacitance data C_sc(E_F) (Equation (3)) constitutes two complementary approaches, which probe different parts of the conjugated polymer film, both providing information on the DOS function g(E_F). R_ct reflects the redox process that takes place near the surface of the OS and the electrolyte, whereas C_sc traces the bulk of the conjugated polymer film. For the data presented below, we evaluated the charge-transfer resistance data. We confirmed that the same results are obtained for the DOS function when evaluating the differential space-charge capacitance.
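Since Equation (2) relates the DOS to the inverse of the measured charge-transfer resistance, the data reduction can be sketched numerically. The proportionality constant k below is a placeholder for the full prefactor of Equation (2), which is not reproduced here, so only relative DOS values are meaningful in this sketch:

```cpp
#include <vector>

// Sketch: convert a measured charge-transfer resistance spectrum
// R_ct(E_F) into a DOS function g(E_F), assuming g ∝ 1/R_ct as in
// Equation (2). The prefactor k (containing e, S, k_et, [A], ...) is
// a placeholder; only relative DOS values are meaningful here.
std::vector<double> dosFromRct(const std::vector<double>& rct, double k = 1.0) {
    std::vector<double> g;
    g.reserve(rct.size());
    for (double r : rct)
        g.push_back(k / r);   // g(E_F) = k / R_ct(E_F)
    return g;
}
```

Applied pointwise along the scanned voltage axis, this yields the g(E_F) spectra evaluated below.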
The P3HT DOS Probed by Photoemission and by ER-EIS
In order to determine the electrical gap of a P3HT film, Deibel et al. [22] measured PE as well as IPE spectra. These provide reference information on the DOS distribution. It was straightforward to apply the ER-EIS technique to study P3HT and compare the results, thereby extending our earlier work. [25,27,38] In Figure 2a we present the spectrum we obtain from our impedance measurements for the DOS function g(E_F), while Deibel et al.'s results are shown in Figure 2b. For the sake of easy comparison the original PE and IPE spectra have been replotted on a semilogarithmic scale. The edges of the spectra have been fitted to Gaussian lineshapes (see Table 1). It is gratifying that the tails of the Gaussians approximately coincide (see also the dashed lines in Figure 2). However, the PE and IPE spectra are significantly broader, and the separation between their maxima has increased from 3.18 to 3.9 eV. We also note the smaller dynamic range of the PE/IPE measurement.
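The dynamic range matters because, for a Gaussian DOS of width σ, resolving n decades of amplitude probes the tail out to |E − E0| = σ·sqrt(2·n·ln 10) from the center. A short numerical check of this relation (the numbers used are illustrative, not fitted values from the paper):

```cpp
#include <cmath>

// Normalized Gaussian DOS profile g(E)/g(E0) with center E0 and
// standard deviation sigma (all energies in eV).
double gaussianDos(double E, double E0, double sigma) {
    double x = (E - E0) / sigma;
    return std::exp(-0.5 * x * x);
}

// Energy offset from the center at which the Gaussian has fallen by
// `decades` orders of magnitude: |E - E0| = sigma * sqrt(2 * decades * ln 10).
double offsetForDecades(double sigma, double decades) {
    return sigma * std::sqrt(2.0 * decades * std::log(10.0));
}
```

For σ = 50 meV, five decades correspond to a tail extending about 0.24 eV from the center, which illustrates why a five-decade dynamic range constrains the Gaussian-versus-exponential question far better than a two-decade one.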
The Donor Polymer MeLPPP with a Fullerene and a Non-Fullerene Acceptor
In order to characterize the DOS distributions of a conjugated donor polymer and, importantly, to delineate changes that can occur upon blending, we use the ladder-type MeLPPP. It is one of the least disordered polymers, as evidenced by its narrow absorption and emission spectra. [16] This facilitates uncovering morphology-related changes of the DOS distributions. As representative film-forming acceptors we use PC(61)BM and SF-PDI2. Figure 3 shows the compilation of the ER-EIS spectra for the DOS function g(E_F) for the hole and electron transporting states of radical cation and anion states. We attribute the high (low) energy edge of the DOS function for the hole (electron) states to the HOMO (LUMO). The data derived from the spectra are summarized in Table 2. The spectra for a neat MeLPPP film are presented in Figure 3a. Both the part of the DOS function attributed to the HOMO and the part attributed to the LUMO are Gaussians extending over five decades in amplitude, recognizing, though, that it is in principle difficult to distinguish whether a low energy tail has a Gaussian or an exponential shape. The standard deviations (σ) of the HOMO- and LUMO-attributed g(E_F) functions are 50 and 60 meV (see Table 2).

Figure 2. The DOS function g(E_F) for HOMO and LUMO states of a P3HT film inferred from either a) ER-EIS or b) photoemission and inverse photoemission (data from Deibel et al. [22]). The colored dashed lines indicate Gaussian fits, with the solid colored line indicating the data points considered in the fit. The arrows indicate the center position of the Gaussians. The black dashed lines serve to ease comparison between DOS tails obtained from both methods.

Table 1. The center energy E_0 and the standard deviation σ obtained from the Gaussian fits to the edges of the HOMO- and LUMO-attributed parts of the DOS function for a P3HT film. Also given is the energy gap E_g = E_0(HOMO) − E_0(LUMO).
The HOMO-attributed part of the DOS function g(E_F) of a neat SF-PDI2 film is of Gaussian shape over two orders of magnitude but carries a weak tail (Figure 3b), while the LUMO-attributed part of the DOS function g(E_F) is a perfect Gaussian with a standard deviation of 55 meV. The gap between the centers of the HOMO- and LUMO-attributed parts is 2.54 eV, that is, 0.17 eV higher than a literature value inferred from cyclic voltammetry. [39] Remarkably, the standard deviation of the optical absorption is about 150 meV, that is, significantly larger than the DOS values for the charge transporting states.
In Figure 3c we show the DOS function for a neat PC(61)BM film. Over two orders of magnitude the HOMO-attributed part is of Gaussian shape with σ = 95 meV with an exponential tail, while the LUMO-attributed part is a pure Gaussian with σ = 65 meV. The electrical gap, defined by the separation between the maxima of the two parts, is 2.6 eV.
In blends of MeLPPP with either PC(61)BM or SF-PDI2, the high energy edge of the lower energy part of g(E_F) is associated with the HOMO of the donor (MeLPPP) (Figure 3d,e). The Gaussian character of the HOMO of MeLPPP is preserved but there is additional state broadening (σ = 70 and 63 meV instead of 50 meV), and tail states appear at the high energy side of the HOMO, roughly 0.3 eV above the center of the bulk DOS. Their relative concentrations are roughly 0.001 (MeLPPP:SF-PDI2) and 0.01 (MeLPPP:PCBM). In the blend, the lowest energy feature in the higher energy part of g(E_F) is the LUMO of the acceptor, while the higher energy feature around −2 eV can be associated with the MeLPPP LUMO. In both cases there is a significant broadening of the LUMO of MeLPPP. Moreover, in the MeLPPP:PCBM blend, additional features can be discerned, centered at −3.60 and −3.10 eV, with σ = 75 and 110 meV, respectively, that are barely visible in neat PC(61)BM (see also Figure 4b below for ease of comparison, and Supporting Information).
Comparison Between DOS Distribution Inferred from ER-EIS and Photoemission
The DOS distributions of HOMO and LUMO states of a neat P3HT film inferred from ER-EIS are of Gaussian shape with σ parameters of 63 meV (HOMO) and 168 meV (LUMO). The electrical gap is 3.18 eV and, since the energy of the singlet exciton is about 2.1 eV, the exciton binding energy is about 1.1 eV, that is, close to that of MeLPPP and classic molecular crystals. [24] The fact that the width of the LUMO distribution is unusually broad can be ascribed to the broader distribution of effective conjugation lengths in a P3HT film. [40] However, the PE and IPE spectra are significantly broader than the ER-EIS spectra, and the gap between the peak positions is increased from 3.18 to 3.9 eV. The observation that upon going from ER-EIS to PE/IPE the HOMO distribution increases from 60 to 260 meV and the LUMO distribution increases from 170 to 360 meV demonstrates that ER-EIS and PE/IPE monitor different phenomena. An ER-EIS experiment probes the electron-hole transfer directly from an electrolyte to the OS. On the other hand, PE probes the ejection of a highly excited electron into the gas phase and IPE probes the dissipation of electrons upon entering the solid. [41] Therefore the respective response functions in PE and IPE are the convolutions of the DOS functions and the escape/dissipation functions. In the case of PE, electron ejection is likely to be affected by electron scattering and the decrease of the polarization energy of an electron at the interface between the OS and vacuum. [23] Both effects will broaden the response function and increase the apparent ionization energy. A PE spectrum is, therefore, unable to probe the width of the DOS distributions for HOMO states. Studies on charge transport and theoretical studies support this reasoning. [12] Suppose the width of the HOMO-DOS distribution of a P3HT film were as broad as 260 meV; then the hole mobility would, based upon the Gaussian disorder model (ref. [8]), be about 10⁻¹⁷ cm² V⁻¹ s⁻¹, in striking disagreement with experiment. [42,43] There is indeed consensus that the widths of the distributions of hole and electron transporting states are typically around 100 meV. [44]
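The Gaussian disorder model argument can be checked numerically. In the GDM (ref. [8]) the zero-field mobility scales as μ ∝ exp[−(2σ/3kT)²], so widening σ strongly suppresses the mobility. A sketch of this scaling (the absolute prefactor μ0 is not specified here, so only mobility ratios between different σ values are meaningful):

```cpp
#include <cmath>

// GDM zero-field mobility factor exp[-(2*sigma/(3*kB*T))^2].
// sigma and kT in eV. The absolute prefactor mu0 is unknown here,
// so only ratios between different sigma values are meaningful.
double gdmFactor(double sigma_eV, double kT_eV) {
    double x = 2.0 * sigma_eV / (3.0 * kT_eV);
    return std::exp(-x * x);
}

// Orders of magnitude by which the mobility drops when the DOS width
// grows from sigma1 to sigma2 at temperature kT.
double suppressionDecades(double sigma1, double sigma2, double kT) {
    return std::log10(gdmFactor(sigma1, kT) / gdmFactor(sigma2, kT));
}
```

At room temperature (kT ≈ 0.025 eV), going from σ = 100 meV to σ = 260 meV costs roughly 18 orders of magnitude in mobility, which illustrates why a 260 meV HOMO width is irreconcilable with measured hole mobilities.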
Neat Films
In all cases the DOS functions of HOMO and LUMO states shown in Figure 3a-c are either full Gaussians or Gaussians that extend into weak tails with a characteristic energy of about 100 meV, that is, at least one order of magnitude larger than those of typical Urbach tails. Provided that the DOS function indeed reflects the DOS, this confirms that the DOS widths are due to static and dynamic disorder. The simplest approach to analyze the DOS distributions is the point-site concept. [8] It implies that either a point charge or a localized exciton polarizes its environment via van der Waals coupling, with coupling energies featuring an r⁻⁴ (charge) and r⁻⁶ (exciton) dependence on distance, respectively. Imposing a random distribution of the intermolecular separations translates into a distribution of Gaussian character, because the polarization energies depend on a large number of coordinates, each varying randomly, and therefore the central limit theorem applies. It is easy to show that in this case the ratio of the standard deviations of the DOSs for charges (σ_c) and excitons (σ_exc) is σ_c/σ_exc = 4P_c/6P_exc, where P_c and P_exc are the polarization energies of charges and excitons, respectively.
A vapor-deposited PC(61)BM film is a system in which the point-site model may provide a reasonable estimate for analyzing the DOS distributions of charge carriers and excitons. The electron affinity (EA) of PC(61)BM in the gas phase and in the solid is 2.63-2.65 eV [45,46] and 3.84 eV, [47] respectively. The difference between the EA values in the gas phase and the solid, that is, the polarization energy of a radical anion, is 1.2 eV, as in many organic solids. [24] The value for the EA in solid PC(61)BM, −3.86 eV, agrees well with the average value measured by different techniques. The standard deviations (σ) of the HOMO/LUMO DOSs inferred from ER-EIS spectra are 95 and 90 meV, respectively. They are significantly lower than the value of 130 meV that Tummala et al. [13] obtained from molecular dynamics simulations. On the other hand, the σ value for the singlet excitons in PC(61)BM, inferred from the delayed fluorescence spectra at 295 K, is only 40 meV. [48] To be consistent with the point-site model (see above), the polarization energy of the singlet exciton in the solid would have to be 350 meV. This is indeed the difference between the energy of the singlet state of the anthracene molecule in the gas phase (3.45 eV) [49] and in the solid (3.1 eV), [24] which can be taken as a reference value (in lack of corresponding data on PC(61)BM).

Table 2. The center energy E_0 and the standard deviation σ obtained from the Gaussian fits to the edges of the HOMO- and LUMO-attributed DOS functions g(E_F) for neat films of MeLPPP, SF-PDI2, and PC(61)BM, as well as for the donor-acceptor blends. Also given is the energy gap E_g = E_0(HOMO) − E_0(LUMO). For the blends, center energies of multiple Gaussian fits are given for the LUMO, corresponding to donor or acceptor or different acceptor phases (c.f. Figure 3d,e).
The width of the LUMO-DOS should be reflected in the temperature dependence of the electron mobility. From temperature-dependent SCL electron transport in PC(61)BM diodes, Mihailetchi et al. [50] concluded that electron transport is consistent with the extended Gaussian disorder model and extracted a σ value of 77 meV. This value is 15% lower than the width of the LUMO-DOS. A possible reason for this discrepancy is that in the SCL transport mode tail states of the LUMO-DOS are partially filled. This should diminish the experimentally determined width of the electron DOS. Moreover, the film investigated by Mihailetchi et al. and our film were not prepared using precisely the same deposition conditions, so the film morphology can vary slightly.
The HOMO-DOS of a PC(61)BM film is of Gaussian shape over only two orders of magnitude and features a broader tail at higher energies. Such exponential tails have occasionally been observed. [51] They are a signature of traps that exist in systems in which the intrinsic HOMO-DOS is beyond 6 eV. [52] The position of the peak of the HOMO-DOS is 6.4 eV and is consistent with literature values. [53][54][55][56] Since the ionization energy in the gas phase is 7.59 eV, [57] the polarization energy of the radical cation in PCBM is close to 1.2 eV, that is, the same as the polarization energy of the radical anion. This yields an electrical gap of 2.6 eV, while PE and IPE yield 2.37 eV. [56] From the action spectrum of intrinsic photogeneration we obtained E_g = 2.45 ± 0.05 eV. [58] Combined with the fact that hole transport in PCBM is trap-limited, we conjecture that the E_g value inferred from the maxima of the HOMO and LUMO DOSs of ER-EIS spectra refers to the intrinsic gap, while the electrical gap determined from the tail of PE as well as photoconduction spectra is affected by the contribution of hole traps. Summing up, we argue that the point-site model provides a reasonable basis for analyzing the DOSs for exciton as well as charge transporting states in a small-molecule system such as a vapor-deposited PCBM film.
Let us now consider an SF-PDI2 film. SF-PDI2 is a more extended molecule compared to PC(61)BM. The LUMO-DOS of SF-PDI2 is a perfect Gaussian over 4 decades with a standard deviation of 55 meV, while the HOMO-DOS is somewhat broader (σ = 83 meV) and carries a weak tail. Remarkably, the σ value for the absorption spectrum (shown in the SI) is 0.15 eV. This demonstrates that the DOS distributions for charge carriers can be narrower than those of neutral excitations. This also indicates that in this case the point-site model is unsuitable for estimating the DOS distribution for charge carriers based upon absorption or PL spectra, because the exciton is more spread out than in a spherical molecule like PC(61)BM. [59] Next we turn to the ER-EIS spectra for the HOMO and LUMO DOS distributions of MeLPPP films. For MeLPPP the LUMO-DOS is a perfect Gaussian with a standard deviation of 60 meV, while the HOMO-DOS is a somewhat distorted Gaussian with 50 meV. The absorption and fluorescence spectra are also Gaussians with a standard deviation around 50 meV, somewhat depending on film preparation. [60] It seems that the σ values for charge carriers and singlet excitons are comparable. The point-site model is clearly inappropriate for estimating the ratio of the spectral widths. This is because in conjugated polymers the main contribution to disorder broadening stems from local variation of conjugation lengths. This conjugation-induced variation translates into the site energies. A gratifying test of internal consistency is that the separation between the centers of the HOMO- and LUMO-DOSs is 3.8 eV and agrees with the threshold energy for intrinsic photogeneration. [61] Since the energy of the singlet exciton is 2.72 eV, the binding energy of the exciton is about 1.1 eV, as in conventional molecular solids.
In this context it is worth recalling earlier work on thermally stimulated luminescence (TSL) on thick films of MeLPPP. In such a TSL study one excites a sample at low temperature to generate electron-hole pairs. They are metastable because the electrons are deeply trapped. Upon raising the temperature from 5 K onward one observes delayed emission. It originates from the thermally activated release of holes from shallow intrinsic traps, that is, the HOMO-DOS, which subsequently recombine with trapped electrons. Therefore, the temperature dependence of the TSL signal reflects the HOMO-DOS of the MeLPPP host. By analyzing the TSL signal the shape and the width of the HOMO-DOS are recovered. The result confirms that at low temperatures (≈50 K) the DOS of a thick MeLPPP film has a Gaussian shape with σ = 54 meV. [62] This is in favorable agreement with the value of 50 meV inferred from the ER-EIS study.
We note that we do not observe any indication of trap states in the spectrum shown in Figure 3a. This is remarkable since electron transport in MeLPPP has been found to be strongly trap-limited, with trap concentrations expected to be around 10¹⁸ cm⁻³. [63] It is not fully clear why these traps are not evident in the ER-EIS spectra, and this is subject to further investigation. Possibly this is related to the dominance of the counterbalancing hole current in MeLPPP, or to poor injection of charges directly into the trap states.
Blended Films
It is instructive to compare the DOS distributions of MeLPPP:PCBM and MeLPPP:SF-PDI2 blends with those of the parent neat films (Figure 4). 3) The weak, barely noticeable shoulder at −3.60 eV in the LUMO of PCBM becomes stronger in the blend with MeLPPP, so that it appears as a peak with the same intensity as the −3.85 eV peak of the LUMO in neat PCBM. We tentatively assign the −3.85 eV feature to crystal-like domains and the −3.60 eV feature to more disordered PCBM. The rationale behind this reasoning is that upon progressive ordering the ionization energy of a solid decreases. Formation of ordered domains of PCBM in bulk-heterojunction OSCs, in addition to more amorphous and possibly even intercalated structures, is a well-established phenomenon. [64][65][66] In passing we note that the feature at −3.10 eV has also acquired more intensity compared to the neat PCBM film. Since this is more than half an eV above the features attributed to LUMOs of differently ordered PCBM morphologies, we consider that the −3.10 eV feature more likely arises from some different orbital.
Conclusions
Since for PCBM both the ionization energy and the EA have been measured in the gas phase employing PE techniques, the ER-EIS spectrum of a PCBM film provides values for the polarization energies of radical cations and anions as well as the standard deviations of the pertinent DOS distributions. Combined with PL spectroscopy, we conclude that the simple point-site model, which rests upon the notion that disorder originates from fluctuations of the van der Waals coupling, is sufficient to understand disorder effects in PCBM. This is in contrast with films of conjugated polymers, because there is an additional spreading of site energies due to statistical variations of the effective conjugation lengths. The mutual consistency between different probes of disorder phenomena demonstrates that impedance spectroscopy is indeed a valuable technique to map the DOS distributions for charge carriers. Its particular advantage is that it allows mapping the DOS over five orders of magnitude. Its dynamic range is thus greater than that of PE spectroscopy, and it is much cheaper and easier to employ. We find that in all systems we looked at, the central portions of the measured HOMO and LUMO distributions are of Gaussian shape, occasionally carrying broader tails that are associated with extrinsic defects and can act as charge carrier traps. Importantly, the ER-EIS technique is able to interrogate the DOS distributions in neat films as well as in blends. It is therefore a method to probe changes in the HOMO/LUMO distributions upon blending, albeit subject to the condition that there is a percolation path of charges across the film. An example is the broadening of the LUMO-DOS of MeLPPP upon blending with PCBM.
By comparing DOS distributions of a P3HT film inferred from PE and from electrochemical impedance measurements, respectively, we find that the widths of the DOS distributions determined using PE and IPE are typically a factor of 2-3 larger than those inferred from ER-EIS spectra. The likely reason is that in the former case there is additional spectral broadening in the course of electron ejection from the OS to vacuum (PE) and dissipation of the excess energy of the injected electron (IPE). Therefore, PE and IPE overestimate the disorder in an OS.
Experimental Section
For the ER-EIS method, the electrochemical microcells had a volume of about 200 µl. The active organic semiconductor thin film was deposited on top of either an ITO-covered glass or a highly doped Si (n+ or p+) substrate. A solution of 0.1 M TBAPF6 in anhydrous acetonitrile was used as the supporting electrolyte. The active organic semiconductor electrode area was 12 mm². The potential of the working electrode with respect to the Ag/AgCl reference electrode was controlled via a potentiostat. A Pt wire was used as the counter electrode. The potential recorded with respect to the Ag/AgCl reference electrode was recalculated to the local vacuum level assuming an Ag/AgCl energy versus vacuum of 4.66 eV. An impedance/gain-phase analyzer, Solartron Analytical model 1260 (Ametek, Berwyn, USA), was run in the usual three-electrode regime. The AC harmonic voltage signal frequency was usually 0.5 Hz, its rms value was 100 mV, and the sweep rate of the DC voltage ramp was 10 mV s⁻¹. Bode and Cole-Cole diagrams in the frequency range of 0.01 Hz-1 MHz were used for the preliminary ER-EIS frequency adjustment.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
A Concretization of an Approximation Method for Non-Affine Fractal Interpolation Functions
The present paper concretizes the models proposed by S. Ri and N. Secelean. S. Ri proposed the construction of the fractal interpolation function (FIF) considering finite systems consisting of Rakotch contractions, but produced no concretization of the model. N. Secelean considered countable systems of Banach contractions to produce the fractal interpolation function. Based on the abovementioned results, in this paper we propose two different algorithms to produce fractal interpolation functions in both the affine and the non-affine cases. The theoretical context we work in assumes a countable set of starting points and a countable system of Rakotch contractions. Due to computational restrictions, the algorithms constructed in the applications have the weakness that they use a finite set of starting points and a finite system of Rakotch contractions. In this respect, the attractor obtained is a two-step approximation. The large number of points used in the computations and the graphical results lead us to the conclusion that the attractor obtained is a good approximation of the fractal interpolation function in both cases, affine and non-affine FIFs. In this way, we also provide a concretization of the scheme presented by C.M. Păcurar.
Introduction
The notion of fractal interpolation was introduced by Barnsley in [1] (see also [2]) and it represents a different interpolation method, which results in functions that are continuous, but not necessarily differentiable at every point. A FIF associated to a system of data points (x_i, y_i), where the x_i are sorted in ascending order, is a continuous function interpolating the data such that its graph is the attractor of an iterated function system. The significance of FIFs is emphasized by the numerous research directions that have been broadly studied ever since they were introduced. Among these directions, we mention hidden variable fractal interpolation, which was introduced by Barnsley et al. (see [3]) and generates functions which are not self-referential, thus being much less restrictive (see [4][5][6]); the extension to countable iterated function systems (a notion introduced in [7][8][9]) to obtain the corresponding FIFs (see [10,11]); and the replacement of the fixed point result (the Banach fixed point theorem), which guarantees the existence of the FIF, with different fixed point results (see [12][13][14][15]).
Among the different types of FIFs existing in the literature, affine FIFs have been studied (see [1]), but also non-affine FIFs (see [16]). However, while for the affine case there have been studies undertaken towards the computational part (see [17][18][19][20][21]), as far as we know, there have not yet been any studies related to non-affine FIFs in this respect.
The aim of the present paper is to offer a concretization of an approximation method for non-affine fractal interpolation functions. Starting from the results in [8,12], in this paper we propose two different algorithms to produce fractal interpolation functions in both cases, affine and non-affine FIFs. The theoretical context we work in assumes a countable set of starting points and a countable system of Rakotch contractions. Due to computational restrictions, the algorithms built for the applications have the weakness that they use a finite set of starting points and a finite system of Rakotch contractions. In this respect, the attractor obtained is a two-step approximation. The large number of points used in the computations and the graphical results lead us to the conclusion that the attractor obtained is a good approximation of the fractal interpolation function in both the affine and non-affine cases. In this way, we also provide a concretization of the scheme presented by C.M. Păcurar (see [15]).
In this study, we also want to solve the problem of viewing a large set of data, generated by the iteration schemes mentioned above, in order to better understand the theoretical knowledge in the function plotting field. We study the nature of data plotting in C++ regarding its pros and cons (limitations). The scope of the application is to generate graphs for various functions and to observe the steps taken by the algorithm in order to obtain the correct plotting. Using C++ (one of the fastest and most memory-efficient languages) and Qt (a C++ cross-platform framework for GUIs - Graphical User Interfaces), we developed an application that puts into use most of the modern features offered by C++ (especially C++11 features).
Mathematical Preliminaries
Let (X, d) be a metric space.

Definition 1. The map f : X → X is called a Picard operator if f has a unique fixed point x* ∈ X (i.e., f(x*) = x*) and lim_{n→∞} f^[n](x) = x* for every x ∈ X, where f^[n] denotes the n-times composition of f with itself.
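The Picard property is exactly what the iterative schemes later in the paper rely on: applying f repeatedly from any starting point converges to the unique fixed point. A minimal real-valued sketch (the contraction x ↦ x/2 + 1, with fixed point x* = 2, and all names are our own illustration):

```cpp
#include <cmath>
#include <functional>

// Picard iteration: apply f repeatedly until successive iterates are
// closer than tol. For a Banach (or Rakotch/Matkowski) contraction on a
// complete metric space this converges to the unique fixed point x*.
double picardIterate(const std::function<double(double)>& f,
                     double x0, double tol = 1e-12, int maxIter = 10000) {
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double next = f(x);
        if (std::fabs(next - x) < tol) return next;
        x = next;
    }
    return x;
}
```

For example, picardIterate([](double x){ return 0.5 * x + 1.0; }, 0.0) converges to 2 regardless of the starting point x0.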
A map f : X → X is called a Banach contraction if there exists C ∈ [0, 1) such that d(f(x), f(y)) ≤ C d(x, y) for every x, y ∈ X, and Lipschitz if such a constant C ≥ 0 exists without the restriction C < 1. The smallest C in the above definition is called the Lipschitz constant, defined as Lip(f) = sup_{x ≠ y} d(f(x), f(y))/d(x, y).
A map f : X → X is called a ϕ-contraction if there exists a function ϕ : [0, ∞) → [0, ∞) such that d(f(x), f(y)) ≤ ϕ(d(x, y)) for every x, y ∈ X. A ϕ-contraction is called a Rakotch contraction if ϕ(t) = α(t) t for every t > 0, where α : (0, ∞) → [0, 1) is non-increasing, and a Matkowski contraction if ϕ is non-decreasing and lim_{n→∞} ϕ^[n](t) = 0 for every t > 0.

Remark:
1. Every Banach contraction is Lipschitz, with Lipschitz constant smaller than 1.
2. Every Banach contraction is a ϕ-contraction, with ϕ(t) = C t for every t > 0.
In [22], the following fixed point result was proved.
Theorem 1. Every Matkowski contraction on a complete metric space is a Picard operator.
Iterated Function Systems
Hutchinson introduced the notion of iterated function systems in [23]. Secelean extended the notion to countable iterated function systems, composed of a countable number of constitutive functions (see [8]).
Definition 3.
Let (X, d) be a compact metric space and f_n : X → X continuous functions. The system of all functions f_n is called a countable iterated function system (CIFS), which will be denoted by S = {(f_n)_{n≥0}}. Let P_cp(X) be the class of all non-empty compact subsets of X. The fractal operator associated to S is the map F_S : P_cp(X) → P_cp(X) defined as F_S(B) = cl(⋃_{n≥0} f_n(B)) for every B ∈ P_cp(X). If the functions f_n are Matkowski contractions (or Rakotch contractions, or ϕ-contractions, or Banach contractions), the fractal operator associated to the CIFS S is a Picard operator and its unique fixed point is called the attractor of S, which will be denoted by A_S.
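In computations the attractor is approximated by truncating the CIFS to finitely many maps and iterating the fractal operator on a finite point set, B_{k+1} = ⋃_n f_n(B_k). A toy sketch of this deterministic iteration (the two middle-thirds maps on [0, 1], whose attractor is the Cantor set, are our own example, not the paper's interpolation maps):

```cpp
#include <vector>
#include <functional>

using Map = std::function<double(double)>;

// One application of the (truncated) fractal operator: the union of the
// images of every point under every constitutive map f_n.
std::vector<double> fractalStep(const std::vector<Map>& maps,
                                const std::vector<double>& points) {
    std::vector<double> out;
    out.reserve(maps.size() * points.size());
    for (const Map& f : maps)
        for (double p : points)
            out.push_back(f(p));
    return out;
}

// Iterate the operator k times starting from an initial set; the result
// approximates the attractor A_S of the finite truncation of the IFS.
std::vector<double> approximateAttractor(const std::vector<Map>& maps,
                                         std::vector<double> points, int k) {
    for (int i = 0; i < k; ++i)
        points = fractalStep(maps, points);
    return points;
}
```

With the maps x ↦ x/3 and x ↦ x/3 + 2/3 and one starting point, k iterations produce 2^k points approximating the Cantor set; note the exponential growth in the point count, which is exactly the cost discussed in the limitations section.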
Countable FIFs
Let (Y, d) be a compact metric space and consider the countable system of data {(x_n, y_n) : n ≥ 0} (1), where the sequence (x_n)_{n≥0} is strictly increasing and bounded with m = lim_{n→∞} x_n, and the sequence (y_n)_{n≥0} is convergent. We use the notation M = lim_{n→∞} y_n. Let us recall from [15] the way we can construct a family of functions associated to the system of data (1): let (u_n)_{n≥0} be a family of contractive homeomorphisms mapping the base interval onto the subintervals determined by the data and matching the endpoints. We can then define the family of functions (f_n)_{n≥0}. Given the same aforementioned framework, there exists an interpolation function f* corresponding to the system of data (1) such that its graph is the attractor of the CIFS. In the particular case that Y is a compact real interval, Y ⊂ (0, ∞), we can choose the non-affine functions f_n as in [15]. For the affine case, when Y is a compact real interval, one can choose the functions f_n as in [10].
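For the affine case with a finite data set, the maps can be written in the standard Barnsley form w_n(x, y) = (a_n x + e_n, c_n x + d_n y + g_n), with the coefficients fixed by the endpoint-matching conditions. A sketch of this construction together with a random-iteration (chaos game) approximation of the FIF graph (the data set, vertical scaling factors d_n, and all names are our own illustrative choices):

```cpp
#include <vector>
#include <random>
#include <utility>

// Affine FIF maps w_n(x,y) = (a_n x + e_n, c_n x + d_n y + g_n),
// constrained so that w_n maps the whole data interval onto
// [x_{n-1}, x_n] and hits the data points at both ends. The vertical
// scaling factors d_n (|d_n| < 1) are free parameters.
struct AffineMap { double a, e, c, d, g; };

std::vector<AffineMap> fifMaps(const std::vector<double>& x,
                               const std::vector<double>& y,
                               const std::vector<double>& d) {
    const std::size_t N = x.size() - 1;
    const double dx = x[N] - x[0];
    std::vector<AffineMap> w(N);
    for (std::size_t n = 1; n <= N; ++n) {
        AffineMap& m = w[n - 1];
        m.d = d[n - 1];
        m.a = (x[n] - x[n - 1]) / dx;
        m.e = (x[N] * x[n - 1] - x[0] * x[n]) / dx;
        m.c = (y[n] - y[n - 1] - m.d * (y[N] - y[0])) / dx;
        m.g = (x[N] * y[n - 1] - x[0] * y[n] - m.d * (x[N] * y[0] - x[0] * y[N])) / dx;
    }
    return w;
}

// Chaos game: random iteration of the maps; the visited points
// accumulate on the graph of the FIF.
std::vector<std::pair<double, double>>
chaosGame(const std::vector<AffineMap>& w, int iters, unsigned seed = 1) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pick(0, w.size() - 1);
    double px = 0.0, py = 0.0;
    std::vector<std::pair<double, double>> pts;
    pts.reserve(iters);
    for (int i = 0; i < iters; ++i) {
        const AffineMap& m = w[pick(rng)];
        double nx = m.a * px + m.e;
        py = m.c * px + m.d * py + m.g;
        px = nx;
        pts.emplace_back(px, py);
    }
    return pts;
}
```

The chaos game is the probabilistic scheme mentioned in the limitations section; its cost grows linearly in the number of generated points, unlike the exponential growth of the deterministic set iteration.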
Applied Technologies. Motivation (Pros)
Qt is a widget toolkit for creating graphical user interfaces as well as cross-platform applications that run on various software and hardware platforms such as Linux, Windows, macOS, Android or embedded systems, with little or no change in the underlying codebase, while still being a native application with native capabilities and speed. Qt Creator is a cross-platform C++, JavaScript and QML integrated development environment which simplifies Graphical User Interface (GUI) application development. It includes a visual debugger and an integrated WYSIWYG (What You See Is What You Get) GUI layout and forms designer. The editor has features such as syntax highlighting and autocompletion.
One of the main problems encountered during development was handling flexible, user-input functions; that is why we prioritised adding a fairly robust mathematics parsing engine written in C++. We chose CMathParser (https://github.com/NTDLS/CMathParser, accessed on 15 February 2021), which provides a robust collection of functions and structures that give users the ability to parse and evaluate various types of expressions. Although it is fairly lightweight, CMathParser can interpret a wide range of mathematical functions and operations, and its performance is convenient in relation to the advantages that this engine brings. The mathematical functions used need to follow a specific syntax (for example, √x is SQRT(x)); this is why we found it useful to read our functions from a file. We opted for reading from an XML (Extensible Markup Language) file because we can use the tags to our advantage and clearly define every function and every parameter for that function.
Below, in Listing 1, is an example of an XML file accepted by the application. For plotting we use a Qt C++ widget for plotting and data visualization. It has no further dependencies and is well documented. This plotting library focuses on making good-looking, publication-quality 2D plots, graphs and charts, as well as offering high performance for real-time data visualization applications.
Technical Notes on Performance
Our target regarding the application's performance was to optimise the algorithm so that it brings up the graphs as soon as possible. To increase performance, we used multithreading: we use all the available threads on the CPU and developed the algorithm in a way that favors concurrency. The algorithm finds out how many threads are available and uses them (for a CPU with 4 threads, the algorithm will use a maximum of 4 threads at full load), although it is possible to start any number of threads (the OS scheduler will put them in a priority queue). A program that starts 100 threads for 100 tasks on a 4-threaded CPU will be less performant than the same program that splits those 100 tasks so that the CPU takes 4 tasks at a time.
Listing 2 is a threading code snippet that spawns as many threads as are available to generate points. The number of threads used impacts performance and, overall, the quality and run time; see Figure 1. It can also be seen that as the number of points increases, the difference between the time associated with running with 8 threads versus 16 threads increases considerably.
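Listing 2 is not reproduced here; the chunking strategy described above can be sketched as follows. This is a minimal illustration using std::thread, not the application's actual code; generate_points and the per-point computation are placeholders:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Generate `total` points, splitting the work across all available
// hardware threads (a hypothetical stand-in for the application's
// point-generation routine).
std::vector<double> generate_points(std::size_t total) {
    unsigned n_threads = std::thread::hardware_concurrency();
    if (n_threads == 0) n_threads = 1;            // fallback if unknown
    std::vector<double> points(total);
    std::vector<std::thread> workers;
    // Each thread handles one contiguous chunk of indices.
    std::size_t chunk = (total + n_threads - 1) / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(begin + chunk, total);
        if (begin >= end) break;
        workers.emplace_back([&points, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                points[i] = 0.5 * i;              // placeholder computation
        });
    }
    for (auto& w : workers) w.join();             // wait for all chunks
    return points;
}
```

Splitting the tasks into one chunk per hardware thread, rather than one thread per task, matches the scheduling argument made in the text.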
Limitations (Constraints)
At the present time, the only known limit of the application is the time required to generate and plot the points when they number in the billions or more. The following examples were run on an i7-7700HQ with 8 threads. RAM is not really relevant because the algorithm is very CPU-heavy.
For the probabilistic scheme, generating and plotting 100,000 points takes approximately 2-3 s, and for every p * 100,000 points (p a positive integer), the time will be approximately p * x, where x ∈ [2,3].
For the deterministic scheme, things become challenging. At a low level, the time for generating and plotting points will be the same as for Step 1. The difference lies in the number of points the scheme generates for parameters k, n, p, which leads to run-time fluctuations; these may also be due to the insertion and processing of points in the files.
Countable Fractal Non-Affine Interpolation Schemes
We start the study by producing an approximation for the fractal countable non-affine interpolation scheme in two different ways. Let us consider the positive, increasing, convergent sequence (x n ) n∈N and the convergent sequence (y n ) n∈N defined by (x n ) n∈N , given by the following: and the sequence of non-affine functions given by the following: for all (x, y) ∈ [x 0 , m] × Y . The argument of the sine function we used when defining y n was imposed by the fact that the Mathematical Function Library requires us to work in radians.
The absolute value of the sine in the definition of y n was imposed by the fact that every second coordinate of the points obtained in the process must be positive in order to have a Rakotch contraction. The result obtained is an approximation of the countable non-affine interpolation schemes. For the desired approximation, we take the following subsets: The two schemes we used are the probabilistic interpolation scheme and the deterministic interpolation scheme. The probabilistic scheme is described in Algorithm 1.
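The probabilistic scheme is a chaos-game-style random iteration; the sketch below illustrates the general idea in C++ with a generic three-map iterated function system. The maps are illustrative stand-ins (they generate a Sierpinski-type attractor), not the interpolation functions f_n defined above:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

struct Point { double x, y; };

// Generic "chaos game": repeatedly pick one map of an iterated
// function system at random and apply it to the current point.
// Every iterate is collected as a plotted point.
std::vector<Point> chaos_game(std::size_t steps, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pick(0, 2);
    Point p{0.0, 0.0};
    std::vector<Point> orbit;
    orbit.reserve(steps);
    for (std::size_t i = 0; i < steps; ++i) {
        switch (pick(rng)) {
            case 0: p = {p.x / 2.0, p.y / 2.0}; break;
            case 1: p = {(p.x + 1.0) / 2.0, p.y / 2.0}; break;
            case 2: p = {p.x / 2.0, (p.y + 1.0) / 2.0}; break;
        }
        orbit.push_back(p);
    }
    return orbit;
}
```

Running such a scheme for 100,000 steps and plotting the orbit corresponds to the experiments reported in the figures.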
The deterministic scheme we are trying to apply is given by Algorithm 2. The probabilistic scheme (Algorithm 1), after 100,000 steps, leads us to the result shown in Figure 2. In Figure 4, a graph with the plotting of the interpolation function at every step of the algorithm is shown.
Countable Fractal Affine Interpolation Schemes
We carried out the study by producing an approximation for the fractal countable affine interpolation scheme in the same two ways as for the non-affine case. Let us consider the positive, increasing, convergent sequence (x n ) n∈N and the convergent sequence (y n ) n∈N defined by x n = 3 √ n + 1 √ n + 1 , y n = cos 180 · n π + 1 √ n + 1 and the affine sequence of functions ( f n (x, y)) n∈N for all (x, y) ∈ [x 0 , m] × Y . The argument of the cosine function we used when defining y n was imposed by the fact that the Mathematical Function Library requires us to work in radians.
For the desired approximation, we take the following subsets: The probabilistic scheme (Algorithm 1), after 100,000 steps, leads us to the result shown in Figure 5. The deterministic scheme produces, for k = 100, n = 100 and p = 3, the graph in Figure 6.
The time for obtaining the points in these conditions was 595.631 s and the time for plotting the function was 1594.470 s. The graph with the plotting of the interpolation function in every step of the algorithm is given in Figure 7.
The red graph is the function given by the 100 randomly generated points. Graph 2 is the function for 10,000 points (step p = 1), graph 3 is the function after step p = 2 (1,000,000 points) and the yellow graph is the final one, the same as in Figure 7.
Conclusions
The main conclusion of this study is that the algorithms presented give similar approximations of the FIFs for both schemes, affine and non-affine. For the probabilistic scheme (Algorithm 1), significant results are obtained for more than 10,000 steps and the time elapsed to plot the graph is less than two seconds. The deterministic scheme (Algorithm 2) permits the study of the variation of FIFs, step by step, but, in order to obtain significant results one must perform more than three steps. The time elapsed in this case is more than 1000 s. In the applications presented, both algorithms have an imposed number of steps.
Further studies are to be made in order to obtain a stopping condition for the algorithms once certain criteria are fulfilled.
Besides the two algorithms, the computer application is a useful tool for plotting big sets of data generated by the iteration schemes described above through functions written in C++. It truly shows the capabilities of this programming language and pushes it to the maximum using threading and modern programming techniques.
"Computer Science",
"Mathematics"
] |
Modeling a measurement-device-independent quantum key distribution system.
We present a detailed description of a widely applicable mathematical model for quantum key distribution (QKD) systems implementing the measurement-device-independent (MDI) protocol. The model is tested by comparing its predictions with data taken using a proof-of-principle, time-bin qubit-based QKD system in a secure laboratory environment (i.e. in a setting in which eavesdropping can be excluded). The good agreement between the predictions and the experimental data allows the model to be used to optimize mean photon numbers per attenuated laser pulse, which are used to encode quantum bits. This in turn allows optimization of secret key rates of existing MDI-QKD systems, identification of rate-limiting components, and projection of future performance. In addition, we also performed measurements over deployed fiber, showing that our system's performance is not affected by environment-induced perturbations.
Introduction
From the first proposal in 1984 to now, the field of quantum key distribution (QKD) has evolved significantly [1,2]. For instance, experimentally, systems delivering key at Mbps rates [3] as well as key distribution over more than 100 km [4,5] have been reported. From a theoretical perspective, efforts aim at developing QKD protocols and security proofs with minimal assumptions about the devices used [6]. Of particular practical importance are two recently developed protocols that do not require trusted single photon detectors (SPDs) [7,8]. One of these, the so-called measurement-device-independent QKD (MDI-QKD) protocol, has already been implemented experimentally [9][10][11][12]. Hence, it is foreseeable that it will play an important role in the future of QKD, and it is thus important to understand the interplay between experimental imperfections (which will always remain in real systems) and system performance to maximize the latter.
In this work, we derive a widely applicable mathematical model describing systems that implement the MDI-QKD protocol. The model is based on facts about our [9], and other existing experimental setups [10][11][12], and takes into account carefully characterized imperfect state preparation, loss in the quantum channel, as well as limited detector efficiency and noise. It is tested by comparing its predictions with data taken with a proof-of-principle QKD system [9] employing time-bin qubits and implemented in a laboratory environment. Our model, which contains no free parameter, reproduces the experimental data within statistical uncertainties over three orders of magnitude of a relevant parameter. The excellent agreement allows optimizing central parameters that determine secret key rates, such as mean photon numbers used to encode qubits, and to identify rate-limiting components for future system improvement. In addition, we also find that the model accurately reproduces experimental data obtained over deployed fibers, showing that our system minimizes environment-induced perturbation to quantum key distribution in real-world settings.
This paper is organized in the following way: In section 2 we detail some of the side-channel attacks (i.e. attacks exploiting incorrect assumptions about the working of QKD devices) proposed so far and review technological countermeasures. In section 3 we briefly describe the MDI-QKD protocol, which instead exploits fundamental quantum physical laws to render the most important of these attacks useless. Our model of MDI-QKD systems is presented in section 4. This section is followed by an in-depth account of experimental imperfections that affect MDI-QKD performance and a description of how we characterized them in our system (section 5). Section 6 shows the results of the comparison between modelled and measured quantities, and section 7 details how to optimize the performance of our MDI-QKD system using the model. Finally, we conclude the article in section 8.
Side-channel attacks
A healthy development of QKD requires investigating the vulnerabilities of QKD implementations in terms of potential side-channel attacks. Side-channels in QKD are channels over which information about the key may leak out unintentionally. One of the first QKD side-channel attacks proposed was the photon number splitting (PNS) attack [13] in which the eavesdropper, Eve, exploits the fact that attenuated laser pulses sometimes include more than one photon to obtain information about the key. This attack can be detected if the decoy state protocol [14][15][16] is implemented. In the decoy state protocol, Alice varies the mean photon number per pulse in order to allow her and Bob to distill the secret key only from information stemming from single photon emissions. More proposals of side-channel attacks followed, including the Trojan-horse attack [17], for which the countermeasure is an optical isolator [17], and the phase remapping attack [18], for which the countermeasure is phase randomization [18]. Later on, attacks that took advantage of SPD vulnerabilities were also proposed and demonstrated [19][20][21][22]. For example, the time-shift attack [20] exploits a difference in the quantum efficiencies of the SPDs used in a QKD system. This attack can be prevented by actively selecting one of the two bases for the projection measurement, as well as by monitoring the temporal distribution of photon detections [20]. Another example is the detector blinding attack [22] in which the eavesdropper uses high intensity pulses to modify the behavior of (i.e. blind) the SPDs. It can be detected by monitoring the intensity of light at the entrance of Bob's devices with a photodiode [22][23][24]. Nevertheless, due to its power, the blinding attack is currently of particular concern.
It is important to mention that open side-channels do not necessarily compromise the security of the final key if the information that Eve may have obtained through an attack is properly removed during privacy amplification. However, as technological fixes (as discussed above) or additional privacy amplification can only thwart known attacks, it is important to develop and implement protocols that use a minimum number of assumptions about the devices used to implement the protocol. An important example is the measurement-device-independent QKD protocol, which we will introduce in the next section.
The measurement-device-independent quantum key distribution protocol
The MDI-QKD protocol is a time-reversed version of entanglement-based QKD. In this protocol, the users, Alice and Bob, are each connected to Charlie, a third party, through a quantum channel, e.g. optical fiber (see Fig. 1). In the ideal version, the users have a source of single photons that they prepare randomly in one of the BB84 qubit states [25] |0 , |1 , |+ and |− , where |± = 2 −1/2 (|0 ± |1 ). The qubits are sent to Charlie where the SPDs are located. Charlie performs a partial Bell state measurement (BSM) through a 50/50 beam splitter and then announces the events for which the measurement resulted in a projection onto the |ψ − state. Alice and Bob then publicly exchange information about the used bases (z, spanned by |0 and |1 , or x, spanned by |+ and |− ). Associating quantum states with classical bits (e.g. |0 , |− ≡ 0, and |1 , |+ ≡ 1) and keeping only events in which Charlie found |ψ − and they picked the same basis, Alice and Bob now establish anticorrelated key strings. (Note that a projection of two photons onto |ψ − indicates that the two photons, if prepared in the same basis, must have been in orthogonal states.) Bob then flips all his bits, thereby converting the anti-correlated strings into correlated ones. Next, the so-called x-key is formed out of all key bits for which Alice and Bob prepared their photons in the x-basis; its error rate is used to bound the information an eavesdropper may have acquired during photon transmission. Furthermore, Alice and Bob form the z-key out of those bits for which both picked the z-basis. Finally, they perform error correction and privacy amplification [1,2] to the z-key, which results in the secret key.
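The classical post-processing described above (basis sifting plus Bob's bit flip) can be sketched as a toy simulation. For illustration only, we assume here that every same-basis round with orthogonal states yields a |ψ−⟩ announcement, ignoring photon statistics, loss and noise:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Qubit { char basis; int bit; };  // basis: 'z' or 'x'; bit: 0 or 1

// Toy sifting: keep rounds where (in this idealized model) Charlie
// announces |psi->, i.e. both users chose the same basis and sent
// orthogonal states; Bob then flips his bit to correlate the keys.
std::pair<std::vector<int>, std::vector<int>>
sift(const std::vector<Qubit>& alice, const std::vector<Qubit>& bob) {
    std::vector<int> keyA, keyB;
    for (std::size_t i = 0; i < alice.size(); ++i) {
        bool same_basis = alice[i].basis == bob[i].basis;
        bool orthogonal = alice[i].bit != bob[i].bit;
        if (same_basis && orthogonal) {
            keyA.push_back(alice[i].bit);
            keyB.push_back(1 - bob[i].bit);   // Bob's bit flip
        }
    }
    return {keyA, keyB};
}
```

After the flip, the two key strings agree, matching the anti-correlation argument in the text.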
The advantage of the MDI-QKD protocol over conventional prepare-and-measure or entangled photon-based QKD protocols is that, in the case of Charlie performing an ideal (partial) BSM as described above, detection events are uncorrelated with the final secret key bits. This is because a projection onto |ψ − only indicates that Alice and Bob sent orthogonal states, but does not reveal who sent which state. As a result, Charlie (or Eve) is unable to gain any information about the key from passively monitoring the detectors. Furthermore, a measurement that is different from the ideal BSM leads to an increased error rate and thus to a smaller, but still secret, key once privacy amplification has been applied. Notably, it does not matter whether the difference is due to experimental imperfections or to an eavesdropper (possibly Charlie himself) trying to gather information about the states that Alice and Bob sent by replacing or modifying the measurement apparatus. Hence, all detector side channels are closed in MDI-QKD.
In the ideal scenario introduced above, Alice and Bob use single photon sources to generate qubits. However, it is possible to implement the protocol using light pulses attenuated to the single photon level. Indeed, as in prepare-and-measure QKD, randomly varying the mean photon number per attenuated light pulse between a few different values (so-called decoy and signal states) allows making the protocol practical while protecting against a possible PNS attack [7,26]. The secret key rate is then given by [7]: R ≥ Q z 11 [1 − h 2 (e x 11 )] − f Q z µσ h 2 (e z µσ ), (1) where h 2 is the binary entropy function, f indicates the error correction efficiency, Q indicates the gain (the probability of a projection onto |ψ − per emitted pair of pulses [27]) and e indicates error rates (the ratio of erroneous to total projections onto |ψ − ). Furthermore, the superscripts, x or z, denote if gains or error rates are calculated for qubits prepared in the x- or the z-basis, respectively. Similarly, the subscripts, µ and σ , show that the quantity under concern is calculated or measured for pulses with mean photon number µ (sent by Alice) and σ (sent by Bob), respectively. Finally, the subscript 11 indicates quantities stemming from detection events for which the pulses emitted by Alice and Bob contain only one photon each. Note that Q 11 and e 11 cannot be measured; their values must be bounded using either a decoy state method, or employing qubit tagging [13]. However, the latter yields smaller key rates and distances than the former. Shortly after the original proposal [7], a practical decoy state protocol for MDI-QKD was proposed [26]. It requires Alice and Bob to randomly pick mean photon numbers between two decoy states and a signal state. One of the decoy states must have a mean photon number lower than the signal state, while the other one must be vacuum. A finite number of decoy states results in a lower bound for Q x,z 11 and an upper bound for e x 11 , which in turn gives a lower bound for the secret key rate in Eq. (1). We will elaborate more on decoy states in section 7.1.
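The key-rate bound of [7], R ≥ Q^z_11[1 − h2(e^x_11)] − f Q^z_µσ h2(e^z_µσ), can be evaluated numerically. The following sketch implements the binary entropy function and the bound as we read it; the parameter values in any call are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Binary entropy h2(x) = -x log2(x) - (1-x) log2(1-x).
double h2(double x) {
    if (x <= 0.0 || x >= 1.0) return 0.0;  // limits h2(0) = h2(1) = 0
    return -x * std::log2(x) - (1.0 - x) * std::log2(1.0 - x);
}

// Lower bound on the MDI-QKD secret key rate:
//   R >= q11_z * (1 - h2(e11_x)) - f * qz * h2(ez)
// q11_z, e11_x: single-photon gain and error rate (decoy-state bounds);
// qz, ez: measured z-basis gain and error rate; f: error correction
// efficiency.
double secret_key_rate(double q11_z, double e11_x,
                       double qz, double ez, double f) {
    return q11_z * (1.0 - h2(e11_x)) - f * qz * h2(ez);
}
```

Such a routine is the starting point for the rate optimizations over mean photon numbers discussed in section 7.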
The model
Our model takes into account imperfections present in a typical QKD system. Regarding the sources, located at Alice and Bob, we take into account imperfect preparation of the quantum state of each photon. Furthermore, we consider transmission loss of the links between Alice and Charlie, and Bob and Charlie. And finally, concerning the measurement apparatus at Charlie's, we consider imperfect projection measurement stemming from non-maximum quantum interference on Charlie's beam splitter, detector noise such as dark counts and afterpulsing, and limited detector efficiency. See also [28] for another model describing MDI-QKD performance, but with a more restrictive set of imperfections and not yet tested against actual experimental data.
In the following paragraphs we present a detailed description of our model. It relies on the assumption of phase randomized laser pulses at Charlie's. While Alice and Bob generate coherent states in our proof-of-principle setup, this assumption is correct as the long fibres used to connect Alice and Bob with Charlie introduce random global phase variations (we will discuss the impact of the lack of phase randomization at Alice's and Bob's on the security of distributed keys in section 8). We note that, in order to facilitate explanations, we have adopted the terminology of time-bin encoding. However, our model is general and can also be applied to MDI-QKD systems implementing other types of encoding [11].
State preparation
In the MDI-QKD protocol, Alice and Bob derive key bits whenever Charlie announces a projection onto the |ψ − Bell state. We model the probability of a |ψ − projection for various quantum states of photons emitted by Alice and Bob as a function of the mean photon number per pulse (µ and σ , respectively) and transmission coefficients of the fiber links (t A and t B , respectively). We consider photons in qubit states described by: where |0 and |1 denote orthogonal modes (i.e. early and late temporal modes assuming time-bin qubits), respectively. Note that |ψ describes any pure state [29] and the presence of the m x,z and b x,z terms in Eq. (2), as opposed to using only one parameter, is motivated by the fact that they model different experimentally characterizable imperfections. In the ideal case, m z ∈ [0, 1] for photon preparation in the z-basis (in this case, the value of φ z is irrelevant), m x = 1/2 and φ x ∈ [0, π] for the x-basis, and b x,z = 0 for both bases. Imperfect preparation of photon states is modelled by using non-ideal m x,z , φ x,z and b x,z for Alice and Bob. The parameter b x,z is included to represent the background light emitted and modulated by an imperfect source. Furthermore, in principle, the various states generated by Alice and Bob could have differences in other degrees of freedom (i.e. polarization, spectral, spatial, temporal modes). This is not included in Eq. (2), but would be reflected in a reduced quality of the BSM, which will be discussed below.
Conditional probability for projections onto |ψ −
A projection onto |ψ − occurs if one of the SPDs after Charlie's 50/50 beam splitter signals a detection in an early time-bin (a narrow time interval centered on the arrival time of photons occupying an early temporal mode) and the other detector signals a detection in a late time-bin (a narrow time-interval centered on the arrival time of photons occupying a late temporal mode). Note that, in the following paragraphs, this is the desired detection pattern we search for when modeling possible interference cases or noise effects. Also, note that we assume that Charlie's two single-photon detectors have identical properties. A deviation from this approximation does not open a potential security loophole (in contrast to prepare-and-measure and entangled photon based QKD), as all detector side-channel attacks are removed in MDI-QKD.
We build up the model by first considering the probabilities that particular outputs from the beam splitter (at Charlie's) will generate the detection pattern associated with a projection onto |ψ − . The outputs are characterized by the number of photons per output port as well as their joint quantum state. The probabilities for each of the possible outputs to occur can then be calculated based on the inputs to the beam splitter (characterized by the number of photons per input port and their quantum states, as defined in Eq. (2)). Note that for the simple cases of inputs containing zero or one photon (summed over both input modes), we calculate the probabilities leading to the desired detection pattern directly, i.e. without going through the intermediate step of calculating outputs from the beam splitter. Finally, the probability for each input to occur is calculated based on the probability for Alice and Bob to send attenuated light pulses containing exactly i photons, all in a state given by Eq. (2). The probability for a particular input to occur also depends on the transmissions of the quantum channels, t A and t B . We note that this model considers up to three photons incident on the beam splitter. This is sufficient as, in the case of heavily attenuated light pulses and lossy transmission, higher order terms do not contribute significantly to projections onto |ψ − . However, we limit the following description to two photons at most: the extension to three is lengthy but straightforward and follows the methodology presented for two photons.
Detector noise
Let us begin by considering the simplest case in which no photons are input into the beam splitter. In this case, detection events can only be caused by detector noise. We denote the probability that a detector indicates a spurious detection as P n . Detector noise stems from two effects: dark counts and afterpulsing [32]. Dark counts represent the base level of noise in the absence of any light, and we denote the probability that a detector generates a dark count per time-bin as P d . Afterpulsing is an additional noise source produced by the detector as a result of prior detection events. The probability of afterpulsing depends on the total count rate, hence we denote the afterpulsing probability per time-bin as P a , which is a function of the mean photon number per pulse from Alice and Bob (µ and σ ), the transmission of the channels (t A and t B ) and the efficiency of the detectors (η) located at Charlie (see below for afterpulse characterization). The total probability of a noise count in a particular time-bin is thus P n = P d + P a . Altogether, we find the probability for generating the detection pattern associated with a projection onto the |ψ − state, conditioned on having no photons at the input, specified by "in", of the beam splitter, to be: Here and henceforward, we have ignored the multiplication factor (1 − P n ) ∼ 1 [30], which indicates the probability that a noise event did not occur in the early time-bin (this is required in order to see a detection during the late time-bin, assuming detectors with recovery time larger than the separation between the |0 and |1 temporal modes). Note that the probability conditioned on having no photons at the inputs of the beam splitter equals the one conditioned on having no photons at the outputs (specified in Eq. (3) by the conditional "out").
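As a small numerical illustration: if, as we assume here (the text's Eq. (3) is not reproduced), the |ψ−⟩ pattern with zero input photons requires two independent noise counts in complementary time-bins of the two detectors, with two possible detector assignments, the zero-photon term would be 2P_n². This assumption is ours, stated for the sketch only:

```cpp
#include <cassert>

// Assumed zero-photon noise term: two independent noise counts
// (one early-bin count in one detector, one late-bin count in the
// other), times two detector assignments. P_n = P_d + P_a as in the
// text; the 2 * P_n^2 form is our illustrative assumption.
double noise_term(double p_dark, double p_afterpulse) {
    double p_n = p_dark + p_afterpulse;   // total noise per time-bin
    return 2.0 * p_n * p_n;
}
```

With typical dark count probabilities of order 1e-5 per time-bin, this term is tiny but becomes relevant at high channel loss.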
One-photon case
Next, we consider the case in which a single photon arrives at the beam splitter. To generate the detection pattern associated with |ψ − , either the photon must be detected and a noise event must occur in the other detector in the opposite time-bin, or, if the photon is not detected, two noise counts must occur as in Eq. (3). We find where η denotes the probability to detect a photon that occupies an early (late) temporal mode during an early (late) time-bin (we assume η to be the same for both detectors).
Two-photon case
We now consider detection events stemming from two photons entering the beam splitter. The possible outputs can be broken down into three cases. In the first case, both photons exit the beam splitter in the same output port and are directed to the same detector. This yields only a single detection event, even if the photons are in different temporal modes (the latter is due to detector dead time; note that, as our model calculates detections in units of bits per gate, modeling a dead-time free detector is straightforward). The probability for Charlie to declare a projection onto |ψ − is then In the second case, the photons are directed towards different detectors and occupy the same temporal mode. Hence, to find detections in opposite time-bins in the two detectors, at least one photon must not be detected. This leads to P(|ψ − |2 photons, 2 spatial modes, 1 temporal mode, out) = In the final case, both photons occupy different spatial as well as temporal modes. In contrast to the previous case, a projection onto |ψ − can now also originate from the detection of both photons. This leads to P(|ψ − |2 photons, 2 spatial modes, 2 temporal modes, out) = In order to find the probability for each of these three two-photon outputs to occur, we must examine two-photon inputs to the beam splitter. We note that it is possible for the two photons to be subject to a two-photon interference effect (known as photon bunching) when impinging on the beam splitter. As this quantum interference can lead to an entangled state between the output modes, the calculation must proceed with quantum mechanical operators. We consider three cases: two photons arrive at the same input of the beam splitter; one photon arrives at each input of the beam splitter and the two photons are distinguishable; and one photon arrives at each input of the beam splitter and the two photons are indistinguishable.
For ease of analysis, we first introduce some notation: where b x,z 1,2 and m x,z 1,2 are the parameters introduced in Eq. (2); the subscripts label the photon (one or two) whose state is specified by the parameters. Furthermore, p x,z (i, j) is proportional to finding photon one before the beam-splitter in temporal mode i and photon two in temporal mode j, where i, j ∈ [0, 1]. Finally, b x,z norm is a normalization factor. First, considering the situation in which the two photons impinge from the same input on the beam splitter, one has the state where â † (0) and â † (1) are the creation operators for a photon in the |0 or |1 state, respectively. Evolving this state through the standard unitary transformation for a lossless, 50/50 beam splitter, described by â † → (ĉ † + d † )/ √ 2 (where ĉ † and d † are the two output modes of the beam splitter), one finds that with probability 1/2 the two photons exit the beam splitter in the same output port (or spatial mode) and with probability 1/2 in different ports. Furthermore, with probability A = [p x,z (0, 0) + p x,z (1, 1)]/2b x,z norm we find the photons in different spatial modes and in the same temporal mode, and with probability B = [p x,z (0, 1) + p x,z (1, 0)]/2b x,z norm we find the photons in different spatial and temporal modes. By symmetry, we find the same result if the two photons arrive from the other input mode of the beam splitter.
Second, consider the situation in which the two photons come from different inputs, and are completely distinguishable in some degree of freedom. This can be modelled by starting with the input state where b † is the creation operator for a photon in the second input mode of the beam splitter. One can then evolve the state with the beam splitter unitary described by â † → (ĉ † + d † )/ √ 2 (as before) and b † → (−ê † + f † )/ √ 2, where ĉ † and ê † correspond to the same spatial output mode but with distinguishability in another degree of freedom, and similarly for the other spatial output mode described by d † and f † . One finds the same result as for the previous case, described by Eq. (10): P(|ψ − |2 photons, 2 spatial modes, non-interfering, in) = P(|ψ − |2 photons, 1 spatial mode, in). The definition reflects that there is no two-photon interference in both cases. Finally, consider the case in which the two photons impinge from different inputs, are indistinguishable, and interfere on the beam splitter. This can be modelled by considering the same input state as in Eq. (11), but using a beam splitter unitary described by â † → (ĉ † + d † )/ √ 2 and b † → (−ĉ † + d † )/ √ 2. In this case, the probabilities of finding the outputs from the beam splitter discussed in Eqs. (5-7) depend on the difference between the phases φ x,z 1 and φ x,z 2 that specify the states of photons one and two, ∆φ x,z ≡ φ x,z 1 − φ x,z 2 . Note that, due to the two-photon interference effect, finding the two photons in different spatial modes and the same temporal mode is impossible. We are thus left with the case of having two photons in the same output port (the same spatial mode), which occurs with probability C = [p x,z (0, 0) + p x,z (1, 1) + 0.5(p x,z (0, 1) + p x,z (1, 0)) + p x,z (0, 1)p x,z (1, 0) cos(∆φ x,z )]/b x,z norm , and the case of having the photons in different temporal and spatial modes, which occurs with probability D = [0.5(p x,z (0, 1) + p x,z (1, 0)) − p x,z (0, 1)p x,z (1, 0) cos(∆φ x,z )]/b x,z norm .
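For reference, the photon-bunching statement (two interfering photons in the same temporal mode never exit in different spatial modes) follows from the standard Hong-Ou-Mandel calculation; with the beam splitter conventions â† → (ĉ† + d̂†)/√2 and b̂† → (−ĉ† + d̂†)/√2 one finds:

```latex
\hat a^\dagger \hat b^\dagger \lvert 0 \rangle
\;\longrightarrow\;
\tfrac{1}{2}\,\bigl(\hat c^\dagger + \hat d^\dagger\bigr)\bigl(-\hat c^\dagger + \hat d^\dagger\bigr)\lvert 0 \rangle
= \tfrac{1}{2}\,\bigl(\hat d^{\dagger\,2} - \hat c^{\dagger\,2}\bigr)\lvert 0 \rangle
```

The cross terms −ĉ†d̂† and d̂†ĉ† cancel, so two indistinguishable photons occupying the same temporal mode always exit through the same output port.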
This leads to P(|ψ − |2 photons, interfering, in) = C × P(|ψ − |2 photons, 1 spatial mode, out) + D × P(|ψ − |2 photons, 2 spatial modes, 2 temporal modes, out).
4.3. Aggregate probability for projections onto |ψ −
Now that we have calculated the conditional probabilities of a detection pattern indicating |ψ − for various inputs to the beam splitter, let us consider with what probability each case occurs. This requires that we know the photon number distribution of the pulses arriving at Charlie's beam splitter from Alice and Bob, which can be computed based on the photon number distribution at the sources and the properties of the quantum channels. For the following discussion, we assume that the channels from Alice to Charlie, and from Bob to Charlie, are characterized by the loss t A and t B , respectively, yielding pulses with number distribution D and mean photon number µt A and σt B , respectively. This is equivalent to assuming that no PNS attack takes place, which was ensured by performing experiments with the entire setup (including the fiber transmission lines) inside a single laboratory in which no eavesdropping took place during the experiments. We limit our discussion to the cases with two or fewer photons at the input of the beam splitter (but recall that the actual calculation includes up to three photons). Hence, the cases we consider and their probabilities of occurrence, P O , are given by:
• 0 photons at the input from both sources: P O = D 0 (µt A ) D 0 (σt B )
• 1 photon at the input from Alice and 0 photons from Bob: P O = D 1 (µt A ) D 0 (σt B )
• 0 photons at the input from Alice and 1 photon from Bob: P O = D 0 (µt A ) D 1 (σt B )
• 2 photons at the input from Alice and 0 photons from Bob: P O = D 2 (µt A ) D 0 (σt B )
• 0 photons at the input from Alice and 2 photons from Bob: P O = D 0 (µt A ) D 2 (σt B )
• 1 photon at the input from both sources: P O = D 1 (µt A ) D 1 (σt B )
where we denote the probability of having i photons from a distribution D with mean number µ as D i (µ). For each of these cases, we have already computed the probability that Charlie obtains the detection pattern associated with the |ψ − state for arbitrary input states of the photons (as defined in Eq. (2)). When zero or one photons arrive at the beam splitter, Eq. (3) and Eq.
(4) are used, respectively. In the case in which two photons arrive from the same source, Eq. (12) is used. Finally, in the case in which one photon arrives from each source at the beam splitter, Eq. (13) would be used in the ideal case. However, perfect indistinguishability of the photons cannot be guaranteed in practice. We characterize the degree of indistinguishability by the visibility, V, that we would observe in a closely-related Hong-Ou-Mandel (HOM) interference experiment [33] with single-photon inputs. Taking into account partial distinguishability, the probability of finding a detection pattern corresponding to the projection onto |ψ⁻⟩ is given by Eq. (14). Equations (3)-(14) detail all possible causes for observing the detection pattern associated with a projection onto the |ψ⁻⟩ Bell state, if up to two photons at the beam splitter input are taken into account. We remind the reader that all calculations in the following sections take up to three photons at the input of the beam splitter into account. To calculate the gains, Q x,z µσ, using these equations, we need only substitute in the correct values of µ, σ, t_A, t_B, m x,z, b x,z, and ∆φ x,z for the cases in which Alice and Bob both sent attenuated light pulses in the x-basis or z-basis, respectively. The error rates, e x,z µσ, can then be computed by separating the projections onto |ψ⁻⟩ into those where Alice and Bob sent photons in different states (yielding correct key bits) and in the same state (yielding erroneous key bits). More precisely, the error rates, e x,z µσ, are calculated as e x,z µσ = p x,z wrong /(p x,z correct + p x,z wrong ), where p x,z wrong (p x,z correct ) denotes the probability for detections yielding an erroneous (correct) bit in the x (or z)-key.
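For Poissonian sources, the occurrence probabilities P_O listed above can be computed directly from the photon number distributions at the beam splitter input. A minimal sketch (function names and the example parameter values are ours, chosen only for illustration):

```python
from math import exp, factorial

def poisson(i, mean):
    """D_i(mean): probability that a Poissonian pulse contains exactly i photons."""
    return exp(-mean) * mean**i / factorial(i)

def input_case_probabilities(mu, sigma, t_A, t_B):
    """Occurrence probabilities P_O for the photon-number cases (n_Alice, n_Bob)
    at Charlie's beam splitter, given source means mu, sigma and channel
    transmittances t_A, t_B."""
    a, b = mu * t_A, sigma * t_B  # mean photon numbers arriving from Alice and Bob
    return {
        (0, 0): poisson(0, a) * poisson(0, b),
        (1, 0): poisson(1, a) * poisson(0, b),
        (0, 1): poisson(0, a) * poisson(1, b),
        (2, 0): poisson(2, a) * poisson(0, b),
        (0, 2): poisson(0, a) * poisson(2, b),
        (1, 1): poisson(1, a) * poisson(1, b),
    }

# Illustrative values (not the measured ones): mu = sigma = 0.5, 10% transmittance.
probs = input_case_probabilities(mu=0.5, sigma=0.5, t_A=0.1, t_B=0.1)
```

With these small mean photon numbers, the cases up to two photons already nearly exhaust the distribution, which is why truncating the calculation at three photons is benign.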
Characterizing experimental imperfections
The parameters used to model our system are derived from data established through independent measurements. The characterization of experimental imperfections in our MDI-QKD implementation [9], required to test our model, is at times very technical. It can be broken down into time-resolved energy measurements at the single-photon level (required to extract µ, σ, b x,z and m x,z for Alice and Bob, as well as dark count and afterpulsing probabilities), measurements of phase (required to establish φ x,z for Alice and Bob), and visibility measurements. In the following paragraphs we describe the procedures we followed to obtain these parameters from our system.
Our MDI-QKD implementation
In our implementation of MDI-QKD [9] Alice's and Bob's setups are identical. Each setup consists of a CW laser with large coherence time, emitting at a wavelength of 1550 nm. Time-bin qubits, encoded into single photon-level light pulses with Poissonian photon number statistics, are created through an attenuator, an intensity modulator and a phase modulator located in a temperature controlled box. More precisely, the intensity modulator is used to tailor pulse pairs out of the CW laser light, the phase modulator is used to change their relative phase, and the attenuator attenuates these pulses to the single-photon level. The two temporal modes defining each time-bin qubit are of 500 ps (FWHM) duration and are separated by 1.4 ns. Each source generates qubits at a rate of 2 MHz.
We emphasize that our qubit generation procedure justifies the assumption of a pure state in Eq. (2). Indeed, all photons, including background photons due to light leaking through imperfect intensity modulators, have to be generated by the CW lasers, whose coherence times exceed the separation between the temporal modes |0⟩ and |1⟩ [31]. Note that in all experiments reported to date [9][10][11][12] background photons always add coherently to the modes describing qubits, making our pure-state description widely applicable.
The time-bin qubits are sent to Charlie through an optical fiber link. The link consisted of spooled fiber (for the measurements in which Alice, Bob and Charlie were all located in the same laboratory) or deployed fiber (for the measurements in which the three parties were located in different locations within the city of Calgary). We remind the reader that all pulses arriving at Charlie's are phase randomized, due to the use of long fibers. Charlie performs a BSM on the qubits he receives using a 50/50 beam splitter and two SPDs (see Figure 2). Note that, in order to perform a Bell state measurement, the photons arriving at Charlie must be indistinguishable in all degrees of freedom: polarization, frequency, time and spatial mode. The indistinguishability of the photons is assessed through a Hong-Ou-Mandel interference measurement [33]. As our system employs attenuated laser pulses, the maximum visibility we can obtain in this measurement is V_max = 50% (and not 100% as it would be with single photons) [34]. In our implementation the visibility measurements resulted in V = (47 ± 1)%, irrespective of whether they were taken with spooled fiber inside the lab or over deployed fiber.
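As a sketch of the visibility assessment, assuming the common definition of HOM visibility in terms of the coincidence rates measured with distinguishable and indistinguishable photons (the count rates below are hypothetical, not measured values):

```python
def hom_visibility(c_distinguishable, c_indistinguishable):
    """HOM visibility from coincidence rates measured with the photons made
    distinguishable (e.g. via a large relative delay) and indistinguishable:
    V = (C_dist - C_indist) / C_dist."""
    return (c_distinguishable - c_indistinguishable) / c_distinguishable

# With attenuated laser pulses the coincidence rate can drop at most to half
# its value outside the dip, bounding V at 50%. Hypothetical rates giving the
# measured V ≈ 47%:
v = hom_visibility(1000.0, 530.0)
```

A perfect single-photon HOM dip (coincidences dropping to zero) would give V = 1, while attenuated laser pulses saturate at V = 0.5.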
Time-resolved energy measurements
First, we characterize the dark count probability per time-bin, P_d, of the SPDs (InGaAs avalanche photodiodes operated in gated Geiger mode [32]) by observing their count rates when the optical inputs are disconnected. We then send attenuated laser pulses so that they arrive just after the end of the 10 ns long gate that temporarily enables single photon detection. The observed change in the count rate is due to background light transmitted by the intensity modulators (whose extinction ratios are limited) and allows us to establish b x,z (per time-bin) for Alice and Bob. Next, we characterize the afterpulsing probability per time-bin, P_a, by placing the pulses within the gate, and observing the change in count rate in the region of the gate prior to the arrival of the pulse. The afterpulsing model we use to assess P_a from these measurements is described below.
Once the background light and the sources of detector noise are characterized, the values of m x,z can be calculated by generating all required states and observing the count rates in the two time-bins corresponding to detecting photons generated in the early and late temporal modes. Observe that m z=1 for photons generated in state |1⟩ (the late temporal mode) is zero, since all counts in the early time-bin are attributed to one of the three sources of background described above. Furthermore, we observed that m z=0 for photons generated in the |0⟩ state (the early temporal mode) is smaller than one due to electrical ringing in the signals driving the intensity modulators. Note that, in our implementation, the duration of a temporal mode exceeds the width of a time-bin, i.e. it is possible to detect photons outside a time-bin (see Figure 3 for a schematic representation). Hence, it will be useful to also define the probability for detecting a photon arriving at any time during a detector gate; we will refer to this quantity as η_gate. The count rate per gate, after having subtracted the rates due to background and detector noise, together with the detection efficiency, η_gate (η_gate, as well as η, have been characterized previously based on the usual procedure [32]), allows calculating the mean number of photons per pulse from Alice or Bob (µ or σ, respectively). The efficiency coefficient relevant for our model, η, is smaller than η_gate. Finally, we point out that the entire characterization described above was repeated for all experimental configurations investigated (the configurations are detailed in Table 2). We found all parameters to be constant across configurations, with the obvious exception of the afterpulsing probability, which depends on µ, σ, t_A and t_B.

Figure 3. Sketch (not to scale) of the probability density p(t) for a detection event to occur as a function of time within one gate.
Detection events can arise from a photon within an optical pulse (depicted here as a pulse in the late temporal mode), or be due to optical background, a dark count, or afterpulsing. Also shown are the 400 ps wide time-bins. Within the early time-bin only optical background, dark counts and afterpulsing give rise to detection events in this case. Note that the width of the temporal mode exceeds the widths of the time-bins.
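The extraction of the mean photon number from the background-subtracted count rate described above can be sketched as follows, assuming Poissonian pulses so that the per-gate signal click probability is 1 − exp(−µ η_gate). The function name and the click probabilities are illustrative; only η_gate = 14.5% corresponds to the detector efficiency quoted later in the text:

```python
from math import log

def mean_photons_per_pulse(p_click_gate, p_background_gate, eta_gate):
    """Mean photon number per pulse from the per-gate click probability,
    assuming Poissonian statistics: P_signal = 1 - exp(-mu * eta_gate).
    Background (stray light, dark counts, afterpulsing) is subtracted first."""
    p_signal = p_click_gate - p_background_gate
    return -log(1.0 - p_signal) / eta_gate

# Illustrative numbers only; a ~1.5% net click probability at 14.5% efficiency
# corresponds to roughly 0.1 photons per pulse.
mu = mean_photons_per_pulse(p_click_gate=0.016, p_background_gate=0.001, eta_gate=0.145)
```

For the small click probabilities relevant here, this reduces to the linear estimate µ ≈ P_signal/η_gate.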
Phase measurements
To detail the assessment of the phase values φ x,z determining the superposition of photons in early and late temporal modes, let us assume for the moment that the lasers at Alice's and Bob's emit light at the same frequency. First, we defined the phase of Bob's |+⟩ state to be zero (this can always be done by appropriately defining the time difference between the two temporal modes |0⟩ and |1⟩). Next, to measure the phase describing any other state (generated by either Alice or Bob) with respect to Bob's |+⟩ state, we sequentially send unattenuated laser pulses encoding the two states through a common reference interferometer. This reference interferometer featured a path-length difference equal to the time difference between the two temporal modes defining Alice's and Bob's qubits. For the phase measurement of the qubit states |+⟩ and |−⟩ (generated by Alice) and |−⟩ (generated by Bob), the phase of the interferometer was first set such that Bob's |+⟩ state generated equal intensities in each output of the interferometer (i.e. the interferometer's phase was set to π/4). Thus, sending any of the other three states through the interferometer and comparing the output intensities, we can calculate the phase difference. We note that any frequency difference between Alice's and Bob's lasers results in an additional phase difference. Its upper bound for our maximum frequency difference of 10 MHz is denoted by φ_freq.
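A sketch of the phase extraction from the two interferometer output intensities. The transfer function I₁,₂ ∝ (1 ± sin φ)/2 is an assumed idealization of the reference interferometer with its phase set as described above, not the measured device response:

```python
from math import asin

def phase_from_intensities(i_out1, i_out2):
    """Recover a qubit phase (relative to Bob's |+>, defined as zero) from the
    two output intensities of the reference interferometer. Assumes an
    idealized transfer function I_{1,2} proportional to (1 ± sin(phi))/2,
    valid once the interferometer is set so |+> yields balanced outputs."""
    return asin((i_out1 - i_out2) / (i_out1 + i_out2))

# A state with phase 0.3 rad yields an intensity imbalance of sin(0.3) ≈ 0.29552:
phi = phase_from_intensities(0.5 * (1 + 0.29552), 0.5 * (1 - 0.29552))
```

Using the normalized imbalance (I₁ − I₂)/(I₁ + I₂) makes the estimate insensitive to the overall pulse energy.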
Measurements of afterpulsing
We now turn to the characterization of afterpulsing. After a detector click (or detection event, which includes photon detection, dark counts and afterpulsing), the probability of an afterpulse occurring due to that detection event decays exponentially with time. The SPDs are gated, with the afterpulse probability per gate being a discrete sampling of the exponential decay. This can be expressed using a geometric distribution: supposing a detection event occurred at gate k = −1, the probability of an afterpulse occurring in gate k is given by P_k = αp(1 − p)^k. Thus, if there are no other sources of detection events, the probability of an afterpulse occurring due to a detection event is given by ∑_{k=0}^{∞} αp(1 − p)^k. In a realistic situation, the geometric distribution for the afterpulses will be cut off by other detection events, stemming either from photons or from dark counts. In addition, the SPDs have a deadtime after each detection event during which the detector is not gated until k ≥ k_dead (note that time and the number of gates applied to the detector are proportional). The deadtime can simply be accounted for by starting the above summation at k = k_dead rather than k = 0. However, for an afterpulse to occur during the k-th gate following a particular detection event, no other detection events must have occurred in prior gates. This leads to Eq. (15) for the probability of an afterpulse per detection event, with the auxiliary quantities defined in Eqs. (16) and (17), where P_d,gate denotes the detector dark count probability per gate (as opposed to per time-bin), and µ_avg(µ, σ, t_A, t_B) expresses the average number of photons present on the detector during each gate in terms of b_A and b_B, which characterize the amount of background light per gate from Alice and Bob, respectively; the factor of 1/2 comes from Charlie's beam splitter. The terms in the sum of Eq.
(15) describe the probabilities of neither having an optical detection (γ), either caused by a modulated pulse or background light, nor a detector dark count (υ) in any gate before and including gate k, and not having an afterpulse in any gate before gate k (ρ), followed by an afterpulse in gate k (P k ). Equation (15) takes into account that afterpulsing within each time-bin is influenced by all detections within each detector gate, and not only those happening within the time-bins that we post-select when acquiring experimental data.
The afterpulse probability, P_a,gate, for given µ, σ, t_A and t_B can then be found by multiplying Eq. (15) by the total count rate: P_a,gate = [µ_avg(µ, σ, t_A, t_B) η_gate + P_d,gate + P_a,gate] P(a|det).
This equation expresses that afterpulsing can arise from prior afterpulsing, which explains the appearance of P_a,gate on both sides of the equation. Equation (18) simplifies to P_a,gate = (µ_avg(µ, σ, t_A, t_B) η_gate + P_d,gate) P(a|det)/(1 − P(a|det)). Finally, to extract the afterpulsing probability per time-bin, P_a(µ, σ, t_A, t_B), we note that we found the distribution of afterpulsing across the gate to be the same as the distribution of dark counts across the gate. Hence, P_a(µ, σ, t_A, t_B) = (P_d/P_d,gate) P_a,gate. Fitting our afterpulse model to the measured afterpulse probabilities, we find α = 1.79 × 10⁻¹, p = 2.90 × 10⁻², and P_d/P_d,gate = 4.97 × 10⁻² for k_dead = 20. The fit, along with the measured values, is shown in Figure 4 as a function of the average number of photons arriving at the detector per gate, µ_avg(µ, σ, t_A, t_B). A summary of all the values obtained through these measurements is shown in Table 1.
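The structure of the afterpulse-per-detection sum can be sketched numerically. The code below is a simplified stand-in for Eq. (15): the function name is ours, and the competing-detection survival factors (γ, υ, ρ) are lumped into a single per-gate probability p_other; only the fitted α, p and k_dead values come from the text:

```python
def afterpulse_prob_per_detection(alpha, p, k_dead, p_other, n_max=100_000):
    """Probability that a given detection event causes a later afterpulse,
    following the geometric model P_k = alpha * p * (1 - p)**k.
    The detector is dead until gate k_dead; an afterpulse at gate k also
    requires that no competing detection (photon, background or dark count,
    lumped here into probability p_other per gate) occurred since the
    deadtime ended -- a simplified stand-in for Eq. (15)."""
    total = 0.0
    for k in range(k_dead, n_max):
        no_competitor = (1.0 - p_other) ** (k - k_dead)  # survival factor
        total += no_competitor * alpha * p * (1.0 - p) ** k
    return total

# Fitted values from the text: alpha = 0.179, p = 0.029, k_dead = 20;
# p_other is an illustrative competing-detection probability.
p_ap = afterpulse_prob_per_detection(alpha=0.179, p=0.029, k_dead=20, p_other=0.001)
```

With p_other = 0 the sum has the closed form α(1 − p)^k_dead, which is a useful sanity check; any competing detections only reduce the afterpulse probability.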
6. Testing the model, and real-world tests
Comparing modelled with actual performance
To test our model, and to verify our ability to perform, in principle, QKD with deployed (real-world) fiber, we now compare the model's predictions with experimental data obtained using the QKD system characterized by the parameters listed in Table 1. We performed experiments in two configurations: inside the laboratory using spooled fiber (for four different distances between Alice and Bob ranging between 42 km and 103 km), and over deployed fiber (18 km). The first configuration allows testing the model, and the second sheds light on our system's capability to compensate for environment-induced perturbations, e.g. due to temperature fluctuations. For each test, three different mean photon numbers (0.1, 0.25 and 0.5) were used. All the configurations tested (as well as the specific parameters used in each test) and the results obtained are listed in Table 2.

Table 1. Experimentally established values for all parameters required to describe the generated quantum states, as defined in Eq. (2), as well as two-photon interference parameters and detector properties.

Parameter | Alice's value | Bob's value
b z=0 = b z=1 | (7.12 ± 0.98) × 10⁻³ | (1.14 ± 0.49) × 10⁻³
b x=− = b x=+ | (5.45 ± 0.37) × 10⁻³ | (1.14 ± 0.49) × 10⁻³
m z=0 | 0.9944 ± 0.0018 | 0.9967 ± 0.0008
  | π + (0.075 ± 0.015) | π − (0.075 ± 0.015)

In Figure 5 we show the simulated values for the error rates (e z,x) and gains (Q z,x) predicted by the model as a function of µσt_At_B. The plot includes uncertainties from the measured parameters, leading to a range of values (bands) as opposed to single values. The figure also shows the experimental values of e z,x and Q z,x from our MDI-QKD system in both the laboratory environment and over deployed fiber.
Considering the data taken inside the lab, the modelled values and the experimental results agree within experimental uncertainties over three orders of magnitude. This shows that the model is suitable for predicting error rates and gains. In turn, this allows us to optimize performance of our QKD systems in terms of secret key rate (see section 7). In particular, the model allows optimizing the mean photon number per pulse that Alice and Bob use to encode signal and decoy states as a function of transmission loss, and identifying rate-limiting components.
Furthermore, the measurement results over deployed fibre are also well described by the same model, indicating that this more difficult measurement worked correctly. The increased difficulty over real-world fiber arises because BSMs require incoming photons to be indistinguishable in all degrees of freedom (i.e. to arrive within their respective coherence times, with identical polarization, and with large spectral overlap). As we have shown in [9], time-varying properties of optical fibers in the outside environment (e.g. temperature dependent polarization and travel-time changes) can remove indistinguishability in less than a minute. Active stabilization of these properties is thus required to achieve functioning BSMs and, in fact, three such stabilization systems were deployed during the MDI-QKD measurements presented here (more details are contained in [9]). That our measurement results agree with the predicted values of the model demonstrates that the impact of environmental perturbations on the ability to perform Bell state measurements is negligible (which is the same conclusion drawn in [9]).
Decoy-state analysis
To calculate secret key rates for various system parameters, which allows optimizing these parameters, it is first necessary to compute the gain, Q z 11, and the error rate, e x 11, that stem from events in which both sources emit a single photon. We consider the three-intensity decoy state method for the MDI-QKD protocol proposed in [26], which derives a lower bound for the secret key rate using lower bounds for Q x,z 11 and an upper bound for e x 11. Note that the only effect of imperfectly generated qubit states on the secret key rate that we consider here is that it increases the error rates (further considerations require advancements to security proofs, which are under way [26,35]).

The difference in gains and error rates in the x- and the z-basis, respectively, is due to the fact that, in the case in which one party sends a laser pulse containing more than one photon and the other party sends zero photons, projections onto the |ψ⁻⟩ Bell state can only occur if both pulses encode qubits belonging to the x-basis. The Bell state projection cannot occur if both prepare qubits belonging to the z-basis (we ignore detector noise for the sake of this argument). This causes increased gain for the x-basis and, due to an error rate of 50% associated with these projections, also an increased error rate for the x-basis.

We denote the signal, decoy, and vacuum intensities by µ s, µ d, and µ v, respectively, for Alice, and, similarly, as σ s, σ d, and σ v for Bob. Note that µ v = σ v = 0 by definition. This decoy analysis assumes that perfect vacuum intensities are achievable, which may not be the case in an experimental implementation. However, note that, first, intensity modulators with more than 50 dB extinction ratio exist, which allows obtaining almost zero vacuum intensity, and, second, that a similar decoy state analysis with non-zero vacuum intensity values is possible as well [28]. For the purpose of this analysis, we take both channels to have the same transmission coefficient (that is, t_A = t_B ≡ t), according to our experimental configuration; Alice and Bob hence both select the same mean photon numbers for each of the three intensities (that is, µ s = σ s ≡ τ s and µ d = σ d ≡ τ d). Additionally, for compactness of notation, we omit the µ and σ when describing the gains and error rates (e.g. we write Q z ss to denote the gain in the z-basis when Alice and Bob both send photons using the signal intensity). Under these assumptions, the lower bound on Q x,z 11 is given by Eq. (21), where the various D i (τ) denote the probability that a pulse with photon number distribution D and mean τ contains exactly i photons, and Q x,z 0 (τ d) and Q x,z 0 (τ s) are given by Eqs. (22) and (23).

Table 2. Measured error rates, e x,z µσ, and gains, Q x,z µσ, for different mean photon numbers, µ and σ (where µ = σ), lengths of fiber connecting Alice and Charlie, and Charlie and Bob, ℓ_A and ℓ_B, respectively, and total transmission loss, l. The last set of data details real-world measurements using deployed fiber. Uncertainties are calculated using Poissonian detection statistics.
The error rate e x 11 can then be computed via Eq. (24), where the upper bound holds if a lower bound is used for Q x 11. Note that Q x,z 11, Q x,z 0 (τ d), Q x,z 0 (τ s) and e x 11 (Eqs. (21-24)) are uniquely determined through measurable gains and error rates.
Optimization of signal and decoy intensities
For each set of experimental parameters (i.e. distribution function D, channel transmissions and all parameters describing imperfect state preparation and measurement), the secret key rate (Eq. (1)) can be maximized by properly selecting the intensities of the signal and decoy states (τ s and τ d , respectively). Here we consider its optimization as a function of the total transmission (or distance) between Alice and Bob. We make the assumptions that both the channel between Alice and Charlie and the channel between Bob and Charlie have the same transmission coefficient, t, and that Alice and Bob use the same signal and decoy intensities.
We considered values of τ d in the range 0.01 ≤ τ d < 0.99 and values of τ s in the range τ d < τ s ≤ 1. An exhaustive search computing the secret key rate for an error correction efficiency f = 1.14 [36] is performed from 2 km to 200 km total distance (assuming 0.2 dB/km loss), with increments of 0.01 photons per pulse for both τ s and τ d. For each point, the model described in section 4 is used to compute all the experimentally accessible quantities required to compute secret key rates using the three-intensity decoy state method summarized in Eqs. (21-24). In our optimization, we found that, in all cases, τ d = 0.01 is the optimal decoy intensity. We attribute this to the fact that τ d has a large impact on the tightness of the upper bound on e x 11 in Eq. (24) (this is due to the fact that all errors in the cases in which both parties sent at least one photon, which increase with τ d, are attributed to the case in which both parties sent exactly one photon). Figure 6 shows, as a function of total loss (or distance), the optimum values of the signal state intensity, τ s, and the corresponding secret key rate, S, for decoy intensities of τ d ∈ {0.01, 0.05, 0.1}, as well as for a perfect decoy state protocol (i.e. using values of Q z 11 and e x 11 computed from the model, as detailed in the preceding section).

Fig. 6. a) Optimum signal state intensity, τ s, and b) corresponding secret key rate as a function of total loss in dB. The secondary axis shows distances assuming typical loss of 0.2 dB/km in optical fiber without splices. The optimum values of τ s for small loss have to be taken with caution, as in this regime the model needs to be expanded to higher photon number terms.
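The exhaustive search over signal and decoy intensities can be sketched as follows. The key-rate function below is a deliberately simple placeholder (not the model of section 4 or the decoy bounds of Eqs. (21-24)), used only to illustrate the search structure over the stated grid:

```python
def optimize_intensities(secret_key_rate, step=0.01):
    """Exhaustive grid search over decoy (0.01 <= tau_d < 0.99) and signal
    (tau_d < tau_s <= 1) intensities with the given step, returning the pair
    maximizing the supplied secret-key-rate function."""
    best = (None, None, float("-inf"))
    n = int(round(1.0 / step))
    for i in range(1, 99):               # tau_d = 0.01 .. 0.98
        tau_d = i * step
        for j in range(i + 1, n + 1):    # tau_s = tau_d + step .. 1.0
            tau_s = j * step
            rate = secret_key_rate(tau_s, tau_d)
            if rate > best[2]:
                best = (tau_s, tau_d, rate)
    return best

def toy_rate(tau_s, tau_d):
    # Placeholder rate (NOT the model of section 4): concave in tau_s and
    # penalizing tau_d, mimicking the qualitative finding tau_d = 0.01.
    return tau_s * (1.0 - tau_s) - 0.1 * tau_d

tau_s_opt, tau_d_opt, _ = optimize_intensities(toy_rate)
```

In the real optimization, `secret_key_rate` would be the full pipeline of model plus decoy-state bounds, evaluated at each transmission loss.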
Rate-limiting components
Finally, we use our model to simulate the performance of the MDI-QKD protocol given improved components. We consider two straightforward modifications to the system: replacing the InGaAs single photon detectors (SPDs) with superconducting single photon detectors (SSPDs) [37], and improving the intensity modulation (IM). For various combinations of these improvements, the optimized signal intensities and secret key rates for µ d = 0.05 are shown in Figure 7. First, using the state-of-the-art SSPDs of [37], the detection efficiency (η) is improved from 14.5% to 93%, and the dark count probability (P d) is reduced by nearly two orders of magnitude. Furthermore, the mechanisms leading to afterpulsing in InGaAs SPDs are not present in SSPDs (that is, P a = 0). This improvement results in a drastic increase in the secret key rate and maximum distance, as both the probability of projection onto |ψ⁻⟩ and the signal-to-noise ratio are improved significantly. Second, imperfections in the intensity modulation system used to create pulses in our implementation contribute significantly to the observed error rates, particularly in the z-basis. Using commercially available, state-of-the-art intensity modulators [38] allows suppressing the background light (represented by b x,z in the general quantum state given in Eq. (2)) by an additional 10-20 dB, corresponding to an extinction ratio of 40 dB. Furthermore, we considered improvements to the driving electronics that reduce ringing in our pulse generation by a factor of 5, bringing the values of m x,z in Eq. (2) closer to the ideal values. As seen in Figure 7, this provides a modest improvement to the secret key rate, both when applied to our existing implementation and when applied in conjunction with the SSPDs. Note that in the case of improved detectors and intensity modulation system, the optimized τ s for small loss (under 10 dB) is likely overestimated due to neglected higher-order terms.
Discussion and conclusion
We have developed a widely applicable model for systems implementing the measurement-device-independent QKD protocol. Our model is based on facts about the experimental setup and takes into account carefully characterized experimental imperfections in sources and measurement devices, as well as transmission loss. It is evaluated against data taken with a real, time-bin qubit-based QKD system. The excellent agreement between observed values and predicted data confirms the model. In turn, this allows optimizing mean photon numbers for signal and decoy states and finding rate-limiting components for future improvements. We believe that our model, which is straightforward to generalize to other types of qubit encoding, as well as the detailed description of the characterization of experimental imperfections, will be useful to improve QKD beyond its current state of the art. To finish, let us emphasize that tests of a model that describes the performance of a QKD system in terms of secret key rates have to happen in a setting in which eavesdropping can be excluded (i.e. within a secure lab and using spooled fibre); otherwise, the measured data, which depend on the (unknown) type and amount of eavesdropping, may deviate from the predicted performance and no conclusion about the suitability of the model can be drawn. Interestingly, this implies that neither phase randomization, nor random selection of qubit states or intensities of attenuated laser pulses used to encode qubit states, is necessary to test a model, as their presence (or absence) does not impact the measured data. However, it is obvious that these modulations are crucial to ensure the security of a key that is distributed through a hostile environment. We note that in this article, all effects of imperfections in the system on the measured quantities are still attributed to an eavesdropper, and accounted for in the calculation of the secret key rate as well as in the optimization of system parameters.
Multi-fractal characteristics of pore structure for coal during the refined upgrading degassing temperatures
Low-temperature nitrogen adsorption measurement is commonly used to describe the pore structure of porous media, but the role of the degassing temperature in this measurement has not attracted enough attention: different degassing temperatures may lead to different pore structure characterizations for the same coal. In this study, low-rank coal collected from the Binchang mining area, southwest of the Ordos Basin, was subjected to low-temperature nitrogen adsorption measurements under seven degassing temperatures (120 °C, 150 °C, 180 °C, 210 °C, 240 °C, 270 °C and 300 °C). The dynamic change of the pore structure under these stepwise-increased degassing temperatures was studied and quantitatively evaluated with multi-fractal theory. The results show that the pore specific surface area and pore volume decrease linearly with increasing degassing temperature, ranging from 12.53 to 2.16 m²/g and from 0.01539 to 0.00535 cm³/g, respectively, while the average pore aperture shows the opposite trend (varying from 4.9151 to 9.9159 nm), indicating that the pore structure changes during degassing. With increasing degassing temperature, the size of the hysteresis loop decreases and the connectivity of the pore structure is enhanced. The multi-fractal dimension and multi-fractal spectrum can better present local anomalies of the pore structure during degassing, and the quality index, the Dq spectrum, D₋₁₀–D₁₀ and the multi-fractal spectrum can finely describe the homogeneity and connectivity of the pores. The degassing temperatures of 150 °C, 180 °C and 270 °C are identified as three knee points, which reflect local anomalies of the pore structure during the stepwise increase of degassing temperature.
Under lower degassing temperatures (< 150 °C), the homogeneity and connectivity of the pores increase somewhat; they then remain stable as the degassing temperature varies from 150 to 180 °C. The homogeneity and connectivity of the pores are further enhanced until the degassing temperature reaches 270 °C. Because pores melt when the degassing temperature exceeds 270 °C, the complexity of the pore structure increases. Based on this study, we advise that the degassing temperature for low-temperature nitrogen adsorption measurements of low-rank coal should not exceed 120 °C.
Introduction
Coal is a natural porous medium with a complex pore structure, and the development and distribution of pores of various apertures contribute significantly to the seepage and migration of methane. There are various methods to acquire the pore structure parameters of coal, and low-temperature nitrogen adsorption (LP-N2A) is one of the most useful and effective tools, especially for low- and middle-rank coal. When LP-N2A is used to study the pore structure of coal, the coal is commonly pretreated under vacuum at a certain temperature, known as the degassing temperature. The degassing temperatures used by previous researchers for porous media differ, such as 110 °C for Devonian shale (Luffel and Guidry 1992), 110 °C for coal beds in the Eastern Interior Basin (Mardon et al. 2014), 90 °C for Run of Mine coal (Okolo et al. 2015), 80 °C for coal samples in the Hancheng Block (Zhao et al. 2016), and 105 °C for coal samples in western Guizhou (Chen et al. 2018). Li et al. (2020, 2021a) have studied the dynamic change of pore structure under various degassing temperatures; the thermal effect on the pore structure and the thermal evolution of meso-pore structure under various degassing temperatures have been discussed, while these studies did not pay much attention to the quantitative characterization of the dynamic change of the pore structure. Porous media feature complex pore structures, and the conventional Euclidean language cannot ideally present the pore structure characteristics of coal and shale. Mandelbrot (1967) built and developed fractal theory when studying the length of the coast of Britain, which provides a useful tool to quantitatively describe the change rules of objects with irregular shapes (Grassberger and Procaccia 1983; Halsey et al. 1987).
In the field of unconventional oil and gas, geological characteristics commonly feature strong heterogeneity and nonlinearity, such as micro-fractures and cleats (Hirata et al. 1987; Panahi and Cheng 2004; Li et al. 2007), pore structure (Yao et al. 2008), and so on. These characteristics present a certain fractal behavior. Fractal theory has been widely used in analyzing the pore structure of porous media as a nonlinear science (Naveen et al. 2018). Pores in porous media, even at the nano-scale, feature a complex and fine structure similar to that of mm-scale pores; this property is named self-similarity. The Sierpinski model, thermodynamic model, FHH model, Langmuir model and fully penetrable sphere model are quite commonly used in fractal theory (Kumar et al. 2019; Li et al. 2019; Ma et al. 2019; Mahamud et al. 2019; Mangi et al. 2020; Gonzalez 2021), and single-fractal and multi-fractal methods are frequently used to analyse the pore structure of coal and shale. A single fractal can only reflect the overall characteristics of the pore structure, while the multi-fractal can describe both the overall and local characteristics of the pore structure. Caniego et al. (2000) studied the singularity features of soil pore size distribution and found that Rényi dimensional analysis was quite useful to characterize soil structures, and multi-fractal parameters were used to evaluate soil structure stability (Ferreiro and Vazquez 2010); the multi-fractal model can finely characterize the vertical spatial heterogeneity of porous media (Martinez et al. 2010; Ghanbarian et al. 2015). With detailed analysis of the singularity exponent, generalized dimension, information dimension and correlation dimension, tight sandstone reservoir types have been classified, which can guide the development of tight sandstone gas.
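As an illustration of the FHH model mentioned above, the surface fractal dimension can be estimated from the slope of an FHH plot of an N2 adsorption isotherm. A minimal sketch, assuming the common ln V = C + (D − 3) ln ln(p0/p) variant of the model (the relation D = 3 + slope; the isotherm data below are synthetic, not measured):

```python
from math import exp, log

def fhh_fractal_dimension(rel_pressures, volumes):
    """Surface fractal dimension from an N2 adsorption isotherm via the FHH
    model, using ln V = C + (D - 3) * ln(ln(p0/p)), i.e. D = 3 + slope of the
    FHH plot (least-squares fit)."""
    xs = [log(log(1.0 / p)) for p in rel_pressures]  # ln(ln(p0/p)), p = p/p0
    ys = [log(v) for v in volumes]                   # ln(V)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 3.0 + slope

# Synthetic isotherm generated with D = 2.6 (FHH slope D - 3 = -0.4):
ps = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
vols = [exp(-0.4 * log(log(1.0 / p))) for p in ps]
d = fhh_fractal_dimension(ps, vols)
```

In practice the fit is restricted to the relative-pressure range where capillary condensation (or multilayer adsorption, depending on the variant used) dominates.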
With the multi-fractal parameters f(α) and H, the heterogeneity and connectivity of the pores in the Upper-Lower Bakken shale were also distinguished (Liu et al. 2018).
The multi-fractal has been widely used to describe complex structural characteristics from nano-scale pores to km-scale faults (Hirabayashi et al. 1992; Kiyashchenko et al. 2004; Panahi and Cheng 2004; Xie et al. 2007; Zhao et al. 2011), especially the former (Shivakumar et al. 1996; Li et al. 2015; Liu et al. 2018; Song et al. 2018; Zhang et al. 2021). Zheng et al. (2019) provided a new method to calculate the NMR T 2 cutoff value of coal with multi-fractal theory, and the NMR T 2 cutoff values show significantly linear relationships with the fractal dimensions (Zhao et al. 2021), which can be used to analyze the distribution of fluids in coal. With the gas adsorption method and multi-fractal theory, the multilayer adsorption and porosimetry effects of gas on the pore structure were studied by Wang et al. (2019). Based on the distribution and density of fractures in images, multi-fractal theory can characterize fracture connectivity and thereby predict the effective seepage area of coalbed methane (Chen et al. 2017; Cheng et al. 2020). Li et al. (2015) and Song et al. (2018) studied the pore structure of tectonically deformed coal with multi-fractal theory and found that the pore aperture decreased and the connectivity of pores in deformed coal is poor, because tectonic deformation contributes to the concentration of pores. Zhang et al. (2021) studied pore structure characteristics in shale with multi-fractal analysis and found that TOC shows a linear negative correlation with H and Δα. The porosity of porous media also presents significant relationships with the fractal dimensions (Gonzalez 2021). Measurement methods for pore structure, such as mercury intrusion porosimetry (MIP), low-field NMR spectral analysis (LFNMR) and gas adsorption, can be used to analyze the pore structure of tight sandstone, coal and shale (Lai et al. 2015; Liu et al. 2018; Guo et al. 2019; Yuan and Rezaee 2019; Wang et al. 2020). Hou et al. (2020) reported that the high-probability measure areas are much more sensitive when LFNMR is used to study the pore structure of coal beds, and other multi-fractal parameters, such as A, D 0 -D 1 and D 0 -D 2 , are also quite useful.
Although substantial work has been done on the pore structure of porous media with multi-fractal theory, a quantitative evaluation of the homogeneity and connectivity of pores in coal with the multi-fractal method under refined, stepwise-increasing degassing temperatures is still lacking. In this study, low-rank coal collected from the Binchang mining area, southwest of the Ordos Basin, was ground to a size of 60-80 mesh, and LP-N 2 A measurements at various degassing temperatures were conducted to investigate the dynamic change of the pore structure. Finally, a quantitative evaluation of the homogeneity and connectivity of the pores during the stepwise-increasing degassing temperatures was carried out.
Measurements
The coal sample was collected from the Binchang mining area, southwest of the Ordos Basin, China. The reflectance of vitrinite in coal (R o,max ), proximate analysis and scanning electron microscopy (SEM) were carried out to determine the basic properties of the coal, and LP-N 2 A measurements were conducted to acquire its pore structure parameters.
The coal sample was polished, and a Carl Zeiss Primotech optical microscope was used to measure the R o,max of the coal, following the method for microscopically determining the reflectance of vitrinite in coal (GB/T 6948-2008, Chinese standard). The coal sample was ground to 200 mesh, and the proximate analysis was measured with a KY-2000 instrument following the proximate analysis standard for coal (GB/T 212-2008, Chinese standard). For the SEM measurement, the coal sample was first polished to a size of 10 mm × 10 mm × 1 mm and then sputter-coated with gold; the pore shapes and pore types were observed with a MAIA3 model 2016 (LM) ultra-high-resolution field-emission scanning electron microscope, with reference to the standard for gold-plated thickness measurement by SEM (GB/T 17722-1999, Chinese standard). For the LP-N 2 A measurements, seven degassing temperatures were set: 120 °C, 150 °C, 180 °C, 210 °C, 240 °C, 270 °C and 300 °C. The LP-N 2 A measurements were conducted on a TriStar II Plus Series adsorption instrument according to the standard for determination of the specific surface area of solids by gas adsorption using the BET method (GB/T 19587-2017, Chinese standard), and were performed at − 196 °C (Fig. 1).
Multi-fractal theory
The multi-fractal theory includes two equivalent mathematical descriptions, the multi-fractal singularity spectrum (α~f(α)) and the fractal dimension spectrum (q~D q ) (Halsey et al. 1987), and various fractal parameters can be used to describe the overall and local anomalies of the pore structure (Li et al. 2015; Zhang et al. 2021). For the gas adsorption measurement, the relative-pressure interval is divided into several boxes of equal length, the box size being denoted ε (Liu et al. 2018).

Fig. 1 The flow chart of measurements in this study

For the relative pressure, the probability distribution function P_i(ε) of box No. i is defined as

$$P_i(\varepsilon) = \frac{N_i(\varepsilon)}{N_t} \quad (1)$$

where N_i(ε) is the adsorbed quantity in box No. i (i = 1, 2, 3, …) and N_t is the total adsorbed quantity. P_i(ε) can also be expressed as a power law,

$$P_i(\varepsilon) \propto \varepsilon^{\alpha_i} \quad (2)$$

where α_i is the singularity index, which reflects the local singularity strength of P_i(ε) (Vazquez et al. 2008). The number of subintervals with the same singularity index α is denoted N(α); the smaller ε is, the greater N(α) becomes, and it can be written as

$$N(\alpha) \propto \varepsilon^{-f(\alpha)} \quad (3)$$

where f(α) is the fractal dimension of the subset labeled by singularity index α. The relationship between f(α) and α forms the multi-fractal singularity spectrum, which reflects the local anomalies of the adsorbed quantity in pores of different apertures. α(q) and f(α) can be acquired from Eqs. (4) and (5) (Chhabra and Jensen 1989),

$$\alpha(q) \propto \frac{\sum_{i} \mu_i(q,\varepsilon)\,\ln P_i(\varepsilon)}{\ln \varepsilon} \quad (4)$$

$$f(\alpha(q)) \propto \frac{\sum_{i} \mu_i(q,\varepsilon)\,\ln \mu_i(q,\varepsilon)}{\ln \varepsilon} \quad (5)$$

where q is the order of the statistical moment (− ∞ < q < + ∞); in this study, q takes integer values from − 10 to 10 with a step length of 1, and the normalized measure μ_i(q, ε) is

$$\mu_i(q,\varepsilon) = \frac{P_i(\varepsilon)^{q}}{\sum_{j} P_j(\varepsilon)^{q}} \quad (6)$$

The partition function χ(q, ε) is defined as

$$\chi(q,\varepsilon) = \sum_{i} P_i(\varepsilon)^{q} \propto \varepsilon^{\tau(q)} \quad (7)$$

where τ(q) is the quality (mass) index. The relationship between the fractal dimension D_q and q can then be obtained as

$$D_q = \frac{\tau(q)}{q-1} = \frac{1}{q-1}\lim_{\varepsilon \to 0}\frac{\ln \chi(q,\varepsilon)}{\ln \varepsilon} \quad (8)$$

When q > 0, D_q commonly reflects the high-probability zones of the pore distribution, while q < 0 reflects the low-probability zones (Caniego et al. 2003; Vazquez et al. 2008; Li et al. 2015). In order to ensure the continuity of D_q at q = 1, D_1 is acquired from L'Hôpital's rule (Li et al. 2015):

$$D_1 = \lim_{\varepsilon \to 0}\frac{\sum_{i} P_i(\varepsilon)\,\ln P_i(\varepsilon)}{\ln \varepsilon} \quad (9)$$

A series of (q, D_q) pairs can then be obtained, giving the q~D_q fractal dimension spectrum. D_1 is the information dimension, and D_2 is the correlation dimension (Song et al. 2018). D_1 characterizes the concentration of pores over the aperture range: the smaller D_1 is, the more the pores are concentrated in a narrow range of apertures. D_2 mainly reflects the connectivity of pores with various apertures: the greater D_2 is, the better the pore connectivity.
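The method-of-moments procedure above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: the function name, the dyadic box sizes, the requirement that the binned measure have a power-of-two length, and the use of least-squares slopes are all assumptions made for the example. It estimates τ(q), D_q, α(q) and f(α) from the slopes of the log-log partition functions, following the Chhabra-Jensen formulation referenced in Eqs. (4)-(6).

```python
import numpy as np

def multifractal_params(measure, qs):
    """Estimate tau(q), D_q, alpha(q) and f(alpha) for a 1-D measure
    (e.g. adsorbed quantity per relative-pressure bin) whose length is a
    power of two, using the method of moments with dyadic box sizes."""
    measure = np.asarray(measure, dtype=float)
    n = measure.size
    levels = int(np.log2(n))
    ln_eps, ln_chi, num_a, num_f = [], [], [], []
    for k in range(levels):                     # box sizes of 1, 2, ..., n/2 bins
        size = 2 ** k
        p = measure.reshape(-1, size).sum(axis=1)
        p = p / p.sum()                         # P_i(eps), Eq. (1)
        p = p[p > 0]                            # empty boxes carry no measure
        chi_q, a_q, f_q = [], [], []
        for q in qs:
            w = p ** q
            chi = w.sum()                       # partition function chi(q, eps), Eq. (7)
            mu = w / chi                        # normalized measure mu_i(q, eps), Eq. (6)
            chi_q.append(np.log(chi))
            a_q.append((mu * np.log(p)).sum())  # numerator of Eq. (4)
            f_q.append((mu * np.log(mu)).sum()) # numerator of Eq. (5)
        ln_eps.append(np.log(size / n))
        ln_chi.append(chi_q)
        num_a.append(a_q)
        num_f.append(f_q)
    ln_eps = np.array(ln_eps)
    tau = np.polyfit(ln_eps, np.array(ln_chi), 1)[0]      # slopes give tau(q)
    alpha = np.polyfit(ln_eps, np.array(num_a), 1)[0]     # alpha(q), Eq. (4)
    f_alpha = np.polyfit(ln_eps, np.array(num_f), 1)[0]   # f(alpha), Eq. (5)
    qs = np.asarray(qs, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        # D_q = tau(q)/(q-1), Eq. (8); at q = 1 use the information dimension, Eq. (9)
        d_q = np.where(np.isclose(qs, 1.0), alpha, tau / (qs - 1.0))
    return tau, d_q, alpha, f_alpha
```

For a perfectly uniform measure, D_q = α(q) = f(α) = 1 for every q, so deviations from unity quantify heterogeneity; in the setting of this paper, `measure` would be the incremental adsorbed quantity per relative-pressure bin and `qs` the integers from − 10 to 10.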
The basic properties of coal
The average R o,max of the coal sample is 0.62%, and the average moisture content and ash yield are 5.07% and 17.56%, respectively, while the average content of volatile matter is quite high, reaching 38.18% (Table 1), indicating that the coal sample from the Binchang mining area is a low-rank coal.
The pores in the coal sample are mainly plant tissue pores and gas pores (Fig. 2). The plant tissue pores are squeezed and deformed (Fig. 2a), indicating that sediment compaction has significantly influenced the shape of the pores. The gas pores mainly developed on the surface of the coal matrix (Fig. 2b), indicating a brief episode of hydrocarbon generation. Besides the gas pores, a certain amount of inorganic minerals is distributed on the surface of the coal matrix (Fig. 2). The pores mainly developed in isolation, indicating poor pore connectivity in the coal.
Pore structure characteristics from the LP-N 2 A measurements
The adsorption/desorption curves for the same coal sample differ under different degassing temperatures (Fig. 3). When the degassing temperature is lower than 240 °C, the adsorption curve shows an obvious up-bulge at relative pressures below 0.1. With the continuous increase of the relative pressure, the adsorbed quantity of the coal sample increases linearly, and at a relative pressure of 0.9 the adsorbed quantity increases sharply. The desorption curve also decreases sharply as the relative pressure falls from 1 to 0.9, becomes approximately parallel to the X-axis when the relative pressure decreases to 0.5, and shows an obvious knee point of the adsorbed quantity at a relative pressure of 0.5. With the continuous decrease of the relative pressure, the desorption curve runs almost parallel to the adsorption curve. Under degassing temperatures from 120 to 240 °C, the hysteresis loop is mainly of the H 3 type. When the degassing temperature exceeds 240 °C, the adsorption curve is almost parallel to the X-axis and the adsorbed quantity increases slowly, until at a relative pressure of 0.9 it increases sharply. The desorption curve is almost parallel to the adsorption curve, and the knee point at a relative pressure of 0.5 is no longer obvious. The hysteresis loop of the coal sample under high degassing temperatures is of the H 4 type (Fig. 3).
With increased degassing temperatures, not only the adsorption/desorption curves but also the pore structure changes dynamically. The cumulative pore specific surface area and cumulative pore volume decrease continuously, while the incremental pore specific surface area and incremental pore volume show the opposite trend; pores with apertures below 20 nm contribute dominantly to the decrease of pore volume and pore specific surface area (Fig. 4). With increased degassing temperatures, the BET specific surface area, BJH pore volume and maximum adsorbed quantity decrease linearly, while the average pore aperture shows the opposite trend; in particular, when the degassing temperature reaches 270 °C, the average pore aperture increases sharply (Fig. 5).
For the LP-N 2 A measurements of the coal sample, the only variable is the degassing temperature. Due to its low maturity, the Binchang coal sample has a high content of volatile matter, and the volatile matter may decompose under high degassing temperatures. With the continuous decrease of volatile matter in the low-rank coal, the pore structure would change. The degassing temperature and the content of volatile matter may be the essential factors that alter the pore structure of low-rank coal.
The hysteresis loop
The variation of the adsorption/desorption curves under various degassing temperatures indicates that the degassing temperature significantly influences the pore structure of coal. With increased degassing temperatures, the hysteresis loop obviously shrinks, indicating that the number of ink-bottle pores decreases and the pore connectivity is enhanced. In order to describe the size of the hysteresis loop quantitatively, the hysteresis loop aperture was utilized in this study: the greater the hysteresis loop aperture of the coal, the more complex the pore structure. The hysteresis loop aperture can be defined as

$$d(i) = q_{de}(i) - q_{ad}(i) \quad (10)$$

where d(i) is the hysteresis loop aperture at relative pressure i, cm 3 /g; q_de(i) is the adsorbed quantity on the desorption curve at relative pressure i, cm 3 /g; and q_ad(i) is the adsorbed quantity on the adsorption curve at relative pressure i, cm 3 /g.
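The aperture calculation can be illustrated with a short sketch (illustrative only, not the authors' code; the function name and example branch data are hypothetical). Because the adsorption and desorption branches are rarely recorded at identical relative pressures, both branches are first interpolated onto a common pressure grid before subtracting:

```python
import numpy as np

def hysteresis_aperture(p_ads, q_ads, p_des, q_des, p_grid):
    """Hysteresis-loop aperture d(i) = q_de(i) - q_ad(i), in cm^3/g.

    p_ads, q_ads : adsorption branch (relative pressure increasing)
    p_des, q_des : desorption branch as recorded (relative pressure decreasing)
    p_grid       : common relative pressures at which to evaluate d
    """
    q_ad = np.interp(p_grid, p_ads, q_ads)
    # np.interp requires increasing x, so reverse the desorption branch
    q_de = np.interp(p_grid, p_des[::-1], q_des[::-1])
    return q_de - q_ad
```

Evaluating d(i) on a fixed grid for each degassing temperature makes the loop sizes directly comparable across runs, which is what the continuous decrease of the aperture with temperature is read from.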
The hysteresis loop aperture is almost stable when the relative pressure is lower than 0.5. When the relative pressure exceeds 0.5, the hysteresis loop aperture increases obviously and significantly. With the increase of the degassing temperature, the hysteresis loop aperture decreases continuously; in particular, when the degassing temperature exceeds 150 °C, the hysteresis loop aperture decreases sharply in the high relative-pressure zone. Under low relative pressure, the hysteresis loop aperture also decreases, but not obviously. According to the Kelvin equation, when the relative pressure exceeds 0.5, the corresponding pore aperture exceeds 5 nm, indicating that the increased degassing temperature mainly influences pores with apertures greater than 5 nm (Fig. 6).
Multi-fractal characteristics of pore structure
There is a significant linear relationship between ln ε and ln χ(q, ε), indicating that ε and χ(q, ε) exhibit scaling invariance; thus the pore structure of the coal under various degassing temperatures exhibits multi-fractal characteristics (Song et al. 2018) (Fig. 7).
The quality index
τ(q) characterizes the distribution of pores. For a single fractal, the relationship between τ(q) and q is linear, indicating homogeneity of the pore structure; for a multi-fractal, the relationship is nonlinear. It can be found that τ(q−) > τ(q+), confirming the multi-fractal characteristics of the pore structure under various degassing temperatures (Fig. 8). The slope of τ(q)~q at q > 0 differs from that at q < 0, indicating heterogeneity of the pore structure. When q < 0, the τ(q) curves almost coincide and τ(q)~q is nearly linear; with increased degassing temperatures, the value of τ(q) increases, but not obviously. When q > 0, the differences among the curves under various degassing temperatures are obvious, and τ(q)~q tends toward linearity with increased degassing temperatures, indicating that the pore structure tends toward homogeneity. At lower degassing temperatures, τ(q) is lower than at higher degassing temperatures, indicating that the homogeneity of the pore structure is enhanced with increased degassing temperatures.
The fractal dimensions
The shape of the D q spectrum and the size of D −10 -D 10 describe the local differences of porosity at various pore apertures (Halsey et al. 1987; Li et al. 2015; Liu et al. 2018). A wider D q spectrum corresponds to a greater D −10 -D 10 , which indicates obvious differences among the pores in the coal and a complex pore structure. Figure 9 shows that D −10 -D 10 is greatest when the degassing temperature is 120 °C, indicating maximum pore complexity at low degassing temperatures, while D −10 -D 10 is smallest at the degassing temperature of 300 °C, and D −10 -D 10 decreases continuously with increased degassing temperatures.
The shape of D q~q differs among degassing temperatures. At the degassing temperature of 120 °C, D q~q features a reverse "S" shape; at 150 °C and 180 °C it presents an almost-linear shape in the q < 0 zone while still featuring a reverse "S" in the q > 0 zone; and when the degassing temperature exceeds 210 °C, D q~q presents an almost-linear shape throughout. The left and right branches of the D q spectrum represent different information about the pore structure of the coal. Compared with the pore size distribution of the coal sample (Fig. 4), it can be inferred that the right branch (D 0 -D 10 ) mainly represents the dynamic change of pore structure for pores with apertures below 20 nm, while the left branch (D −10 -D 0 ) mainly describes the structure of pores with apertures greater than 20 nm. The almost-linear shape reflects a generally homogeneous pore structure, and the reverse "S" shape dominantly represents a complex pore structure in the coal. The pores with apertures greater than 20 nm are almost stable once the degassing temperature exceeds 150 °C. For pores with apertures below 20 nm, there are two obvious knee points, at degassing temperatures of 150 °C and 270 °C. There is a certain enhancement of the homogeneity of the pore structure at the degassing temperature of 150 °C, and the homogeneity is then stable as the degassing temperature increases to 180 °C. The homogeneity of the pore structure increases sharply when the degassing temperature reaches 270 °C, where it reaches its maximum (Fig. 9).
The information dimension D 1 mainly represents the average distribution of pores over various apertures: a smaller D 1 means a narrower pore size distribution, with pores concentrated in certain apertures. D 1 mainly ranges from 0.9530 to 0.9932 (Table 2); since D 1 is quite close to 1, the homogeneity of the pores is high. With increased degassing temperatures, the pore size distribution broadens and the average pore aperture also increases (Fig. 5). However, it should be noted that D 1 decreases somewhat when the degassing temperature reaches 300 °C (Table 2), indicating that the homogeneity of the pore structure decreases at the highest degassing temperature. The correlation dimension D 2 represents the connectivity of pores in the coal admirably: a higher D 2 value commonly indicates better pore connectivity. With the continuous increase of the degassing temperature, the D 2 value increases, so it can be inferred that increased degassing temperatures also increase the pore connectivity in the coal. Similarly, the D 2 value decreases at the degassing temperature of 300 °C, so the pore connectivity is somewhat reduced at high degassing temperature. It can also be found that D 0 -D 10 is higher than D −10 -D 0 at the same degassing temperature. Therefore, the complexity of the pores is mainly reflected by pores with apertures below 20 nm, and the distribution of pores with apertures greater than 20 nm merely adds to the complexity of pores in the coal.

Fig. 7 The double ln curves between ε and χ(q, ε) of coal under various degassing temperatures
With increased degassing temperatures, the pores in the coal collapse at higher temperatures (Li et al. 2020, 2021a), and this may increase the heterogeneity of the pore structure. However, the collapse mainly occurs in pores with apertures ranging from 5 to 15 nm (Li et al. 2020, 2021a); the ash in these pores can maintain their shape, so the connectivity and homogeneity of the pores are enhanced. With further increased degassing temperatures, however, the pores with apertures below 5 nm begin to collapse, and the support from ash in these pores is weak, which leads to a more complex pore structure.
Multi-fractal spectrum (α~f(α))
The α~f(α) multi-fractal spectrum is up-convex. The singularity index α 0 characterizes the pore size distribution in the coal: a higher α 0 indicates stronger local fluctuation of the pore distribution. With increased degassing temperatures, the value of α 0 decreases, reflecting increased homogeneity of the pores in the coal (Fig. 10).
The width of the α~f(α) multi-fractal spectrum (α q− -α q+ ) presents the complexity of the pore size distribution: the wider α q− -α q+ is, the stronger the heterogeneity of the pore structure. With increased degassing temperatures, α q− -α q+ decreases, meaning that the local differences of the pore structure are reduced and the pore structure tends to become simpler (Fig. 10). α 0 , α −10 -α 0 and α 0 -α 10 can also represent various pore structure characteristics of the coal. α 0 decreases with increased degassing temperatures and reaches a minimum at 270 °C; after that there is a faint increase, indicating a certain recovery of pore structure complexity at high degassing temperature (Table 3). The relationship between degassing temperature and α −10 is similar to that of α 0 , while the opposite holds for α 10 . The deviation of α~f(α) also decreases with increased degassing temperatures. However, it should be noted that when the degassing temperature exceeds 240 °C, the deviation of α~f(α) changes from right-skewed to left-skewed (Table 3).
Conclusions
LP-N 2 A measurements were conducted on the coal sample under various degassing temperatures, and the dynamic change and quantitative characterization of the pore structure were studied. Several conclusions can be drawn.
(1) The degassing temperature can significantly alter the pore structure of the Binchang low-rank coal, which may be dominantly related to the continuous decomposition of volatile matter in the coal. With increased degassing temperatures, the pore specific surface area and pore volume decrease, while the average pore aperture shows the opposite trend.
(2) Higher degassing temperatures do not preserve the true pore structure of coal, and a lower degassing temperature (< 120 °C) would be a suitable degassing temperature.

Fig. 10 The curves of α~f(α) for coal under various degassing temperatures

Table 3 The α~f(α) multi-fractal spectrum parameters (α 0 , α −10 , α 10 , α −10 -α 0 , α 0 -α 10 and (α 0 -α 10 )-(α −10 -α 0 ) for each degassing temperature)
(3) The multi-fractal model is a useful tool for describing the dynamic change of pore structure under various degassing temperatures.
The fractal dimension spectrum (q~D q ) and multi-fractal spectrum (α~f(α)) are two essential descriptions of the homogeneity and connectivity of the pores in the coal. The homogeneity and connectivity of the pores show a certain increase when the degassing temperature reaches 150 °C, remain almost stable as the degassing temperature rises to 180 °C, and then enhance sharply; because of the collapse of pores with apertures below 5 nm, the homogeneity and connectivity of the pores decrease when the degassing temperature exceeds 270 °C.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Science and Values in Undergraduate Education
While a conception of science as value free has been dominant since Max Weber defended it in the nineteenth century, recent years have witnessed an emerging consensus that science is not – and cannot be – completely free of values. Which values may legitimately influence science, and in which ways, is currently a topic of heated debate in philosophy of science. These discussions have immediate relevance for science teaching: if the value-free ideal of science is misguided, science students should abandon it too and learn to reflect on the relation between science and values – only then can they become responsible academics and citizens. Since science students will plausibly become scientists, scientific practitioners, or academic professionals, and their values will influence their future professional activities, it is essential that they are aware of these values and are able to critically reflect upon their role. In this paper, we investigate ways in which reflection on science and values can be incorporated in undergraduate science education. In particular, we discuss how recent philosophical insights about science and values can be used in courses for students in the life sciences, and we present a specific learning model – the so-called Dilemma-Oriented Learning Model (DOLM) – that allows students to articulate their own values and to reflect upon them.
Introduction
Science is about the facts and nothing but the facts. This view is quite common among scientists and laypeople alike and accordingly also among (aspiring) science students (Corrigan et al. 2007: 1-2; Fisher and Moody 2002; Kincaid et al. 2007: 13-14; King and Kitchener 2004). It entails that (good) science is "value free": scientific research and its results should not be contaminated with values of any sort, whether political, religious, moral, social, or economic values. The conception of science as a value-free enterprise has been widely accepted and very influential at least since Max Weber defended it in the nineteenth century. In recent decades, however, a growing number of philosophers of science have cast doubt on it, and a consensus is emerging that science is not, and cannot be, completely free of values. Which values may legitimately influence science, and in which ways, is currently a topic of heated debate in philosophy of science. These discussions have immediate relevance for science teaching: if the value-free ideal of science is misguided, science students should abandon it and learn to reflect on the relation between science and values; only then can they become responsible academics and citizens.
In this article, we investigate ways in which reflection on science and values can be incorporated in undergraduate science education. While we think this holds across a wide variety of scientific disciplines, we focus on the life sciences. In particular, we discuss how recent philosophical insights about science and values can be used in courses for science students, and we present a specific learning model that allows students to articulate their own values and to reflect upon them. 1 We hope and expect that university lecturers can benefit from this article and can apply our model in their own teaching (cf. Koster and Boschhuizen 2018).
The outline of the article is as follows. Section 2 reviews the current debate about science and values in philosophy of science. The notion of value-free science is analyzed in detail, and different types of values that may affect science are identified. An especially relevant distinction is that between epistemic and non-epistemic values; the most challenging discussions concern the (legitimate or illegitimate) roles of non-epistemic values at the heart of scientific practice. In Section 3, we substantiate our claim that these philosophical discussions are highly relevant for undergraduate science education: students need to critically think about the relation between science and values. The question of how this can be achieved is discussed with reference to a Bachelor course that one of us (EK) teaches to students in the Biomedical Sciences. In this course, students (who have no rich background in philosophy of science and have little experience in actual scientific research) are stimulated to develop a critical approach to science via systematic presentation of examples of the interaction between scientific research, on the one hand, and epistemic and non-epistemic values, on the other. Section 4 explicates the "Dilemma-Oriented Learning Model" (DOLM), used in the abovementioned course. This model helps students to reflect upon their "own" values: values that are typically related to their background and personal convictions. Because these students will plausibly become scientists, scientific practitioners, or academic professionals and because their values will influence their future professional activities, it is essential that they are aware of these values and are able to critically reflect upon their role. Section 5 concludes the article by discussing some wider implications.
Science and Values: Lessons from Philosophy
The value-free ideal has dominated our conception of science for a very long time (Carrier 2008: 1-7; Kincaid et al. 2007: 5-6; Stenmark 2006: 49-53). In its strongest version, it expresses the view that the sole aim of science is to disclose facts about the world, and that facts can and should be sharply distinguished from values. Building on an empiricist tradition going back to Locke and Hume, logical positivist philosophers of science argued that science should be based on logic and sensory experience alone, so that it would yield objective factual knowledge of the world, independently of the subjective perspectives or opinions of individual scientists. Science can only tell us how the world is, not how it should be; conversely, scientific research is not affected by our ideas about how the world should be, by our value judgments. For logical positivists, scientific knowledge should be verifiable through observation or experiment, and the truth of value judgments like "torturing animals is wrong" can never be verified in this way (they regarded value judgments as expressions of emotions). Hence, they excluded value judgments from the domain of science.
At this point, we need to say a bit more about the nature of values. Above, we mentioned political, religious, moral, social, and economic values, and one might add, for example, aesthetic and personal values. So there appear to be many kinds of values, but is there a general definition or characterization of the notion of value? There is no easy answer to this question. McMullin (2000: 550) suggests the following: "to value something is to ascribe worth to it, […] to regard it as desirable," and a value is "the characteristic that leads something to be so regarded." 2 Reasons for valuing something can range from purely subjective preferences of the person who values it to features that are objectively required for that something to function properly. For example, when someone buys a specific raincoat because it is waterproof and attractively designed, both properties are valued (by that person), but the former valuation is less subjective than the latter. Notwithstanding such variation, the standard conception of values (endorsed by the logical positivists) entails that values always involve some subjectivity, because something can only be a value when it can be valued by a human agent. (This also applies to the raincoat, whose being waterproof is a value only because people are interested in using the raincoat to protect themselves from the rain.) There can be many different sources of values: ideologies (e.g., political, economic), religious or metaphysical beliefs, interests (e.g., personal, financial), and so on. For example, a gambler who has a financial interest in Zenith winning tomorrow's horse race will value Zenith's healthy condition. Of course, a healthy condition is generally valued in any race horse, but note that this gambler would value an inferior physical condition in Zenith's competitors. 
While particular interests, or commitments to a particular ideology or religion, can thus inspire or even compel one to adopt certain values, such commitments and interests are not in themselves values.
Back to science. The strong value-free ideal sketched above has been challenged in many ways and is generally rejected nowadays. A fundamental, albeit controversial, criticism focuses on the fact-value distinction itself, arguing that in many cases this distinction cannot be drawn (see, e.g., Dupré 2007). A less radical, and more generally accepted, critique proceeds from the observation that there are some values that are obviously central to, if not constitutive of, science, where the prime example is truth. So, the question does not seem to be whether values are involved in science, but rather which values are (legitimately) involved and where and how they are involved. These questions have been hotly debated by philosophers of science since Thomas Kuhn's seminal 1977 paper "Objectivity, Value Judgment, and Theory Choice" and a great variety of arguments and perspectives can be found in the literature. 3 While there is consensus that science is not, and cannot be, value free in the strong sense sketched above, there remain (sometimes deep) disagreements about the legitimate place and role of values in science. As we will see below, some philosophers claim that a weaker version of the value-free ideal can still be maintained, whereas others abandon the ideal altogether.
In order to structure the debate, let us start by raising three different but equally important questions (adapted from Kincaid et al. 2007: 10):
A. Which kinds of values (are allowed to) play a role in science?
B. Where do these values play a role?
C. What effect does their involvement have?
Answers to question (A) often invoke a distinction between epistemic and non-epistemic values. Epistemic values are those values that are conducive to an important aim of science: knowledge production (McMullin 1983: 18). 4 Kuhn (1977) listed a number of epistemic values that apply to scientific theories: accuracy, consistency, scope, simplicity, and fruitfulness. 5 Other examples would be explanatory power and unifying power. Epistemic values that apply to scientists may include skepticism, disinterestedness, and openness to counter-evidence. Non-epistemic values, on the other hand, would include, for example, cultural, moral, economic, and political values and also more personal values based on religious commitments, interests, or loyalty to colleagues and sponsors.
While there is debate about which values count as epistemic, no one would contest that epistemic values play a legitimate role in science. 6 A more important, and more fundamental, question is whether non-epistemic values are also involved in scientific research and, if so, whether their involvement is inevitable or only possible, and whether it is always detrimental. Those who want to exclude non-epistemic values from science, maintaining that only epistemic values are allowed to play a role, can be regarded as defending a weak version of the value-free ideal (Kuhn 1977; McMullin 1983; Dorato 2004). Their opponents typically argue that non-epistemic values cannot be eliminated from scientific research (either in practice or in principle) but that this does not imply that science is hopelessly subjective: there are ways to retain the objectivity of science other than cleansing it of non-epistemic values (examples are Longino 1990 and Douglas 2009). Incidentally, some authors reject the (possibility of making a) distinction between epistemic and non-epistemic values altogether (e.g., Rooney 1992, Douglas 2009). For the purposes of the present paper, we will ignore this debate and assume that the distinction can be made (cf. Pournari 2008).
So far, we have discussed the role of values in science in a quite general way. But science is a complex enterprise, and it is important to carefully differentiate the various stages of scientific practice in which values may or may not be involved. This brings us to question (B): Where do (epistemic or non-epistemic) values play a role? Here it is useful to distinguish between three stages of scientific practice:
I. First stage: choice of research topic and methods
II. Second stage: carrying out the research
III. Third stage: application of research results
In the first stage, before the actual research starts, all kinds of values may play a part. Most importantly, the choice of research topics cannot be made in a value-free manner. 7 Epistemic values may come into play in this stage, for instance, when competing research proposals are evaluated with the help of criteria such as expected explanatory success and breadth of scope. Non-epistemic values are involved as well. When governments, politicians, or business executives decide which types of research will be financed, the values of political parties and private corporations influence the direction of scientific research. And even if scientists (e.g., within a university setting) are free to choose their own topic of research, their personal interests, political ideas, or religious beliefs may affect which issues they want to investigate. The choice of research methods is also value-laden. Epistemic values are clearly relevant here, but in some cases, non-epistemic values can come into play as well: think of financial considerations or ethical restrictions (e.g., research on animals or human subjects). In Section 3.2, we will discuss the role of values in this stage in more detail.
The second stage might be called the "heart" of science: this is where the actual scientific research is carried out. It is this stage in particular that has been the focus of the debate about the value freedom of science since Max Weber and the logical positivists. While their strong value-free ideal has generally been rejected, today's proponents of the weak value-free ideal claim that in this stage, only epistemic values are allowed to play a part. Among such epistemic values are the ones that govern hypothesis or theory construction and selection (see Kuhn's list, cited above). In addition, epistemic values can determine which kind of evidence is to be considered as proof for the hypothesis under scrutiny or govern the way in which evidence is obtained. Whether or not non-epistemic values should also be allowed to play a part in this stage is a matter of debate, however. Advocates of the weak value-free ideal deny this, but other philosophers of science have argued that there is an ineliminable role for non-epistemic values in the second stage as well, because epistemic considerations alone do not suffice to determine theory choice (Longino 2004). 8 However, allowing non-epistemic values (based, e.g., on ideological commitments, religious beliefs, or interests) to play a role in the construction, acceptance or rejection of scientific claims leaves us with the difficult task of specifying how precisely such value influences are to be managed, for they can easily lead to unwanted bias that corrupts the time-honored objectivity of science. Finally, it should be noted that ethical considerations about issues of appropriate conduct in scientific research (e.g., sloppy science, fraud) and, again, about experiments on animals or humans are also relevant in this stage. In Section 3.3, we will discuss the role of values in the second stage in more detail and present some examples.
Finally, in the third stage, results of scientific research are applied in real-world situations. It is quite obvious that this stage involves all kinds of values, also, and perhaps most of all, non-epistemic ones. For after a scientific research project is completed, its results can play a role, for example, in political decision-making or in commercial activities of private companies. In such cases, the application of scientific research is always accompanied by, first, implicit or explicit ideas about the good life and a just society and, second, certain economic interests. Science and non-epistemic values can thus be thoroughly intertwined in this stage. In Section 3.2, we provide some examples.
It can be concluded, first, that nobody denies a role for both epistemic and non-epistemic values in the first and third stage, where scientists and policymakers decide about the selection of research topics and methods and about the application of the results of scientific research. There will probably be disagreement and debate about the choice of values involved in these processes. A second conclusion from our short analysis is that in the second stage, epistemic values cannot be dismissed. Decisions about the acceptance of a hypothesis in favor of a rival one, or judgments about which theory is preferable to guide ongoing scientific research, cannot be made without an appeal to epistemic values. Doing science without epistemic values is simply impossible: the strong value-free ideal is untenable and should be regarded as a false ideal.
In light of these reflections, question (C) about the effect of values on science can be confined to the possible impact of non-epistemic values on the acquisition of scientific knowledge (cf. Elliott 2011: 304). What effect does the involvement of, for instance, political ideologies and interests of commercial companies have on activities at the heart of science? This question leads to a number of problems. Suppose that non-epistemic values influence the process of acquiring scientific knowledge, do we then have to conclude that science is biased? Is the possible presence of non-epistemic values in this stage of science an obstacle to speaking about objectivity, or do such values perhaps play a vital role in scientific practice, for example, in the construction of scientific theories? How can we prevent the impact of non-epistemic values on science from corrupting academic culture and harming the reliability and validity of scientific results (Radder 2010)? And if it is inevitable that non-epistemic values play a role in the acceptance of scientific knowledge, do we then need to construct an alternative conception of science, a "value-directed view of science," as Stenmark (2006) calls it?
This last question has been answered in the affirmative by Helen Longino and Heather Douglas, who offer analyses of science that acknowledge the role of non-epistemic values and include normative frameworks for diminishing their negative role while allowing for their positive role. Longino (1990: 76-81) submits that the solution of the problem can be found in the social character of science: scientific knowledge is always shared in a community of researchers. It is the communication and interaction between the members of a research community that can render scientific results objective and uncontaminated by prejudices and idiosyncrasies of individual scientists. Such objectivity is guaranteed if the scientific community allows for (1) recognized avenues for criticism (such as journals and conferences); (2) shared standards (the epistemic values mentioned above); (3) community response (criticism is taken seriously); and (4) equality of intellectual authority (of members of the community). Douglas (2009) approaches the problem in a different way. She distinguishes between direct and indirect roles for (non-epistemic) values, where values play a direct role when they "act as reasons in themselves to accept a claim, providing direct motivation for the adoption of a theory," while they play an indirect role when they "act to weigh the importance of uncertainty about the claim, helping to decide what should count as sufficient evidence for the claim" (Douglas 2009: 96). Douglas argues that non-epistemic values are allowable in scientific practice as long as they play an indirect role only.
In sum, the strong value-free ideal of science is untenable. Science cannot be practiced without epistemic values, and nobody will deny the role of non-epistemic values in the stages in which scientific research is selected and applied. The controversial issue is whether non-epistemic values are inevitable in processes of the evaluation and justification of scientific claims. If the influence of these values is indeed inevitable, then one can raise questions about (i) the impact of these values on the results of scientific research, (ii) the possibility of making them transparent, and (iii) the ways in which their impact may be diminished, if so desired. Since most science students are inclined to adopt the value-free ideal of science, it appears advisable to reflect on the role of values in science education. Why this is a good idea, and how this can be achieved, is the focus of the next sections.
Introducing "Science and Values" in Undergraduate Education
In the previous section, we have concluded that science is not value free. However, students often automatically start reasoning from a value-free point of view (Aalberts, Koster and Boschhuizen 2012; Koster and Boschhuizen 2018; Fisher and Moody 2002; King and Kitchener 2004). Usually students suppose that science is about the facts and only about the facts. They think that values play no role, or ought not to play a role, in the development of science. Here are some typical examples of statements by students about their own views before and after taking a reflective course (Koster and Boschhuizen 2018: 50):
- Before: "I was convinced that scientists are people who are completely objective."
- Before: "I regarded science simply as the truth."
- After: "Now I know that social and cultural factors influence what we regard as knowledge."
- After: "Now I know that full objectivity is unattainable. And that you are influenced, unconsciously, by your cultural, political or social background."
Since students are initially unaware of the interaction between science and values, they need to reflect upon (A) the difference between epistemic and non-epistemic values; (B) the role of these values in the selection, execution, and application of scientific research (stages 1, 2, and 3); and (C) the effects of values on science (the distinctions made in Section 2). In this section, we first substantiate our claim that undergraduate education should include reflection upon the role of values in science (3.1). Next, we demonstrate how students can be made aware of the interaction of science and (epistemic and non-epistemic) values in the first and the third stage (3.2) and in the second stage (3.3). Special attention is given to the consequences of the impact of values on science.
The Need for Education in "Science and Values"
The recent philosophical insights about the role of values in science, sketched in Section 2, resulted from a naturalistic turn in philosophy of science that involved a shift from abstract, analytic accounts of science to approaches based on a study of scientific practice. The observation that in actual practice science is not and cannot be value free has led to the abandonment of the value-free conception of science. As Kelly and Licona (2018) argue, science education may profit from making a similar naturalistic turn, in which attention for actual epistemic practices takes center stage. We fully agree, and accordingly we submit that science students should develop an awareness of, and an ability to reflect upon, possible interactions between science and values. To be sure, our proposal to include reflection on science and values in undergraduate science education is not completely novel, nor is it a very radical proposal: in the science education literature, pleas for paying attention to values have been made before (e.g., Poole 1995; Corrigan et al. 2007; Corrigan and Smith 2015). However, we think that the insights that have emerged from the contemporary philosophical debates on science and values offer new resources for teaching undergraduate students and for developing concrete learning models that address the interaction between science and values.
There are at least three reasons why undergraduate students ought to reflect on the role of values in science: (i) to acquire an adequate and realistic conception of science, (ii) to prevent them from unconsciously adopting a false conception of science that may have misleading and dangerous consequences, and (iii) to prepare them for academic citizenship. We will discuss each of these reasons in turn.
First, students need to be informed about and critically reflect on the nature of science. Since they will practice, use, and/or evaluate scientific research themselves, it is important for them to think critically about the process of achieving scientific knowledge. They should acquire a realistic view of science, rather than the idealized picture that often dominates public debates. In particular, they should be aware of the influence of (hidden) assumptions on scientific methods, and obtain realistic ideas about the reliability and limitations of scientific research, the practice of scientific experiments, and the nature of scientific laws and theories. To prevent misconceptions of science, it is also necessary for them to learn more about the interaction of science and values. 9 Second, because students will very often become scientists, scientific practitioners, or academic professionals, and since values will influence their future professional practices, it is important for them to reflect upon the role values may play in (i) the selection, (ii) the construction and evaluation, and (iii) the application of scientific knowledge. Since values can influence scientific practices, the presentation of science as entirely value free is deceptive and can have pernicious consequences. In the words of Kincaid et al. (2007: 4): "If scientific results concerning IQ and race, free markets and growth, or environmental emissions and planetary weather make value assumptions, treating them as entirely neutral is misleading at best." To prevent value assumptions from playing a decisive role while hidden behind a cloak of neutrality, students need to become aware of the interaction of science and values at all levels (stages 1-3).
The view of science as being value free is also dangerous because it may hide the influence of certain values secretly supported by scientists themselves. Hans Radder (2010: 7-8), for instance, makes it plausible that economic values are present in science by way of a variety of formal and informal personal ties. Individual scientists are increasingly running their own business, and some of them are holding externally sponsored professorships and chairs. Under the guise of neutrality, scientists can serve their own interests and, as is well documented in the case of the pharmaceutical industry, sometimes even manipulate their evidence (Healy 1998, 2002). On the level of academic culture, it is sometimes claimed that science is structurally "colonized" by economic vocabularies and metaphors. With reference to colonization, Daniel Lee Kleinman speaks about "direct and indirect effects of industry on academic science" and sums up a number of mechanisms by which these effects are realized: the pressure to undertake research with obvious economic development potential, the shaping of efficacy standards by industry, and courses to teach scientists how to write a business plan or how to develop and implement financial plans (Kleinman 2010: 31-39). The commodification of academic research is thus realized on individual and institutional levels. One of the strategies often mentioned to minimize the influence of economic values is training and mentoring in research ethics (e.g., Resnik 2010: 86). An obvious prerequisite for such education is critical reflection on the relation between science and values.
A third reason why students need to reflect on the interaction of science and values has to do with the ideal of "education for (academic) citizenship" (cf. Fuller 2000: 62-74). Academic citizenship is the ability of scientists, scientific practitioners, and academic professionals to reach beyond their own discipline and thus to reflect critically on the influence of, for instance, culture, belief, and commerce in their future professional practice. In today's pluralistic society, which features a multiplicity of approaches, points of view, values, and interests, this ability is of great importance. Education in science and values prepares students to acquire such a critical attitude inside and outside the academy.
Values in the Selection and Application of Scientific Research
Students need to think critically about the role of values in science. A first step to reach this goal is to make students aware that values are indeed involved in scientific research. There are at least two strategies to make students reflect upon the ideal of value-free science. A systematic strategy consists of a theoretical exposition on science and values (along the lines of the second section of this article). To be successful, this approach needs students who are able to understand sophisticated philosophical arguments. If a course on science and values is developed for the benefit of students in philosophy, then this strategy will probably do. But if the course is meant for Bachelor students who did not receive any training in logic or other philosophical skills, then this is what they need to learn in the first place. For these students, another strategy is preferred: teaching by way of demonstration.
By giving examples of the role of values in (renowned) scientific research, students become aware of the relevance and importance of the subject and of the problematic character of the value-free view of science. Ordering these examples (i) by distinguishing between the stages before actual research starts, in which research is conducted, and after it has finished, (ii) by making the distinction between epistemic and non-epistemic values, and (iii) by discussing the effects of values on science will stimulate students to reflect on the theme of science and values in a more structured and systematic way. Below we will indicate how this is done in an actual, second-year course for Bachelor students in one of the life sciences at the VU University of Amsterdam. In this course, entitled "Philosophy and Science," several examples are given to make students aware of the presence of values in science. These examples are also meant to stimulate critical reflection on the question whether or not these values play a legitimate role in science (Sections 3.2 and 3.3). Next, students are stimulated to reflect upon the values that influence their own scientific practices (Section 4).
An example of the interaction between science and values during the selection of research concerns the way in which a choice between biomedical approaches and clinical trials is made. Assuming that there is money available for only one type of research project, what are the reasons a funding organization can have for choosing between a proposal that focuses on the underlying mechanisms of a bodily disorder (biomedical approach) and a trial to determine the effect of a medicine in helping patients recover from the same disorder (clinical trial)? Students easily understand that values, with epistemic values such as explanatory success, applicability, reliability, and scope on the one hand and social relevance and financial feasibility as examples of non-epistemic values on the other, are relevant for making a choice between these two research proposals. A more difficult, and more interesting, question is why certain values prevail over others.
The same holds for the influence of values on science in the application of scientific research. If medical research regarding a potentially dangerous influenza virus results in the development of an effective therapy, the answer to the questions of whether and, if so, how this therapy can be applied depends on the values involved. Epistemic values such as generality (the expected scope of the therapy) and non-epistemic values like safety (the degree of the health risks), individual freedom (should the therapy be made compulsory?), and financial conditions determine the answer to these questions. It is clear that these answers depend, among other things, on the political views (and ideological sources) of the government.
Students may be very apt to discuss these questions, and these discussions could indeed be helpful to better understand the interaction of science and values. However, for the aim of the course "Philosophy and Science," it is even more important and interesting to reflect on the influence of values in the second stage of scientific practices.
Values at the Heart of Scientific Research
In Section 2, we have seen that (i) epistemic values interact with processes of construction and evaluation of scientific knowledge and (ii) the most challenging question is whether non-epistemic values are legitimately involved in these processes. Here we present two examples that are discussed in the course "Philosophy and Science." These examples show, first, that epistemic values play an indispensable role in science and, second, that non-epistemic values are plausibly also part of scientific practice, at least in the examples discussed.
A classic analysis of the interaction of science and epistemic values is provided by Thomas Kuhn. Kuhn stressed the fact that "every individual choice between competing theories depends on a mixture of objective and subjective factors, or of shared and individual criteria" (Kuhn 1977: 325). The objective criteria include accuracy, consistency, scope, simplicity, and fruitfulness. These criteria play a vital role when a scientist has to choose between competing theories. However, as Kuhn showed by discussing some examples from the history of science, these criteria do not determine theory choice. He lists two sorts of difficulties: "individually the criteria are imprecise," and "when deployed together, they repeatedly prove to conflict with one another" (1977: 322). For both cases Kuhn presents convincing instances. Regarding the first difficulty, Kuhn shows that the criterion of "accuracy" cannot always discriminate between competing theories. One of his examples is the choice between heliocentric and geocentric systems: Copernicus' system was not more accurate than that of Ptolemy (until drastically revised by Kepler). Adding criteria such as consistency and simplicity does not eliminate the problem: both astronomical theories were internally consistent but inconsistent with certain existing scientific explanations, and the criterion of simplicity could as well be interpreted in favor of Ptolemy as in favor of Copernicus (Kuhn 1977: 322-325). Kuhn concludes that a choice between these theories cannot be made on the basis of the five objective criteria only. This is why he writes that these "objective criteria do not function as unambiguous rules, which determine choice, but as values, which influence it" (1977: 331). The criteria of choice must thus be supplemented by "subjective considerations" which are not the same as "bias and personal likes or dislikes" (Kuhn 1977: 337). 
Regarding the example of the two competing astronomical theories, the choice is regulated by scholarly backgrounds, individual experiences as a scientist, and values (Kuhn 1977: 325). Because the evidence plus a fixed set of epistemic values do not determine which theory must be preferred, the choice between competing scientific theories must be based on supplementary (and possibly nonepistemic) values.
Since a huge number of post-Kuhnian studies show in detail how values interact with science, many examples regarding the influence of values on the construction and evaluation of scientific claims could be given. Here we confine ourselves to the influence of values on the formulation of hypotheses regarding human evolution. In the 1950s and 1960s, Sherwood Washburn developed his theory of human evolution, centered on the concept of "man-the-hunter." According to Washburn and others, man evolved into a bipedal toolmaker with relatively large brains due to the organized hunting by males working as a team, which was seen as the crucial cause. "The biology, psychology, and customs that separate us from the apes, all these we owe to the hunters of the past" (Washburn and Lancaster 1975: 303). This theory suggests that the activity of men drove evolution forward, while women, gathering food and giving birth, were not important for the coming into existence of Homo sapiens (Haraway 1989: 186-230). During the 1970s, two alternative theories, assigning a major role to the changing behavior of females, were developed. The first one, proposed by Sally Slocum and later further developed by Nancy Tanner and Adrienne Zihlman, was called the "woman-the-gatherer hypothesis." This theory states that the major cause for the high level of the development of tools was the need of women to gather scarce vegetable food (Haraway 1989: 127, 228 f., 331-348). The second was famously formulated by Sarah Hrdy. Her story of the origin of (wo)mankind makes use of sociobiological theories applying evolutionary theory to the development of behavior. The key word in her theory is "strategy." Female apes invest in reproductive strategies that enlarge the probability of survival of their offspring: by mating with dominant and aggressive males, they diminish the chance that other males will kill their descendants.
According to Hrdy, these kinds of evolutionary strategies are crucial factors in the explanation of the origin of modern man: "the central organizing principle of primate social life is competition between females and especially female lineages" (Haraway 1989: 349; cf. 349-367). The differences between these theories, especially between the ones proposed by Washburn and Hrdy, can partly be explained by the different field studies of primates and by the emergence of sociobiology. However, since the available evidence underdetermines their theories, it is highly plausible that the different perspectives on the role of men and women in society function as hidden background assumptions. From the point of view of Washburn, it was self-evident that human beings were men and that public life was centered on their activities. From the feminist perspective of Hrdy, much lost ground had to be made up by women. This example shows that the formulation of scientific theories is unconsciously (and perhaps sometimes consciously) influenced by non-epistemic values (Theunissen 2004: 129-146; cf. Longino 1990: 103-132).
The presentation of these examples is carried out during the lectures. In meetings of the group tutorials (approximately 20 students), there is room to evaluate these examples, to critically discuss them, and to ask more fundamental questions about, for instance, the (il)legitimate role of non-epistemic values in science and whether the presence of these values in science automatically entails that science is biased. This is done via a number of assignments.
One of the assignments is explicitly meant to discuss Longino's view on science as a social enterprise. The assignment is constructed around two examples of recent research in the life sciences and is related to the absence or presence of (i) a diversity of scientific approaches and (ii) properly functioning feedback mechanisms. The first example is about the competition between adherents of the "out-of-Africa-thesis" and the "multiregional hypothesis." On the basis of archeological data (the fossil record), it could not be decided which of the two models was preferable. Until the late 1980s, the two theories were underdetermined by the available evidence. A choice between the two models had to be based on non-epistemic values, a conclusion the students have to discover by themselves. New evidence (among others from the fields of genetics and linguistics) suggested that the "out-of-Africa-thesis" was the most reliable (Lewin and Foley 2004: 331-421). In this case, new evidence coming from other scientific fields allowed for a choice between the two competing models. The students have to argue whether this choice was indeed solely based on epistemic values. This is not indisputable, because "evidence" can be influenced by, for instance, ideologies and interests and is sometimes even consciously manipulated (cf. Radder 2010).
The second example in the assignment, concerning research on the effectiveness of medicines, illustrates how a diversity of scientific approaches is valuable for the practice of science. Because the development and testing of medicines are very expensive, usually only one type of organization is involved in this process: the pharmaceutical industry. The monopoly of these companies in combination with their financial interests undermines the effectiveness of feedback mechanisms such as double-blind experiments, peer review, and statistical tests (Radder 2010). Accordingly, drug research could benefit from a diversity of scientific perspectives and from independent institutional controls and testing methods: the current risk of bias and manipulation due to the pharmaceutical industry's monopoly could then be diminished or even eliminated. Students reflect on this claim with the help of Longino's thoughts on the way objectivity can be guaranteed by the scientific community. They try to find out what the effect on medical research would be if the four conditions mentioned by Longino were fulfilled in this example.
Values: From Awareness to Self-Awareness
The examples given in Sections 3.2 and 3.3 all support the conclusion that science is value laden or, to put it more carefully, that the value-free view of science is far from self-evident. By presenting these kinds of examples, students become acquainted with the possibility that values play a role in scientific research. They learn that epistemic and non-epistemic values influence the processes of acquiring, formulating, and accepting scientific knowledge. Through the structured presentation of these case studies, students are challenged to think in a more systematic way about the interaction of science and values. Questions about the objectivity of science are also raised.
The course shows the complex relation between science and values in the scientific discipline of the students, but usually none of this is seen by them to apply directly to the role of values and convictions regarding their own scientific practices. Due to the way textbooks teach them to think about science, they still think of themselves as value-free agents of science (Aalberts, Koster and Boschhuizen 2012). During their studies, however, students become themselves more and more involved in the process of scientific research, and this process is thus (possibly unnoticed) influenced by epistemic and perhaps even non-epistemic values. Hence, the question arises in which way teachers can stimulate students to reflect upon the impact of values on their own scientific activities.
While students may learn a lot about the interaction between science and values via studying philosophical literature, examples, and case studies, this may not immediately lead to awareness of and reflection on how their own scientific practice is value-laden. This was already noticed by John Dewey. According to Dewey, one's mental attitude is not necessarily changed by the teaching of science as subject matter and by engaging in, for instance, physical manipulations in a laboratory (Dewey 1910/1995). For Dewey, experience is the key to science education: experiences have the power to transform our concepts and deep-seated convictions about science (Dewey 1938/1997). Based on this idea, he defines education "as a continuing reconstruction of experience" (Dewey 1897/2008). Dewey argues that conducting scientific inquiry can provide students with the ability to make informed decisions through value judgments. It would be a challenge to connect scientific inquiry and values in science education by starting from Dewey's approach (cf. Lee and Brown 2018), given recent criticisms on aspects of his work (e.g., Radder 2019: 256-260; Roothaan 2014: 220-221). In this paper, however, we will not pursue this idea but propose a different approach to relate scientific inquiry to values in science education. In the next section, we use this approach to develop a concrete learning model.
The Dilemma-Oriented Learning Model (DOLM)
Reflection on values in scientific research will be an important step in the development of a critical approach to science. By scrutinizing different case studies in the life sciences, students begin to understand that the value-free view of science is problematic and possibly false. Values matter in science. Because students will become scientists, scientific practitioners, or academic professionals themselves, they need to think critically about the way their own values interact with science. Because these values are so deeply embedded in their way of doing and thinking, it is a difficult task to, first, identify and, next, discuss them. It is relatively easy to see how values that are not our own are part of the research process in an implicit and unacknowledged way. But it is much harder to recognize that our own ways of observing and conceiving the world contain values which could be just as prominent. Reflection upon one's own values is thus necessary.
Understanding the way scientific knowledge is acquired and reflecting upon the students' own values are the goals of the Bachelor course "Philosophy and Science" for students of Biomedical Sciences at the Vrije Universiteit Amsterdam. In the first part of this course, students become acquainted with the role of epistemic and non-epistemic values in science (as discussed in the previous section), while during the second part, the emphasis is on the interaction of science and one's own values. In this section, we will describe the second part of this course and explain how the "Dilemma-Oriented Learning Model" (DOLM) can help to reflect upon one's own values. In Sections 4.1 and 4.2, we explain DOLM, and in Section 4.3, we show how DOLM is used in the course "Philosophy and Science."
High-Potential Issues as Pedagogical Tools
DOLM can be applied to cases of complex issues in which scientific knowledge is involved, so-called "high-potential issues". High-potential issues have two features: they cannot be defined with a high degree of completeness, and they cannot be solved with a high degree of certainty. As pedagogical tools, such issues have the potential (i) to teach students how to evaluate facts and theories, (ii) to make them aware of underlying (sources of) values, and (iii) to clarify, structure, and weigh their arguments regarding their choice in the dilemma so they can take positions and make choices based on considered judgments (Boschhuizen, Aalberts, and Koster 2007). This is why high-potential issues are helpful for reflecting on the relation between science and values.
An example of a high-potential issue is the choice between conventional medicine and homeopathy. In a systematic evaluation based on the evidence-based method by Aijing Shang and colleagues in The Lancet, the conclusion was drawn that homeopathy is out of date and defeated. The accompanying editorial (Editorial 2005) summarized the article with the following telling statement: "The end of homeopathy." Shang et al. write that homeopathy fares poorly when compared with conventional medicine. Although many people use homeopathic remedies, the reported positive results seem to be consequences of the placebo effect. Shang et al. (2005, 726) suggest that positive findings of trials of homeopathy can be explained by referring to bias.
However, this did not entail the end of homeopathy. In the Netherlands, representatives of the Dutch Royal Association for Homeopathy rejected the conclusions of The Lancet (Koster 2014). One of their main criticisms concerned the use of the evidence-based method. They claimed that this method cannot be applied in the case of homeopathy. Homeopathic remedies are fine-tuned: they are developed for individual patients, and the same remedy cannot be given to a random group of individuals. Instead of evidence-based medicine, they argued in favor of observational methods such as cohort studies. Therefore, the approach of Shang et al. can also be accused of bias, in this case regarding the method (Boschhuizen, Aalberts, and Koster 2008).
This discussion suggests that such questions, and other complex issues in the life sciences, cannot be answered simply by referring to "the facts." Reflection on methodology and evaluation of, for instance, claims about possible biases are also necessary. Next to this, underlying assumptions related to (sources of) epistemic and non-epistemic values play an important but usually hidden role in the assessment of the claims under discussion. The former values may concern the nature of reality, the essential characteristics of explanatory mechanisms, and the question of what can be considered as evidence, while the latter may relate to, for example, the reputation of journals, financial interests of scientists and pharmaceutical industries, and ideological views on science. What is needed is a judgment in which implicit values are made explicit and in which the arguments are considered and evaluated. This is why the debate about conventional medicine and homeopathy can be seen as an example of a "high-potential issue." Confronting students with this kind of issues makes them aware of the complexity of the evaluation of scientific research and helps them to acquire critical abilities in general and to develop "broad-mindedness" and "responsibility" in particular. Broad-mindedness can be characterized by receptiveness to new and different ideas or the opinions of others. Developing broad-mindedness is a process that is sometimes called "transformative learning" (Mezirow et al., 1990, xvi), because it results in the reformulation of one's frame of reference, in which underlying values are central, to allow a more inclusive, discriminating, and integrative understanding of one's experience. In the context of the choice between conventional medicine and homeopathy, the aim is to critically evaluate and broaden students' views on, for instance, evidence-based practices.
Responsibility is seen here as students' willingness and ability to account for their choices and actions and to make clear how they relate to their own (underlying) values. The development of students' critical abilities such as broad-mindedness and responsibility corresponds with the learning goals of the course under scrutiny.
The use of high-potential issues in education can be compared to the application of socio-scientific issues as pedagogical tools. It is argued, for instance, that such tools are helpful to develop argumentation skills in students (Christenson et al. 2014) and to make them aware of the role of knowledge, values, and experiences in their argumentation (Rundgren et al. 2016). While some studies are thus positive about the use of these tools, others are more critical. Lee (2007), for instance, found that students need a lot of guidance to develop the ability to make informed decisions on socio-scientific issues (176): "The results of the trials show that teachers need to take students through a critical examination of scientific evidence and engage them in logical argumentation to put their views in perspective and avoid bias." Tal and Kedmi (2006) argue that the use of socio-scientific issues in education enlarges students' argumentation skills but that traditional content-based textbooks written from a value-free perspective keep students away from a critical thinking culture. Furthermore, it has been shown that students use non-epistemic values (such as personal, social, and cultural values) in thinking about socio-scientific issues, without relying on inquiry-based learning or by selectively using scientific evidence (Lee and Brown 2018: 66-68).
In the next section, we introduce another pedagogical tool: DOLM. DOLM has been developed to help students reflect upon, to broaden, and to give an account of one's underlying (sources of) values or, in Mezirow's terminology, one's frame of reference (Boschhuizen, Poortinga and Aalberts 2006; Koster, Aalberts and Boschhuizen 2009; Mezirow et al. 1990). The tool of DOLM allows students to become aware of the role that (non-epistemic) values play in their decision-making, and it teaches them to explicitly reflect on the way they use scientific knowledge.
Introduction of DOLM
DOLM is a four-phase model, which starts with a case study involving a high-potential issue, a "dilemma" in terms of DOLM. Students make distinct choices by reflecting on the significance of their choices: reflection on intuitive ideas (Phase A), reflection on the relevant scientific knowledge (Phase B), and philosophical reflection (Phase C). Reflection on (sources of) values cuts across phases A, B, and C. In a more retrospective assignment (Phase D), students look back on their choices and arguments (see Fig. 1). This is meant to raise their awareness of how they value and evaluate knowledge, how their values influence this process, and how they appreciate and apply the different kinds of reflection as acts of critical self-reflection.
During each of the phases A, B, and C, students take three steps: (1) they clarify their commitment to certain theories, methods, and (sources of) values; (2) they weigh the importance and significance of these theories, methods, and (sources of) values; and (3) they make a reasoned choice. A special point of interest is the use of dialogue as a means of communication about students' choices and arguments. Students are encouraged to reflect together with their peers and tutor. This dialogue confronts them with their own values and with the values of other students. In addition, it teaches them to take seriously each other's underlying sources of epistemic and non-epistemic values and to enter into an open-minded discussion about each other's views. After each phase, students record their experiences in a report. The report after phase D gives a summary of the learning process (Aalberts, Koster and Boschhuizen 2012).
DOLM in the Life Sciences
DOLM has been integrated into the course "Philosophy and Science." In this course, students study texts, attend lectures and classes, hand in "reflection tasks," read and comment on each other's assignments, and discuss topics like the relation between science and values, the role of epistemic and non-epistemic values in the formulation and acceptance of scientific knowledge, and the influence of their own point of view on the practice of science. In the course, the dilemma between conventional medicine and homeopathy is used to reflect on the question: "What is science?". Students are given an assignment in which they are asked to take on the role of a policy advisor at the "Foundation for Drug Development," responsible for financing scientific research into new medicines, to the amount of EUR 500,000. Two requests have been submitted. The first concerns clinical research for a new, conventional cancer medicine specially developed to eliminate side effects. The second concerns a cohort study for a new homeopathic treatment to eliminate the side effects of cancer medicines. Only one of the requests can be granted. Which one to grant is the question for the policy advisor. In Phase A, students opt for one of the two clinical studies based on their own experiences and values, intuitive ideas about conventional medicine and homeopathic remedies, and relevant scientific knowledge achieved in other courses. In this phase, students defend their choices quite straightforwardly, sometimes without further arguments: "We have chosen for conventional medicine based on our own experiences. Our education has strengthened our choice" (Boschhuizen, Aalberts, and Koster 2008). In the next step (Phase B), they critically think about the claims of evidence-based medicine and the characteristics of homeopathic remedies, and they learn to consider the dilemma from distinct perspectives.
This can result in a more balanced view: "I've taken the side of homeopathy two times now, and am developing some understanding for its opponents. Their arguments, however, were not convincing" (Koster, Aalberts and Boschhuizen 2009). In particular, they are confronted with points of view in which homeopathy is severely criticized because of its implausible principles and its lack of explanatory power, and with positions that are in favor of homeopathy because of positive experiences and of research concluding that homeopathic medicines do have significant effects. They are also introduced to efforts that try to explain these significant effects. This new information sometimes results in a different point of view: "I have altered my position because, after careful consideration of my original viewpoint, I was ultimately convinced by the opposing points of view" (Koster, Aalberts and Boschhuizen 2009). Because of the introduction of these different points of view, students again realize that (sources of) values influence scientific research. In this phase, the students begin to attach importance to the question whether homeopathic medicine can be considered a scientific approach or not. To answer this question, philosophical reflection upon the question "What is science?" is needed (Phase C). In this part of the course, students examine and critically reflect upon different perspectives on science such as the empirical cycle of the logical positivists, Karl Popper's idea of falsification, Thomas Kuhn's concept of scientific paradigms, Harry Collins' reading of the sociology of scientific knowledge, and some positions in social epistemology. This can result in a more reflective perspective on their choice between conventional medicine and homeopathy: "…and our own paradigm has also played a role in our decision-making. By executing tasks, we realized this point more and more... 
However, if we had had a completely different paradigm, we would probably have made another choice" (Boschhuizen, Aalberts, and Koster 2007). Central to the lectures about these different perspectives is the way they conceptualize, evaluate, or simply discard the relation between science and values.
One of the aims of the course is that students learn to think about (the sources of) their values, (if necessary) reformulate their perspectives on science, and make choices concerning the dilemma based on considered judgments. For that constructive process, dialogue is an essential ingredient. Of course, the aim will sometimes also be reached during the lectures or when students study the texts related to subjects from Phase B and C. It is quite natural that some students will then reframe their system of underlying values. But, as Paul Feyerabend (1975, 31) wrote, "prejudices are found by contrast, not by analysis." Applying this thought to the context of the course, it follows that a direct analysis of the role of our own values in our perspective on science normally will not work. By analyzing them, they will hardly become apparent. We need the confrontations with other views, with opposing stances, to become aware of (the sources of) our own values and presuppositions (cf. Pera 1994; Weigand and Dascal 2001). In short, we need dialogue.
How can this dialogue be stimulated? During the group tutorials, students present their positions regarding the dilemma. These positions are typically not only different in the choice for or against conventional medicine or homeopathy, the grounds that one student puts forward may also differ from the grounds of another student. By confronting each other with these various claims, grounds, and reasons and by discussing them, with respect for each other's stances, it is possible to become aware of the values involved in the argument. The dialogue makes it possible to reflect explicitly on the various aspects of the student's judgment: relevant scientific knowledge, the social aspects of the issue, the normative-ethical aspects of possible choices, one's own values, world view and (non)religious beliefs, and the interrelations between all these. In this way, students have the possibility to become aware of their own and each other's (sources of) values and to think critically about them. This aim of the course is not easily reached: first, students need a lot of practice in recognizing underlying values and in using their imagination to redefine issues from different perspectives. Second, teachers need to learn how they can facilitate the analysis of (sources of) values and dialogue. To facilitate the dialogue, it is important that a safe environment is created in which students act respectfully, are open-minded, and show interest in each other's views and in which everyone accepts the agreements about the dialogical method in the classroom. As mentioned, it is not easy to create these conditions and to achieve the aim of the course. But if it is successful, then one of the main goals of the course, awareness of the relation between science and values, is reached. Elsewhere one of us and two colleagues from VU University Amsterdam have shown that this approach is actually quite successful (Aalberts, Koster and Boschhuizen 2012).
In the retrospective assignment (Phase D), students look back on their choices and arguments. In particular, they reflect on the way epistemic and non-epistemic values influenced their choice and in which way they now think about the possible involvement of non-epistemic values: could this involvement have been avoided or eliminated? Or did they find ways to handle these values in the way suggested by, for instance, Longino?
Conclusion
In this article, we have shown that the strong value-free ideal of science is untenable. Epistemic and non-epistemic values are present in scientific practices, in particular in the stages in which scientific research is selected and applied. We have seen that epistemic values play an indispensable role in what might be called "the heart of science": they necessarily influence the evidential standards needed for justifying a claim. Whether non-epistemic values are inevitably involved in the assessment of scientific claims is a more controversial issue. However, when these values are involved in processes of evaluation and justification, the question is whether this implies that science is hopelessly biased. Some philosophers of science argue that even if this is the case, it is still possible to retain the objectivity of science.
We have argued that students need to be aware of these interactions between science and values. Therefore, it is necessary to pay attention to this subject during undergraduate education. This is best done by way of presenting instances of value-laden research. In this way, students become acquainted with the influence of epistemic and non-epistemic values on the formulation and acceptance of scientific knowledge. They thus learn that the value-free view of science is inadequate. Furthermore, they are stimulated to critically think about the possible effects of the involvement of values on science. The next step consists in reflecting upon students' own frame of reference: in which way do values influence their own approach to science? By way of high-potential issues, incorporated in DOLM, students are stimulated to rethink the influence of their own values on scientific practices. We thus aim for what may be called "Effective Reflective Education" (Koster and Boschhuizen 2018).
According to Helen Longino, the objectivity of science can be guaranteed by the social character of science, as long as the scientific community fulfils the four conditions of a genuine dialogue (cf. Section 2). In other words, critical discussion among scientists who work from different perspectives, assumptions, or worldviews and/or use different methodologies and approaches will enhance the reliability of the resulting scientific claims. We have seen that dialogue is also important as a means to reflect on one's own values in science education. Students need the confrontation with other views to become aware of their own (sources of) values. Accordingly, we conclude that diversity may be productive not only for the development of science but also for the reflection on scientific practices in undergraduate education.
Statistics of small length scale density fluctuations in supercooled viscous liquids
Many successful theories of liquids near the melting temperature assume that small length scale density fluctuations follow Gaussian statistics. In this paper I present numerical investigations of fluctuations in the supercooled viscous regime using an enhanced sampling method. I present results for the single-component Lennard-Jones liquid, the Kob-Andersen binary mixture, the Wahnström binary mixture, the Lewis-Wahnström model of ortho-terphenyl and for the TIP4P/Ice model of water. Results show that the Gaussian approximation persists to a good degree into the supercooled viscous regime; however, the approximation is less accurate at low temperatures. I relate the non-Gaussian fluctuations to crystalline configurations. Implications for theories of the glass transition are discussed.
I. INTRODUCTION
Small length scale density fluctuations in normal homogeneous liquids above the melting temperature obey Gaussian statistics over many orders of magnitude.1,2,8-12 In this study I directly investigate to what extent Gaussian statistics of small length scale density fluctuations persist into the supercooled viscous regime near the glass-transition. Viscous liquids are highly nontrivial, as characterized by the three non's10: non-exponential relaxation of equilibrium fluctuations, non-Arrhenius temperature dependence of the structural relaxation time, and nonlinear out-of-equilibrium relaxation. Thus, it is not obvious that Gaussian statistics will persist into the supercooled viscous regime.
In general, liquids can be cooled below the melting temperature due to the existence of a free-energy barrier in the form of a critical nucleus.16 The dynamics of a supercooled liquid near the glass-transition is dramatically slower than near the melting temperature. If dynamics were governed by a fixed free-energy barrier, the slowdown would follow an Arrhenius law. However, many liquids have super-Arrhenius behavior (the first "non").18-20 It is an appealing idea that the dynamical heterogeneity observed in viscous liquids is linked to geometric arrangements of locally preferred structures of well-packed particles. Several studies have identified an accumulation of such locally preferred structures at low temperatures. This gives a picture of a less homogeneous structure with non-Gaussian small length scale density fluctuations. A disadvantage of the "locally preferred structure" approach is that it is system specific. In this paper I propose to study statistics in the collective density field. This is a generic approach that can be applied to widely different systems, as demonstrated by investigating systems belonging to chemically different classes.
A motivation for this approach32,33 is the experimental observation that the structure factor is similar in the normal and in the supercooled liquid regimes. The quadratic scaling law of the temperature dependence of the relaxation time32-35 originates from generic kinetically constrained models.35,36 These models have trivial thermodynamic statistics, but nontrivial slow dynamics leading to a glass-transition. This picture suggests that statistics of small length scale density fluctuations near the glass transition inherit the Gaussian statistics of the normal liquid regime.
In this study I examine the statistics of small length scale density fluctuations for the single-component Lennard-Jones (LJ)37 model, the Kob-Andersen binary mixture (KABLJ),38 the Wahnström binary mixture (WABLJ),39 the Lewis-Wahnström coarse-grained model of ortho-terphenyl (LWoTP)40 and the TIP4P/Ice41 water model. Enhanced sampling molecular dynamics methods are used to sample statistics into the wings of the distributions. Results show that the Gaussian hypothesis is fair in the supercooled regime; however, deviations are more significant at low temperatures. The analysis suggests that non-Gaussian features are related to first-order transitions to crystals.
The remainder of the paper is organized as follows. Section II introduces the formalism used to describe density fluctuations and some theory of the Gaussian hypothesis. Section III describes the numerical methods used for enhanced sampling of the density field and gives descriptions of the investigated models (i.e., energy surfaces). Section IV presents the results, and implications are discussed in Section V.
II. FORMALISM AND THEORY
In experiments (X-ray or neutron scattering) and in theories of the liquid state it is often convenient to work in reciprocal space. In this section I will give the formalism used to describe density fluctuations both in reciprocal k-space and in a subvolume.
A. The collective density field in k-space

Consider a liquid of N particles located at R = {r_1, r_2, . . ., r_N} in a volume V with periodic boundaries, so that the thermodynamic density is ρ = N/V. Let the real-space density field be ρ(r) = Σ_n^N δ(r_n − r), where δ is Dirac's delta function. The collective density field in reciprocal space is then defined as the (normalized) Fourier transform of the real-space density field,

ρ_k = (1/√N) Σ_n^N exp(−i k·r_n),

where k = k k̂ is the scattering or wave vector (sometimes the letter "q" or "Q" is used). The 1/√N factor ensures system-size scale invariance of amorphous configurations (liquids). The scaling is √N for configurations with long-range translational order (crystals) along the k direction. For a system of N point particles in a periodic orthorhombic cell the allowed wave vectors are k = (2πn_x/L_x, 2πn_y/L_y, 2πn_z/L_z), where n_x, n_y and n_z are integers, and L_x, L_y and L_z are the lengths of the volume that confines the liquid (V = L_x L_y L_z). Due to the isotropy of liquids, the investigation can be limited to k-vectors along the x-direction without loss of information. For a given cubic box of size L = L_x = L_y = L_z we consider vectors of lengths k = 2πn/L, where n is an integer (n_x = n and n_y = n_z = 0). We note that the isotropy of the liquid is in principle broken by the constraint of the anisotropic periodic boundaries. However, such effects are expected to be small and are ignored in this study.
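As a concrete illustration, the collective density field above can be evaluated numerically for k-vectors along the x-direction. The following is a minimal sketch (the function name `rho_k` and the NumPy-based implementation are my own, not from the paper):

```python
import numpy as np

def rho_k(positions, L, n=1):
    """Collective density field rho_k for k = (2*pi*n/L, 0, 0).
    The 1/sqrt(N) factor makes |rho_k| system-size invariant for
    amorphous configurations, while a crystal scales as sqrt(N)."""
    k = 2.0 * np.pi * n / L
    N = len(positions)
    phases = k * positions[:, 0]  # k.r reduces to k*x for k along x
    return np.sum(np.exp(-1j * phases)) / np.sqrt(N)

# Disordered (ideal-gas) positions give |rho_k| of order one, while a
# perfect lattice probed at its own wave vector gives |rho_k| = sqrt(N).
rng = np.random.default_rng(0)
L, N = 10.0, 1000
disordered = rng.uniform(0.0, L, size=(N, 3))
lattice = np.zeros((N, 3))
lattice[:, 0] = np.arange(N) * L / N
print(abs(rho_k(disordered, L, n=3)))  # O(1)
print(abs(rho_k(lattice, L, n=N)))     # ~ sqrt(1000)
```

The contrast between the two calls illustrates the √N scaling argument made above for configurations with long-range translational order.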
In the following we consider the probability distribution P(|ρ_k|), where |ρ_k| is the norm of the collective density field. We do not need to consider the tedious two-dimensional complex plane, since the P(ρ_k) distribution is radially symmetric for a liquid. The second moment S_k = ⟨|ρ_k|²⟩ is the structure factor routinely measured in scattering experiments. If density fluctuations follow Gaussian (G) statistics, then the probability distribution of the norm is

P_G(|ρ_k|) = (2|ρ_k|/S_k) exp(−|ρ_k|²/S_k).

The central limit theorem dictates that in the thermodynamic limit density fluctuations become Gaussian (see also the discussion in Section V): P(|ρ_k|) → P_G(|ρ_k|) for N → ∞. Thus, we limit our analysis to small length scale fluctuations by studying systems of about 100 to 1000 particles (unless otherwise stated). It has been shown that a small system can represent the viscous dynamics of a larger system.42 The fourth moment of the Gaussian distribution is ⟨|ρ_k|⁴⟩_G = 2S_k². Thus, we define a non-Gaussian parameter for the |ρ_k| fluctuations as

α_k = ⟨|ρ_k|⁴⟩/(2S_k²) − 1.

This parameter quantifies deviations from Gaussian statistics near the center of the distribution, |ρ_k| ≈ 0. Deviations in the tails of the distribution cannot be expected to be represented by this parameter. For this, higher order moments are relevant.
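The non-Gaussian parameter can be estimated directly from samples of the density field. A small sketch (function name and test data are illustrative assumptions, not from the paper); note that the Gaussian reference value of the fourth moment is 2⟨|ρ_k|²⟩², not 3⟨|ρ_k|²⟩², because |ρ_k| is the norm of a two-dimensional (complex) variable:

```python
import numpy as np

def non_gaussian_parameter(rho_k_samples):
    """Deviation of <|rho_k|^4> from the Gaussian prediction 2*S_k^2.
    The factor is 2 (not 3) because |rho_k| is the norm of a
    two-dimensional (complex) random variable."""
    s_k = np.mean(np.abs(rho_k_samples) ** 2)
    m4 = np.mean(np.abs(rho_k_samples) ** 4)
    return m4 / (2.0 * s_k ** 2) - 1.0

# Radially symmetric complex Gaussian samples: the parameter is ~ 0.
rng = np.random.default_rng(1)
z = rng.normal(size=500_000) + 1j * rng.normal(size=500_000)
print(non_gaussian_parameter(z))  # close to zero
```

In an actual analysis, the samples would be |ρ_k| values collected along an equilibrium trajectory instead of synthetic Gaussian numbers.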
B. The density fluctuations in a subvolume
The central limit theorem dictates that non-Gaussian features are more pronounced in smaller systems. Thus, it can be illustrative to investigate density fluctuations in small subvolumes of a larger system. We define a subvolume through the function h(r), so that h is unity inside the volume and zero outside. The collective density field in this subvolume can then be written as

ρ_k^h = (1/√V_h) Σ_n^N h(r_n) exp(−i k·r_n),

where V_h is the size of the subvolume. The k = 0 component relates to the actual density ρ_h = N_h/V_h in the subvolume. Here N_h = Σ_n^N h(r_n) is the number of particles in the subvolume. The Gaussian approximation of the ρ_h density fluctuations is

P_G(ρ_h) = (2πm_2)^(−1/2) exp(−(ρ_h − ⟨ρ_h⟩)²/2m_2),

where m_2 = ⟨(ρ_h − ⟨ρ_h⟩)²⟩ is the variance. For this distribution the fourth central moment m_4 = ⟨(ρ_h − ⟨ρ_h⟩)⁴⟩ equals 3m_2². Thus, we define a non-Gaussian parameter as

α_h = m_4/(3m_2²) − 1.
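A quick numerical illustration of the subvolume parameter (the Poisson example below is my own, not from the paper): for ideal-gas particle counts in a subvolume the parameter equals 1/(3λ) with λ = ⟨N_h⟩, so smaller subvolumes deviate more from Gaussian statistics, in line with the central limit theorem argument above:

```python
import numpy as np

def alpha_h(samples):
    """Subvolume non-Gaussian parameter m4/(3*m2^2) - 1 for a scalar
    observable; zero for a one-dimensional Gaussian distribution."""
    d = samples - samples.mean()
    m2 = np.mean(d ** 2)
    m4 = np.mean(d ** 4)
    return m4 / (3.0 * m2 ** 2) - 1.0

# Ideal-gas counts in a subvolume are Poissonian with mean lam; the
# exact parameter is 1/(3*lam), so smaller subvolumes are less Gaussian.
rng = np.random.default_rng(2)
a_small = alpha_h(rng.poisson(2.0, size=2_000_000).astype(float))
a_large = alpha_h(rng.poisson(50.0, size=2_000_000).astype(float))
print(a_small, a_large)  # roughly 1/6 and 1/150
```

The same estimator applied to ρ_h samples from a simulation quantifies how far a given subvolume is from the Gaussian limit.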
III. METHODS AND MODELS
To highlight non-Gaussian features, I suggest applying a potential that biases the system towards rare configurations that would not be sampled otherwise. Specifically, a harmonic potential is added to the Hamiltonian that will bias the system towards large ρ_k values. The Gaussian hypothesis can then either be investigated directly on statistics of the biased simulations, or by re-weighting statistics of a series of simulations (referred to as the "umbrella sampling method"43). Below is a description of the suggested method.

A. Sampling rare fluctuations of the collective density field

Let H(R, Ṙ) be the Hamiltonian of a given system. To sample rare ρ_k fluctuations at some density and temperature, we simulate a Hamiltonian with an added harmonic bias field,44

H_κa(R, Ṙ) = H(R, Ṙ) + (κ/2)(|ρ_k| − a)²,

where κ is a spring constant and a is an anchor point of the bias field. By reweighing we can obtain the |ρ_k| probability distribution of the unperturbed system,

P(|ρ_k|) = N_κa exp((κ/2)(|ρ_k| − a)²/k_B T) P_κa(|ρ_k|),

where P_κa(|ρ_k|) is the distribution of the Hamiltonian with the harmonic bias field. For a series of overlapping distributions (with different a's and κ's), the normalization constants N_κa can be determined numerically using the iterative multistate Bennett acceptance ratio (MBAR) method.45 Alternatively, the distribution function P_κa can be investigated directly: combining the Gaussian form of P(|ρ_k|) with the reweighing relation yields a closed-form Gaussian-approximation prediction for the biased distribution and its moments. Results of the Gaussian approximation can be used as initial guesses for the iterative MBAR method.45 In practice this leads to fewer iterations before reaching convergence.
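Single-window reweighting can be sketched in a few lines (the function `reweight` is illustrative and mine; in practice the normalization constants of several overlapping windows are determined jointly with MBAR rather than per window):

```python
import numpy as np

def reweight(x, kappa, a, beta=1.0):
    """Self-normalized weights that undo a harmonic bias (kappa/2)*(x - a)^2:
    w_i is proportional to exp(+beta * U_bias(x_i)). The normalization here
    plays the role of the constant N_ka that MBAR determines when several
    overlapping windows are combined."""
    w = np.exp(beta * 0.5 * kappa * (x - a) ** 2)
    return w / w.sum()

# The sample at the anchor point a carries zero bias energy and hence the
# smallest weight; samples far from a are up-weighted.
x = np.array([0.5, 1.5, 3.0])
w = reweight(x, kappa=2.0, a=1.5)
print(w)
```

Unbiased averages then follow as weighted sums over the biased trajectory, e.g. `np.sum(w * x)` for the unbiased mean of the sampled observable.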
To perform molecular dynamics simulations, the forces on the particles from the bias field need to be evaluated. The total force acting on particle j is 44

F_j = F_j^(0) + F_j^(κa),

where F_j^(0) is the force of the unbiased Hamiltonian. Although the force on particle j depends on the positions of all particles, it is possible to design an N-scaling algorithm: first loop over all particles to compute ρ_k, and then loop over all particles again to get the particle forces using Eqs. 11 and 12. The algorithm can be parallelized over several processes, since both the computation of ρ_k and the F_j^(κa) forces involve sums of independent contributions (assuming the same holds for F_j^(0)). Computational efficiency of the algorithm is crucial, since we wish to conduct long-time simulations in the viscous regime where dynamics are slow.
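A minimal sketch of the two-pass, O(N) force evaluation described above, verified against a finite-difference derivative. The normalization ρ_k = N^(−1/2) Σ_n exp(−i k·r_n) is an assumption on my part; the paper's convention may differ by a constant factor:

```python
import numpy as np

def rho_k(r, k):
    # First O(N) pass: the collective density field (my normalization).
    return np.exp(-1j * (r @ k)).sum() / np.sqrt(len(r))

def bias_energy(r, k, kappa, anchor):
    return 0.5 * kappa * (abs(rho_k(r, k)) - anchor) ** 2

def bias_forces(r, k, kappa, anchor):
    rk = rho_k(r, k)
    amp = abs(rk)
    # d|ρ_k|/dr_j = Re[ρ_k* · dρ_k/dr_j]/|ρ_k|, dρ_k/dr_j = (−i k)/√N e^{−i k·r_j}
    phase = np.exp(-1j * (r @ k))          # second O(N) pass: per-particle terms
    damp = np.real(np.conj(rk) * (-1j) * phase)[:, None] * k / (amp * np.sqrt(len(r)))
    return -kappa * (amp - anchor) * damp  # F_j = −∂U/∂r_j

rng = np.random.default_rng(2)
r = rng.uniform(0.0, 5.0, size=(64, 3))
k = np.array([2 * np.pi / 5.0, 0.0, 0.0])
F = bias_forces(r, k, kappa=3.0, anchor=1.0)

# Finite-difference check on one coordinate:
h = 1e-6
rp = r.copy(); rp[7, 0] += h
rm = r.copy(); rm[7, 0] -= h
F_fd = -(bias_energy(rp, k, 3.0, 1.0) - bias_energy(rm, k, 3.0, 1.0)) / (2 * h)
```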
B. Sampling rare density fluctuations in a subvolume
The overall idea of the above method for computing rare fluctuations of the collective density field can be reused to sample rare density fluctuations in a subvolume. In order to perform molecular dynamics with a bias field we define a continuous quantity Ñ_h that is strongly correlated with the number of particles N_h inside the volume h. In practice this is done by using a switching function on the borders of the volume, as described in Ref. 46. The unbiased P(ρ_h) distribution is obtained by reweighting the biased P_bias(Ñ_h) distribution using the MBAR method. 45 For binary mixtures a bias potential can be applied to both kinds of particles; again, statistical information about the unbiased system can then be determined by reweighting.
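As an illustration, a continuous count Ñ_h can be built from a smooth ramp at each face of a cubic subvolume. The logistic form and the width w below are my stand-ins for the switching function of Ref. 46, not its actual functional form:

```python
import numpy as np

# A continuous particle count for the cubic subvolume [0, box]^3: each
# coordinate contributes ~1 well inside, ~0 well outside, smooth in between.
def smooth_count(r, box=1.0, w=0.05):
    def edge(x):
        return 1.0 / ((1.0 + np.exp(-x / w)) * (1.0 + np.exp((x - box) / w)))
    h = edge(r[:, 0]) * edge(r[:, 1]) * edge(r[:, 2])
    return h.sum()

rng = np.random.default_rng(3)
inside = rng.uniform(0.2, 0.8, size=(50, 3))   # particles deep inside h
outside = rng.uniform(2.0, 3.0, size=(50, 3))  # particles far outside h
n_in = smooth_count(inside)    # close to the true count of 50
n_out = smooth_count(outside)  # close to 0
```

Because Ñ_h is differentiable in the particle positions, a harmonic bias on it yields well-defined forces, unlike the integer count N_h.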
C. Energy surfaces
We investigate the statistics of density fluctuations for several models, each defined by a 3N-dimensional energy surface. The examples have been chosen to represent different chemical classes of liquids.
LJ:
In 1924 Lennard-Jones suggested a simple model of interaction between atoms by summing a repulsive term representing Pauli repulsion and an attractive term representing London forces. 37 In this study we investigate a truncated version: the potential energy surface is a sum over pair energies

v(r) = 4ε[(σ/r)^12 − (σ/r)^6] for r < r_c,

and zero otherwise. The LJ model is not a good glass-former; however, it is included in this study since it is a standard system in computational condensed matter physics. Temperature T = 0.8 and density ρ = 0.85 (L = 5.0273) are used as a representation of the "normal liquid regime". This state is close to the freezing temperature. 44,48

KABLJ: Kob and Andersen suggested a binary LJ mixture as a model of a good glass former. 38 It is an 80/20 mixture of particles that have a strong affinity towards unlike atoms. This parametrization makes a good glass-former on the time scales and system sizes typically investigated in silico, and it is the standard model for computational studies of low-temperature liquid dynamics. It is customary to study the system at the density ρ = 1.2, where the melting temperature is T_m = 1.027(3). 49 Below this temperature the particles will eventually phase separate in long-time simulations. The major constituent, the A's, will form a face-centered cubic crystal. If crystallization is avoided, however, the low-temperature liquid accumulates locally preferred structures where one of the small particles is surrounded by ten larger particles forming a twisted bicapped square prism. 24,30
WaBLJ: Wahnström suggested a 50/50 binary LJ mixture with a size ratio of 80%. 39 Unlike the KABLJ mixture, the interaction parameters (ε's and σ's) follow the Lorentz-Berthelot mixing rule. The system is a good glass former (in silico); however, in long-time simulations the mixture can form a MgZn2 crystal structure. 26 In the supercooled regime the liquid collects local structures of icosahedral order and Frank-Kasper order. 24,26,29 The latter is a geometric arrangement where two touching larger particles have six smaller particles as common neighbors. These structures are favored by the low-temperature liquid since they pack space well, and they are also part of the crystalline structures. 26,50

LWoTP: Lewis and Wahnström 40 suggested a coarse-grained model of ortho-terphenyl (C18H14) where molecules are constructed from three LJ particles placed at the corners of an isosceles triangle. Each LJ particle represents a benzene ring. To avoid the LJ particles crystallizing into a close-packed structure, the molecule has an inner angle of 75° (in between the 60° and 90° angles found between neighbor triplets in close-packed structures).
In long-time simulations, however, the system can crystallize into a structure where the LJ particles form a body-centered cubic lattice with random orientations of the molecules. 47,51 We study a system of N = 324 molecules (unless otherwise stated) at temperature T = 350 K and density ρ = 1.09 g/ml (L = 4.84 nm).
Water: Abascal et al. 41 suggested the TIP4P/Ice atomistic water model. This four-site model reproduces the complicated phase diagram of real water, suggesting that it also gives a good representation of hydrogen bonds in the liquid state. The model is studied at temperature T = 280 K and density ρ = 1 g/ml. There are no signs of spontaneous crystallization.
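Several of the models above are built from LJ pair energies. A minimal sketch of a truncated-and-shifted LJ pair potential follows; the cutoff r_c = 2.5σ and the energy shift (so the potential is continuous at the cutoff) are common conventions, not values stated in the text:

```python
import numpy as np

# Truncated-and-shifted LJ pair energy in reduced units (eps = sigma = 1).
def lj_pair(r, eps=1.0, sigma=1.0, rc=2.5):
    r = np.asarray(r, dtype=float)
    def v(x):
        s6 = (sigma / x) ** 6
        return 4.0 * eps * (s6**2 - s6)
    # Shift so that u(rc) = 0; u is identically zero beyond the cutoff.
    return np.where(r < rc, v(r) - v(rc), 0.0)

r_min = 2.0 ** (1.0 / 6.0)   # location of the LJ minimum
u_min = lj_pair(r_min)       # ~ -1 plus the small cutoff shift
```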
Numerical computations are performed using the software packages LAMMPS 52 , RUMD 53 , and home-made code available at the website http://urp.dk/tools. An implementation of the ρ_k bias field is available in the official LAMMPS and RUMD packages. Density fluctuations in a subvolume are studied in systems with a gas-liquid interface. This is done by constructing an elongated simulation cell with periodic boundaries in the x and y directions and walls at the boundaries of the longer z direction.
Results for the LJ, KABLJ and WaBLJ models are reported in reduced Lennard-Jones units, while physical units are used for the LWoTP model and the TIP4P/Ice water model.
IV. RESULTS

A. Fluctuations of the collective density field
Before investigating the glass-forming liquids, we first focus on the LJ model near the melting temperature. Figure 1 shows probability distributions P(|ρ_k|) on a logarithmic scale. The figure includes k-vectors with lengths from k = 1.25 (n = 1) up to k = 15.0 (n = 12). The solid black lines are the distributions reweighted from a series of biased simulations and the red dashed lines are the Gaussian predictions (shifted vertically for clarity). The first impression is that the Gaussian hypothesis gives a good description. This confirms the consensus that small length scale fluctuations are Gaussian in the normal liquid regime. 1,2 The tails of the distributions, however, show non-Gaussian features. As an example, the k-vectors with n = 6 and n = 8 show fat tails (compared to the Gaussian reference). A representative configuration from the tail of the distribution for n = 6 is shown in Fig. 2(a). A crystalline structure is apparent in both the real-space configuration and the scattering spectrum shown in Fig. 2(b). Consistent with this, a cubic box with 108 LJ particles has an ideal crystal structure with 3 × 3 × 3 fcc unit cells, giving a Bragg peak at n = 6. The distribution of the n = 8 vector (Fig. 2(c) and Fig. 2(d)) also has a fat tail. This can also be attributed to a crystalline configuration, but with another orientation. As an aside, bias simulations similar to the ones presented here can be used to compute the melting point of crystals; this is referred to as the "interface pinning" method. 44 Other k-vectors have thin tails. As an example, Fig. 2(e) shows a configuration from the tail of the n = 10 wave vector. This structure is not crystalline but disordered. Figs. 2(g) and 2(h) show a configuration from the longest wave vector of the investigated system size (n = 1). The liquid has responded to a strong bias field by forming a vapor slab and a crystalline slab.
Next we investigate the glass-forming models. First, we consider the KABLJ mixture at T = 0.45 (ρ = 1.2) for a system size of N = 1000 particles. This state point is well below the melting temperature of T_m = 1.027(3). 49 The structural relaxation time is about 10³ times larger than in the normal liquid regime. 38,54 The solid black lines in Fig. 3 show the reweighted distributions. Examples of crystallizing trajectories of the KABLJ mixture and the LWoTP trimer are shown in Fig. 4; crystallizing trajectories are discarded in the analysis. Figure 5 shows P(|ρ_k|) distributions of the glass-forming liquids KABLJ, WaBLJ, LWoTP and water for several k-vectors. The red dashed lines indicate the Gaussian approximation. The agreement is good, but the tails of the distributions deviate from the Gaussian prediction. The deviations are system-size dependent, as expected from the central limit theorem. Figure 6 shows that the non-Gaussian fat tail for k = 1.4 nm⁻¹ of the LWoTP system is greatly diminished when the system size is increased from N = 324 to N = 2592.
B. Density fluctuations in a subvolume
Fluctuations in small subvolumes of a larger system can give further insight into the structural origin of non-Gaussian small length scale density fluctuations. Figure 7 shows the distribution function of the ρ_h density in a 3 × 3 × 3 subvolume, h. The points are reweighted statistics from simulations with a bias potential that pushes the system towards configurations with a certain number of particles in the subvolume. The red dashed line is the prediction from the Gaussian approximation. The agreement is good; 1,2 however, some deviations are seen in the tails of the distributions. The low-density limit corresponds to the formation of a cubic vapor bubble in the liquid. As described by classical nucleation theory, the free energy −k_B T ln(P) for forming such a bubble has both a bulk and a surface contribution.
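The bulk-plus-surface decomposition can be written down explicitly for a cubic bubble of side ℓ; the notation γ for the surface free energy per unit area and Δμ for the chemical potential difference per particle is mine, not the paper's:

```latex
-k_B T \ln P(\rho_h) \;\simeq\; \Delta F(\ell)
  \;=\; \underbrace{6\,\gamma\,\ell^{2}}_{\text{surface}}
  \;+\; \underbrace{\rho\,\Delta\mu\,\ell^{3}}_{\text{bulk}} .
```

The ℓ² surface term dominates for small bubbles, which produces the curvature seen in the low-density tail of the distribution.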
To investigate the supercooled regime we set up a gas-liquid coexistence simulation of the KABLJ mixture (in the same way as for the single-component LJ model). Figure 8 shows the structural relaxation time in the liquid slab. The relaxation time is non-Arrhenius in the investigated temperature regime, 0.28 < T < 0.60. The points in Fig. 9 show the distribution of ρ_h fluctuations in a 3 × 3 × 3 subvolume h for the temperatures T = {0.35, 0.40, 0.55}. Gaussian statistics are shown as red dashed lines. The conclusion from the analysis of the |ρ_k| fluctuations remains: the Gaussian approximation gives a fair description, but becomes increasingly less accurate at lower temperatures. Deviations are seen both in the tails of the distribution and near the mean, as shown by the non-Gaussian parameter α_ρh. Figure 10(a) shows the P(N_A, N_B) distribution. We would expect elliptical contour lines for the Gaussian approximation, but see some deviations from this. Figure 10(b) shows a configuration from a fat-tail part of the distribution at equimolar composition in the subvolume. The configuration is a cubic CsCl crystallite. This structure is one of the thermodynamically stable crystal structures of the KABLJ model. 49 Thus, I conclude that the non-Gaussian features are related to the first-order transition to a crystal.
V. DISCUSSION
Let us summarize the results before moving on to a discussion of the implications. I have presented an investigation of a range of chemically different glass formers, and the overall conclusion is that the Gaussian hypothesis gives a fair description of the small length scale density fluctuations; however, as the temperature is lowered the Gaussian approximation becomes less accurate.
Gaussian statistics usually come about in two ways: (i) from the central limit theorem, or (ii) from a harmonic approximation. (i) The central limit theorem dictates that if random variables from any underlying distribution are added, the resulting distribution will follow Gaussian statistics. For a non-flowing equilibrium liquid of sufficient size it can be assumed that subvolumes fluctuate independently. As an example, think of the ideal gas model of non-interacting particles. For the ideal gas the number of particles in a given subvolume h follows the Poisson distribution. If the size of the subvolume is increased, then Raikov's theorem dictates that the fluctuations follow another Poisson distribution with a higher average. This distribution will be closer to the Gauss function. For a liquid with interactions the underlying distribution differs from Poisson statistics; by studying small length scale density fluctuations we can gain insight into this nontrivial underlying distribution. This brings us to the other, less trivial way of arriving at Gaussian statistics, (ii) i.e. by a harmonic expansion around a local minimum of the free energy function F: If x is an order parameter, like ρ_k or ρ_h, then the free energy along this coordinate is

F(x) = −k_B T ln P(x),  (13)

where P(x) is the probability distribution. The function F(x) can be expressed as a polynomial expansion around the minimum at x_0. It is often convenient to assume that only the second-order term is relevant, thus giving a harmonic approximation for the free energy,

F(x) ≃ F(x_0) + ½ F″(x_0)(x − x_0)².  (14)

The truncation of the expansion series is non-trivial, as discussed below. By equating Eqs. 13 and 14 and isolating P we arrive at Gaussian statistics for the probability distribution. (I remind the reader that the reason we find near-Gaussian statistics is not due to the harmonic bias field.)
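The Poisson-to-Gaussian convergence invoked here is easy to check numerically. The sketch below computes the total-variation distance between a Poisson distribution and the Gaussian of matching mean and variance, evaluated on the integers; the λ values are arbitrary choices of mine:

```python
import math

# Total-variation distance between Poisson(lam) and N(lam, lam) on the
# integers. Larger subvolume -> larger average lam -> closer to Gaussian.
def tv_poisson_vs_gauss(lam, n_max=None):
    n_max = n_max or int(lam + 20.0 * math.sqrt(lam) + 20)
    tv = 0.0
    for n in range(n_max):
        # Poisson pmf via lgamma for numerical stability at large n.
        p = math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))
        q = math.exp(-((n - lam) ** 2) / (2.0 * lam)) / math.sqrt(2.0 * math.pi * lam)
        tv += 0.5 * abs(p - q)
    return tv

tv_small = tv_poisson_vs_gauss(5.0)     # small subvolume: clearly non-Gaussian
tv_large = tv_poisson_vs_gauss(100.0)   # large subvolume: nearly Gaussian
```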
To understand first-order transitions, e.g. the gas-liquid transition, higher-order terms are important. As a classic example, Landau's effective Hamiltonian 55,56 includes higher-order terms to give a description of the density fluctuations near the gas-liquid critical point, with a free energy of the form

F_L(x) = c_2 x² + c_3 x³ + c_4 x⁴.

With the Landau theory in mind, we expect deviations from Gaussian statistics for liquid density fluctuations when other phases (gas or crystals) interfere with the liquid state. In the normal liquid regime deviations from Gaussian statistics can be due to the formation of a vapor bubble, as exemplified in Figs. 1 and 2. In the supercooled regime the crystal basin of the free energy becomes large, suggesting that statistics become less Gaussian due to the presence of a crystal. The microscopic picture is the formation of subcritical crystallites (Fig. 6). In agreement with this, the systems are more prone to crystallization when a strong field biasing |ρ_k| is applied, as exemplified in Fig. 4. It is possible that non-crystalline structures could also be important for the statistics of small length scale fluctuations. To address this, more refined methods are needed, and I leave this to future studies. Some structural candidates are the "locally favored structures" identified for some of the models. 20–30 These structures have been suggested as an important component in understanding the dynamics of highly viscous liquids near the glass transition. It would be a valuable insight to show that the statistics of small length scale density fluctuations are related to dynamics.
Another angle is to view small length scale density fluctuations from the "energy landscape" perspective 42,57–59 . In this picture, the 3N-dimensional energy surface of the liquid is partitioned into basins identified by local minima. Below a certain onset temperature the system explores configurational space by two mechanisms. At short times the system vibrates in a basin that is, to a good approximation, harmonic. Thus, it is expected that these vibrations give rise to Gaussian statistics of density fluctuations. On longer timescales the system explores different basins (activated relaxation). From this perspective the non-Gaussian features at low temperatures are related to density fluctuations between basins (the inherent states). However, this needs to be investigated. Some theories directly or indirectly assume Gaussian statistics of small length scale density fluctuations. 33–36 As mentioned in the introduction, the picture is that thermodynamic and structural details of glass-forming materials are not crucial for understanding the dynamics of a highly viscous liquid. Finally, the Gaussian approximation of density fluctuations plays a role in some elastic models. 10,60,61 The idea behind these approaches to understanding viscous dynamics is that elastic deformations allow a small length scale subvolume to rearrange. Here the harmonic approximation enters some theories to make predictions related to experiments.
VI. ACKNOWLEDGMENTS
This study was initiated by discussions with the late David Chandler. I was a member of his group in 2010–2012, and I greatly benefited from interactions with Patrick Varilly, Amish J. Patel, Thomas Speck, David Limmer, Lester O. Hedges, Yael Elmatad and Adam P. Willard. David had a remarkable talent for challenging conventional beliefs and thereby moving the scientific field forward. For the preparation of this manuscript I also received comments and suggestions from Andreas Tröster, Jeppe C. Dyre and Thomas B. Schrøder. This work was supported by the VILLUM Foundation's Matter Grant No. 16515.
FIG. 1. Probability distribution P(|ρ_k|) on a logarithmic scale with k = (2πn/5.0273, 0, 0) for the single-component LJ model in the normal liquid regime. The solid black lines are the distribution functions computed from reweighted biased simulations and the red dashed lines are the corresponding Gaussian predictions (Eq. 2). The distributions have been shifted vertically for clarity. The inset shows the structure factor, where dots indicate the investigated k-vectors.
FIG. 2. Configurations taken from the non-Gaussian tails of the distributions shown in Fig. 1. The left panels show a representative configuration, and the right panels show the scattering spectra S_k in the xy-plane (averaged over several configurations). From top to bottom, the panels show representations taken from biased simulations with wave vectors n = 6, n = 8, n = 10 and n = 1.
FIG. 4. Examples of crystallizing trajectories. (a) The |ρ_k| trajectory in a simulation with the bias field 5(|ρ_k| − 6.5)² added to the Hamiltonian of the KABLJ mixture. A crystallite is formed in the last third of the simulation. The crystallite consists of pure A's and the crystallization event is accompanied by a phase separation. (b) The number of A's that have 12 A's in the first neighbor shell. This order parameter is an indicator of crystallization. (c) The configuration in the last step of the simulation. The larger A's are colored green, while the smaller B's are red. (d) The qubatic order parameter 47 and (e) the potential energy in biased simulations of the LWoTP model. (f) The final configuration of a trajectory where a crystal is formed. Each molecule has been given an individual color to tell them apart.
FIG. 6. The P(|ρ_k|) distribution on a logarithmic scale (k = 1.4 nm⁻¹) of the LWoTP model for system sizes of N = 324 and N = 2592 molecules, respectively. The Gaussian approximation is better for the larger system, as expected from the central limit theorem. The non-Gaussian parameters α_ρk are 0.029 and 0.0011 for N = 324 and N = 2592, respectively.
FIG. 7. Density fluctuations in a 3 × 3 × 3 subvolume, h, of the LJ model in the normal liquid regime with a gas-liquid interface (T = 0.7, N = 3000, Lx = Ly = 10). The inset shows a typical configuration of the system with a gas-liquid interface, and the subvolume h located in the bulk liquid part.
FIG. 8. Structural relaxation time τα as a function of temperature of the KABLJ mixture with a gas-liquid interface (N = 3000, Lx = Ly = 10). The structural relaxation time is here defined by Fs(k = 2π, t = τα) = 1/e, where Fs is the self-intermediate scattering function of the A's located inside the slab (−5 < z < 5). The inset shows the density of the liquid slab as a function of temperature.
FIG. 10. (a) The ln(P(N_A, N_B)) distribution in a 3 × 3 × 3 subvolume of the KABLJ mixture at T = 0.325 (the white squares were not computed due to bad statistics). Non-Gaussian features are seen as contour lines that deviate slightly from being oval. (b) A configuration from the tail of the distribution with equimolar composition. The upper half of the particles have been made invisible to reveal the arrangement of particles in the 3 × 3 × 3 subvolume h. The structure corresponds to a cubic CsCl crystallite. This is one of the known stable structures of the mixture. 49
Injection locking-based pump recovery for phase-sensitive amplified links
An injection locking-based pump recovery system for phase-sensitive amplified links, capable of handling 40 dB effective span loss, is demonstrated. Measurements with 10 GBd DQPSK signals show penalty-free recovery of a pump wave, phase modulated with two sinusoidal RF-tones at 0.1 GHz and 0.3 GHz, with 64 dB amplification. The operating power limit for the pump recovery system is experimentally investigated and is governed by the noise transfer and phase modulation transfer characteristics of the injection-locked laser. The corresponding link penalties are explained and quantified. This system enables, for the first time, WDM compatible phase-sensitive amplified links over significant lengths. © 2013 Optical Society of America

OCIS codes: (060.2320) Fiber optics amplifiers and oscillators; (140.3520) Lasers, injection-locked.

References and links
1. C. M. Caves, “Quantum limits on noise in linear amplifiers,” Phys. Rev. D 26, 1817–1839 (1982).
2. E. Desurvire, Erbium-Doped Fiber Amplifiers (John Wiley & Sons, 1994).
3. W. Imajuku, A. Takada, and Y. Yamabayashi, “Low-noise amplification under the 3 dB noise figure in high-gain phase-sensitive fibre amplifier,” Electron. Lett. 35, 1954–1955 (1999).
4. D. J. Lovering, J. A. Levenson, P. Vidakovic, J. Webjörn, and P. St. J. Russell, “Noiseless optical amplification in quasi-phase-matched bulk lithium niobate,” Opt. Lett. 21, 1439–1441 (1996).
5. Z. Tong, C. Lundström, P. A. Andrekson, C. J. McKinstrie, M. Karlsson, D. J. Blessing, E. Tipsuwannakul, B. J. Puttnam, H. Toda, and L. Grüner-Nielsen, “Towards ultrasensitive optical links enabled by low-noise phase-sensitive amplifiers,” Nat. Photonics 5, 430–436 (2011).
6. J. Hansryd, P. A. Andrekson, M. Westlund, J. Li, and P. O. Hedekvist, “Fiber-Based Optical Parametric Amplifiers and Their Applications,” IEEE J. Sel. Topics Quantum Electron. 8, 506–520 (2002).
7. J. Kakande, C. Lundström, P. A. Andrekson, Z. Tong, M. Karlsson, P. Petropoulos, F. Parmigiani, and D. J.
Richardson, “Detailed characterization of a fiber-optic parametric amplifier in phase-sensitive and phase-insensitive operation,” Opt. Express 18, 4130–4137 (2010).
8. M. Vasilyev, “Distributed phase-sensitive amplification,” Opt. Express 13, 7563–7571 (2005).
9. R. Tang, P. Devgan, V. S. Grigoryan, and P. Kumar, “Inline frequency-non-degenerate phase-sensitive fibre parametric amplifier for fibre-optic communication,” Electron. Lett. 41, 1072–1074 (2005).
10. R. Tang, P. Devgan, P. L. Voss, V. S. Grigoryan, and P. Kumar, “In-Line Frequency-Nondegenerate Phase-Sensitive Fiber-Optical Parametric Amplifier,” IEEE Photon. Technol. Lett. 17, 1845–1847 (2005).
11. O. K. Lim, V. Grigoryan, M. Shin, and P. Kumar, “Ultra-Low-Noise Inline Fiber-Optic Phase-Sensitive Amplifier for Analog Optical Signals,” in Optical Fiber Communication Conference and Exposition (OFC) and National Fiber Optic Engineers Conference (NFOEC), Technical Digest (CD) (Optical Society of America, 2008), paper OML3.
12. R. Tang, J. Lasri, P. S. Devgan, V. Grigoryan, P. Kumar, and M. Vasilyev, “Gain characteristics of a frequency nondegenerate phase-sensitive fiber-optic parametric amplifier with phase self-stabilized input,” Opt. Express 13, 10483–10493 (2005).
13. Z. Tong, C. J. McKinstrie, C. Lundström, M. Karlsson, and P. A. Andrekson, “Noise performance of optical fiber transmission links that use non-degenerate cascaded phase-sensitive amplifiers,” Opt. Express 18, 15426–15439 (2010).
14. C. J. McKinstrie, M. Karlsson, and Z. Tong, “Field-quadrature and photon-number correlations produced by parametric processes,” Opt. Express 18, 19792–19823 (2010).
#187962 $15.00 USD Received 4 Apr 2013; revised 30 May 2013; accepted 5 Jun 2013; published 11 Jun 2013 (C) 2013 OSA 17 June 2013 | Vol. 21, No. 12 | DOI:10.1364/OE.21.014512 | OPTICS EXPRESS 14512
15. Z. Tong, C. Lundström, E. Tipsuwannakul, M. Karlsson, and P. A.
Andrekson, “Phase-Sensitive Amplified DWDM DQPSK Signals Using Free-Running Lasers with 6-dB Link SNR Improvement over EDFA-based Systems,” in European Conference and Exhibition on Optical Communication (ECOC), Technical Digest (CD) (Optical Society of America, 2010), paper PDP1.3.
16. Z. Tong, C. Lundström, P. A. Andrekson, M. Karlsson, and A. Bogris, “Ultralow Noise, Broadband Phase-Sensitive Optical Amplifiers, and Their Applications,” IEEE J. Sel. Topics Quantum Electron. 18, 1016–1032 (2012).
17. Z. Tong, A. Bogris, C. Lundström, C. J. McKinstrie, M. Vasilyev, M. Karlsson, and P. A. Andrekson, “Modeling and measurement of the noise figure of a cascaded non-degenerate phase-sensitive parametric amplifier,” Opt. Express 18, 14820–14835 (2010).
18. A. Takada and W. Imajuku, “Optical phase-sensitive amplifier with pump laser phase-locked to input signal light,” in Proceedings of European Conference and Exhibition on Optical Communication (ECOC) (Optical Society of America, 1997), 98–101.
19. R. Slavík, F. Parmigiani, J. Kakande, C. Lundström, M. Sjödin, P. A. Andrekson, R. Weerasuriya, S. Sygletos, A. D. Ellis, L. Grüner-Nielsen, D. Jakobsen, S. Herstrøm, R. Phelan, J. O'Gorman, A. Bogris, D. Syvridis, S. Dasgupta, P. Petropoulos, and D. J. Richardson, “All-optical phase and amplitude regenerator for next-generation telecommunications systems,” Nat. Photonics 4, 690–695 (2010).
20. S. Sygletos, R. Weerasuriya, S. K. Ibrahim, F. Gunning, R. Phelan, J. O'Gorman, J. O'Carrol, B. Kelly, A. Bogris, D. Syvridis, C. Lundström, P. Andrekson, F. Parmigiani, D. J. Richardson, and A. D. Ellis, “Phase Locking and Carrier Extraction Schemes for Phase Sensitive Amplification,” in Conference on Transparent Optical Networks (ICTON), 2010 12th International, Technical Digest (CD) (Optical Society of America, 2010), paper Mo.C1.3.
21. S. Kasapi, S. Lathi, and Y.
Yamamoto, “Amplitude-squeezed, frequency-modulated, tunable, diode-laser-based source for sub-shot-noise FM spectroscopy,” Opt. Lett. 22, 478–480 (1997).
22. E. K. Lau, L. J. Wong, X. Zhao, Y. K. Chen, C. J. Chang-Hasnain, and M. C. Wu, “Bandwidth Enhancement by Master Modulation of Optical Injection-Locked Lasers,” J. Lightw. Technol. 26, 2584–2593 (2008).
23. A. Fragkos, A. Bogris, D. Syvridis, and R. Phelan, “Amplitude Noise Limiting Amplifier for Phase Encoded Signals Using Injection Locking in Semiconductor Lasers,” J. Lightw. Technol. 30, 764–771 (2012).
24. E. K. Lau and M. C. Wu, “Amplitude and Frequency Modulation of the Master Laser in Injection-Locked Laser Systems,” in Proceedings of International Topical Meeting on Microwave Photonics (2004), 142–145.
25. M. Vainio, M. Merimaa, and K. Nyholm, “Modulation transfer characteristics of injection-locked diode lasers,” Opt. Commun. 267, 455–463 (2006).
26. S. L. I. Olsson, B. Corcoran, C. Lundström, E. Tipsuwannakul, S. Sygletos, A. D. Ellis, Z. Tong, M. Karlsson, and P. A. Andrekson, “Optical Injection-Locking-Based Pump Recovery for Phase-Sensitively Amplified Links,” in Optical Fiber Communication Conference and Exposition (OFC) and National Fiber Optic Engineers Conference (NFOEC), Technical Digest (CD) (Optical Society of America, 2012), paper OW3C.3.
27. B. Corcoran, S. L. I. Olsson, C. Lundström, M. Karlsson, and P. Andrekson, “Phase-sensitive Optical Pre-Amplifier Implemented in an 80 km DQPSK Link,” in Optical Fiber Communication Conference and Exposition (OFC) and National Fiber Optic Engineers Conference (NFOEC), Technical Digest (CD) (Optical Society of America, 2012), paper PDP5A.4.
28. S. L. I. Olsson, B. Corcoran, C. Lundström, M. Sjödin, M. Karlsson, and P. A.
Andrekson, “Phase-Sensitive Amplified Optical Link Operating in the Nonlinear Transmission Regime,” in European Conference and Exhibition on Optical Communication (ECOC), Technical Digest (CD) (Optical Society of America, 2012), paper Th.2.F.1.
29. S. K. Korotky, P. B. Hansen, L. Eskildsen, and J. J. Veselka, “Efficient phase modulation scheme for suppressing stimulated Brillouin scattering,” in Proc. Technol. Dig. Conf. Integr. Opt. Fiber Commun. (1995), 110–111.
30. A. Furusawa, “Amplitude squeezing of a semiconductor laser with light injection,” Opt. Lett. 21, 2014–2016 (1996).
31. C. Lundström, R. Malik, L. Grüner-Nielsen, B. Corcoran, S. L. I. Olsson, M. Karlsson, and P. A. Andrekson, “Fiber Optic Parametric Amplifier With 10-dB Net Gain Without Pump Dithering,” IEEE Photon. Technol. Lett. 25, 234–237 (2013).
Introduction
Phase-sensitive amplifiers (PSAs), e.g. fiber optic parametric amplifiers (FOPAs) in phase-sensitive (PS) mode, are in theory capable of noiseless amplification, i.e. a 0 dB noise figure (NF) [1]. This should be compared with phase-insensitive amplifiers (PIAs) such as erbium-doped fiber amplifiers (EDFAs), which have a 3 dB quantum-limited NF at high gain [2]. Low-NF PSAs have been realized in both FOPAs [3] and nonlinear crystals [4], with FOPA-based implementations showing significantly higher gain. A high-gain optical amplifier with close to 0 dB NF would have major impact on areas such as sensing and spectroscopy, as well as fiber optical communication systems [5].
FOPA PSAs require, in their simplest configuration, three frequency- and phase-locked waves at the input, commonly referred to as pump, signal, and idler, and can be implemented in frequency-degenerate and frequency-nondegenerate configurations. Frequency-degenerate PSAs can only amplify one specific wavelength channel for a given pump configuration and are difficult to implement with high gain due to the quadratic dependence of the gain on the pump power [6]. Frequency-nondegenerate PSAs, on the other hand, support simultaneous amplification of many independent signals and can provide high gain, growing exponentially with pump power [7].
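The exponential pump-power scaling can be sketched with the textbook phase-matched parametric gain G = 1 + sinh²(γPL) (see e.g. [6]); the γ and L values below are illustrative highly-nonlinear-fiber numbers, not parameters from this paper:

```python
import numpy as np

gamma = 10e-3   # nonlinear coefficient, 1/(W·m) (illustrative)
L = 500.0       # fiber length, m (illustrative)

def gain_db(P):
    # Phase-matched parametric signal gain; in dB it grows linearly with
    # pump power P at high gain, i.e. exponentially in linear units.
    return 10.0 * np.log10(1.0 + np.sinh(gamma * P * L) ** 2)

powers = np.array([0.6, 0.8, 1.0, 1.2])
g = gain_db(powers)
slopes = np.diff(g) / np.diff(powers)   # ~ constant dB/W at high gain
```

At high gain the slope approaches 20γL/ln 10 dB per watt of pump power, which is the "exponential growth" referred to above.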
The concept of a frequency-nondegenerate PSA-amplified transmission link, utilizing the ultra-low NF, multi-channel capability, and high gain of a frequency-nondegenerate PSA, was first introduced in 2005 by Vasilyev [8]. The first experimental realization of a frequency-nondegenerate PSA-amplified transmission link used single-channel 2.5 Gbit/s non-return-to-zero (NRZ) data transmitted over a 60 km dispersion-compensated link [9]. The frequency- and phase-locking of the waves was accomplished using an optical double-sideband modulation scheme [10,11], with the bandwidth limited by the bandwidth of the optical modulators used to generate the sidebands. Subsequent demonstrations have used an all-optical scheme based on four-wave mixing (FWM), where the frequency- and phase-locking is achieved through parametric idler creation. The combination of a PIA, for creating a set of frequency- and phase-locked waves, followed by a PSA is a practical way of implementing a PSA and was first introduced in [12]. This scheme is commonly referred to as the copier-PSA scheme.
The copier-PSA scheme has been thoroughly investigated, and it has been shown theoretically that a transmission link implementation of the scheme can give up to 6 dB link NF improvement over conventional PIA-based schemes and a 3 dB improvement over all-PSA-based schemes [13,14]. This has also been shown experimentally for a copier-loss-PSA system, where the link was emulated by a lumped signal/idler loss [5]. Based on the copier-loss-PSA system, amplification of dense wavelength division multiplexed (DWDM) differential quadrature phase-shift keyed (DQPSK) signals at 10 GBd with nearly 6 dB signal-to-noise ratio (SNR) improvement over an EDFA-based system has been demonstrated [15,16]. This demonstration showed the input signal-format independence and DWDM channel amplification capability of the copier-PSA scheme, two features very important for communication links.
A schematic illustration of a transmission link with lumped PSA amplification, based on the copier-PSA scheme, is shown in Fig. 1. A signal wave encoded with data, along with a high-power pump wave, possibly phase modulated for suppression of stimulated Brillouin scattering (SBS), is injected into the copier. The copier is implemented with a PI-FOPA and, when passing through the copier, an idler wave, frequency- and phase-locked to the signal and pump, is created via FWM. Before transmission the high-power pump wave is separated from the signal/idler pair and attenuated to avoid degrading nonlinear effects, such as cross-phase modulation (XPM) and self-phase modulation (SPM), during transmission, while the signal/idler waves are tuned with respect to phase, delay, and dispersion. After recombining the pump with the signal/idler pair the waves are sent through the transmission link.
Fig. 1. Schematic illustration of a transmission link with lumped PSA amplification based on the copier-PSA scheme, showing the transmitter, preamplifier, and receiver blocks.
After transmission the pump is separated from the signal/idler pair and led through a pump recovery system. The pump recovery system should produce a high-quality pump wave based on the residual pump wave after transmission and is essential for obtaining the benefits of a PSA-amplified link. The signal and idler polarizations are tuned to maximize the PSA gain. The waves are then recombined and led into the PSA, implemented by a PS-FOPA, where low NF amplification takes place before the signal wave is filtered out and detected by the receiver. It is critical that the noise added on the signal and idler waves in the copier is decorrelated before the PSA for low NF PSA operation to be possible [17]. In the transmission link implementation of the copier-PSA scheme this is achieved by the loss in the link.
Previous demonstrations of PSA-amplified links have not adequately considered the pump recovery system. In [9] the pump recovery before the PSA was achieved using a single EDFA, and the possible penalty due to pump degradation through the pump recovery was not considered. As mentioned earlier, the experimental work presented in [5,15,16] was carried out with a lumped signal/idler loss instead of a transmission link, and thus no pump attenuation stage or pump recovery system was used or required in those demonstrations. As such, pump recovery has been a major obstacle to implementing a PSA-amplified link over significant fiber spans.
Pump recovery can be accomplished using optical injection locking (IL). There have been several demonstrations of pump wave generation using IL for subsequent use in PSAs, although not in the context of frequency-nondegenerate PSA-amplified transmission links. IL has been used for phase-locking a semiconductor ring laser to a pulsed signal that was used as pump in an in-line frequency-degenerate PSA [18]. An all-optical regenerator has been demonstrated where IL was used for narrowband filtering of a generated carrier wave, with the injection-locked wave later used as pump in a saturated PSA [19]. Two schemes, both using IL, have been demonstrated for generating phase-locked pump waves for use in in-line "black-box" frequency-nondegenerate PSAs [20].
IL has also proved to be an extremely useful technique in a number of areas and has been thoroughly investigated for applications such as FM spectroscopy [21] and modulation bandwidth enhancement [22]. There have also been a number of theoretical and experimental investigations dedicated to amplitude modulation (AM) and frequency modulation (FM) transfer for various operating regimes and slave laser (SL) driving conditions [23][24][25]. However, to the best of our knowledge, no detailed investigation of amplified spontaneous emission (ASE) noise transfer through an injection-locked distributed feedback (DFB) laser has previously been published, with only power spectral density measurements performed on the output of a semiconductor laser injection-locked to an ASE degraded signal [23].
The first demonstration of a nontrivial pump recovery system for frequency-nondegenerate PSA-amplified links was presented in [26]. This system enabled the demonstration of an 80 km PSA-amplified transmission link [27], the longest frequency-nondegenerate PSA-amplified link ever reported. The pump recovery system also enabled an investigation of transmission in the nonlinear regime [28].
In this paper we extend the concept presented in [26]. Apart from demonstrating penalty-free operation of a PSA-amplified link, with phase modulated pump, for equivalent link losses of more than 40 dB, we also present the first detailed experimental investigation of the operating power limits of the pump recovery system. We measure the noise generation in, and the phase modulation transfer through, the pump recovery system and relate this to bit error ratio (BER) measurements, which enables us to understand the penalty mechanisms and draw conclusions about how the system can be improved. The investigation also contains novel measurement results on ASE noise transfer through an injection-locked DFB laser.
The paper is organized as follows. In section 2 we demonstrate our proposed pump recovery system in a PSA- and a PIA-amplified link for equivalent link losses of up to 50 dB, which lets us observe the operating power limits. In section 3 we investigate in detail the factors that determine the operating limits, i.e. noise generation in, and phase modulation transfer through, the pump recovery system. Then, in section 4, the measurements in section 3 are related to the PSA/PIA-amplified link performance and conclusions are drawn regarding how the pump recovery system performance can be improved. Finally, in section 5 we state our conclusions.
The pump recovery scheme and demonstration
To get an idea of the requirements on a pump recovery system in a PSA-amplified link we consider a specific case as an example. We start with the condition that the pump power into the transmission link should not exceed 10 dBm; it has been shown that this level of pump power does not degrade the performance of a PSA-amplified 80 km link [27]. If our target is a 100 km link, then the pump will be attenuated by about 20 dB through the link. Furthermore, if we aim at 20 dB PSA net gain, then in our PSA implementation approximately 34 dBm of pump power is needed at the PSA input. With these limitations we need about 44 dB pump amplification in the pump recovery system. Apart from the high amplification, the recovered pump must also have high OSNR, to avoid penalty from pump noise transfer in the PSA, and we additionally require the phase information on the incoming pump wave to be correctly reproduced, since in our experiments we use a phase modulated pump for suppression of SBS [29]. These requirements on amplification and OSNR are impossible to satisfy using ordinary EDFAs and thus a different solution is required.
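The power budget above can be sketched in a few lines. This is a minimal illustration of the arithmetic in the text (10 dBm launch power, 100 km at roughly 0.2 dB/km, 34 dBm needed at the PSA input); the helper name is our own, not from the paper.

```python
def pump_recovery_gain_db(p_launch_dbm, link_loss_db, p_psa_dbm):
    """Gain (dB) the pump recovery system must provide so that the
    residual pump after the link reaches the power needed at the PSA."""
    p_residual_dbm = p_launch_dbm - link_loss_db  # pump power after the link
    return p_psa_dbm - p_residual_dbm

# 100 km of fiber at ~0.2 dB/km gives about 20 dB pump attenuation,
# leaving -10 dBm of residual pump and requiring 44 dB of amplification.
link_loss_db = 100 * 0.2
gain = pump_recovery_gain_db(10, link_loss_db, 34)
print(gain)  # 44 dB, matching the requirement stated in the text
```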
The pump recovery system which we demonstrate here is a hybrid IL/EDFA solution, where IL is used for its amplification, filtering, and amplitude squeezing properties [30], and the EDFAs for power amplification. We demonstrate the performance and the operating power limits of the pump recovery system by carrying out BER measurements on a PSA-amplified link incorporating the pump recovery system. To gain further insight we compare the performance with the same system operated in PI-mode (by blocking the idler after the copier) and with an EDFA-amplified link. We also compare the performance of the hybrid IL/EDFA pump recovery system with a simple EDFA-based system, obtained by bypassing the IL in the hybrid system. Our link does not include any transmission fiber; instead, the link loss is emulated by a lumped loss element.
Experimental setup
The experimental setup is shown in Fig. 2. A signal wave at 1545.2 nm was encoded with a 10 GBd DQPSK 2^15 − 1 pseudorandom bit sequence (PRBS). The signal was combined, using a wavelength division multiplexer (WDM), with a high-power pump wave at 1553.7 nm, phase modulated with two sinusoidal radio frequency (RF)-tones at 0.1 GHz and 0.3 GHz (giving 0.8 GHz bandwidth) for suppression of SBS in the FOPAs.

Fig. 2. Experimental setup used for demonstration and bit error ratio characterization of an injection locking-based pump recovery system in a phase-sensitive amplified link. DQPSK: differential quadrature phase-shift keyed, PRBS: pseudorandom bit sequence, RF: radio frequency, PC: polarization controller, WDM: wavelength division multiplexer, HNLF: highly nonlinear fiber, VOA: variable optical attenuator, EDFA: erbium-doped fiber amplifier, IL: injection locking, PZT: piezoelectric transducer, PSA: phase-sensitive amplifier, PIA: phase-insensitive amplifier, BER: bit error ratio, PLL: phase-locked loop.
The two waves were launched into the copier, consisting of 250 m of highly nonlinear fiber (HNLF) with zero dispersion wavelength λ_0 = 1545 nm, and a phase-conjugated copy of the signal, the idler, was generated at 1562.2 nm through FWM. The copier had no net gain and the signal was 8.5 dB stronger than the idler at the output. The signal and idler waves were then separated from the pump wave and led through an optical processor (OP) for power attenuation and equalization, filtering, and signal-idler relative delay tuning. The OP was also used for switching between PS- and PI-mode by selectively blocking the idler. The pump wave was attenuated using a variable optical attenuator (VOA), VOA1, to emulate link loss and vary the pump OSNR at point A. A delay line matched the optical path lengths of the signal/idler and pump waves.
After recombining the pump with the signal/idler pair they were again separated, and the signal/idler pair was passed through a polarization controller (PC) while the pump was passed through the pump recovery system. The pump OSNR at the input of the pump recovery system was > 60 dB. The signal/idler pair was attenuated by more than 20 dB between the copier and the PSA/PIA preamplifier for all measurements, which should be enough to decorrelate the signal/idler noise added in the copier.
In the pump recovery system the pump wave was first amplified by two EDFAs, EDFA1 and EDFA2, followed by a 0.9 nm and a 3.0 nm bandpass filter, respectively. For the case with IL the pump wave was passed through VOA2 for tuning the power into the SL and then through PC1 for controlling the state of polarization (SOP). The SOP was tuned so that the phase transfer through the SL was maximized. The wave was then, via a circulator, injected into the SL, which was a DFB laser without isolator. The SL input power was re-optimized for lowest BER at each setting of VOA1. The SL driving current was seven times the lasing threshold value, giving an output power of 20 dBm, and its wavelength was tuned so that the frequency difference between the SL and the incoming wave was minimized. For the case without IL the pump wave was passed unaffected through another path, as indicated in Fig. 2. The pump was finally amplified to 33.8 dBm for the PSA-amplified link and to 34.9 dBm for the PIA-amplified link and filtered by a 0.8 nm bandpass filter. The relative phase between the pump and signal/idler pair was stabilized against thermal drift and acoustic noise using a phase-locked loop (PLL) based on a frequency dither technique. The frequency dithering was applied using a piezoelectric transducer (PZT)-based fiber stretcher placed in the pump path of the pump recovery system.
The PSA/PIA preamplifier was implemented with two cascaded spools of stretched Ge-doped HNLF with an isolator in between for SBS suppression [31]. The gain was 20 dB in both the PSA- and PIA-case and was tuned by varying the output power from EDFA3. For the PSA-case the signal and idler powers launched into the preamplifier were equal. The FOPA preamplifiers were compared against an EDFA preamplifier with 3.8 dB NF, also with 20 dB gain. PCs were used to align the SOP of the waves before the FOPAs.
For the BER measurements the received signal power was measured at point B and varied using the OP. The preamplifier output was passed through a 2.0 nm bandpass filter and then into the differential receiver, comprising a 1-bit delay interferometer and an amplified balanced receiver. Although only one branch of the DQPSK signal was demodulated and used for BER measurement, it has previously been shown that for copier-PSA schemes both tributaries perform similarly [15]. Part of the filtered signal was diverted and used as a feedback signal for the PLL.
Measurement results
Measurements showing BER versus received signal power, i.e. signal power measured at point B, are presented in Fig. 3. For each of our three preamplifier configurations, with or without IL in the pump recovery system, we compare operation at different pump OSNR at point A.
At high pump OSNR (56 dB), we observe all systems operating as expected, with a 4.8 dB sensitivity increase comparing the PIA- and PSA-amplified cases. This is close to the ideal 6 dB improvement expected through the lower NF of the PSA [16].
When lowering the pump OSNR from 56 dB to 37 dB, both the PIA- and PSA-amplified systems show a large sensitivity penalty when IL is not used in the pump recovery system. With IL in place, both the PIA- and PSA-amplified systems show much improved performance. However, significant penalty is observed when the pump OSNR is degraded to 11 dB.
To analyze how this sensitivity penalty evolves in our different systems, we plot the Q-factor penalty versus pump OSNR at point A. The Q-factor was calculated from the measured BER using the Gaussian relation Q = √2 · erfc⁻¹(2·BER). For the PSA-case, the penalty was taken with respect to the performance at −42 dBm received signal power at 56 dB pump OSNR, giving a BER of about 10^−8. For the PIA-case the penalty was taken with respect to the performance at −37 dBm received signal power at 56 dB pump OSNR, also giving a BER of about 10^−8. The measurements were done with the pump recovery system optimized for each measurement point.
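The BER-to-Q conversion can be sketched numerically. This assumes the standard Gaussian relation BER = ½ erfc(Q/√2), i.e. Q = −Φ⁻¹(BER) with Φ the standard normal CDF, which is the formula commonly used for such conversions; the helper names are our own.

```python
import math
from statistics import NormalDist

def q_from_ber(ber):
    """Linear Q-factor from BER via BER = 0.5*erfc(Q/sqrt(2)),
    equivalently Q = -inverse_normal_cdf(BER)."""
    return -NormalDist().inv_cdf(ber)

def q_db(q):
    """Q-factor expressed in dB (20*log10, as usual for Q penalties)."""
    return 20 * math.log10(q)

q = q_from_ber(1e-8)
print(round(q, 2), round(q_db(q), 1))  # roughly Q = 5.61, i.e. about 15 dB
```

A Q-factor penalty in dB is then simply the difference of two such q_db values, e.g. between a degraded operating point and the reference point at 56 dB pump OSNR.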
The Q-factor penalty versus pump OSNR at point A (bottom axis) and pump power at the pump recovery system input (top axis) is plotted in Fig. 4. As the pump OSNR is decreased, the Q-factor penalty increases, with the systems without IL penalized more heavily, as in Fig. 3. For the systems without IL, this penalty becomes apparent at a pump OSNR of about 50 dB, corresponding to roughly 0 dBm pump power at the input of the pump recovery system. For the systems with IL, penalty-free operation is observed down to a pump OSNR of 20 dB, corresponding to about −30 dBm pump power at the input of the pump recovery system. We can conclude that our proposed hybrid IL/EDFA pump recovery system shows penalty-free operation in a PSA-amplified link with lumped loss, where the pump is phase modulated by two sinusoidal RF-tones at 0.1 GHz and 0.3 GHz (giving a 0.8 GHz bandwidth) and experiences 64 dB overall amplification, from −30 dBm to 34 dBm. With the assumption of 10 dBm pump power launched into the link, the penalty onset at 20 dB pump OSNR corresponds to a link attenuation of 40 dB.

Fig. 3. Measured bit error ratio (BER) versus received signal power (signal power at point B) comparing a phase-insensitive amplifier (PIA) and a phase-sensitive amplifier (PSA) amplified receiver, with and without injection locking (IL) in the pump recovery system, with an erbium-doped fiber amplifier (EDFA) amplified receiver. The measurements were done at various pump optical signal-to-noise ratios (OSNR) at point A, as given by the legend. The straight lines are linear fittings to the measurement points.
Replacing the 40 dB lumped loss with a 200 km dispersion compensated fiber span will introduce nonlinear and linear effects that can degrade the pump wave and give additional penalties, i.e. shift the pump recovery penalty onset to a higher pump OSNR. However, as we will outline below, we believe that these effects are negligible, given that the pump power launched into the link is low enough and that the bandwidth of the pump, dominated by the pump phase modulation, is small. That the transmission related penalties for the pump recovery system are negligible in our system has also been shown experimentally for an 80 km link with 10 dBm pump power launched into the link [27].
The dominant nonlinear effect acting on the pump wave will be SPM, which would add phase distortions. The impact of SPM can be estimated by calculating the nonlinear phase shift Φ. Using standard single mode fiber (SSMF), with attenuation α = 0.2 dB km^−1 and nonlinear coefficient γ = 1 W^−1 km^−1, the nonlinear phase shift for P_0 = 10 dBm power launched into an L = 200 km long fiber is Φ = γP_0[1 − exp(−αL)]/α = 0.064π. The small nonlinear phase shift, along with the continuous wave (CW) nature of the pump wave, suggests that there should be no significant penalty related to SPM.
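The SPM estimate can be reproduced directly from the quoted fiber parameters. A minimal sketch (our own variable names); the straightforward evaluation gives a value of about 0.07π, the same order as the 0.064π quoted in the text, and in either case far below π.

```python
import math

# Nonlinear phase shift of the CW pump due to SPM, using the values in
# the text: alpha = 0.2 dB/km, gamma = 1 /(W km), P0 = 10 dBm, L = 200 km.
alpha_db = 0.2                        # fiber attenuation in dB/km
alpha = alpha_db * math.log(10) / 10  # converted to 1/km (Napierian units)
gamma = 1.0                           # nonlinear coefficient, 1/(W km)
P0 = 10 ** (10 / 10) * 1e-3           # 10 dBm expressed in watts
L = 200.0                             # fiber length in km

L_eff = (1 - math.exp(-alpha * L)) / alpha  # effective length, about 21.7 km
phi = gamma * P0 * L_eff                    # nonlinear phase shift in rad
print(phi / math.pi)  # about 0.07*pi: negligible compared to pi
```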
In a dispersion compensated link, chromatic dispersion related effects such as phase-to-amplitude conversion will not cause any penalty. The dominant linear effect will instead be polarization mode dispersion (PMD), which could cause issues with polarization alignment for the IL process. Using the frequency-domain manifestation of PMD we can relate the output phase φ = Δβ·L, where Δβ = β_slow − β_fast is the propagation constant difference between the fast and the slow axis, to the bandwidth Δω of the pump wave. The change in output phase Δφ is related to Δω by Δφ = Δω·Δτ, where Δτ is the root mean square (RMS) value of the differential group delay (DGD) of the fiber. The DGD can be calculated from Δτ = D_p·√L, where D_p is the PMD parameter, which is typically around 0.1 ps km^−1/2 in modern fibers. Using the spectral width of the phase modulated pump wave Δf = 0.8 GHz and an L = 200 km long fiber we get an output phase difference of Δφ = 2πΔf·D_p·√L = 4.5 × 10^−3, which is negligible.
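The PMD estimate follows the same pattern. A minimal sketch using the parameters from the text; note that this direct evaluation gives roughly 7 × 10^−3 rad, slightly above the 4.5 × 10^−3 quoted (the difference presumably comes from how the RMS DGD is defined), but of the same order and equally negligible.

```python
import math

# PMD-induced phase difference across the pump bandwidth, using the
# parameters in the text: D_p = 0.1 ps/sqrt(km), L = 200 km, 0.8 GHz
# pump phase-modulation bandwidth.
D_p = 0.1e-12  # PMD parameter in s/sqrt(km)
L = 200.0      # fiber length in km
df = 0.8e9     # pump bandwidth in Hz

dtau = D_p * math.sqrt(L)       # RMS differential group delay, ~1.4 ps
dphi = 2 * math.pi * df * dtau  # phase difference across the bandwidth
print(dphi)  # on the order of 1e-2 rad or less: negligible
```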
Investigation of pump recovery system operation limits
In the previous section we observed a large Q-factor penalty for the PSA/PIA-amplified systems as the power reaching the pump recovery system was decreased. To understand what is causing the penalty, and how the pump recovery performance can be improved, we need to investigate the pump recovery system in more detail, in particular the IL transfer characteristics.
When the power into the pump recovery system is reduced, ASE noise will be generated and added to the pump wave by the first EDFA. However, the pump wave will subsequently be filtered and the noise partly suppressed through the IL. This effectively makes the SL determine the noise generated and added to the pump through the pump recovery system. For our PSA/PIA-amplified link the noise on the recovered pump will impact the amplified signal through pump noise transfer in the PSA/PIA. The transfer of phase modulation through the pump recovery system will also depend on the power into the pump recovery system. The phase transfer through the IL is dependent on the field injected into the SL. Increased injection of ASE noise into the SL can therefore impact and degrade the phase transfer. An obvious consequence of degraded phase transfer is reduced pump SBS suppression in the PSA/PIA, which in turn can lead to a Q-factor penalty through pump noise transfer if the pump SBS becomes significant. For the PSA we also expect a degraded phase transfer to impact the PSA operation through misalignment of the phase-matching condition. If we denote the pump phase modulation at the pump recovery system input by θ_in and at the output by θ_out, then the phase-matching condition in the PSA can be expressed as 2θ′_p − θ_s − θ′_i = π/2, where θ′_p = θ_p + θ_out is the pump phase at the PSA and θ′_i = 2(θ_p + θ_in) − θ_s − π/2 is the idler phase generated in the copier. We see that if θ_out ≠ θ_in, the phase-matching will be disturbed. A deviation from the phase-matching would translate into gain fluctuations, which in turn would affect the output signal and cause a Q-factor penalty.
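Writing θ′_p = θ_p + θ_out for the pump phase at the PSA and θ′_i = 2(θ_p + θ_in) − θ_s − π/2 for the idler phase created in the copier, a one-line substitution makes the role of the transfer error explicit:

```latex
% Substituting the pump and idler phases into the PSA phase-matching
% condition; primes denote phases at the PSA input.
\begin{align*}
2\theta'_p - \theta_s - \theta'_i
  &= 2(\theta_p + \theta_{\mathrm{out}}) - \theta_s
     - \left[\, 2(\theta_p + \theta_{\mathrm{in}}) - \theta_s - \pi/2 \,\right] \\
  &= 2(\theta_{\mathrm{out}} - \theta_{\mathrm{in}}) + \pi/2 .
\end{align*}
```

The condition 2θ′_p − θ_s − θ′_i = π/2 therefore holds exactly when θ_out = θ_in, and any phase transfer error appears directly, doubled, as a phase-matching deviation.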
The discussion above implies that an understanding of the transfer characteristics of IL is crucial for explaining the performance of the pump recovery system. The transfer characteristics of an injection-locked DFB laser are highly dependent on parameters such as the injection ratio, defined as the ratio between the injected power and the SL output power, the frequency offset between the injected wave and the free running SL, and the SL driving current. Therefore, in order to identify the limiting factors in our pump recovery system, under our specific operating conditions, we need to measure the noise generation in the pump recovery system (and the ASE noise transfer through the SL) as well as the phase modulation transfer through the pump recovery system.
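In logarithmic units the injection ratio defined above is just a difference of powers. A trivial sketch using the operating point quoted later in the paper (−7.3 dBm injected into the SL, which emits 20 dBm):

```python
def injection_ratio_db(p_injected_dbm, p_sl_out_dbm):
    """Injection ratio in dB: injected power relative to the slave laser
    output power; in dBm units this is a plain difference."""
    return p_injected_dbm - p_sl_out_dbm

# -7.3 dBm injected into a 20 dBm slave laser: -27.3 dB injection ratio,
# the value used in the noise and phase-transfer measurements below.
print(injection_ratio_db(-7.3, 20.0))
```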
Amplitude noise and phase noise generation
The noise generation in the pump recovery system was investigated using homodyne coherent detection and constellation analysis. The experimental setup is shown in Fig. 5. A wave at 1553.7 nm was split into two branches: a signal branch and a local oscillator branch. The local oscillator branch was frequency shifted by 27 MHz using an acousto-optic modulator (AOM) and then passed into a 90° optical hybrid. The frequency was shifted in order to obtain a detectable beat tone between the local oscillator and the signal in the optical hybrid. The signal branch was passed through VOA1, for attenuation to vary the OSNR after EDFA1, i.e. at point C, and then into the pump recovery system.
Apart from removing the last EDFA, the pump recovery system was identical to the system used for the demonstration in section 2. We assumed that the last EDFA would not affect the noise properties of the recovered pump due to the high input power (20 dBm) to the EDFA from the SL. Both the cases with and without IL in the pump recovery system were investigated, as indicated in Fig. 5. As for the demonstration measurements, the SL driving current was seven times the lasing threshold value and the wavelength was tuned so that the frequency difference between the SL and the incoming wave was minimized. The SOP into the SL was tuned using PC1 so that the phase transfer of an incoming phase modulated wave was maximized. The SL input power was kept constant at a high value (−7.3 dBm, corresponding to an injection ratio of −27.3 dB) using VOA2 in order to reduce the effect of filtering in the SL, facilitating the measurement of broadband ASE noise transfer. At injected powers above −5 dBm the SL became over-modulated and spurious tones appeared; this regime was therefore avoided.

Fig. 6. Measured amplitude noise (right axis) and phase noise (left axis) after the pump recovery system, with and without injection locking (IL) in the pump recovery system, versus pump optical signal-to-noise ratio (OSNR) at point C (bottom axis) and pump power at the pump recovery system input (top axis). For the case with IL the slave laser input power was kept constant at −7.3 dBm, corresponding to an injection ratio of −27.3 dB.
After the pump recovery system the wave was injected into the 90° optical hybrid. The hybrid output was detected using four 11 GHz bandwidth detectors and then sent to a real-time oscilloscope (16 GHz bandwidth) for sampling. The data was post-processed offline and the amplitude noise and phase noise were extracted. The amplitude noise σ_a was defined as the standard deviation of the normalized amplitude and the phase noise σ_p was defined as the standard deviation of the phase.
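The two noise definitions above can be sketched directly on complex field samples. This is our own minimal illustration of the definitions, not the authors' processing code; it assumes the phase excursion stays well within (−π, π] so that no unwrapping is needed.

```python
import cmath
import statistics

def amplitude_phase_noise(samples):
    """sigma_a: std of the amplitude normalized to its mean;
    sigma_p: std of the phase, per the definitions in the text."""
    amps = [abs(z) for z in samples]
    mean_amp = statistics.fmean(amps)
    sigma_a = statistics.pstdev(a / mean_amp for a in amps)
    sigma_p = statistics.pstdev(cmath.phase(z) for z in samples)
    return sigma_a, sigma_p

# A pure carrier with a small phase wobble: no amplitude noise, and the
# extracted phase noise equals the applied phase deviation.
samples = [cmath.rect(1.0, p) for p in (0.1, -0.1, 0.1, -0.1)]
sa, sp = amplitude_phase_noise(samples)
print(sa, sp)  # sigma_a is 0, sigma_p is close to 0.1
```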
The measured amplitude noise (right axis) and phase noise (left axis) at the output of the pump recovery system versus pump OSNR at point C (bottom axis) and pump power at the pump recovery system input (top axis) are presented in Fig. 6. For the case without IL both the amplitude noise and the phase noise increase with reduced pump OSNR, due to broadband ASE noise added by EDFA1. The noise floor reached at high pump OSNR was set by the sensitivity of the measurement system.
For the case with IL the amplitude noise is highly suppressed compared to the case without IL. The suppression of amplitude noise that we observe is in agreement with what was seen in [23]. However, we also observe a small increase in noise when the pump OSNR reaches low values. Previous investigations of single frequency tone transfer through an injection-locked DFB laser help to explain this increase of amplitude noise: it occurs through FM-to-AM conversion and AM-to-AM transfer. It has been shown experimentally that FM-to-AM conversion is proportional to the modulation frequency up to the SL resonance frequency [24]. Due to the broadband nature of the ASE noise injected into the SL, limited only by the 0.9 nm bandpass filter after EDFA1, we can expect some impact of FM-to-AM conversion.
With IL the phase noise increases with decreased pump OSNR in a similar fashion as for the case without IL. It has been shown experimentally [25], and theoretically [24], that high FM-to-FM conversion should be expected up to several GHz under locking conditions similar to ours. However, since the ASE noise injected into the SL is broadband we expect some filtering through the SL, which is also what we see as reduced noise compared to the case without IL. The phase noise contribution from AM-to-FM conversion is expected to be negligible compared to the contribution from FM-to-FM transfer [23].
Based on our phase noise and amplitude noise measurements we can conclude that the performance improvement seen in Fig. 3 and Fig. 4 for the hybrid IL/EDFA pump recovery system, compared to the EDFA-based system, comes from the squeezing and filtering of amplitude noise and the filtering of phase noise through the SL.
Phase modulation transfer degradation
The phase modulation transfer was investigated using an experimental setup similar to the one used for the noise measurements. The setup is shown in Fig. 7. In this case a phase modulator was placed before the pump recovery system and either one or two sinusoidal RF-tones were applied for transfer characterization. For the one tone case the frequency RF1 was swept from 0.10 GHz to 2.30 GHz and the phase swing at the pump recovery system input Δφ_in was π. In the dual tone case the applied modulation was either {RF1 = 0.10 GHz, RF2 = 0.32 GHz} or {RF1 = 0.30 GHz, RF2 = 0.91 GHz}. In this case the phase swing Δφ_in was 2π, each RF-tone contributing a π swing. The tone frequencies and amplitudes in the dual tone case were selected to produce a flat-top spectrum, since this is desirable for efficient SBS suppression [29]. The pump recovery system was tuned in the same way as for the noise measurements, with the exception that the SL input power was also varied.
To determine the phase modulation transfer, the sampled signal was filtered by 20 MHz bandpass filter(s) centered at the tone(s) center frequency in order to remove the noise contribution to the constellation. The phase modulation transfer ratio (MTR) was then calculated as the ratio of the modulation depth at the output of the pump recovery system to that at the input, MTR = Δφ_out/Δφ_in, where Δφ_in = π for the one tone case and Δφ_in = 2π for the two tone case.

Fig. 8. Measured phase modulation transfer ratio (MTR) versus pump optical signal-to-noise ratio (OSNR) at point C (bottom axis) and pump power at the pump recovery system input (top axis) for various modulation frequencies, as given by the legend. The inset shows the dual tone data together with the single tone data. The slave laser input power (power at point D) was kept at −7.6 dBm, corresponding to an injection ratio of −27.3 dB. RF: radio frequency.
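One simple way to extract the phase swing of a single filtered tone is to project the detected phase samples onto quadrature sinusoids at the tone frequency. This is our own sketch of such an MTR estimate, not the authors' processing code; it assumes the record contains an integer number of tone periods.

```python
import math

def tone_phase_swing(phases, fs, f_tone):
    """Peak-to-peak phase swing of a sinusoidal tone at f_tone (Hz),
    estimated by projecting the phase samples (sampled at fs) onto
    quadrature sinusoids at the tone frequency."""
    n = len(phases)
    c = 2 / n * sum(p * math.cos(2 * math.pi * f_tone * k / fs)
                    for k, p in enumerate(phases))
    s = 2 / n * sum(p * math.sin(2 * math.pi * f_tone * k / fs)
                    for k, p in enumerate(phases))
    return 2 * math.hypot(c, s)  # twice the tone amplitude

# A pi peak-to-peak input tone (amplitude pi/2), as in the single tone
# case, and an output attenuated by an assumed transfer ratio of 0.8.
fs, f = 1.0, 0.1
phases_in = [math.pi / 2 * math.sin(2 * math.pi * f * k) for k in range(1000)]
phases_out = [0.8 * p for p in phases_in]
mtr = tone_phase_swing(phases_out, fs, f) / tone_phase_swing(phases_in, fs, f)
print(mtr)  # recovers the assumed transfer ratio of 0.8
```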
We measured the phase MTR versus pump OSNR at point C and the phase MTR versus SL input power, i.e. power at point D. The measurement versus pump OSNR was done with high SL input power (−7.3 dBm, corresponding to an injection ratio of −27.3 dB) in order to minimize the transfer degradation due to the SL when studying the OSNR dependence. For the same reason the measurement versus SL input power was done with high pump OSNR (42 dB) at point C. Due to over-modulation and the appearance of spurious tones, the power injected into the SL was kept below −5 dBm.
The measured phase MTR versus pump OSNR at point C (bottom axis) and pump power at the pump recovery system input (top axis) is presented in Fig. 8. We see that the single tone phase MTR decreases with increased tone frequency. This is expected from the bandwidth of the FM-to-FM transfer in the SL [24]. We also see that for a fixed tone frequency the phase MTR decreases with decreased pump OSNR. This is due to the ASE noise injected into the SL. The inset shows the dual tone data together with the single tone data. We note that the curves for the dual tone cases are located between the curves for the corresponding single tones. This is what one would expect if the dual tone transfer is treated as independent transfer of the two single frequency tones. In particular we note that the dual tone case with {RF1 = 0.10 GHz, RF2 = 0.32 GHz} is located between the curve for the single tone RF1 = 0.10 GHz and that for the single tone RF1 = 0.30 GHz.
The measured phase MTR versus SL input power is shown in Fig. 9, where the decrease of the single tone phase MTR with increased tone frequency is again very clear. The phase MTR also decreases with reduced power into the SL. The reason for this is the decrease in FM-to-FM transfer bandwidth in the SL with reduced input power. The dual tone cases are located between the curves for the corresponding single tones. For the dual tone transfer, the tone with higher frequency will suffer more from the limited transfer bandwidth and thus the combined reduction in swing will lie between the corresponding single tone cases.
Our measurements have shown that the phase MTR is reduced, over all measured frequencies, both with reduced pump OSNR and with reduced SL input power, with the impact of reducing the SL input power being the stronger of the two. They have also indicated that, from the perspective of phase MTR, dual tone transfer can be treated as independent transfer of two single tone components.
Characterization of pump recovery induced link penalty
The impact of noise generation and phase modulation transfer imperfection on the performance of the PSA/PIA-amplified link can be measured by varying the power into the pump recovery system and then extracting the resulting Q-factor penalty through analyzing the measured BER. By also varying the SL input power, for a fixed power into the pump recovery system, and comparing against the noise and phase transfer characteristics measured in section 3, we can gain additional information as to why these penalties arise.
The measurements were carried out using the same experimental setup as used for the demonstration in section 2, illustrated in Fig. 2. Apart from keeping the SL input power fixed, rather than tuned for lowest BER, the PSA/PIA-amplified link was operated in the same way as for the demonstration measurements. Two tones (0.1 GHz and 0.3 GHz) were used to phase modulate the pump in order to suppress pump SBS in the FOPAs.
The Q-factor penalty, extracted from measured BER, versus SL input power for various pump OSNR values is presented in Fig. 10. For the PSA-case the Q-factor penalty was taken with respect to the BER at −42 dBm received signal power, with high pump OSNR at point A (42 dB) and high SL input power (−5 dBm). For the PIA-case the penalty was taken with respect to the BER at −37.5 dBm received signal power, also with high pump OSNR at point A (42 dB) and high SL input power (−5 dBm).
For the PIA-case there is a large Q-factor penalty at the combination of low pump OSNR and high SL input power. The penalty is reduced both with increased pump OSNR and with reduced SL input power. The penalty reduction with increased pump OSNR is explained by the reduction of phase noise and amplitude noise generated in the pump recovery system, as shown in Fig. 6. The penalty reduction with reduced SL input power can be understood as follows. As the SL input power is reduced, the bandwidth of the FM-to-FM transfer also decreases, as seen in Fig. 9. In practice the SL will work as a phase bandpass filter centered at the pump frequency, with the bandwidth set by the FM-to-FM transfer bandwidth. Therefore, as the power into the SL is reduced, more noise is filtered out and the penalty decreases. With reduced power into the SL (and reduced pump OSNR) the phase MTR is also reduced. Below about −20 dBm SL input power this leads to a sharp penalty onset due to pump SBS (not shown in Fig. 10).
In the PSA-case the Q-factor penalty curve is V-shaped at low pump OSNR (11 dB and 22 dB). In this case there are two effects influencing the penalty. The phase noise filtering in the SL is still an important effect, but the phase MTR is also important, since it impacts the phase-matching in the PSA. The combined effect of these two factors, with phase noise giving a penalty at high SL input powers (> −13 dBm for the 22 dB pump OSNR case) and phase-matching misalignment giving a penalty at low SL input powers (< −13 dBm for the 22 dB pump OSNR case), gives the V-shape. The measurement at 11 dB pump OSNR shows a higher penalty, both at low and high SL input powers, than the measurement at 22 dB pump OSNR, since lower pump OSNR both introduces more noise and degrades the phase MTR, as seen in Fig. 6 and Fig. 8, respectively. For high pump OSNR (32 dB and 42 dB) there is no penalty at high SL input powers, i.e. there is no penalty due to phase noise. There is only a penalty at low SL input power (< −13 dBm) due to imperfect phase-matching in the PSA.
Finally, in Fig. 11 we show the Q-factor penalty versus pump OSNR at point C (bottom axis) and pump power at the pump recovery system input (top axis) for various SL input powers. The Q-factor penalty for the PSA-case was taken with respect to the BER at −42 dBm received signal power and for the PIA-case with respect to the BER at −37.5 dBm received signal power, in both cases with high pump OSNR at point A (45 dB) and high SL input power (−5 dBm). Measurements penalized by SBS, occurring at combined low pump OSNR and low SL input power, were removed from Fig. 11.
In Fig. 11 the impact of phase noise filtering through the SL is again very clear when comparing curves at different SL input powers. The effect of imperfect phase-matching in the PSA is not clearly visible, since the lowest SL input power presented is −15 dBm, just marginally below the −13 dBm where we started to see the effect in Fig. 10. An interesting feature that is clearly visible is that the PSA-case shows less Q-factor penalty than the PIA-case, i.e. the PSA is less sensitive to noise on the recovered pump than the PIA. The reason for this difference between the PSA-case and the PIA-case, also seen in Fig. 10, is not clear.
Based on the results shown in Fig. 10 we can deduce how large a phase MTR is needed for penalty-free pump recovery operation in the PSA-amplified link. For high pump OSNR (32 dB and 42 dB) we saw a penalty onset due to low phase MTR at −13 dBm SL input power. Referring to Fig. 9, which shows phase MTR versus SL input power at 42 dB pump OSNR, we can relate the SL input power to a phase MTR value. In Fig. 9 we can read out that at a SL input power of −13 dBm the phase MTR is approximately 97% for the dual tone case, approximately 98% for the single RF1 = 0.10 GHz tone, and approximately 96% for the single RF2 = 0.30 GHz tone.
We can also deduce how much noise can be tolerated for penalty-free pump recovery operation in the PSA- and PIA-amplified link. In Fig. 11 we see that the penalty onset is at about 25 dB pump OSNR for the PSA-case with −7.6 dBm SL input power. The corresponding value for the PIA-case is about 30 dB. In Fig. 6, showing the phase noise and amplitude noise versus pump OSNR at −7.3 dBm SL input power, we can read the corresponding phase noise and amplitude noise values. At 25 dB pump OSNR (the PSA-case penalty onset) the phase noise is 2.0 degrees and the amplitude noise is 0.03. At 30 dB pump OSNR (the PIA-case penalty onset) the phase noise is 1.4 degrees and the amplitude noise is 0.03.
The penalty-free operating range for the pump recovery system could, for both the PSA- and PIA-amplified links, be extended to include lower pump OSNR values if lower bandwidth pump phase modulation was used. For the PIA-case this would mean that the SL input power could be reduced without introducing a penalty from SBS, thus allowing for better noise filtering. For the PSA-case the effect would be that the penalty onset due to imperfect phase modulation transfer would move to lower SL input power, in turn allowing operation at lower SL input power with better noise filtering through the SL.
We can now explain the Q-factor penalty difference between the PIA- and PSA-case with IL seen at low pump OSNR values in Fig. 4. For the PIA system the penalty originates from SBS, and for the PSA system the penalty is due to the combined effect of noise on the pump and imperfect phase-matching in the PSA.
Alternative SBS suppression techniques, not based on pump phase modulation, would improve the operating limits in both the PSA- and PIA-case, since they would allow for lower bandwidth phase modulation of the pump or, in the extreme case, no pump phase modulation at all. In the case of no pump phase modulation we expect both the PSA and PIA system to be limited by in-band pump noise and the practical problem of keeping the frequency difference between the incoming pump and the SL within the IL locking bandwidth. However, we have not observed any penalty from in-band noise in the measurements we have presented here.
Conclusions
We have demonstrated and experimentally investigated a hybrid IL/EDFA-based pump recovery system for PSA-amplified links. Recovery of a pump wave, phase modulated by two sinusoidal RF-tones at 0.1 GHz and 0.3 GHz for SBS suppression, with 64 dB overall pump amplification, from −30 dBm to 34 dBm, is shown to have negligible penalty when measuring BER on a 10 GBd DQPSK signal transmitted through a PSA-amplified link. With the assumption of 10 dBm pump power launched into the fiber, this implies that the pump recovery system can handle up to 40 dB of pump attenuation. Theoretical estimates indicate that there will be no significant penalties associated with the pump recovery when replacing the 40 dB lumped loss with a 200 km dispersion compensated fiber span. Preliminary results have shown that even higher pump powers, up to 20 dBm, can be launched into the link without significant penalty [28]. This indicates that the pump recovery system could potentially handle even longer spans.
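The span-loss arithmetic behind these numbers can be sketched as follows (the 0.2 dB/km fiber loss figure is a typical value we assume for illustration, not one quoted in the text):

```python
def tolerable_attenuation_db(launch_dbm, min_recoverable_dbm):
    """Pump attenuation the recovery system can absorb, in dB."""
    return launch_dbm - min_recoverable_dbm

# 10 dBm launched, recovery demonstrated from -30 dBm input:
budget = tolerable_attenuation_db(10.0, -30.0)   # 40 dB
# At an assumed ~0.2 dB/km fiber loss this corresponds to a ~200 km span:
span_km = budget / 0.2
```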
Measurements, based on homodyne coherent detection and constellation analysis, show that amplitude squeezing, amplitude noise filtering, and phase noise filtering through the SL can explain the superior performance of the hybrid IL/EDFA-based pump recovery system compared to a simple EDFA-based system. The measurements also showed that the phase MTR of the pump recovery system is reduced both with reduced SL input power (a strong dependence) and with reduced pump OSNR (a weaker dependence), and indicated that dual tone transfer can be treated as independent transfer of two single tones.
The impact of noise generation in, and phase modulation transfer imperfections through, the pump recovery system on the performance of the PSA/PIA-amplified link was investigated and quantified. It was found that the PIA-amplified link is penalized directly by noise on the recovered pump wave and also by reduced phase MTR through reduced pump SBS suppression. The PSA-amplified link is penalized directly both by noise on the recovered pump and by degraded phase MTR. Our measurements show that a dual tone phase MTR of approximately 98% is needed to avoid a penalty due to mismatch of the phase-matching condition in the PSA. It is expected that lower bandwidth pump modulation would result in a larger operating range for the pump recovery system, in both the PSA- and PIA-amplified link. If no tones had to be applied, we infer the operating range to be limited mainly by in-band pump noise and the practical problem of keeping the SL locked to the incoming pump wave. Our measurements also showed that the PSA-amplified link is less sensitive to noise on the recovered pump than the PIA-amplified link. However, it is not clear why this is the case and further investigation is needed.
Our demonstrations have been carried out on a specific system transmitting single channel 10 GBd DQPSK data. However, we expect the pump recovery system operation to be independent of the number of channels and, to a large extent, of the modulation format of the transmitted data. Furthermore, the pump recovery system could be implemented in a multi-span scheme where one pump recovery system is included for each span. However, with cascaded pump recovery systems the accumulation of amplitude and phase noise as well as the loss of phase modulation depth must be considered. According to Fig. 6, amplitude noise is heavily suppressed in the pump recovery system and should therefore not accumulate in a cascaded scheme. Phase noise, on the other hand, will accumulate and eventually cause a penalty. The effect of non-ideal phase MTR will also accumulate, causing the phase modulation depth to be successively reduced, which will cause a penalty. Both the accumulation of phase noise and the reduction of phase modulation depth can be decreased if the pump modulation bandwidth is reduced. Recent work has shown promising results towards achieving high-gain PSAs without pump phase modulation [31]. We expect that the presented results provide a viable path to enabling multi-span WDM compatible PSA-amplified transmission links over large spans.
Fig. 1. Schematic illustration of a phase-sensitive amplified (PSA) transmission link based on the copier-PSA scheme.
Fig. 4. Q-factor penalty calculated from measured bit error ratio versus pump optical signal-to-noise ratio (OSNR) at point A (bottom axis) and pump power at the pump recovery system input (top axis), comparing a phase-insensitive amplifier (PIA) and phase-sensitive amplifier (PSA) amplified receiver with and without injection locking (IL) in the pump recovery system.
Fig. 5. Homodyne coherent detection setup used for characterizing noise generation in the pump recovery system and ASE noise transfer through the slave laser. VOA: variable optical attenuator, EDFA: erbium-doped fiber amplifier, PC: polarization controller, AOM: acousto-optic modulator, IL: injection locking.
Fig. 9. Measured phase modulation transfer ratio (MTR) versus slave laser input power (power at point D) for various frequencies as given by the legends. The pump optical signal-to-noise ratio (OSNR) at point C was kept at 42 dB. RF: radio frequency.
Fig. 11. Q-factor penalty calculated from measured bit error ratio versus pump optical signal-to-noise ratio (OSNR) at point C (bottom axis) and pump power at the pump recovery system input (top axis). The measurements were done at various slave laser input powers, as given by the legend. The phase-insensitive amplifier (PIA) gain and phase-sensitive amplifier (PSA) gain were kept at 20 dB.
Fig. 10. Q-factor penalty calculated from measured bit error ratio versus slave laser input power (power at point D). The measurements were done at various pump optical signal-to-noise ratios (OSNR) at point C, as given by the legend. The phase-insensitive amplifier (PIA) gain and phase-sensitive amplifier (PSA) gain were kept at 20 dB.
Continuous gas-phase synthesis of core–shell nanoparticles via surface segregation
Synthesis methods for highly functional core@shell nanoparticles with high throughput and high purity are in great demand for applications including catalysis and optoelectronics. Traditionally, chemical synthesis has been widely explored, but recently, gas-phase methods have attracted attention since such methods can provide a more flexible choice of materials and altogether avoid solvents. Here, we demonstrate that Cu@Ag core–shell nanoparticles with well-controlled size and compositional variance can be generated via surface segregation using spark ablation with an additional heating step, which is a continuous gas-phase process. The characterization of the nanoparticles reveals that the Cu–Ag agglomerates generated by spark ablation adopt core–shell or quasi-Janus structures depending on the compaction temperature used to transform the agglomerates into spherical particles. Molecular dynamics (MD) simulations verify that the structural evolution is caused by heat-induced surface segregation. With the incorporated heat treatment, which acts as an annealing and equilibrium cooling step after the initial nucleation and growth processes in the spark ablation, the presented method is suitable for creating nanoparticles with uniform size, uniform composition, and a uniform bimetallic configuration. We confirm the compositional uniformity between particles by analyzing the compositional variance of individual particles rather than presenting an ensemble average of many particles. This gas-phase synthesis method can be employed for generating other bi- or multi-metallic nanoparticles whose structural configuration can be predicted from the surface energy and atomic size of the elements.
Introduction
Recently, a significant amount of research effort has been devoted to the production of core-shell nanoparticles, which are composed of an inner core material coated by a shell of a different material. Such attention to core-shell nanoparticles arises from the fact that they can exhibit enhanced physical and/or chemical properties. 1-3 Furthermore, core-shell particles with distinctly new properties compared to those of the constituent materials can be designed by tuning, for example, their size, shell thickness, and structures. [4][5][6][7] A large number of research projects are underway to fabricate highly functional core-shell materials for applications in various fields, including optoelectronic devices, 8,9 biomedical imaging, 10,11 catalysis, 12,13 and plasmonics. 14,15 Currently, chemical synthesis techniques such as sol-gel, 16,17 solvothermal, 18 seed-mediated growth, 19 and cation exchange 20 are the most popular methods for fabricating core-shell nanoparticles. However, interface and surface contaminations are often an unavoidable issue in the multiple-step, solution-based approaches. These impurities inevitably make solution-based processes time consuming, as many steps are required to remove contaminants. The process of removing ligands also introduces uncertainty regarding the final size and structure of the nanoparticles. 21,22 Contrary to the widely popular chemical synthesis, significantly less attention has been paid to solvent-free gas-phase synthesis methods, which offer high purity and high throughput in nanoparticle production. Recently, gas-phase synthesis techniques based on low-pressure multi-magnetron gas aggregation sources, where one target acts as the source of the core material and the others as sources of one or more coating materials, have enabled the fabrication of core-shell particles with tunable sizes and shapes. 23,24
Apart from the demanding high vacuum requirements, 25 these methods often suffer from nucleation of pure-element byproducts, [24][25][26] and achieving uniformity in bimetallic morphology is challenging, as nanoparticles generated by non-equilibrium, fast-kinetics processes that do not include an additional annealing step often include random and unpredictable metastable phases. 27,28 Having control over size, composition, and morphology is desirable, as it enables investigations of the effects of nanoparticle properties on various applications. We note that in this article we use the term 'metastable' for any nanoparticle configurations that are not in the thermodynamically stable global energy minimum.
Here, we present a continuous gas-phase process based on spark ablation 29 with the capability of creating uniformly structured core-shell bimetallic nanoparticles with precisely controlled size and composition, free of other random metastable configurations. Spark ablation is a gas-phase synthesis technique with an appealingly simple design that utilizes a high voltage spark discharge between two electrodes acting as the material source for the synthesized nanoparticles. It has been used to create various types of materials such as semiconducting nanoparticles 30 and composite metal nanoparticles. 31 Similar to a related technique known as arc discharge, [32][33][34][35] it is an environmentally-friendly, inexpensive alternative to chemical synthesis techniques and offers a continuous production route at atmospheric pressure. The technique can readily be upscaled to mass-production by placing several electrode pairs in parallel. Recently, spark ablation has been used to produce Ag@Au and Au@Ag core-shell nanoparticles via a condensation mechanism which requires modification of the conventional spark ablation setup for a separate coating step. 36 In this study, we exploit the surface segregation phenomenon to generate core-shell bimetallic nanoparticles using spark ablation in a continuous process without an additional coating step. The surface segregation phenomenon refers to the enrichment of one component of a mixture in the surface region. It is generally agreed that surface segregation depends on an interplay between the atomic radii, cohesive energy, surface energy, and electronegativity of the core and shell materials. 37 As a rule of thumb, one expects that metals with smaller atomic radii and larger surface energies would tend to occupy the core region.
Utilizing the surface segregation mechanism, one can produce core-shell nanostructures by simply evaporating both core and shell materials simultaneously in the gas phase, rendering the process 'continuous' without the need of a separate coating process. [38][39][40][41][42] Note that in situ heat treatment in gas phase synthesis methods has been reported to be efficient for phase transformation. 43 In our setup, heat-induced surface segregation occurs when the agglomerates of bimetallic nanoparticles, synthesized by spark ablation, pass through a tube furnace during which the agglomerates become spherical core-shell structures.
In generating bimetallic core-shell nanoparticles using spark ablation via surface segregation, we have chosen Cu-Ag as our model system as the atomic radius mismatch is relatively high, and the surface energy difference is sufficient (1210 mJ m−2 for Ag and 2130 mJ m−2 for Cu). 44 Additionally, Cu-Ag is a well-studied immiscible material system. We have investigated the morphology, composition, and inter-particle heterogeneity of the generated Cu@Ag core-shell nanoparticles by scanning transmission electron microscopy (STEM) and energy-dispersive X-ray spectroscopy (EDX). To provide more in-depth insight into the structural evolution of Cu-Ag nanoparticles during the heating and cooling processes, we also conducted molecular dynamics (MD) simulations. The numerical modeling corroborates our experimental results that the compaction temperature influences the nanoparticle's final structure and that the core-shell formation is attributed to heat-induced surface segregation.
In addition to the capability of generating uniform core-shell nanoparticles with well-controlled size and composition, exploiting surface segregation together with the spark ablation method is further beneficial. As the core@shell nanoparticles generated via the presented synthesis method have already undergone a heating cycle, they are expected to exhibit high structural stabilities at elevated temperatures, which is supported by the MD simulations. It is well-known that nanoparticles with equilibrium shape and narrow size distributions are favorable for suppressing sintering. 45 The suppression of sintering makes the method appealing for catalysis applications, where the structural stability of bimetallic nanoparticles at elevated temperatures is essential.
The gas-phase synthesis method presented here can be employed for other bi- or multi-metallic systems with sufficient differences in the surface energy and atomic radius of the elements for generating core-shell nanoparticles. However, this method is not limited to the production of core-shell nanoparticles. The same method can also be used to create other structures (e.g., quasi-Janus or alloy), the only requirement for designing the desired structures being knowledge of the surface energy and atomic size of the constituent elements.
Nanoparticle generation
First, the spark ablation system (Fig. S1 †), where spark ablation takes place, was evacuated with a rotary pump. After reaching a pressure of lower than 1 mBar, N2 : H2 carrier gas (95% : 5%; purity 99.9999%; Linde) was let in at a flow rate of 1.68 L min−1, regulated with mass flow controllers (Bronkhorst, El-Flow-Select), and the pressure was kept at 1015 mBar with a pressure controller (Bronkhorst, El-Press-Select). A high-voltage, high power supply (Technix, Model CCR15-P-750) was used to charge a 20 nF capacitor bank shunted to the metallic electrodes enclosed in a chamber flushed with the carrier gas. The electrodes are separated by an air gap of about 2 mm. A grounded pure Cu electrode (GoodFellow, >99.99%) and a biased pure Ag electrode (GoodFellow, >99.95%) were used in this work. At a specific voltage over the capacitors and electrode gap, the carrier gas in the electrode gap breaks down into a conducting plasma, carrying an oscillating current from the discharging capacitor bank that ablates material from the electrodes' surfaces. The ablated material vapors nucleate into small (<10 nm) singlet nanoparticles that grow by full coalescence from collisions until they reach a diameter where further collisions between the primary particles lead to the formation of fractal-like agglomerates by coagulation and partial sintering. 46 After a few ms, the electrode gap regains its resistive properties, and the charge cycle is repeated. The breakdown voltage was monitored and set to 3.0-4.0 kV implicitly by the electrode gap.
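For orientation, the energy stored in the capacitor bank per discharge follows directly from E = CV²/2; note that only a fraction of this stored energy actually goes into ablating electrode material:

```python
def spark_energy_j(capacitance_f, voltage_v):
    """Energy stored in the capacitor bank per spark, E = C * V^2 / 2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# 20 nF bank at the 3.0-4.0 kV breakdown voltages used here:
e_min = spark_energy_j(20e-9, 3.0e3)  # ~0.09 J per spark
e_max = spark_energy_j(20e-9, 4.0e3)  # ~0.16 J per spark
```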
Nanoparticle compaction and size selection
After generation, the Cu-Ag agglomerates were carried by the carrier gas downstream in the setup for subsequent thermal treatment (compaction) and size selection. First, the particles were assigned a known charge distribution in a β-emitting 63Ni neutralizer. This enables subsequent size selection via electrical mobility using a tandem differential mobility analyzer (DMA) setup, 47 consisting of two DMAs (DMA1: TSI 3081 Long; DMA2: custom Vienna type 48 ) with 10 L min−1 sheath flows. Inside the DMAs, an electric field classifies particles of a particular electrical mobility (mobility in an electric field), a function of diameter and charge. The agglomerates were compacted to spherical particles in a tube furnace (Lenton LTF) positioned between DMA1 and DMA2. After size selection in DMA2, the particles were either counted with an electrometer (TSI 3086B) or deposited with an electric field in a custom electrostatic precipitator (ESP). [49][50][51] The electrical mobility range of selected particles is proportional to Qa/Qsh, where Qa is the carrier gas flow rate (1.68 L min−1) and Qsh is the DMA sheath flow rate (10 L min−1), equivalent to a flow ratio of ca. 1/6 and a size distribution width of the size selected aerosol nanoparticles with a diameter Dp of ±1/6 Dp. 52 Using the tandem DMA setup, even narrower size distributions can be obtained, as shown in Fig. S2 † where compacted CuAg aerosol nanoparticles were deposited with an electrical mobility diameter of 30 nm and measured from SEM micrographs.
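The quoted ±1/6 Dp selection width follows from the flow ratio; a minimal sketch of that relation (treating electrical mobility as mapping linearly onto diameter for this estimate, which is a simplification):

```python
def dma_size_window_nm(d_p_nm, q_aerosol_lpm, q_sheath_lpm):
    """Approximate DMA size-selection window: D_p * (1 +/- Q_a / Q_sh)."""
    half = (q_aerosol_lpm / q_sheath_lpm) * d_p_nm
    return d_p_nm - half, d_p_nm + half

# The flows used here (1.68 and 10 L/min) give roughly a +/-1/6 window:
lo, hi = dma_size_window_nm(30.0, 1.68, 10.0)  # ca. (25, 35) nm
```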
Particle characterization
Electron microscopy and elemental analysis were used to characterize the Cu@Ag nanoparticles offline. Only DMA2 was used (DMA1 was bypassed) to ensure a shorter deposition time at the cost of a broader particle size distribution. Particles size selected with DMA2 at an electrical mobility diameter of 30 nm were deposited in the ESP, set to 6-10 kV, on holey carbon film coated Au TEM grids (Agar Scientific). The ESP chamber was purged in N2 (purity 99.999%) before and after deposition. Depositing 30 nm particles ensured a sufficient stability necessary for acquiring STEM-EDX maps with enough counts for detailed analysis. The samples were handled in air before STEM imaging (JEOL 3000F operated at 300 kV). In STEM mode, the particles were imaged with a high-angle annular dark-field (HAADF) detector. Elemental distribution maps (EDX maps) were also obtained in STEM mode with the coupled EDX spectrometer (Oxford Instruments), and the data was analyzed and processed in the INCA software (Oxford Instruments) and with the Hyperspy package 53 in Python to extract the composition of Ag-rich and Cu-rich phases. Additional TEM-EDX statistics on 30 single Cu-Ag nanoparticles per furnace temperature, at temperatures of 750 °C, 850 °C and 950 °C, were acquired in a 300 kV Hitachi TEM with a similar EDX spectrometer, and the data was analyzed with the associated Aztec software (Oxford Instruments).
Molecular dynamics simulations
Embedded-atom method (EAM) 54 potentials for the Cu-Ag system developed by Williams et al. 55 were employed in the molecular dynamics simulations. Spherical Cu and Ag nanoparticles were constructed from their perfect face-centered cubic (FCC) crystals with particle diameters ranging from 2.5 to 4.2 nm. The unsupported Cu and Ag nanoparticles were equilibrated separately at 27 °C for 100 ps and were subsequently placed next to each other in a nonperiodic vacuum cell.
This initial relaxation process leads to a Cu-Ag aggregate that is a small representation of the metastable aggregates created by the fast quenching process in spark ablation. The equations of motion were integrated by the velocity-Verlet algorithm 56 with a time step of 1 fs. To simulate the compaction process in the furnace, the aggregate of Cu and Ag was continuously heated up to 750 °C, 850 °C, and 950 °C at a heating rate of 0.13 °C ps−1. The system was then cooled at a cooling rate of 0.13 °C ps−1 and equilibrated for 100 ps once the temperature reached 27 °C. Note that a combination of MD and Monte Carlo (force-bias method) simulations was employed for the nanoparticle compacted at 850 °C, as its structure did not reach a crystalline state in the MD simulation. The canonical ensemble (i.e., NVT) was employed with a Nosé-Hoover thermostat for temperature control. Additional simulations for larger particles (6 nm and 10 nm in diameter) were carried out under the same simulation setup. All the simulations were performed using the LAMMPS 57 code, and PyMOL 58 and OVITO 59 were used for visualizations. The crystallinity of the simulated nanoparticles was analyzed using polyhedral template matching. 59,60
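As a quick check of the computational cost implied by these parameters (the step counts are our own back-of-the-envelope figures, not stated in the text):

```python
def ramp_steps(t_start_c, t_end_c, rate_c_per_ps, dt_fs=1.0):
    """Number of MD timesteps for a linear temperature ramp."""
    ramp_time_ps = abs(t_end_c - t_start_c) / rate_c_per_ps
    return int(round(ramp_time_ps * 1000.0 / dt_fs))

# Heating 27 C -> 950 C at 0.13 C/ps with a 1 fs timestep takes
# 7100 ps, i.e. about 7.1 million integration steps (same again to cool).
n_heat = ramp_steps(27.0, 950.0, 0.13)
```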
Compaction behavior of the Cu-Ag nanoparticles
The particles generated in the spark-discharge chamber are fractal-like agglomerates consisting of primary particles in the 2-10 nm size range. It has been reported that Cu-Ag nanoparticles generated from spark ablation of sintered Cu-Ag electrodes show an increase in Cu-Ag solubility on the nanoscale despite their intrinsic immiscibility. 61 Similarly, it has been shown that primary particles generated from two different immiscible metal electrodes can form mixed crystalline phases given a sufficiently fast quenching. 62 Thus, although it is challenging to correctly characterize the composition and morphology of the primary particles due to their small size, we believe that there is a high likelihood that the primary particles in the Cu-Ag nanoparticle agglomerates created in the spark ablation are binary mixtures of Cu and Ag. As the agglomerates approach the tube furnace, segregation is expected to take place due to increased atomic diffusion in the heating process. We discuss this further in a later section. The Cu-Ag particle agglomerates become compacted to spherical particles as they pass through the tube furnace. By keeping DMA1 at a fixed electric mobility diameter of 35 nm and scanning the electric mobility diameter of DMA2, the compaction behavior of the Cu@Ag nanoparticles was obtained over a tube furnace temperature range from room temperature to 1000 °C (Fig. 1). Each data point in Fig. 1 corresponds to the electric mobility diameter associated with the maximum particle concentration for that temperature.
When the furnace was set to room temperature, the mobility diameter of the nanoparticles scanned by DMA2 coincides with that of DMA1. This implies that there is no morphological change in the nanoparticles at that temperature. As the furnace temperature increases, however, the mobility diameter decreases, with more noticeable changes at 400-600 °C. In this temperature range, the structural evolution from a clustered, fractal particle morphology to a fully compacted, spherical particle takes place. [63][64][65] Spherical particles are observed at ca. 600 °C, which is congruent with previously observed compaction temperatures of metallic aerosol nanoparticles at 1/3-2/3 of the bulk melting temperature in kelvin, 66 which for Cu and Ag are 1357.75 K (1084.6 °C) and 1234.95 K (961.8 °C), respectively. 67 At higher temperatures (>600 °C), little to no further compaction occurs, as indicated by the first plateau at 600-800 °C. This is supported by the spherical morphology of the Cu@Ag particles in the STEM-EDX maps at different temperatures shown in Fig. 2. Although the Cu@Ag particles become more or less spherical at 600 °C, internal restructuring processes are expected to continue in the particles at higher temperatures. 66,68 Over the range of about 900-950 °C, a transition to a second mobility diameter plateau occurs. We attribute this transition to increased Ag evaporation and depletion from the surface of the particles. A high volatility of Ag in nanostructures has previously been reported from in situ STEM studies by Lu et al. 69 Additionally, the Ag depletion is corroborated qualitatively by evaporation rates predicted by the Knudsen equation (SE1), as plotted in Fig. S3 in the ESI. † Above 1000 °C, we expect the material evaporation rate of both Ag and Cu to increase further and hence a continued reduction in the electric mobility diameter in Fig. 1. The consequent Cu enrichment will be discussed further in connection with the compositional analysis in the following section.
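The 1/3-2/3 rule of thumb cited above can be evaluated directly for Cu and Ag, using the bulk melting points quoted in the text:

```python
def compaction_window_c(t_melt_k):
    """Rule-of-thumb aerosol compaction range: 1/3 to 2/3 of the bulk
    melting temperature in kelvin, converted to Celsius."""
    return (t_melt_k / 3.0 - 273.15, 2.0 * t_melt_k / 3.0 - 273.15)

cu_lo, cu_hi = compaction_window_c(1357.75)  # ca. 179-632 C for Cu
ag_lo, ag_hi = compaction_window_c(1234.95)  # ca. 138-550 C for Ag
# Both upper bounds sit near the ~600 C where full compaction is observed.
```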
Morphology and composition of the Cu-Ag nanoparticles
STEM-EDX maps of single nanoparticles compacted at three different temperatures (750 °C, 850 °C and 950 °C) were obtained to determine the Cu and Ag distribution and are shown in Fig. 2. We observe two distinct morphological phases of Cu@Ag nanoparticles in the STEM-EDX maps. When compacted at 750 °C, the particles adopt a quasi-Janus structure. However, at 850 °C, the EDX maps clearly indicate a core@shell morphology. In all the STEM-EDX maps in Fig. 2, it appears that Ag is present in Cu-rich parts and vice versa. Using non-negative matrix factorization (NMF) with the Python library Hyperspy, 53 we were able to separate the EDX maps into Cu-rich and Ag-rich components for particles synthesized at 750 °C, 850 °C and 950 °C for detailed quantification of the Cu-rich and Ag-rich segments (Fig. S4-S6 †). The atomic compositions in the Cu-rich and Ag-rich parts of particles synthesized at 750 °C (quasi-Janus particles), 850 °C and 950 °C (core@shell particles) are shown in Table 1. Moreover, the NMF spectral components in Fig. S4-S6 † reveal little to no oxygen in the Cu-rich cores, while a small oxygen signal was detected in the Ag-rich shell for the particles compacted at 850 °C and 950 °C (Fig. S5 and S6 †). This signifies a resistance to oxidation, possibly due to the high surface content of Ag. Although the samples were handled in air prior to STEM-EDX, the addition of 5% H2 to the carrier gas has been shown to be beneficial for reducing oxidation of particles synthesized by SDG. 70 We detected clear signs of oxidation only after several weeks of ambient storage.
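The NMF separation was done with Hyperspy; purely to illustrate the underlying decomposition (this is not Hyperspy's implementation), a minimal multiplicative-update NMF that factors a stack of spectra into non-negative components and loadings might look like:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Minimal multiplicative-update NMF: V (pixels x channels) ~ W @ H,
    with W, H >= 0. H holds k spectral components, W their loadings."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # Lee-Seung multiplicative updates (Frobenius-norm objective)
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

For an EDX spectrum image, V would be the flattened map (one row per pixel) and k = 2 would target the Cu-rich and Ag-rich components.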
Additionally, the average compositions, as determined by TEM-EDX of 30 particles per temperature at the same three temperatures (750 °C, 850 °C and 950 °C), are given in Fig. 3. It shows that the Cu-Ag particle composition range is narrow at all three temperatures, with a standard deviation of 5-7 at%. Clearly, the particles become enriched with Cu at 950 °C, which correlates well with the decrease in mobility diameter observed in the compaction study above 900 °C (Fig. 1) that was attributed to Ag evaporation. Typically, the composition of gas-phase synthesized nanoparticles is investigated by interrogating a large number of particles simultaneously. 25,26,71,72 This approach provides a good sample of ensemble-averaged properties of many particles, but cannot provide information on the compositional variance between particles. Indeed, to the best of our knowledge, the compositional uniformity between individual bimetallic nanoparticles synthesized from coagulating and/or coalescing monometallic particles in the gas phase is not well documented. Krishnan et al. 73 reported a very low interparticle compositional variance for single nanoparticles synthesized from a sectional Mo-Cu sputtering target, but did not report the number of particles interrogated by EDX. This is, to the best of our knowledge, the first time that the compositional variance of individual particles synthesized by sintering of agglomerates formed by coagulation of bimetallic species has been studied, and it is of relevance for multiple gas-phase techniques for the synthesis of bimetallic nanoparticles from coagulating and coalescing particles. A quasi-Janus or crescent morphology observed at a compaction temperature of 750 °C has been previously reported for this material system, [74][75][76] although, to the best of our knowledge, not for Cu@Ag particles synthesized in the gas phase. Langlois et al.
75 studied the annealing of Cu@Ag core-shell nanoparticles on a substrate and observed that the structure transformed to a Janus-like configuration when the amount of Ag in a particle is large. They reported that a quasi-Janus configuration is adopted beyond a critical Ag shell thickness of 3-4 nm. A global optimization study on small (100 to 300 atoms) Cu-Ag particles of varying composition supported on MgO(001) also showed the preference of Ag to migrate to the surface, with quasi-Janus morphologies appearing at higher Ag concentrations. 77 Comparing with our results in Fig. 2 and 3, a simple geometrical derivation for a spherical core-shell particle of uniform core and shell compositions (eqn (E6) in ESI†) suggests an Ag shell thickness of ca. 4.9 nm, 4.8 nm and 1.8 nm for the particles compacted at 750 °C, 850 °C and 950 °C, respectively. In our study, we do not find a clear relation between the element quantity and particle morphology, and hence the observation made by Langlois et al. is not supported by our experiments. The particles studied by that group 75 were, however, synthesized in a fundamentally different way via evaporation and thermal dewetting, where the substrate may play a significant role in the formation and the thermodynamic stability of the particles. In our study, where Cu-Ag agglomerates compact directly in the gas phase, we determine the compaction temperature to be the most crucial variable in deciding the morphology of Cu@Ag core-shell nanoparticles.
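The geometrical shell-thickness estimate referenced above (eqn (E6) in the ESI) can be sketched as follows, assuming a spherical particle with a pure Cu core, a pure Ag shell, and bulk molar volumes; the 30 nm particle diameter used in the example is an illustrative assumption, not a value taken from the text:

```python
import math

# Molar volumes (cm^3/mol) from bulk molar masses and densities -- an approximation at the nm scale.
MOLAR_VOLUME = {"Cu": 63.546 / 8.96, "Ag": 107.868 / 10.49}

def ag_shell_thickness(d_particle_nm, x_ag):
    """Shell thickness (nm) of a sphere with a pure Cu core and pure Ag shell at Ag atomic fraction x_ag."""
    v_cu = (1.0 - x_ag) * MOLAR_VOLUME["Cu"]   # core volume, up to a common mole factor
    v_ag = x_ag * MOLAR_VOLUME["Ag"]           # shell volume, same factor
    r = d_particle_nm / 2.0
    r_core = r * (v_cu / (v_cu + v_ag)) ** (1.0 / 3.0)
    return r - r_core

t_61 = ag_shell_thickness(30.0, 0.61)  # 61 at% Ag on an assumed 30 nm sphere
```

Under these assumptions, 61 at% Ag on a 30 nm sphere gives a shell of roughly 4.9 nm, i.e. the same order as the values quoted above, while lowering the Ag fraction to 24 at% thins the shell substantially.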
Numerous theoretical works have been reported on the phase stability of Janus, core-shell, and alloyed Cu-Ag nanoparticles. 78 The same model, however, also predicts the preference of an alloyed composition over the core-shell morphology for particles with size and composition similar to those synthesized here, which we do not observe. Another thermodynamic model based on the surface energy differences of Ag and Cu, 76 where the authors synthesized crescent (quasi-Janus) and Cu@Ag core-shell nanoparticles by a solution process, suggests that a quasi-Janus morphology is always preferred but that the energetic difference between the two morphologies decreases with increasing Ag content, making the core@shell morphology more likely for particles with a high Ag fraction. This proposed trend is not reflected in our results, as particles synthesized at 750 °C and 850 °C have a similar composition yet a different morphology, i.e., the particles compacted at 750 °C adopt a quasi-Janus structure, while the particles compacted at 850 °C adopt a core@shell morphology. Additionally, the particles compacted at 950 °C also adopt a core@shell morphology with significantly less Ag content (<25 at%). We note that the model proposed by Osowiecki et al. 76 should be accompanied by modeling of the significant strain energy present at the Cu-Ag interface due to the lattice mismatch of 13%. 78,84,85 Hence, it is clear that a suitable model for the synthesis of core-shell structures via equilibrium gas-phase processes is lacking, which we address in the next section.
Molecular dynamics simulations
We performed molecular dynamics simulations to obtain further insight into the structural evolution of Cu-Ag nanoparticles during the heating and cooling processes. We carried out simulations based on well-established MD routines for the Cu-Ag nanoparticle system 27,[86][87][88] to demonstrate how quasi-Janus and core-shell structures can form from an aggregate by adjusting only the compaction temperature. We mimic the experimental conditions by including both the heating and cooling processes that correspond to entering and exiting the tube furnace. First, we present the simulation results and analysis of small particles (~4 nm in diameter). In order to employ the MD results in explaining the experimental results on significantly larger particles, later in the section we discuss the simulation results on larger particles (6 nm and 10 nm in diameter).
The TEM-EDX analysis shows that the atomic percentage of the Cu-Ag nanoparticles compacted at 750 °C was Cu:Ag = 39:61 (see Fig. 3). For this atomic ratio, the compaction between a Cu nanoparticle with a diameter of 3.0 nm and an Ag nanoparticle with a diameter of 3.9 nm was simulated.
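The simulated diameter pairing (3.0 nm Cu with 3.9 nm Ag) is consistent with converting the target atomic ratio into sphere volumes via bulk molar volumes; a sketch under that assumption (bulk densities are an approximation at the nanometre scale):

```python
MOLAR_MASS = {"Cu": 63.546, "Ag": 107.868}  # g/mol
DENSITY = {"Cu": 8.96, "Ag": 10.49}         # g/cm^3, bulk values

def partner_diameter(d_cu_nm, n_cu, n_ag):
    """Ag sphere diameter giving atomic ratio n_cu:n_ag against a Cu sphere of diameter d_cu_nm."""
    v_cu = MOLAR_MASS["Cu"] / DENSITY["Cu"]  # molar volume of Cu
    v_ag = MOLAR_MASS["Ag"] / DENSITY["Ag"]  # molar volume of Ag
    vol_ratio = (n_ag * v_ag) / (n_cu * v_cu)
    return d_cu_nm * vol_ratio ** (1.0 / 3.0)

d_ag = partner_diameter(3.0, 39, 61)  # ~3.9 nm, matching the simulated pair
```

The same relation applied to the 950 °C composition (Cu:Ag = 76:24) with a 3.7 nm Cu sphere yields an Ag partner of roughly 2.9 nm, matching the pair simulated for that case.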
The structural evolution of a Cu-Ag nanoparticle at this temperature is shown in Fig. 4a and b, along with the evolution of crystallinity (Fig. 4c) and the change in the average potential energy per atom (Fig. 4d). As the particles are heated from 27 °C to 750 °C, surface atoms of Ag start diffusing onto the surface of Cu, as expected from the lower surface energy and cohesive energy of Ag. It was observed that the Ag atoms do not readily diffuse into the Cu core region. At 750 °C, the system forms a quasi-Janus structure with Ag atoms on the surface of Cu. As the temperature decreases, the Janus structure remains unchanged except for the continued diffusion of Ag atoms on the Cu surface. When the system is cooled back to a temperature of 27 °C, an overall crystalline quasi-Janus Cu-Ag nanoparticle forms. A few Cu atoms diffuse to the Ag side. This overall quasi-Janus morphology in the MD simulation agrees with the STEM-EDX maps (Fig. 2) of the Cu-Ag nanoparticles compacted at the same temperature. It is further supported by the quantification of the NMF of the same EDX maps (Table 1), where the Cu-rich phase contains little Ag and some Cu has incorporated into the Ag-rich phase. While an increased solubility of Cu in Ag has been observed previously upon quenching Cu-Ag mixed nanoparticles in inert gas condensation, 89 this is, to the best of our knowledge, the first time it has been observed in a comparatively slower cooling cycle. To determine the melting point of the simulated Cu-Ag nanoparticles, the average potential energy per atom as a function of temperature is plotted (Fig. 4d). Additionally, single-particle MD simulations were conducted on a single Cu and a single Ag particle to obtain references for the melting behavior of Cu-Ag bimetallic particles (Fig. 4d). The melting point is generally defined as the temperature at which the potential energy increases abruptly due to the absorption of the latent heat of fusion.
90,91 It is well known that the melting point of nanoparticles is size-dependent, that is, it decreases as the size of the nanoparticle decreases. 92 Fig. 4d shows the change in potential energy of a 3.0 nm single Cu and of a 3.9 nm single Ag nanoparticle. The average potential energy per atom increases linearly with increasing temperature. In Fig. 4d, the potential energy per atom of both Cu and Ag does not show any abrupt jumps. This implies that at 750 °C, neither Cu nor Ag melts. Crystallinity analysis 59,60 of the MD results also indicates that each nanocluster remains crystalline during the heating and cooling, as shown in Fig. 4c. It is clearly seen that at 750 °C, the crystallinity of both Cu and Ag remains FCC during the coalescence. This supports the assumption that the quasi-Janus structure is created by surface diffusion.
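The melting-point criterion used here (an abrupt jump in potential energy due to the latent heat of fusion) can be located numerically as the largest single-step increase in U(T); a toy sketch on synthetic data (all numbers illustrative, not from the simulations):

```python
def melting_temp(temps, pot_energy):
    """Locate the largest single-step jump in potential energy vs. temperature."""
    jumps = [pot_energy[i + 1] - pot_energy[i] for i in range(len(temps) - 1)]
    i_max = max(range(len(jumps)), key=lambda i: jumps[i])
    return 0.5 * (temps[i_max] + temps[i_max + 1])  # midpoint of the jump interval

# Synthetic U(T): linear heating with a latent-heat step near 850 (made-up values, eV/atom).
temps = list(range(0, 1001, 10))
u = [-3.0 + 2e-4 * t + (0.08 if t >= 850 else 0.0) for t in temps]
tm = melting_temp(temps, u)
```

In practice the jump in MD data is smeared over a finite temperature window, so the step detection would be applied to smoothed data, but the principle is the same.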
For the Cu-Ag nanoparticles compacted at 850 °C with an atomic ratio of Cu:Ag = 39:61 (see Fig. 3), a Cu nanoparticle with a diameter of 3.0 nm and an Ag nanoparticle with a diameter of 3.9 nm were simulated and are shown in Fig. 5a-d. At 850 °C, both the single Ag and the single Cu nanoparticle melt, as indicated by the abrupt jump in the average potential energy per atom (Fig. 5d), and a similar jump in the potential energy is also observed for a Cu-Ag nanoparticle. Melting of the nanoparticle at 850 °C was further supported by crystallinity analysis. As shown in Fig. 5c, the initial FCC structures are no longer observed and become amorphous at 850 °C. As the temperature decreases at a cooling rate of 0.13 °C ps−1, the Cu-Ag nanoparticle transforms into an internally mixed nanoparticle with an Ag shell. Note that the simulation shows that the mixing in the core is not uniform, i.e., segregation is observed within the core as seen in Fig. 5b, similar to what was observed in another MD study on smaller (2.5 nm) Cu-Ag nanoparticles heated to 327 °C. 93 This core@shell morphology with a non-uniformly mixed core is consistent with the STEM-EDX observations presented in Fig. 2, and explains the non-spherical and non-homogeneous signal from the Cu core, in contrast to the relatively pure Cu-rich phase and Cu cores observed for nanoparticles compacted at 750 °C and 950 °C, respectively. An additional simulation with a slower cooling rate of 0.0008 °C ps−1 (corresponding to ~1 μs of cooling) was carried out to investigate the effect of the cooling rate on the mixing state of the Cu-Ag system, in other words, to see whether the degree of segregation increases at a slower cooling rate (ESI Fig. S7†). However, even at the slow cooling rate, the nanoparticle contains some Ag atoms in the Cu core, and they are not mixed uniformly with the Cu atoms. Regarding the crystallinity, Fig. 5c shows that the final structure of the nanoparticle at room temperature obtained using MD is not crystalline.
Thus, we subsequently employed a Monte Carlo simulation to determine the crystalline structure of the nanoparticle, as presented in Fig. 5c. The Monte Carlo result also agrees with our experimental observation that in bimetallic nanoparticles cooled at a much slower rate (on the order of seconds), the Ag content in the core increases compared to the lower compaction temperature. In our results for the core@shell particles synthesized at 850 °C, the Cu core contains approximately 10.6% Ag (see Table 1).
The Cu-Ag nanoparticles compacted at 950 °C contain only approximately 24% Ag according to the TEM-EDX analysis (Fig. 3), which we attribute to significant Ag evaporation at that temperature. For the Cu-Ag system with this atomic ratio, the compaction between a Cu nanoparticle with a diameter of 3.7 nm and an Ag nanoparticle with a diameter of 2.9 nm was simulated (Fig. 5e-h). Note that the composition of the nanoparticle in the simulation is set to the one we observe in the compacted NP after the presumed Ag evaporation. In other words, the simulation does not include the evaporation process. Fig. 5g and h show that both Cu and Ag melt at this temperature, as expected. The 2.9 nm diameter Ag nanoparticle melts at around 750 °C, indicating an apparent melting temperature depression for the smaller nanoparticle (Fig. 5h). Patches of monolayer Ag are found on the surface of the melted Cu at 950 °C (Fig. 5e). Some Ag atoms diffuse into the core of the Cu. Quantification of the NMF loadings (Table 1) of the corresponding EDX map in Fig. 2 agrees well with the low Ag content in the Cu core observed in the simulation result here. Furthermore, both the simulation and the EDX analysis identify a higher Cu content in the shell compared to the lower compaction temperatures. However, this is likely related to the issue of defining the extent of the very thin shell, leading to the inclusion of some Cu signal from the core. For the EDX maps, the size of the electron probe at the sample is also becoming a limiting factor for singling out the shell for this sample. We additionally conducted an MD simulation at an intermediate temperature of 790 °C for equally sized Cu and Ag nanoparticles to demonstrate the possibility of optimizing the core-shell morphology and compositions (see ESI Fig. S8†). At this temperature, Ag melts, but Cu does not. Thus, Ag atoms diffuse to the surface of the solid Cu nanoparticle, resulting in a core@shell structure without Ag atoms in the core region.
This implies that it may be possible to create well-defined Cu@Ag core-shell nanoparticles solely by choosing the right compaction temperature. There is also a possibility that this effect of the melting temperature difference between Ag and Cu in the nanoscale regime can lead to quasi-Janus structures. Given the reported high volatility of Ag, one can also assume a significant melting temperature depression of Ag compared to Cu. This would lead to a wide temperature range in which the compacted agglomerate consists of liquid Ag and solid Cu. In the case of the large agglomerates compacted in the experiment, they may form quasi-Janus (or off-center core-shell) structures as the solidification proceeds.
We note that the good agreement observed in the temperatures of the MD simulations and the experimental results in this study is somewhat coincidental. It is well known that the melting point is overestimated in MD simulations due to superheating. 94 If simulations are performed for larger particles, the melting temperatures will be higher than that of the 4 nm ones, and thus the Janus structures form at higher temperatures (see ESI Fig. S9 †). This implies that one needs to be cautious when interpreting MD results for the melting points and the temperatures at which particular morphologies form.
However, the MD results seem to be powerful in eliciting the general trend. The simulation results support that the quasi-Janus and core@shell morphologies observed in the synthesized Cu-Ag nanoparticles at different temperatures are attributed to the immiscibility, i.e., the combined effect of the differences in surface energy, atomic size, and cohesive energy of Cu and Ag nanoparticles. 95 Even though we discussed the simulation results carried out for small nanoparticles (~4 nm in diameter), the same trend is observed in simulations performed for larger particles (6 nm and 10 nm in diameter) (ESI Fig. S10†). We observe that regardless of the particle size, quasi-Janus particles are formed at low temperatures, and core@shell particles are formed at high temperatures. Therefore, we are confident that the structural evolution seen in MD simulations can explain the different morphologies observed also for the larger particles in the experiment.
According to Grammatikopoulos et al., 27 who also studied the equilibrium structures of Cu-Ag NPs using combined MD and Monte Carlo simulations, the quasi-Janus Cu-Ag structure is a metastable state and the core-shell-like structure is the equilibrium state. We have also shown that the equilibrium structure found for the composition investigated in this study (Cu:Ag = 39:61) exhibits core-shell-like configurations, i.e., an Ag shell with a non-uniformly mixed core. The fact that no quasi-Janus structures were observed at high temperatures in our experiments indicates that quasi-Janus structures are formed mainly by coalescence and surface diffusion of the aggregates at sub-melting temperatures. Thus, we conclude that the transition from quasi-Janus to core-shell occurs when the compaction (heat treatment) of the Cu-Ag agglomerates is carried out at higher temperatures. Both the experimental observations and the simulation results point to the likely presence of segregated domains in the nanoparticle aggregates in the tube furnace. Previous research on the mixing of primary particles in spark discharge generated agglomerates showed clear alloying in the case of AuPd. 96 While increased mixing of Cu-Ag primary particles is possible due to the rapid quenching process, segregation likely occurs within individual primary particles as the agglomerates enter the tube furnace. It is noteworthy that this synthesis method can produce bimetallic nanoparticles with different morphologies (either quasi-Janus or core-shell) by merely tuning the compaction temperature. The more significant observation is that "uniform" bimetallic nanoparticles with a chosen morphology can be readily produced by the presented method. Without a heating step, uniformity is often challenging to achieve with gas-phase synthesis methods, which are good for producing various random metastable structures through fast kinetics and non-equilibrium processes.
27,28 With our synthesis method, we avoid the randomness in the generated nanoparticle morphology by adding the heat treatment process for the Cu-Ag agglomerates.
Another important observation from the MD simulations is that the overall structures of the Cu-Ag nanoparticles remain consistent as they are cooled from high temperatures. This implies that the core-shell bimetallic nanoparticles generated via heat-induced surface segregation do not change their overall morphology when treated at high-temperature conditions. This parallels the synthesis method employed in this study, where the core-shell nanoparticles generated have already undergone a heating and cooling process, i.e., the heat-induced surface segregation. No reconfiguration of the structure upon heating indicates that the core-shell particles generated via the presented method are likely to show high structural stability at elevated temperatures. Structural stability is a critical issue in various applications, especially in catalysis, in which the processes often occur in high-temperature environments. Bimetallic nanoparticles generated by our method are likely to be resistant to a structural transformation upon heating. In summary, the compaction temperature plays a decisive role in the creation of the particles. STEM-EDX analysis revealed that the as-generated Cu-Ag agglomerates can be made to adopt quasi-Janus or core-shell structures depending on the compaction temperature. Molecular dynamics simulations support the importance of the compaction temperature in deciding the final morphology of the Cu-Ag nanoparticles found in the experimental results.
The presented method provides a route to achieving uniformity in core-shell bimetallic nanoparticles in terms of all three aspects: size, composition, and morphology. This is still extremely challenging with other gas-phase synthesis methods involving only non-equilibrium processes without additional annealing and equilibrium cooling steps. The bimetallic nanoparticles produced using our method are expected to exhibit high structural stability when subjected to high-temperature conditions, owing to the compaction process of heating and cooling during the synthesis. We expect that this method is ideal for producing bimetallic nanoparticles for catalysis applications, where the structural stability of nanoparticles at elevated temperatures is of great importance. This simple gas-phase synthesis method is not limited to the production of core-shell nanoparticles but can also be used to create other structures (quasi-Janus, alloy) with high stability. In designing the desired structures, the main properties to consider are the surface energies, the atomic radii of the constituent elements, and the compaction temperature.
Conflicts of interest
There are no conflicts to declare.
Parkinson's disease-associated human ATP13A2 (PARK9) deficiency causes zinc dyshomeostasis and mitochondrial dysfunction
Human ATP13A2 (PARK9), a lysosomal type 5 P-type ATPase, has been associated with autosomal recessive early-onset Parkinson's disease (PD). ATP13A2 encodes a protein that is highly expressed in neurons and is predicted to function as a cation pump, although the substrate specificity remains unclear. Accumulation of zinc and mitochondrial dysfunction are established aetiological factors that contribute to PD; however, their underlying molecular mechanisms are largely unknown. Using patient-derived human olfactory neurosphere cultures, which harbour loss-of-function mutations in both alleles of ATP13A2, we identified a low intracellular free zinc ion concentration ([Zn2+]i), altered expression of zinc transporters and impaired sequestration of Zn2+ into autophagy-lysosomal pathway-associated vesicles, indicating that zinc dyshomeostasis occurs in the setting of ATP13A2 deficiency. Pharmacological treatments that increased [Zn2+]i also induced the production of reactive oxygen species and aggravation of mitochondrial abnormalities that gave rise to mitochondrial depolarization, fragmentation and cell death due to ATP depletion. The toxic effect of Zn2+ was blocked by ATP13A2 overexpression, Zn2+ chelation, antioxidant treatment and promotion of mitochondrial fusion. Taken together, these results indicate that human ATP13A2 deficiency results in zinc dyshomeostasis and mitochondrial dysfunction. Our data provide insights into the molecular mechanisms of zinc dyshomeostasis in PD and its contribution to mitochondrial dysfunction with ATP13A2 as a molecular link between the two distinctive aetiological factors of PD.
INTRODUCTION
Parkinson's disease (PD) is the most common movement disorder, typically identified with clinical manifestations of tremor, bradykinesia, rigidity and postural instability. Degeneration of dopaminergic neurons in the substantia nigra pars compacta (SNpc) and formation of intracellular inclusion bodies (Lewy bodies) serve as histopathological hallmarks of PD. More than 90% of patients present as sporadic cases where the cause of the disease is unknown (sporadic PD), whereas 10% of PD patients have identifiable monogenic causes (familial PD). To date, 18 genes or loci in the human genome have been associated with familial PD (1).
The ATP13A2 gene (PARK9, MIM# 610513) encodes a lysosomal type 5 P-type ATPase. Mutations in ATP13A2 have been associated with an autosomal recessive levodopa-responsive early-onset parkinsonism, known as Kufor-Rakeb syndrome (KRS, MIM# 606693) (2). KRS patients present with typical PD manifestations alongside other clinical features such as supranuclear gaze palsy, facial-faucial myoclonus and spasticity (3). Mutations identified in most KRS patients follow an autosomal recessive trait involving two mutant alleles (homozygotes or compound heterozygotes) that cause mRNA degradation, protein misfolding/truncation and degradation (2)(3)(4)(5). ATP13A2 protein has been localized to several cellular acidic vesicles, including lysosomes and autophagosomes (2)(3)(4)(5)(6)(7)(8)(9)(10). It was therefore proposed that ATP13A2 functions in the autophagy-lysosomal pathway (ALP). In support of this, mutations in ATP13A2 have been associated with neuronal ceroid lipofuscinosis, a lysosomal storage disorder, in humans and dogs (11)(12)(13) and with lysosomal dysfunction in KRS-patient-derived cell models (8,14). ATP13A2 has also been predicted to be a cation pump, based on its structural similarity to other proteins in the type 5 P-type ATPase family. Several metal ions have been reported as potential substrates (15). Among them, ionic manganese (Mn 2+ ) has been the cation subject to the most extensive investigation, because it is also a known environmental risk factor for PD. Several groups have demonstrated an exaggerated Mn 2+ toxicity at high doses in ATP13A2-silenced yeast and mammalian cell models (9,10,16). In these models, overexpression of wild-type, but not mutant, ATP13A2 conferred protection against Mn 2+ toxicity. Despite the apparent interaction of Mn 2+ in disease models, the cationic selectivity of endogenous human ATP13A2 for other metal ions remains to be determined.
In addition to manganese, zinc has been shown to interact with peptide fragments of ATP13A2 (17). Zinc, which is enriched in the brain, is an essential biometal required in numerous biological processes to maintain normal cell function. The intracellular concentration of biologically active free zinc ions (Zn 2+ ) is tightly regulated by zinc transporters and kept at a minute level due to their potential toxicity, whereas the majority of intracellular Zn 2+ exists in an inactive form, either bound to zinc-binding proteins (i.e. metallothioneins) or sequestered in cellular organelles (18). Zinc dyshomeostasis has been linked with several neurodegenerative diseases including PD. Elevated levels of zinc have been found in the SNpc and other tissues of PD patients (19-21), and zinc has been identified as an environmental risk factor for PD (22). Despite the potential importance of zinc in the pathogenesis of PD, its aetiological role remains largely unknown.
Excessive Zn 2+ levels are also known to impair cellular energy production through an inhibitory action on mitochondria (23). Mitochondria generate the majority of cellular energy in the form of ATP via oxidative phosphorylation and produce detrimental reactive oxygen species (ROS) as a byproduct of this process. Mitochondrial dysfunction was initially linked to the pathogenesis of PD when 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), a potent mitochondrial complex I inhibitor and a neurotoxic contaminant in the synthetic recreational opioid desmethylprodine, was linked to dopaminergic cell death in the SNpc, resulting in a PD-like syndrome (24). Since then, mitochondrial dysfunction has been recognized as a major contributor to the aetiology of sporadic (25,26) and familial PD (27)(28)(29)(30). A recent discovery that zinc accumulation contributes to, and conversely, zinc chelation protects against, MPTP-induced PD has highlighted a link between zinc and mitochondrial function in the pathogenesis of PD (31).
We previously reported that pathogenic compound heterozygous mutations in ATP13A2 caused loss of ATP13A2 expression and mitochondrial dysfunction (3,28). In this study, we have identified zinc dyshomeostasis in our human olfactory neurosphere (hONs) disease model system (32). The patient-derived hONs cells displayed a lower intracellular free zinc ion concentration ([Zn 2+ ] i ) with a decreased capacity to sequester Zn 2+ into the ALP vesicles and altered expression of zinc transporters. Pharmacological treatments that elevated the [Zn 2+ ] i were found to exacerbate the loss of mitochondrial function, leading to mitochondrial fragmentation and cell death as a result of ATP depletion. These findings indicate that loss of human ATP13A2 causes zinc dyshomeostasis and abnormal energy metabolism, providing evidence that ATP13A2 is a molecular link between abnormal zinc metabolism and mitochondrial dysfunction in the pathogenesis of PD.
ATP13A2−/− hONs cells are vulnerable to elevated [Zn 2+ ] i
In order to determine the effect of excessive zinc levels in the setting of ATP13A2 deficiency, we exposed hONs cells with compound heterozygous loss-of-function mutations (c.3253delC and c.3176T>G) in ATP13A2 (3) to increasing doses of ZnCl2 and measured cell viability using the Neutral red uptake assay (33). hONs cells with ATP13A2 deficiency are denoted ATP13A2−/− hereafter. In the vehicle-treated groups, ATP13A2−/− cells consistently showed a 20-40% lower retention of Neutral red compared with the control (Fig. 1). Neutral red is a weakly cationic dye that is retained in lysosomes depending on their pH (33), and the lower retention of Neutral red detected under vehicle treatment reflected a higher lysosomal pH in ATP13A2−/− KRS-patient cells (8,14). When treated with ZnCl2, ATP13A2−/− cells showed a dose-dependent and significant decrease in cell viability (P < 0.01), whereas the control cells demonstrated cytotoxicity only at the highest dose tested (P < 0.01, Fig. 1A). As Zn 2+ has been shown to increase mitochondrial ROS production (34), we then examined whether ROS were involved in the observed Zn 2+ -induced cytotoxicity. The Zn 2+ -induced reduction of cell viability in ATP13A2−/− cells was completely reversed by the introduction of an antioxidant, N-acetyl-cysteine (NAC), indicating that Zn 2+ toxicity is elicited by increased ROS production in ATP13A2−/− cells (Fig. 1B). Hydrogen peroxide (H2O2), an ROS known to increase [Zn 2+ ] i by inducing the release of Zn 2+ from zinc-binding proteins (31), significantly reduced cell viability, to a greater extent in ATP13A2−/− cells (P < 0.01, Fig. 1C). Furthermore, the specific Zn 2+ chelator, N,N,N′,N′-tetrakis(2-pyridylmethyl)ethylenediamine (TPEN), protected against H2O2-mediated cytotoxicity, strongly supporting the involvement of Zn 2+ in H2O2-mediated cytotoxicity.
Next, we overexpressed wild-type ATP13A2 in ATP13A2−/− cells and treated them with ZnCl2 to test whether restoration of ATP13A2 expression reverses Zn 2+ cytotoxicity. Western blot analysis confirmed expression of V5-tagged wild-type ATP13A2 (V5ATP13A2) in both control and ATP13A2−/− cells after lentivirus transduction (Fig. 1D). V5ATP13A2 expression significantly protected against Zn 2+ -mediated cytotoxicity in ATP13A2−/− cells (Fig. 1E), whereas a similar overexpression of V5ATP13A2 was slightly toxic to the control cells, as previously reported (6). Cytotoxicity/cell viability measured by the lactate dehydrogenase activity in the culture media of hONs cells and by the Trypan blue exclusion assay was consistent with the results of the Neutral red uptake assay (Supplementary Material, Fig. S1A-G), confirming the increased cytotoxicity of Zn 2+ in ATP13A2−/− cells. Together, these findings support the existence of zinc dyshomeostasis in ATP13A2−/− cells, conferring sensitivity to treatments that induce an increase in [Zn 2+ ] i, and ROS as an effector of Zn 2+ -mediated toxicity.
[Zn 2+ ] i is lower in ATP13A2−/− hONs cells
Excessive Zn 2+ concentrations are known to be detrimental to cellular function (23,35), necessitating the maintenance of a low [Zn 2+ ] i. As our cytotoxicity tests suggested that zinc homeostasis was disturbed in ATP13A2−/− cells, we assessed [Zn 2+ ] i using FluoZin-3 (Fig. 2). FluoZin-3 is a Zn 2+ -specific dye that exhibits green fluorescence upon binding to Zn 2+ and has been widely used to measure [Zn 2+ ] i (31,34,36,37). In the vehicle-treated groups, ATP13A2−/− cells showed an average of 23% reduction in the FluoZin-3 intensity compared with the control (P < 0.01), indicating a lower [Zn 2+ ] i in ATP13A2−/− cells. Upon exposure to H2O2, both hONs cell lines showed a >2-fold increase in the FluoZin-3 fluorescence intensity, which was not significantly different between the two cell lines (P = 0.51). The H2O2-induced release of Zn 2+ was efficiently reverted to basal levels by co-treatment with TPEN, confirming the specificity of Zn 2+ in the H2O2-induced increase of the FluoZin-3 fluorescence intensity. The lower [Zn 2+ ] i in ATP13A2−/− cells was also confirmed using another Zn 2+ -specific fluorescent dye, Zinpyr-1, by flow cytometry (Supplementary Material, Fig. S2).
Altered expression of zinc transporters in ATP13A2−/− hONs cells
To further assess the impact of ATP13A2 deficiency on zinc homeostasis, we evaluated changes in the expression levels of zinc transporters. To maintain zinc homeostasis, zinc transporters located in the membranes of various cellular organelles act to pump Zn 2+ across the membrane, playing a crucial role in modulating [Zn 2+ ] i (35). There are two distinct gene families involved in Zn 2+ transport: nine solute carrier family 30 genes encode zinc transporters (ZnTs) that mediate efflux of Zn 2+ (decreasing cytosolic Zn 2+ levels), and ZRT/IRT-related proteins (zinc importing proteins, ZIPs), encoded by 14 solute carrier family 39 genes, facilitate influx of Zn 2+ (increasing cytosolic Zn 2+ levels). We examined the gene expression of all ZnT and ZIP genes and of ACTB, encoding β-actin, as a housekeeping gene in hONs cells using quantitative real-time RT-PCR (qRT-PCR) (Fig. 3). Among the genes investigated, 19 (8 ZnTs and 11 ZIPs) were expressed in the hONs cells, while the expression of ZnT2, ZIP5, ZIP8 and ZIP12 was not detected with the PCR conditions employed (see Materials and Methods). There was no difference in the expression of ACTB. We found alterations in the expression levels of the majority of zinc pumps (13 out of 19 genes; 6 ZnTs and 7 ZIPs) in ATP13A2−/− cells compared with the control, suggesting altered Zn 2+ dynamics through the expression of zinc pumps: ZnT1, ZnT3-4, ZnT7-9, ZIP1-4, ZIP7 and ZIP9-10. All but one (ZnT8) of the ZnT/ZIP transcripts were upregulated in ATP13A2−/− cells. These findings, together with the observed lower [Zn 2+ ] i, are indicative of zinc dyshomeostasis in the presence of ATP13A2 deficiency.
Impaired sequestration of Zn 2+ into the ALP vesicles in ATP13A2−/− hONs cells
ATP13A2 localizes to intracellular acidic vesicles, including autophagosomes, early/late endosomes and lysosomes (2-10).
Based on the reported location of ATP13A2 and the observed zinc dyshomeostasis in our ATP13A2-/- cells, we hypothesized that ATP13A2 is primarily involved in transporting Zn2+ across the membrane of ALP vesicles and that loss of ATP13A2 impairs the capacity to transport Zn2+ into these vesicles. To test this hypothesis, we generated hONs cells expressing mRFP-LC3 to visualize LC3-positive ALP vesicles, including autophagolysosomes (38), and stained them with FluoZin-3 under conditions that induce accumulation of ALP vesicles and an increase in [Zn2+]i (see Materials and Methods for details). The control cells displayed a higher number of vesicles positive for both mRFP-LC3 and FluoZin-3 when compared with ATP13A2-/- cells (Fig. 4A). Further analysis revealed that the Pearson's co-localization coefficient was significantly reduced in ATP13A2-/- cells compared with the control (n = 47, P < 0.05), indicating a lower number of mRFP-LC3-positive vesicles containing Zn2+ in ATP13A2-/- cells (Fig. 4B). The area occupied by mRFP-LC3-positive vesicles per cell did not differ between the two cell lines (P = 0.44, Fig. 4C), excluding the possibility that the decreased co-localization in the ATP13A2-/- cells was detected at random. In addition, there was no difference in the number of FluoZin-3-positive vesicles per cell (P = 0.33, Fig. 4D) or the FluoZin-3 intensity per vesicle (P = 0.29, Fig. 4E) between the two cell lines. These results indicate that the sequestration of Zn2+ into ALP vesicles is impaired by the loss of ATP13A2.
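The co-localization quantified above was measured with ImageJ. As an illustration only (the function name and the sample intensities below are hypothetical, not taken from the authors' pipeline), the pixel-wise Pearson coefficient underlying such an analysis can be sketched in pure Python:

```python
import math

def pearson_colocalization(red, green):
    """Pixel-wise Pearson correlation between two fluorescence channels.

    `red` and `green` are equal-length sequences of per-pixel intensities
    (e.g. an mRFP-LC3 channel and a FluoZin-3 channel of the same image).
    """
    n = len(red)
    mean_r = sum(red) / n
    mean_g = sum(green) / n
    cov = sum((r - mean_r) * (g - mean_g) for r, g in zip(red, green))
    var_r = sum((r - mean_r) ** 2 for r in red)
    var_g = sum((g - mean_g) ** 2 for g in green)
    return cov / math.sqrt(var_r * var_g)

# Hypothetical intensities: channels that rise and fall together
# give a coefficient near +1; uncorrelated channels give ~0.
print(pearson_colocalization([10, 50, 90], [12, 55, 88]))
```

A coefficient near +1 indicates that the two signals occupy the same pixels (co-localized vesicles), while values near 0 indicate independent distributions.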
ATP13A2-/- hONs cells have impaired mitochondrial function
We and others have reported impaired mitochondrial function in fibroblasts from KRS patients (28) and in ATP13A2-silenced cell models (6,39). We therefore assessed mitochondrial function in our hONs cells. The cellular ATP production rate was significantly lower in the ATP13A2-/- cells when compared with controls (32.9 ± 2.4 for control and 26.1 ± 2.6 for ATP13A2-/- cells, P < 0.01, Fig. 5A). Upon exposure to ZnCl2, ATP13A2-/- cells showed a significant reduction in ATP production rate (P < 0.05), which was completely blocked by V5ATP13A2 overexpression (P < 0.01 compared with the ZnCl2-treated empty vector control), while the same treatments did not change the ATP production rate in the control cells. ATP13A2-/- cells showed an average 37% reduction in tetramethylrhodamine methyl ester perchlorate (TMRM) labelling compared with the control under vehicle treatment (P < 0.01), indicative of a lower mitochondrial membrane potential (ΔΨm, Fig. 5B). Notably, there was no difference in total mitochondrial mass between the cell lines when measured using the mitochondria-specific dye MitoTracker Green (Supplementary Material, Fig. S3). When cells were treated with carbonyl cyanide 3-chlorophenylhydrazone (CCCP), a ΔΨm-uncoupling agent, TMRM retention was reduced to a similar degree in both cell lines (P = 0.75).
Zn2+-mediated ROS production and altered expression of antioxidant genes in ATP13A2-/- hONs cells

Zn2+ accumulates in mitochondria via Zn2+-transporting uniporters, with a resultant increase in ROS production (34,36). Given that ROS was found to be an effector of Zn2+-mediated cytotoxicity in ATP13A2-/- cells (Fig. 1), we exposed hONs cells to ZnCl2 to examine whether exogenous Zn2+ induced ROS production. ROS levels were assessed using fluorescent indicators specific for superoxide (O2−) production (MitoSox Red) and H2O2 production (CM-H2DCFDA). Surprisingly, basal H2O2 production was lower by an average of 27% in ATP13A2-/- cells compared with the control (P < 0.01, Fig. 5C). However, when exposed to high concentrations of ZnCl2, H2O2 production was rapidly (<30 min) induced in ATP13A2-/- cells, while only minimal changes were detected in the control. Although ZnCl2 treatment induced O2− production in hONs cells, the levels of O2− were comparable between the cell lines under both vehicle and ZnCl2 treatment conditions (data not shown). Notably, the maximum dose of ZnCl2 used to induce ROS production did not cause any appreciable cell death under the given exposure conditions (Supplementary Material, Fig. S4). To determine whether the reduced ROS production in ATP13A2-/- cells resulted from a compensatory activation of the antioxidant enzyme systems, we examined the expression levels of the genes encoding antioxidant enzymes using qRT-PCR. The genes examined were expressed at variable mRNA levels in ATP13A2-/- cells when compared with the control, while there was no difference detected in the expression of ACTB (Fig. 5D); a significant elevation was detected for superoxide dismutase 1 (SOD1, 129.8% ± 7.7, P < 0.01), catalase (CAT, 121.1% ± 5.3, P < 0.01) and glutathione peroxidase 1 (GPX1, 145.6% ± 5.1, P < 0.01), while the level of superoxide dismutase 2 transcripts (SOD2, 88.2% ± 6.3, P < 0.05) was decreased. These findings confirm the involvement of ROS in Zn2+-mediated cytotoxicity in ATP13A2-/- cells and also suggest that a loss of ATP13A2 results in altered ROS metabolism, which contributes to an increased susceptibility to Zn2+ and the induction of protective changes in the cellular antioxidant system.
To assess the effect of [Zn2+]i on ΔΨm, we exposed hONs cells to H2O2 and examined changes in 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide (JC-1) fluorescence. JC-1 is a cationic dye that has been utilized to monitor ΔΨm through its capacity to exhibit green fluorescence when in a monomeric form in the cytoplasm or in mitochondria with low ΔΨm (e.g. damaged mitochondria), and red fluorescence upon formation of J-aggregates in mitochondria with normal-to-high ΔΨm (e.g. healthy mitochondria). Under vehicle treatment, ATP13A2-/- cells displayed on average a 46% lower proportion of red mitochondria compared with the control (P < 0.01, Fig. 6), consistent with the result of the TMRM assay. Exposure to H2O2 reduced the area fraction of red mitochondria in both cell lines, but to a significant extent in ATP13A2-/- cells (P < 0.01). The toxic effect of H2O2 on ΔΨm in ATP13A2-/- cells was blocked by co-treatment with TPEN (P < 0.01), confirming the involvement of Zn2+ in the H2O2-mediated reduction of ΔΨm. The noticeable increase in green fluorescence observed with H2O2 treatment is due to cytoplasmic diffusion of JC-1 monomers.
Zn2+ induces mitochondrial fragmentation, loss of mitochondrial function and cell death in ATP13A2-/- hONs cells

Dysfunctional mitochondria, when excessively damaged by toxic stimuli such as ROS to an extent beyond the cellular capacity to restore their normal function by complementation, undergo fragmentation before uptake by autophagosomes and delivery to lysosomes for degradation, a process known as mitophagy (40). Our observation of Zn2+-induced mitochondrial dysfunction prompted us to investigate the potential effect of increased Zn2+ on mitochondrial morphology. We treated hONs cells with ZnCl2 and determined mitochondrial reticular interconnectivity by calculating the mitochondrial form factor, for which low values indicate a more fragmented mitochondrial network and high values indicate a more cohesive reticulum (see Materials and Methods for details). When grown in media with vehicle, there was no difference detected in the form factor between the cell lines (Fig. 7A and B). In contrast, upon the addition of CCCP to the media, dramatic changes in mitochondrial network morphology were observed in both cell lines, with significantly lower form factors compared with the respective vehicle controls (P < 0.01 for both cell lines). Upon exposure to ZnCl2, ATP13A2-/- cells displayed an average 29% reduction in the form factor, indicative of mitochondrial fragmentation, when compared with the control (P < 0.01) and the vehicle-treated counterpart (P < 0.01). Conversely, the form factor for control cells under ZnCl2 treatment was similar to that of the vehicle control counterpart, revealing a Zn2+-specific effect on mitochondrial morphology in the absence of ATP13A2. Zn2+-induced mitochondrial fragmentation was completely blocked by co-treating the cells with a mitochondrial fusion promoter, 3-isobutyl-1-methylxanthine (IBMX) (P < 0.01).
We also examined the effect of Zn2+-induced mitochondrial fragmentation on the cellular ATP production rate and cell viability (Fig. 7C and D). ZnCl2 significantly impaired ATP production in both cell lines, but to a greater extent in the ATP13A2-/- cells (23.4 ± 1.1 for vehicle and 13.8 ± 0.4 for ZnCl2 treatment, P < 0.01) when compared with the control groups (24.9 ± 1.0 for vehicle and 21.0 ± 0.9 for ZnCl2 treatment, P < 0.05). The ATP production rate was significantly restored by co-treatment with ZnCl2 and IBMX in the ATP13A2-/- cells (16.7 ± 0.5, P < 0.05 compared with ZnCl2 treatment). In addition, IBMX treatment reversed the Zn2+-induced reduction in cell viability of ATP13A2-/- cells (P < 0.05, Fig. 7D). These findings indicate that Zn2+-induced mitochondrial fragmentation causes a reduction in ATP production that leads to cell death in ATP13A2-/- cells.

Figure 5. Mitochondrial dysfunction and Zn2+-mediated ROS production in ATP13A2-/- cells. Mitochondrial function and ROS production were assessed in hONs cells exposed to ZnCl2. (A) ATP production rate was significantly lower in ATP13A2-/- cells (grey bars) compared with the control (white bars) under basal conditions. Upon exposure to 100 µM ZnCl2, ATP production rate was significantly reduced in the ATP13A2-/- cells transduced with lentivirus carrying an empty vector, which was completely reversed by overexpression of V5-tagged wild-type ATP13A2 (V5ATP13A2). (B) TMRM labelling was significantly reduced in ATP13A2-/- cells compared with the control. Treatment of the cells with the mitochondrial uncoupler CCCP decreased TMRM labelling to a similar extent in both cell lines. (C) CM-H2DCFDA was used to detect H2O2 in hONs cells. In the vehicle-treated groups, ATP13A2-/- cells displayed a significantly lower CM-H2DCFDA signal compared with the control. When treated with increasing concentrations of ZnCl2 (0, 100, 500, 1000 µM) for 30 min, ATP13A2-/- cells displayed a dose-dependent increase in CM-H2DCFDA fluorescence signals, with a significant increase at concentrations >500 µM. (D) Quantitative real-time RT-PCR detected a significant upregulation in the expression levels of genes encoding the cellular antioxidant enzymes superoxide dismutase 1 (SOD1), catalase (CAT) and glutathione peroxidase 1 (GPX1), and a significant down-regulation of superoxide dismutase 2 (SOD2), while β-actin mRNA (ACTB) was expressed at similar levels. All reactions were repeated twice in triplicate. Values in the graphs are represented as mean ± SD. CCCP, carbonyl cyanide 3-chlorophenylhydrazone. #P < 0.05 and ##P < 0.01 by Mann-Whitney U test and *P < 0.05 and **P < 0.01 by Kruskal-Wallis one-way ANOVA followed by post hoc Tukey's HSD multiple comparison test.
DISCUSSION
We demonstrate that ATP13A2 plays a crucial role in maintaining zinc homeostasis in a KRS-patient-derived cell model that lacks ATP13A2. The ATP13A2-deficient patient hONs cells showed abnormal zinc metabolism, including low [Zn2+]i, altered expression of ZnTs/ZIPs and impaired sequestration of Zn2+ into ALP vesicles.
Several in vitro models have been used to manipulate ATP13A2 expression and show variation in cellular responses to Mn2+. Yeast devoid of YPK9, an orthologue of human ATP13A2, showed an increased sensitivity to Mn2+, while overexpression conferred resistance (16). In studies using mammalian cell models, Mn2+ at high concentrations (>1 mM) increased cell death and also induced expression of endogenous ATP13A2 mRNA, while overexpression of wild-type ATP13A2, but not pathogenic variants, protected against its toxic effect (9,10). Contrary to these findings, it has been demonstrated that overexpressed human ATP13A2 failed to protect cells against Mn2+ toxicity, raising questions over the biological relevance of human ATP13A2 function in manganese metabolism (41).
In this study, we have demonstrated abnormal zinc metabolism in the setting of ATP13A2 deficiency in KRS-patient-derived hONs cells. Increased sensitivity to the exogenous application of ZnCl2 and H2O2 (both of which increase [Zn2+]i: ZnCl2 by direct uptake into cells and H2O2 by oxidant-induced release from zinc-binding proteins), together with the protective effects of antioxidant treatment (NAC) and Zn2+ chelation (TPEN), underpin the pathophysiology of Zn2+ toxicity in ATP13A2-deficient cells (Fig. 1). Moreover, ATP13A2-/- cells had a significantly lower [Zn2+]i (Fig. 2) and altered mRNA expression of ZnTs/ZIPs (Fig. 3), indicating compensatory changes in ATP13A2-/- cells and thus providing strong support for zinc dyshomeostasis in the setting of ATP13A2 deficiency. By using hONs cells expressing mRFP-LC3 to investigate Zn2+ sequestration in the ALP vesicles (Fig. 4), we were able to show that fewer Zn2+-containing mRFP-LC3-positive vesicles were present in ATP13A2-/- cells, suggesting impaired vesicular sequestration of Zn2+ and therefore a reduced capacity to buffer Zn2+. The fact that mRFP-LC3-positive vesicles accumulated in both cell lines to a similar degree confirms a genuine difference in co-localization. Although not significant, the observed increase in the FluoZin-3 fluorescence intensity per vesicle in ATP13A2-/- cells may reflect a protective response to buffer Zn2+ via other ZnTs in the setting of ATP13A2 deficiency.
Tsunemi et al. (accepted manuscript co-submitted to HMG: HMG-2013-W-00998.R1) also reported increased toxicity to Zn2+, with a lack of sensitivity to several biometals including Mn2+, in KRS-patient-derived fibroblasts and ATP13A2-silenced primary neurons. These data, together with our findings, suggest that human ATP13A2 preferentially functions as a regulator of zinc rather than manganese, while ATP13A2 homologues from other species (e.g. yeast) likely have different substrate selectivities. Further investigations of the protein structure of ATP13A2 from various species, and of the amino acid residues involved in substrate interaction, would be helpful in understanding the differences in species-specific cationic selectivity.
A number of ZnTs/ZIPs have been identified in the ALP vesicles, including lysosomes (e.g. ZnT2, ZnT4 and ZIP8; see Kambe et al. (35) for a review), implicating the involvement of Zn2+ in lysosomal function. Although the molecule pumping Zn2+ into autophagosomes has not yet been identified, a recent study showed the existence of potential ZnTs in autophagosomes and the crucial role of Zn2+ in the normal function of autophagy (37). We showed the decreased capacity for sequestration of Zn2+ into the ALP vesicles and increased Zn2+ toxicity in ATP13A2-/- cells, suggesting that ATP13A2 functions as a common Zn2+ regulator for the pathway, protecting cells from the toxicity of excessive Zn2+. Such a protective function has also been observed for ZnT2, which accumulates Zn2+ in target cellular organelles and blocks Zn2+ toxicity (42). While our data indicate that ATP13A2 facilitates sequestration of Zn2+ into the ALP vesicles, it is not clear whether ATP13A2 is involved in the transportation of Zn2+ from the ALP vesicles to the cytosol under physiological [Zn2+]i. The elevated level of ZnT4 transcripts and the lack of ZIP8 expression observed in our patient cells are suggestive of a bidirectional function for ATP13A2, given the reported localization of these transporters in lysosomes/endosomes. Further studies measuring vesicular Zn2+ using radioactive 65Zn in control and patient-derived cells under patho/physiological [Zn2+]i are warranted to confirm the role of ATP13A2 in zinc transport.

Figure 7. Zn2+-mediated mitochondrial fragmentation in ATP13A2-/- hONs cells. hONs cells were treated with either ZnCl2 alone or ZnCl2 with IBMX and assessed for mitochondrial interconnectivity, ATP production rate and cell viability. (A) Cells were immunostained for Grp75 (green), a mitochondrial matrix protein, and the nuclei were stained with 4′,6-diamidino-2-phenylindole (blue). The mitochondrial form factor was calculated to determine the degree of mitochondrial interconnectivity (see Materials and Methods). Representative confocal images are presented for the control (upper panels) and ATP13A2-/- cells (bottom panels) treated as indicated. Scale bar = 20 µm. (B) The mitochondrial form factor was comparable between the control (white bar) and ATP13A2-/- (grey bar) cells in the vehicle control groups, while CCCP treatment reduced the mitochondrial form factor significantly in both cell lines, indicating mitochondrial fragmentation (n = 65, 15-18 cells per coverslip from four coverslips in two independent experiments). Conversely, ZnCl2 treatment decreased the mitochondrial form factor in ATP13A2-/- cells, while only a mild reduction was detected in the control. Promotion of mitochondrial fusion using IBMX treatment prevented ZnCl2-mediated mitochondrial fragmentation in ATP13A2-/- cells and further increased mitochondrial interconnectivity in the control. (C) ZnCl2 (100 µM) treatment caused a significant reduction in ATP production rate in both cell lines, although to a greater extent in ATP13A2-/- cells (grey bars) compared with the control (white bars). IBMX co-treatment significantly blocked the Zn2+-mediated reduction in ATP production rate in ATP13A2-/- cells. (D) The viability of ATP13A2-/- cells was significantly reduced upon exposure to ZnCl2 (112.5 µM), while no difference was observed in the control. Further, co-treatment with IBMX (100 µM) blocked Zn2+-mediated cytotoxicity in ATP13A2-/- cells. Values in the graphs are represented as mean ± SD. CCCP, carbonyl cyanide 3-chlorophenylhydrazone; IBMX, 3-isobutyl-1-methylxanthine. ##P < 0.01 by Mann-Whitney U test and *P < 0.05 and **P < 0.01 by Kruskal-Wallis one-way ANOVA followed by post hoc Tukey's HSD multiple comparison test.
Several studies have reported mitochondrial dysfunction in KRS-patient-derived fibroblasts and mammalian cell models (6,28,39). Consistent with these, we also observed mitochondrial dysfunction, characterized by a reduction in ATP production and ΔΨm, in our ATP13A2-/- cells (Figs 5 and 6). Our patient cells showed decreased levels of ROS production under normal growing conditions (Fig. 5C), a state which may be due to efficient ROS removal, as implicated by the increased mRNA expression levels of antioxidant proteins (Fig. 5D and below). In agreement with the suggested role of ROS as an effector of Zn2+-mediated toxicity (Fig. 1), exogenous Zn2+ increased H2O2 production in ATP13A2-/- cells (Fig. 5C). The failure to detect a difference in mitochondrial O2− production (data not shown) may alternatively be due to the short half-life of O2− or the subtle difference in O2− levels induced by ZnCl2 treatment. Our data are clearly in line with previous reports showing Zn2+ translocation into mitochondria via Zn2+-transporting uniporters (36) followed by an increase in ROS production (34), although the exact mechanisms involved in this process are still unclear.
Furthermore, we observed an oxidant-induced increase in [Zn2+]i resulting in mitochondrial depolarization in ATP13A2-/- cells, which was effectively prevented by Zn2+ chelation with TPEN (Fig. 6). Damaged and dysfunctional mitochondria that are incapable of carrying out their normal function undergo fragmentation via inhibition of mitochondrial fusion before degradation (40). Consistently, we have observed a more fragmented mitochondrial network in patient fibroblasts whose mitochondria were inherently dysfunctional (28). However, we detected no such difference in hONs cells, which could be due to cell-specific differences between the cell lines. Nevertheless, exogenous Zn2+ administration was capable of inducing mitochondrial fragmentation in ATP13A2-/- cells (Fig. 7). Increased ROS levels are known to induce mitochondrial damage, thereby activating the mitochondrial fission pathway and inhibiting the fusion pathway in order to segregate dysfunctional mitochondria from the healthy reticulum (43). Mitochondrial fragmentation induced by Zn2+ in our hONs cells was therefore likely mediated by Zn2+-induced ROS production (Fig. 5). The adverse effect of elevated [Zn2+]i on mitochondrial morphology was blocked by IBMX treatment, resulting in an extensively interconnected mitochondrial network (Fig. 7). IBMX is known to induce accumulation of cAMP by inhibiting its degradation, in turn activating protein kinase A, which phosphorylates dynamin-related protein 1 (DRP1), an essential mitochondrial fission factor (44). Phosphorylation of DRP1 prevents it from interacting with the mitochondrial outer membrane, thereby impeding mitochondrial fission in favour of mitochondrial fusion. As well as its effect on mitochondrial interconnectivity, exogenous Zn2+ was also found to cause ATP depletion and cell death (Fig. 7). These data identify mitochondria as a primary target of Zn2+ toxicity in the setting of ATP13A2 deficiency.
Promotion of mitochondrial fusion through the introduction of IBMX was beneficial in protecting cells from the toxic effects of Zn2+, highlighting the role of mitochondrial fragmentation in Zn2+ toxicity. These findings indicate that abnormal mitochondrial function is closely linked to ATP13A2-deficiency-mediated zinc dyshomeostasis, strongly supporting the loss of ATP13A2 as the cause of mitochondrial dysfunction in our KRS-patient-derived cell line.
Two recent studies reported an increase in ROS production, mitochondrial membrane potential and mitochondrial fragmentation in ATP13A2-silenced cells (6,39), seemingly contradicting our observations in ATP13A2-/- cells grown under basal conditions. These changes were most likely caused by the toxicity of transiently increased [Zn2+]i due to uncompensated impairment of the cellular Zn2+ buffering system upon the acute loss of ATP13A2. In contrast, our patient cells, which inherently harbour ATP13A2 deficiency, have demonstrated compensatory changes (e.g. altered expression of ZnTs/ZIPs and antioxidant proteins) which result in lowered [Zn2+]i and ROS production, allowing the cells to avoid possible damage by Zn2+-induced ROS production. Despite the beneficial effect on cell survival, low [Zn2+]i may also have caused mitochondrial dysfunction in ATP13A2-/- cells because of its adverse effect on mitochondrial function, as shown in TPEN-mediated impairment of ΔΨm and ATP production (45,46).
A schematic model summarizing how ATP13A2 deficiency likely causes zinc dyshomeostasis and mitochondrial dysfunction is illustrated in Figure 8. Loss of ATP13A2 limits the cellular buffering capacity for cytosolic Zn2+ by impairing Zn2+ sequestration into ALP vesicles, producing zinc dyshomeostasis, which in turn results in mitochondrial dysfunction. When [Zn2+]i is elevated, high levels of cytosolic Zn2+ caused by inefficient sequestration trigger mitochondria to increase their production of ROS; when accumulated ROS exceeds the cellular antioxidizing capacity, mitochondrial damage follows, aggravating mitochondrial dysfunction and oxidative stress. Extensive mitochondrial dysfunction causes mitochondrial fragmentation, leading to ATP depletion and, consequently, cellular degeneration.
In this study, we show that human ATP13A2 is involved in Zn2+ transportation into the ALP vesicles, the loss of which results in zinc dyshomeostasis and abnormal energy metabolism. Our results indicate that human ATP13A2 is a common molecule linking the mechanisms underlying zinc dyshomeostasis and mitochondrial dysfunction. These findings extend our current knowledge of the pathogenesis of PD, which may facilitate the development of a neuroprotective strategy to treat PD.
MATERIALS AND METHODS

Chemicals
All chemicals used here were purchased from Sigma (St Louis, MO, USA) unless stated otherwise.
Cell culture
The protocols for establishment and culture of hONs cell lines have previously been described (3). hONs cells were subcultured to a maximum of 10 passages for all experiments. This study was approved by the Northern Sydney & Central Coast Health Human Research Ethics Committee.
Lentivirus production and establishment of cell lines
V5-tagged wild-type ATP13A2 (V5ATP13A2) in pcDNA3-V5ATP13A2 (3) was subcloned into a pER4 lentiviral vector. Lentivirus for the expression of mRFP-LC3 (38) and V5ATP13A2 was produced using the Lenti-X Lentiviral Expression system (Clontech, Mountain View, CA, USA) and Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. The medium containing lentivirus was collected at 48 and 72 h post-transfection and concentrated using the Lenti-X concentrator before measurement of viral titre.
hONs cells were transduced with lentivirus at a multiplicity of infection (MOI) of one to two in the presence of 4 µg/ml polybrene for 24 h and used for subsequent experiments. Expression of target molecules in the cells was confirmed by western blotting, according to the previously published protocol (3), or by fluorescence microscopy. For generation of stable cell lines expressing mRFP-LC3, the cells were grown in culture media containing 1 µg/ml puromycin for selection.
Neutral red uptake assay

hONs cells were plated at 5 × 10⁴ cells per well in a 24-well plate and grown to confluency. Following incubation in serum-free media for 16-24 h, the cells were exposed to different combinations of test chemicals for 24 h as indicated. For IBMX treatment, cells were pre-treated with 100 µM IBMX for 16 h before co-treatment with ZnCl2. The neutral red uptake assay for cell viability was performed according to a protocol described elsewhere (33).
Quantification of transcripts for ZnTs/ZIPs and antioxidant enzymes
hONs cells were plated at 2 × 10⁵ cells per well in a six-well plate and grown for 24 h. Total RNA was extracted using the RNeasy Mini kit (Qiagen, Germany) and 2 µg of total RNA was used to synthesize complementary DNA (cDNA) with the Superscript III first-strand synthesis kit for RT-PCR (Invitrogen) following DNase I treatment (Promega, Madison, WI). qRT-PCR was performed using the QuantiTect SYBR green PCR kit (Qiagen) and specific primers for the genes encoding the human ZIP and ZnT families (KiCqStart SYBR Green Primers, Sigma) in a Rotorgene 6000 real-time PCR machine (Qiagen) according to the manufacturer's instructions. Primers were annealed at 60°C over 45 cycles. Primers used to amplify the genes encoding antioxidant proteins are listed in Supplementary Material, Table S1. At the end of each qRT-PCR run, melting curve analysis was performed to confirm specific target gene amplification.

Figure 8. Schematic model of zinc dyshomeostasis and abnormal energy metabolism in ATP13A2 deficiency. Loss of ATP13A2 (green ellipse) results in a limited cellular buffering capacity for cytosolic Zn2+ due to the impairment in sequestration of Zn2+ (black circle) into LC3 (red circle) positive vesicles (single- and double-membraned organelles) associated with the ALP. The ensuing zinc dyshomeostasis results in mitochondrial dysfunction (lower ΔΨm and ATP levels). When [Zn2+]i is elevated, cytosolic Zn2+ levels also increase due to inefficient sequestration by LC3-positive vesicles in the setting of ATP13A2 deficiency and instead induce the accumulation of Zn2+ in mitochondria, which increases ROS production. An elevated level of ROS in turn causes mitochondrial damage, worsening mitochondrial dysfunction that subsequently leads to reduced energy production, fragmentation of the mitochondrial network and cellular degeneration due to ATP depletion.
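The qRT-PCR quantification normalizes target-gene signals to ACTB. Relative transcript levels like the percentages quoted in the Results are conventionally derived from Ct values with the 2^-ΔΔCt method; the paper does not spell out its exact quantification formula, so the sketch below is illustrative only and all Ct values in it are hypothetical:

```python
def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Relative mRNA level of a target gene in a sample versus a control
    sample, normalized to a reference gene (e.g. ACTB), via 2^-ddCt."""
    d_ct_sample = ct_gene - ct_ref              # normalize sample to reference
    d_ct_control = ct_gene_ctrl - ct_ref_ctrl   # normalize control to reference
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target crosses threshold one cycle earlier
# in the patient sample than in the control, i.e. ~2-fold expression.
fold = relative_expression(ct_gene=24.0, ct_ref=18.0,
                           ct_gene_ctrl=25.0, ct_ref_ctrl=18.0)
print(f"{fold * 100:.1f}% of control")  # → 200.0% of control
```

This assumes approximately 100% amplification efficiency for both target and reference primers, which is why melting curve and efficiency checks matter in practice.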
Imaging of intracellular free zinc ions

In order to assess Zn2+ levels in the ALP vesicles, hONs cells expressing mRFP-LC3 were grown in µ-Dishes, as mentioned above. On the day of the assay, the cells were treated with 100 nM bafilomycin A1 for 4 h, with 5 µM FluoZin-3 AM added for the final hour. After removing extraneous dye by washing with HBSS, the cells were incubated with 0.75 mM H2O2 for 30 min, followed by confocal microscopy.

Fluorescence was visualized using a Leica SP5 confocal microscope (Leica, Germany). In each experiment, the same parameters were applied to acquire images from all samples. ImageJ software (version 1.43m, National Institutes of Health, Bethesda, MD, USA) was used to analyse the images to determine fluorescence intensity and the co-localization coefficient.
Monitoring of mitochondrial membrane potential (ΔΨm)

hONs cells were seeded in a black 96-well plate at 1 × 10⁴ cells per well and grown for 24 h. For assessment of ΔΨm, the cells were incubated with either dimethyl sulfoxide (DMSO) or 25 µM CCCP for 4 h in serum-free media. After washing with HBSS, the cells were stained with 25 nM TMRM for 15 min in a cell culture incubator before measurement of fluorescence using a Victor 3 V1420 multilabel plate counter (Perkin Elmer, Waltham, MA, USA).
To determine the effect of elevated [Zn2+]i on ΔΨm, hONs cells plated in a 35 mm µ-Dish were incubated with either 0.9 mM H2O2 or 0.9 mM H2O2 with 1 µM TPEN for 5 h in serum-free media. For the last hour, the cells were co-incubated with 500 nM JC-1 (Invitrogen). Fluorescence was visualized using a Leica SP5 confocal microscope (Leica) with constant parameters applied to acquire images from all samples. The area occupied by red-fluorescent mitochondria per cell was calculated in morphologically intact cells using ImageJ software (version 1.43m).
Measurement of ROS production
hONs cells were plated at 5 × 10⁴ cells per well in a black 96-well microplate and grown to confluency. The cells were then stained with 5 µM CM-H2DCFDA (an H2O2 indicator, Invitrogen) or 5 µM MitoSox Red (an O2− indicator, Invitrogen) for 15 min at 37°C. After washing off extraneous dye with HBSS, the cells were treated with increasing doses of ZnCl2 and fluorescence was immediately measured using a Victor 3 V1420 multilabel plate counter (Perkin Elmer), with readings every 5 min for 30 min.
Assessment of ATP production rate

ATP production rate was determined following the previously described protocol (47). Briefly, the cells were harvested by trypsinization before determining the total protein concentration using a BCA protein assay kit (Thermo Scientific, Rockford, IL, USA) according to the manufacturer's instructions. Cells were diluted in a cell suspension buffer [150 mM KCl, 25 mM Tris-HCl pH 7.6, 2 mM EDTA pH 7.4, 10 mM KPO4 pH 7.4, 0.1 mM MgCl2 and 0.1% (w/v) BSA] at 1 mg/ml total protein. ATP synthesis was induced by incubating 250 µl of the cell suspension with 750 µl of substrate buffer (10 mM malate, 10 mM pyruvate, 1 mM ADP, 40 µg/ml digitonin and 0.15 mM adenosine pentaphosphate) for 10 min at 37°C. Following this incubation, the reaction was stopped by the addition of 450 µl of boiling quenching buffer (100 mM Tris-HCl, 4 mM EDTA, pH 7.75) to a 50 µl aliquot of the reaction mixture, which was then incubated for 2 min. The resulting mixture was further diluted 1:10 in quenching buffer, and the quantity of ATP was measured in an FB10 luminometer (Berthold Detection Systems, Germany) using the ATP bioluminescence assay kit (Roche Diagnostics, Switzerland), according to the manufacturer's instructions.
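The dilution chain in this protocol (250 µl of a 1 mg/ml suspension into a 1 ml reaction, a 50 µl aliquot quenched with 450 µl of buffer, then a further 1:10 dilution) determines how a luminometer reading maps back to an ATP production rate. The back-calculation below is a sketch under stated assumptions; the helper name, the normalization to nmol/min/mg protein and the example reading are ours, not the authors':

```python
def atp_production_rate(atp_final_nM, minutes=10.0):
    """Back-calculate ATP production (nmol ATP per min per mg protein)
    from the ATP concentration measured in the final diluted sample."""
    quench_dilution = (50.0 + 450.0) / 50.0   # 50 ul aliquot + 450 ul quench = 10x
    final_dilution = 10.0                     # further 1:10 in quenching buffer
    atp_reaction_nM = atp_final_nM * quench_dilution * final_dilution

    reaction_volume_l = 1.0e-3                # 250 ul cells + 750 ul substrate
    protein_mg = 0.250                        # 250 ul of a 1 mg/ml suspension
    atp_nmol = atp_reaction_nM * reaction_volume_l  # nmol/L * L = nmol
    return atp_nmol / minutes / protein_mg

# Hypothetical reading of 10 nM ATP in the final dilution:
print(atp_production_rate(10.0))
```

With a 10 nM final reading this works out to 0.1 nmol of ATP formed over the 10 min incubation, i.e. 0.4 nmol/min/mg protein, illustrating why the dilution bookkeeping must be tracked carefully.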
Determination of mitochondrial interconnectivity
Mitochondrial network interconnectivity was assessed according to the previously described protocol (28). Briefly, hONs cells grown on coverslips were treated with 100 µM ZnCl2, 10 µM CCCP, or 100 µM ZnCl2 plus 100 µM IBMX for 24 h and then fixed in 4% (w/v) paraformaldehyde. For the ZnCl2 and IBMX co-treatment, the cells were pre-treated with IBMX for 16 h before initiation of co-treatment. After permeabilization with 0.1% (v/v) Triton X-100, mitochondria were labelled with an anti-Grp75 antibody (Abcam, Cambridge, UK) and the Zenon immunolabelling kit (Invitrogen) according to the manufacturer's protocols. Fluorescence signals were assessed by confocal microscopy. ImageJ software (version 1.44) was used to measure the perimeter (Pm) and area (Am) of each mitochondrion. Mitochondrial interconnectivity was determined by calculating the form factor (form factor = Pm²/(4πAm)).
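The form factor defined above can be sanity-checked against simple geometries; a minimal sketch (the helper name is ours):

```python
import math

def form_factor(perimeter, area):
    """Mitochondrial form factor Pm^2 / (4 * pi * Am), as defined in the
    Methods: exactly 1.0 for a circular profile, and larger for elongated
    or branched (more interconnected) mitochondrial profiles."""
    return perimeter ** 2 / (4.0 * math.pi * area)

# A circular profile of radius r has form factor 1 ...
r = 3.0
print(form_factor(2 * math.pi * r, math.pi * r ** 2))

# ... while a thin 10 x 1 rectangle, with more perimeter per unit area,
# scores well above 1.
print(form_factor(2 * (10 + 1), 10 * 1))
```

Because a circle minimizes perimeter for a given area, 1.0 is the lower bound; fragmented, rounded mitochondria therefore pull the average form factor down, matching the interpretation used in the Results.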
Statistical analysis
All experiments were repeated three times in triplicate and the values are expressed as percentage change relative to vehicle-treated control groups unless otherwise stated in the text. All datasets were tested for normality using the Shapiro–Wilk test and analysed for statistical significance using SPSS (version 21, IBM, Armonk, NY, USA). A P-value of <0.05 was considered statistically significant. | 10,179.4 | 2014-01-07T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Mechanical and Microstructural Assessment of Inhomogeneities in Oxide Ceramic Matrix Composites Detected by Air-Coupled Ultrasound Inspection
Ceramic Matrix Composites (CMC) are promising materials for high-temperature applications where damage-tolerant failure behavior is required. Non-destructive testing is essential for process development, monitoring, and quality assessment of CMC parts. Air-coupled ultrasound (ACU) is a fast and cost-efficient tool for the non-destructive inspection of large components with respect to the detection of material inhomogeneities. However, ACU results are usually evaluated visually, and the interpretation of C-scan images is often ambiguous with regard to critical defects and their impact on local material properties. This paper reports on a new approach that links the local acoustic damping of an oxide CMC plate obtained from ACU analysis with subsequent destructive mechanical testing and microstructural analyses. Local damping values of bending bars are extracted from ACU maps and compared with the results of subsequent resonant frequency damping analysis and 3-point bending tests. To support data interpretation, the homogeneous and inhomogeneous CMC areas detected in the ACU map are further analyzed by X-ray computed tomography and scanning electron microscopy. The results provide strong evidence that specific material properties such as Young's modulus are not predictable from ACU damping maps. However, ACU shows a high, beneficial sensitivity for narrow but large-area matrix cracks or delaminations, i.e., local damping is significantly correlated with specific properties such as shear moduli and bending strengths.
Introduction
Oxide Ceramic Matrix Composites (Ox-CMC) with porous matrices have been developed since the mid-1990s and have been commercially available since the early 2000s. Porous Ox-CMC provide a unique combination of properties, e.g., high thermal resistance, damage-tolerant failure behavior, and good corrosion resistance, which makes them promising materials for lightweight, high-temperature structural applications in aerospace, such as ducts, nozzles, or mixers for the exhaust gases of turbine engines [1,2]. State-of-the-art Ox-CMC typically consist of continuous alumina (α-Al2O3, corundum) or aluminosilicate (Al6Si2O13, mullite) fibers and porous matrices of micro- to nanoscaled α-Al2O3, Al6Si2O13, and mixtures thereof. Some Ox-CMC also include additional zirconia (ZrO2) to prevent excessive grain growth and densification of the porous matrices. Typical matrix porosities are in the range of 30 to 50 vol%. With typical fiber volume contents between 30 and 50 vol%, the total porosities of Ox-CMC range between 15 and 35 vol%. The high matrix porosity inevitably results in relatively low strength and toughness. Consequently, this class of materials is commonly referred to as "Weak Matrix Composites". Fracture is substantially governed by microcracking and disintegration of the porous, low-toughness ceramic matrix prior to fiber rupture, which results in the typical 'failure-tolerant' behavior of porous Ox-CMC.

The ultrasound frequency was 200 kHz, and the horizontal and vertical step sizes during measurement were each 0.6 mm. From selected CMC areas appearing 'homogeneous' and 'inhomogeneous' in the amplitude-based C-scan image, 90 × 10 mm² bending bars were cut with a diamond saw. Two principal cutting directions were defined in order to obtain samples with ±45° and 0/90° fiber orientation. Quantitative damping values for each bending bar were extracted from the ACU raw data by a self-developed Python-based script.
Resonant Frequency Damping Analysis (RFDA)
Elastic properties of bending bars were measured by resonant frequency and damping analysis (RFDA Professional, IMCE NV, Genk, Belgium) using a 3 mm metal projectile, a node-distance of 49.9 mm, and an impulse power of 52%. The automatic excitation unit and the microphone were positioned at diagonally opposite corners to test the sample simultaneously in torsional and flexural vibration mode. The measured frequencies were in the range of F (flexural) 1976-2049 Hz and F (torsional) 7513-7883 Hz for the 0/90 • fiber orientation and F (flexural) 1657-1727 Hz and F (torsional) 9183-10,055 Hz for the ±45 • fiber orientation.
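For context, the flexural-mode relation used by impulse excitation instruments to obtain Young's modulus from the fundamental frequency (per ASTM E1876) can be sketched as below. The bar mass and frequency are hypothetical values of roughly the magnitude reported above, and this is a sketch, not the IMCE software's exact implementation.

```python
def youngs_modulus_iet(m, b, L, t, f_f):
    """Young's modulus (Pa) of a rectangular bar from its fundamental
    out-of-plane flexural frequency f_f (Hz), following the ASTM E1876
    impulse-excitation relation E = 0.9465 * (m*f_f^2/b) * (L^3/t^3) * T1.
    m: mass (kg); b: width, L: length, t: thickness (m).
    T1 uses the simplified correction valid for L/t >= 20."""
    T1 = 1.0 + 6.585 * (t / L) ** 2
    return 0.9465 * (m * f_f ** 2 / b) * (L ** 3 / t ** 3) * T1

# Hypothetical bar close to the paper's geometry: 90 x 10 x 3 mm, 5.4 g,
# excited near 2000 Hz (the flexural range reported for 0/90 deg bars)
E = youngs_modulus_iet(m=5.4e-3, b=10e-3, L=90e-3, t=3e-3, f_f=2000)
E_gpa = E / 1e9
```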
X-ray Computed Tomography
Non-destructive analyses of selected bending bars were performed by micro-focus computed tomography (µCT, v|tome|x L, GE Sensing & Inspection Technologies GmbH, Wunstorf, Germany). A small volume of 20 × 10 × 3 mm³ from each specimen's central part (i.e., the estimated loading/failure zone during the bending test) was scanned. The minimum voxel size of the CT scans was 12 µm³.
Mechanical and Microstructural Analysis
Three-point bending tests were performed in a universal testing system (UTS 10, Zwick-Roell, Ulm, Germany) at a support span of 80 mm and a crosshead speed of 2 mm/min. Force was measured with a 2.5 kN A.S.T. KAF-TC load cell. Displacement of the specimen was measured inductively with a Millitron 1310 system at the center of the bending specimen, and additionally at −15 mm and +15 mm from the center, in order to determine the specimen deformation alone, without any deformation of the fixtures or loading system. The bending modulus was determined using a linear fit over a defined stress interval (±45°: 15-35 MPa; 0/90°: 20-50 MPa) in the linear elastic region of the stress-strain curve. Microstructural analyses were performed by scanning electron microscopy (Ultra 55, Carl Zeiss Microscopy, Oberkochen, Germany).
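The chord-modulus fit over a fixed stress interval can be sketched as follows, using the standard 3-point bending relations for outer-fiber stress and strain. The helper names and the synthetic stress-strain data are illustrative only, not the authors' evaluation software.

```python
def flexural_stress(F, L, b, h):
    """Outer-fiber stress (Pa) in 3-point bending: sigma = 3*F*L / (2*b*h^2)."""
    return 3 * F * L / (2 * b * h ** 2)

def flexural_strain(d, L, h):
    """Outer-fiber strain from midspan deflection d: eps = 6*d*h / L^2."""
    return 6 * d * h / L ** 2

def chord_modulus(stresses, strains, lo, hi):
    """Least-squares slope of stress vs. strain restricted to the stress
    window [lo, hi], mirroring the paper's fixed fit interval."""
    pts = [(e, s) for e, s in zip(strains, stresses) if lo <= s <= hi]
    n = len(pts)
    me = sum(e for e, _ in pts) / n
    ms = sum(s for _, s in pts) / n
    return (sum((e - me) * (s - ms) for e, s in pts)
            / sum((e - me) ** 2 for e, _ in pts))

# Synthetic, ideally linear response of a 50 GPa material,
# fitted over the 20-50 MPa window used for 0/90 deg samples
strains = [i * 1e-4 for i in range(1, 21)]
stresses = [50e9 * e for e in strains]
E_bend = chord_modulus(stresses, strains, 20e6, 50e6)
```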
A schematic overview of sample selection and experimental methodology is presented in Figure 1.
Air-Coupled Ultrasound Inspection of the Ox-CMC Plate
The amplitude-based C-scan image depicted in Figure 2a shows the left part of the Ox-CMC plate with a view field of 305 × 305 mm². The local attenuation of the ultrasound signal is given as a heatmap in dB (logarithmic scale). Based on the attenuation level of the signals, a distinction can be made between undisturbed material of good quality, delaminations, and material gaps (e.g., matrix cracks or pore accumulations) [23]. The majority of the C-scan shows an undisturbed, homogeneous image of the material with an average attenuation of the ultrasound signal of about −17 dB and a narrow signal distribution, i.e., low variation in the attenuation. Two distinctly inhomogeneous areas with significant scattering and lower attenuation are visible in the upper left and lower center areas of the C-scan heatmap. A lower attenuation of the ultrasonic signal compared to the undisturbed material is typically linked to material gaps, whereas large-area delaminations are typically characterized by a high attenuation of the ultrasound signal. On the basis of the C-scan map, two sample classes were defined with fiber orientations ±45° and 0/90°, which are commonly employed to assess the fiber- and matrix-dominated failure of CMC, respectively. From all areas of interest, bending bars were cut according to the cutting scheme overlaid in Figure 2b. Samples 1 to 10 of each class are located in the plate area with significantly scattering attenuation. Samples 11 to 15 of each class are located in areas of undisturbed material and act as the benchmark for subsequent analyses.
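The attenuation values in the C-scan follow the usual amplitude-ratio definition of damping in dB; the amplitudes in the sketch below are hypothetical, chosen only to reproduce the order of magnitude of the −17 dB plate average reported above.

```python
import math

def attenuation_db(amplitude, reference):
    """Transmission attenuation in dB relative to a reference amplitude:
    20 * log10(A / A_ref). More negative values mean stronger damping."""
    return 20 * math.log10(amplitude / reference)

# Hypothetical: undisturbed material transmitting ~14% of the reference
# signal gives roughly -17 dB; a delaminated spot transmitting 1% gives -40 dB
good = attenuation_db(0.141, 1.0)
delam = attenuation_db(0.01, 1.0)
```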
Resonant Frequency Damping Analysis
The first characterization method applied to the prepared samples was resonant frequency damping analysis (RFDA), frequently referred to as the impulse excitation technique (IET). The samples were measured in out-of-plane flexural and torsional vibration modes to determine the Young's modulus and G-modulus. The measurement accuracy of the IET device is ±1.0 GPa for the Young's modulus and ±0.3 GPa for the G-modulus for the given sample dimensions. Figure 3 displays data of the ten bending bars cut from inhomogeneous CMC areas (left columns), together with the five reference samples (right columns). Evidently, the inhomogeneous areas show significant variations, in particular in the E- and G-moduli of the ±45° samples (matrix-dominated properties, cf. Table 1). Samples with 0/90° fiber orientation generally show much lower scattering, with the Young's moduli within the measurement accuracy of ±1.0 GPa and a slight difference of 2 GPa in the G-moduli (fiber-dominated properties). Both datasets, in particular the ±45° direction, reflect the visual observation from the ACU heatmap well. Based on the IET results, a preselection was made for subsequent CT examination. From both datasets, two characteristic samples with high and low E- and G-moduli were selected and marked with diamonds (±45°) and squares (0/90°), respectively. The fill color indicates high (green) and low (red) values. For example, sample #1 of the ±45° set shows the highest values and sample #3 of the 0/90° set shows the lowest. All further analyses were performed with special focus on these samples.
X-ray Computed Tomography Analysis
The microstructures of bending bars from homogeneous and inhomogeneous areas according to the ACU inspection were analyzed by µCT. Figure 4 shows the reconstructed 3-dimensional pore volume within specimens with ±45° fiber orientation. Samples #1 and #2 were cut from homogeneous areas, #5 and #6 from inhomogeneous areas according to the ACU image in Figure 2. The pores between the fiber bundles are visible as elongated, tubular cavities throughout the laminate microstructure. Although a few accumulations of pores (dark red) can be detected, no significant difference between the four samples can be observed. The four bending specimens with 0/90° orientation from homogeneous (samples #9 and #10) and inhomogeneous (samples #3 and #4) areas were analyzed accordingly. The results are shown in Figure 5.
Compared to the ±45° results, the calculated pore volume of samples #9 and #10 is noticeably lower than that of the samples from the inhomogeneous area (sample #3), which is consistent with the observations of the ACU inspection. The 3-dimensional pore volume was calculated as the volume of detected pores at a voxel size of 12 µm³ divided by the investigated volume (see Table 2). It is noteworthy that the reconstructed 3-dimensional pore volume is far below the 26.2% porosity measured by the Archimedes method for the investigated CMC plate (cf. Table 2). Due to the voxel size of 12 µm³, smaller pores and especially the microporosity of the matrix phase are not detected by XCT. Comparing the samples from inhomogeneous areas, no significant differences can be identified with regard to the flexural modulus. The reason for this might be that the flexural modulus is mainly influenced by the fiber reinforcement and only to a small extent by the porous matrix and the defects therein [24,25].
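The voxel-based pore volume fraction is a simple ratio of detected pore voxels to the investigated volume; the voxel counts below are hypothetical, chosen for a 20 × 10 × 3 mm³ subvolume at 12 µm voxel edge length.

```python
def pore_volume_fraction(pore_voxels, total_voxels):
    """XCT pore volume fraction in percent: detected pore voxels over the
    investigated volume. Pores smaller than the voxel size are invisible,
    so this systematically underestimates the true (Archimedes) porosity."""
    return 100.0 * pore_voxels / total_voxels

# Hypothetical counts: a 20 x 10 x 3 mm^3 subvolume at (12 um)^3 voxels
# contains ~3.47e8 voxels; assume 3.2e6 of them were classified as pore
frac = pore_volume_fraction(pore_voxels=3.2e6, total_voxels=3.47e8)
```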
Data Extraction and Analyses
The employed ACU system yields 2D distribution maps of the acoustic damping of the tested material as heatmap plots. Low-scattering damping is considered representative of 'good', 'strong', 'homogeneous' material. In the present case, 'good' material shows damping values of approximately −17 dB, see Figure 7b,d. For deeper analyses and understanding, however, meaningful damping values representing local effects are mandatory. Therefore, a routine was developed to extract comparable damping values for individual samples, i.e., bending bars. For this purpose, a Python-based script was developed to calculate line data from the corresponding map areas. The concept is illustrated in Figure 7: for each bending bar, five parallel linescans were extracted (see thin grey lines and inset) and averaged (thick lines). In order to obtain a single damping value for each bending bar, the damping data was averaged over the entire sample length (dashed lines). Figure 7a,b shows a comparison between 'inhomogeneous' sample #6 and 'homogeneous' sample #3 selected from the ±45° set. An averaged damping of −13.04 dB was calculated for sample #6, i.e., about 23% less than for the 'homogeneous' sample #3. Similar values were extracted for bending bars #3 and #10 representing the 0/90° fiber orientation (see Figure 7c,d).
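A minimal sketch of such a linescan-averaging routine (not the authors' actual script) might look like this; the toy map and bar position are invented for illustration.

```python
def sample_damping(dmap, row0, col0, length, n_lines=5):
    """Average damping value for one bending bar cut from an ACU map.
    `dmap` is the 2D attenuation map (dB, on the 0.6 mm measurement grid);
    the bar covers `n_lines` adjacent rows starting at row0 and `length`
    columns starting at col0. Parallel linescans are averaged point-wise,
    then the mean over the bar length gives one scalar per bar."""
    linescan = [
        sum(dmap[row0 + r][col0 + c] for r in range(n_lines)) / n_lines
        for c in range(length)
    ]
    return sum(linescan) / length

# Toy 6 x 8 map: a homogeneous -17 dB plate with one weaker -13 dB column
dmap = [[-17.0] * 8 for _ in range(6)]
for r in range(6):
    dmap[r][4] = -13.0
bar_avg = sample_damping(dmap, row0=0, col0=0, length=8)
```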
The objective was to identify quantifiable correlations between ACU results and CMC properties in order to obtain localized material-quality data at high 2D resolution. As a first step, the acquired data was analyzed for 'homogeneous' and 'inhomogeneous' regions at 'bending bar resolution'. Assuming that homogeneous regions in the ACU map are representative of good mechanical properties, the measured values and extracted damping data of the five reference samples for each of the ±45° and 0/90° fiber orientations were averaged and act as reference data. Data obtained from samples located in inhomogeneous areas are treated individually.
All values were normalized and plotted against their corresponding reference values. As a hypothesis, a simple linear correlation between elastic/mechanical properties and ACU damping was assumed, and the datapoints were fitted accordingly. In the following viewgraphs, the specific samples with low and high values as detected by RFDA (and also analyzed by XCT) are highlighted by green or red filled marks, respectively (note that each sample is marked by distinct symbols/indexes). First, the relative elastic and shear moduli derived from the RFDA measurements are plotted against their relative damping. Evidently, both the ±45° and 0/90° E-moduli derived from RFDA show little to no correlation with damping, as indicated by the very low slopes of both linear fits (Figure 8a). On the other hand, irrespective of fiber orientation, the G-moduli show a significant linear correlation with damping, more pronounced in the case of the ±45° fiber orientation (Figure 8b). With respect to bending strength, there is a linear correlation with damping in both fiber orientations, and the fit curves also exhibit similar slopes (Figure 9a). With respect to strain at flexural strength, there is some ambiguity: while the 0/90° fiber orientation shows clearly distinguishable data following a linear trend, the datapoints of the ±45° orientation can be fitted linearly only with a relatively low slope. In order to assess the quality of all linear fits, the R² parameters along with an ANOVA-type analysis (confidence level > 95%) were calculated and compiled in Table 3. It turns out that, besides both E-moduli linear fits, the strain-at-flexural-strength linear fit for ±45° is also not significant, i.e., does not exhibit a slope sufficiently different from zero. The strongest linear correlation is found between sample damping and shear moduli, whereas bending strengths show significant but generally weaker correlations.
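The slope, R², and ANOVA-type assessment described above amount to a standard regression F-test for a single predictor (F = (n−2)·R²/(1−R²)); a self-contained sketch with hypothetical normalized damping/modulus pairs follows. The critical value 5.32 ≈ F(0.95; 1, 8) matches the ten invented datapoints, not the paper's sample count.

```python
def linear_fit_stats(xs, ys):
    """Least-squares slope, intercept, R^2 and the regression F statistic
    (F = (n-2) * R^2 / (1 - R^2) for one predictor). The slope differs
    significantly from zero at 95% confidence when F exceeds the
    critical value F(0.95; 1, n-2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    r2 = sxy ** 2 / (sxx * syy)
    f_stat = (n - 2) * r2 / (1 - r2)
    return slope, my - slope * mx, r2, f_stat

# Hypothetical normalized (damping, shear modulus) pairs for 10 bars
damping = [0.75, 0.80, 0.85, 0.88, 0.90, 0.93, 0.95, 0.97, 1.00, 1.02]
g_mod = [0.70, 0.78, 0.82, 0.87, 0.91, 0.92, 0.96, 0.95, 1.00, 1.03]
slope, icpt, r2, f = linear_fit_stats(damping, g_mod)
significant = f > 5.32  # F(0.95; 1, 8), for these 10 points
```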
It must be emphasized that 'strong' as well as 'weak' samples indexed by green/red filled symbols follow the trends quite closely. Moreover, clustering into 'good' and 'bad' samples is seemingly possible, independent of the statistical evidence of the linear approximations.
Supporting Microstructural Analysis
The observed inhomogeneities in the ACU maps are evidently correlated with local CMC material properties. The 2D resolution of ACU, however, is low, and the microstructural features responsible for the damping effects are presumably not detectable. The preceding XCT analyses suggest that the relevant defects require high-resolution imaging, which only electron microscopy provides. In order to distinguish 'strong' and 'weak' samples, two cross sections were prepared for each fiber orientation, with the observation plane perpendicular to the long sample axis. The SEM images in Figure 10 show that the stronger samples #1 and #9 (upper row) exhibit far fewer small cracks than the 'weak' samples #5 and #3 in the lower row. Cracks are mainly aligned horizontally and are located at the boundaries of fiber laminates or in matrix-rich regions. From this, it can be concluded that the main damage occurring in the investigated CMC plate is horizontal microcracking.
General View on the Employed Methodology
A synopsis of the analyses reveals both significant and weak correlations between specific testing methods and material properties. XCT and the derived porosities did not reflect mechanical properties, i.e., higher measured porosity was not associated with lower flexural strength. ACU damping correlated well with RFDA analyses in the case of the Young's and shear moduli for ±45°, i.e., matrix-dominated sample directions. On the other hand, in the fiber-dominated 0/90° direction, only a correlation between RFDA shear moduli and ACU damping could be found. Between ACU and 3-point bending tests, several significant correlations could be found: in particular, flexural strengths were closely linked to the measured local ACU damping. On the other hand, bending moduli could not be linked to local ACU damping. Combining these findings with the SEM analyses of cross-sections, it can be stated that ACU mapping is highly sensitive to locally reduced flexural strengths and shear moduli of the investigated Ox-CMC samples caused by large-area (mm²) but thin (sub-µm) laminar matrix cracks. A great advantage of ACU could be the non-destructive detection of such flaws in large-scale specimens.
Conclusions
Air-coupled ultrasound mapping can be used to detect inhomogeneities in large Ox-CMC plates which are not easily detected by other NDT methods such as standard resolution XCT. Data retrieved from an Ox-CMC plate revealed significant correlations between local signal damping and specific mechanical properties. It appears that ACU is very sensitive to sub-micron horizontal microcracks and associated interlaminar properties, in particular shear moduli measured by resonant frequency damping analysis.
As the present findings were obtained from a single Ox-CMC plate, i.e., a very limited sample count, further investigations are required to strengthen our database. In particular, similar experiments regarding interlaminar shear strengths will be helpful for validating the concept. However, some uncertainty will inevitably remain, as samples cut from ACU regions of interest can only be destructively tested once, i.e., either "in-plane" or "out-of-plane". As the present study was performed on a planar Ox-CMC fabricated from woven fiber fabrics, it remains an open question whether other processing methods and/or more complex geometries, such as filament-wound Ox-CMC tubes, can be assessed in a similar manner. In particular, the determination of site-specific mechanical properties from non-planar samples will presumably be challenging. In any case, the results obtained in this study point to a promising pathway for developing relatively straightforward, cost- and time-efficient quality assurance methods for Ox-CMC components. There is also the prospect of using ACU maps and corresponding mechanical properties as training datasets for the automated quality assessment of CMC parts, as the results indicate a clear separation into 'weak' and 'strong' samples, which can be very helpful for data clustering. | 6,322.4 | 2021-10-23T00:00:00.000 | [
"Materials Science"
] |
An uncertain model-based approach for identifying dynamic protein complexes in uncertain protein-protein interaction networks
Background Recently, researchers have tried to integrate various kinds of dynamic information with static protein-protein interaction (PPI) networks to construct dynamic PPI networks. The shift from static to dynamic PPI networks is essential for revealing cellular function and organization. However, it is still impossible to construct absolutely reliable dynamic PPI networks due to the noise and incompleteness of high-throughput experimental data. Results To deal with uncertain data, several uncertain graph models and theories have been proposed for analyzing social networks, electrical networks and biological networks. In this paper, we construct dynamic uncertain PPI networks that integrate the dynamic information of gene expression with the topology information of high-throughput PPI data. Dynamic uncertain PPI networks not only provide the dynamic properties of PPIs, which are neglected by static PPI networks, but also distinguish the reliability of each protein and PPI by an existence probability. We then use the uncertain model to identify dynamic protein complexes in the dynamic uncertain PPI networks. Conclusion We use gene expression data and different high-throughput PPI data to construct three dynamic uncertain PPI networks. Our approach achieves state-of-the-art performance on all three dynamic uncertain PPI networks. The experimental results show that our approach can effectively deal with the uncertain data in dynamic uncertain PPI networks and improve the performance of protein complex identification. Electronic supplementary material The online version of this article (10.1186/s12864-017-4131-6) contains supplementary material, which is available to authorized users.
Background
Over the past decade, yeast two-hybrid, mass spectrometry and other high-throughput experimental techniques have generated a mass of protein-protein interaction (PPI) data. These PPI data have been used to construct large-scale PPI networks for many organisms. Great efforts have been made to understand the organizational principles underlying PPI networks, and many cellular principles have been uncovered by analysis of these networks, such as scale-free topology [1], disassortativeness [2] and modularity [3].
A protein complex consists of a group of proteins and multiple PPIs at the same time and place, forming a single multi-molecular machine [4]. Since most proteins are only functional after assembly into protein complexes, protein complexes are critical in many biological processes [5]. Over the past decade, great effort has been made to detect complexes on PPI networks. The Molecular Complex Detection (MCODE) algorithm proposed by Bader and Hogue was the first to exploit computational methods to identify complexes based on PPI networks [6]. Markov Clustering (MCL) [7] uses random walks to identify complexes on PPI networks. Liu et al. [8] propose Clustering based on Maximal Cliques (CMC) to predict complexes from large PPI networks. Based on the core-attachment structural feature [9], Leung et al. [10] propose the CORE algorithm to identify protein-complex cores by calculating the p-values for all pairs of proteins. Similarly, Wu et al. [11] present the COACH algorithm, which detects the core structure and the attachments of a complex separately. Nepusz et al. [12] propose the ClusterONE algorithm, which effectively improves the performance of identifying overlapping complexes. Zhang et al. [13] propose the CSO algorithm to predict complexes by integrating GO data and PPI networks.
A protein complex is formed by a group of proteins at the same time, which interact with each other through their associated polypeptide chains. However, modeling biological systems as static PPI networks loses the temporal information. It is necessary to construct dynamic PPI networks both for identifying protein complexes and for further understanding molecular systems. Since gene expression data is helpful for analyzing the temporal information of proteins, some studies [14][15][16][17][18] have used gene expression data to construct dynamic PPI networks and reveal the dynamic character of PPI networks. For example, Faisal et al. [14] predict human aging-related genes by integrating aging-related gene expression data with human PPI data. Wang et al. [15] construct dynamic PPI networks and detect complexes by exploiting gene expression data and PPI data.
Another issue in complex identification is that PPI networks contain considerable noise, including false positives and false negatives [16]. Some methods have been proposed to improve the reliability of PPI networks [17]. Using an uncertain graph model to deal with such PPI networks is more reasonable than a traditional graph model. Uncertain models have been applied to analyze social networks, electrical networks and biological networks. Recently, Zhao et al. [18] used an uncertain model to detect protein complexes in static PPI networks. Nonetheless, few studies apply uncertain models to analyze dynamic PPI networks.
In this study, we first construct dynamic uncertain PPI networks (DUPN) by integrating gene expression and PPI data. The active time points and the existence probability of each protein are calculated based on gene expression data. The existence probability of each PPI is calculated based on the topological properties of the high-throughput PPI data. We then use an uncertain graph model to identify the protein complexes in the DUPN, and propose a clustering algorithm named CDUN. Finally, we evaluate our method on different datasets and the experimental results show that our method achieves the state-of-the-art performance for complex identification.
Methods
In this section, we introduce how to integrate the gene expression data with the PPI data to construct the DUPN, and then describe the clustering algorithm CDUN for identifying protein complexes based on the DUPN in detail.
Active time points and probability of proteins
In a living cell, proteins and PPIs are not static but change over time [19]. Gene expression is useful for analyzing the temporal information of proteins. In recent years, some studies [15,20,21] have used gene expression data to calculate the active time points of proteins.
The gene expression data consist of n time point profiles. Let G_i(p) denote the expression value of gene p at time point i. Let α(p) and σ(p) be the arithmetic mean and the standard deviation (SD) of G_i(p), respectively.
In this study, we use Eqs. (3) and (4) to calculate the protein active probability at the different time points.
We use Eq. (3) to calculate the k-sigma (k = 1, 2, 3) threshold for gene p. Ge_thresh_k is determined by the values of α(p), σ²(p) and k (the multiple of sigma). If σ²(p) is very low, it indicates that the fluctuation of the expression curve of gene p is very small and the value of G_i(p) tends to be very close to α(p); in this case, the value of Ge_thresh_k is close to α(p). If σ²(p) is very high, it indicates much noise in the gene expression data of gene p; in this case, the value of Ge_thresh_k is close to α(p) + k·σ(p). In Eq. (3), the range of k (the multiple of sigma) is (0, 3), and 3 is the maximum multiple of sigma. The larger k is, the higher Ge_thresh_k gets. A higher value of Ge_thresh_k indicates a stricter rule for identifying the active time points of a protein [20].
We use Eq. (4) to calculate the active probability of a protein at time point i. The protein active probability thus takes four levels (0.99, 0.95, 0.68 and 0) based on the sigma rules (P{|X−α| < σ} ≈ 0.6827, P{|X−α| < 2σ} ≈ 0.9545 and P{|X−α| < 3σ} ≈ 0.9973) [15,20].

Construction of DUPN

Figure 1 shows an illustrative example of the DUPN construction. Firstly, we use the PPI data to construct the static PPI network in Fig. 1a. Secondly, we use gene expression data to calculate the active time points and the active probability of each protein in Fig. 1b. In this study, the active probability takes only three non-zero values, P1 = 0.99, P2 = 0.95 and P3 = 0.68, based on Eq. (4). Although a PPI implies physical contact between two proteins, it does not mean that the interaction occurs in a cell at all times [22]. The real PPI networks change during the lifetime of a cell, because the active time points of proteins differ. Thirdly, we can split the static PPI network into a series of PPI subnetworks based on the dynamic information of the proteins in Fig. 1c. These PPI subnetworks, associated with the different active time points, constitute a dynamic PPI network. All proteins in the PPI subnetwork Ti are active with an active probability at the Ti time point. Finally, we assign an uncertain value to each protein and PPI in the dynamic PPI networks to construct the DUPN in Fig. 1d. In this way, we can distinguish the uncertainty level of both proteins and PPIs in the DUPN. The existence probability of each protein is the active probability calculated by Eq. (4). Zhao et al. [18] proposed a method to calculate the existence probability of a PPI based on the topology structure of the PPI networks. In this study, we use the same method to calculate the existence probability of each PPI in Fig. 1d based on the topology structure of the PPI subnetworks in Fig. 1c. The existence probability between two proteins v_j and v_k is defined as in [18], where N_j and N_k are the sets consisting of all neighbors of v_j and v_k at the Ti time point in Fig. 1c, respectively. Our method of constructing the DUPN differs from the work [18]: in the DUPN, we assign an uncertain value to each protein and each PPI, which distinguishes the uncertainty level of every protein and PPI in the dynamic PPI networks.
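As a concrete illustration of the protein side of this construction, the k-sigma thresholds and the resulting active probabilities can be sketched in a few lines. Since Eq. (3) is not reproduced above, the threshold formula below is an assumption reconstructed from the limiting behaviour described in the text (thresholds interpolate between α(p) and α(p) + k·σ(p) according to the variance), and the function name is hypothetical:

```python
import numpy as np

def active_probability(expr):
    """Assign an active probability (0.99, 0.95, 0.68 or 0) to each time
    point of one gene's expression profile via the k-sigma rule.
    NOTE: the threshold formula is an assumed reconstruction of Eq. (3)
    from its described limiting behaviour, not the paper's exact form."""
    alpha = expr.mean()                    # arithmetic mean of G_i(p)
    sigma = expr.std()                     # standard deviation of G_i(p)
    weight = sigma**2 / (1.0 + sigma**2)   # ~0 for flat curves, ~1 for noisy ones
    thresh = {k: alpha + k * sigma * weight for k in (1, 2, 3)}
    probs = np.zeros_like(expr, dtype=float)
    probs[expr >= thresh[1]] = 0.68        # active at the 1-sigma level
    probs[expr >= thresh[2]] = 0.95        # active at the 2-sigma level
    probs[expr >= thresh[3]] = 0.99        # active at the 3-sigma level
    return probs
```

A gene that is flat except for one strong peak is then active only at the peak's time point, mirroring the per-time-point assignment sketched in Fig. 1b.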
Uncertain graph model
A static PPI network can generally be modeled as an undirected graph G(V, E), where V is the set of proteins and E is the set of PPIs. An uncertain PPI network at the T_i time point is modeled as UG_Ti = (V_Ti, E_Ti, P_V^Ti, P_E^Ti), where P_V^Ti : V_Ti → [0,1] is the function that assigns a probability of existence to each protein and P_E^Ti : E_Ti → [0,1] is the function that assigns a probability of existence to each PPI at the T_i time point.
To deal with uncertain data, some uncertain graph models and theories [18,23,24] have been proposed to analyze social networks, electrical networks, biological networks and so on. In this study, we assume the existence probabilities of proteins and PPIs are independent. Let G'_j be a possible instantiation of the uncertain PPI network UG_Ti; an instantiation is a deterministic network with an observing probability. We denote the relationship between G'_j and UG_Ti as UG_Ti ⇒ G'_j. The probability Pr(G'_j) is given by Eq. (6), which defines a probability distribution over all instantiations of the uncertain PPI network UG_Ti at the T_i time point. Based on Eq. (6), an uncertain PPI network UG_Ti gives rise to n instantiations {G'_1, G'_2, …, G'_n}. In an uncertain PPI network, identifying protein complexes has to take into account all possible instantiations {G'_1, G'_2, …, G'_n} together with the probabilities defined in Eq. (6).
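Under the stated independence assumption, the probability of one instantiation is the product of p for every protein and PPI that is present and (1 − p) for every one that is absent, which is presumably the content of Eq. (6). A minimal sketch (hypothetical helper names; it deliberately ignores the constraint that a realized PPI requires both endpoint proteins to be present):

```python
def instantiation_probability(present_v, present_e, p_v, p_e):
    """Pr(G'_j) for one possible world of an uncertain PPI network,
    assuming protein and PPI existence events are mutually independent.
    present_v / present_e: sets of proteins / PPIs realized in G'_j.
    p_v / p_e: dicts mapping proteins / PPIs to existence probabilities."""
    pr = 1.0
    for v, p in p_v.items():
        pr *= p if v in present_v else (1.0 - p)
    for e, p in p_e.items():
        pr *= p if e in present_e else (1.0 - p)
    return pr
```

Summed over all 2^(|V|+|E|) possible worlds these probabilities add to one, which is what makes Eq. (6) a genuine distribution over instantiations.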
Let Pr(G'_j) be the probability associated with instantiation G'_j ∈ PG_Ti. Given a set of protein vertices V_S ⊆ V_Ti in UG_Ti, the expected density of V_S is defined by Eq. (7), where h_j is the number of PPIs among the proteins of V_S in the instantiation G'_j. Given a set of protein vertices V_S ⊂ V_Ti and a protein vertex v_a ∈ V_Ti with v_a ∉ V_S, the attached score between v_a and V_S in UG_Ti is given by Eq. (8), where m_j is the number of PPIs between v_a and V_S in the instantiation G'_j. Under the uncertain graph model, an uncertain PPI network can generate a very large number of different possible instantiations, so the computational complexity of evaluating Eqs. (7) and (8) directly is very high. Based on the studies [18,24], Eqs. (7) and (8) can be efficiently calculated by Eqs. (9) and (10), respectively.
Thus, based on the uncertain graph model, we can use Eqs. (9) and (10) to efficiently calculate the expected density and the attached score, respectively, for protein complex identification in an uncertain PPI network.
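Eqs. (9) and (10) themselves are not reproduced above, but under the independence assumption the expectations collapse to per-edge products. The sketch below is therefore an assumed form, not the paper's exact one: the expected number of PPIs inside V_S over the maximum possible for the density, and the expected number of PPIs between v_a and V_S over |V_S| for the attached score; the normalisations may differ from the paper's.

```python
from itertools import combinations

def expected_density(nodes, p_v, p_e):
    """Assumed form of Eq. (9): expected number of PPIs among `nodes`
    divided by the maximum possible number of PPIs.
    Edge keys in p_e are sorted 2-tuples of protein names."""
    nodes = sorted(nodes)
    exp_edges = sum(p_v[u] * p_v[v] * p_e.get((u, v), 0.0)
                    for u, v in combinations(nodes, 2))
    max_edges = len(nodes) * (len(nodes) - 1) / 2
    return exp_edges / max_edges if max_edges else 0.0

def attached_score(v_a, core, p_v, p_e):
    """Assumed form of Eq. (10): expected number of PPIs between the
    candidate attachment v_a and the core, normalised by the core size."""
    exp_edges = sum(p_v[v_a] * p_v[u] * p_e.get(tuple(sorted((v_a, u))), 0.0)
                    for u in core)
    return exp_edges / len(core)
```

On a fully reliable triangle both quantities reduce to the usual deterministic density and degree-based attachment score, which is a quick sanity check of the closed forms.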
The CDUN algorithm
Some studies have revealed the core-attachment organization of complexes [25]. A protein complex generally consists of a core structure and some attachment proteins. In the core structure, the proteins share high functional similarity and are highly co-expressed [9]. The attachment proteins assist the core proteins in performing subordinate functions. Based on the core-attachment structure of protein complexes, the CDUN algorithm identifies protein complexes from all the uncertain PPI networks of a DUPN in turn. Algorithm 1 shows the pseudo-code of the CDUN algorithm.
The CDUN algorithm consists of two phases. CDUN first detects candidate protein complexes from all UG_Ti ∈ DG in turn at lines 1-31. The candidate complexes are added into the Candidate_complex set. Then, CDUN removes the highly overlapped protein complexes from Candidate_complex at lines 32-44, based on their ED values.
In the first phase, CDUN first calculates the expected density of all edges in UG_Ti based on Eq. (9) at lines 4-5.
ED({u,v}, UG_Ti) denotes the expected density of the edge between u and v. An edge is added into Seed_set if its expected density is not less than Core_thresh, a predefined threshold parameter. The effect of Core_thresh is discussed in the "The effect of Core_thresh" section. The average expected density of all edges is calculated at line 10. Secondly, CDUN augments each seed to generate a core structure at lines 11-20. If the ED value of the augmented core structure is not less than Core_thresh, CDUN adds the neighbor protein p into the core structure at lines 25-28. We use the same parameter (Core_thresh) in lines 7 and 16 to ensure that the expected density of both the seeds and the core structures is not less than Core_thresh. Finally, CDUN detects the attachment proteins for each core structure based on the AS score calculated by Eq. (10), and adds the attachment proteins to each core structure to form the candidate complex set Candidate_all at lines 22-30.
The candidate protein complexes in Candidate_all are identified from all UG_Ti ∈ DG and generally overlap with each other. In the second phase, CDUN calculates the ED values of all candidate protein complexes at lines 32-34. We rank the candidate complexes in descending order of ED value (Candidate_list = (cc_1, cc_2, …, cc_n)) at line 35. The candidate complex with the highest ED value in Candidate_list is removed from Candidate_list and added into Complex_set. CDUN checks the overlap degree between each cc_i ∈ Candidate_list and cc_1, and removes cc_i from Candidate_list at lines 39-42 if the overlap degree is larger than Overlap_thresh. In our experiments, we set Overlap_thresh to 2/3. The above steps are repeated until Candidate_list is empty and the final complex set Complex_set is generated.
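The two phases described above can be condensed into a runnable sketch. This is a deliberately simplified, hypothetical rendering: it operates on a single uncertain subnetwork, omits the attachment step and the average-density computation, uses an assumed expected-density form (expected PPIs over maximum possible), and uses an NA-style overlap score for the Overlap_thresh test, which may differ from the paper's exact measure.

```python
from itertools import combinations

def _ed(nodes, p_v, p_e):
    # Assumed expected density: expected PPIs over maximum possible PPIs.
    nodes = sorted(nodes)
    s = sum(p_v[u] * p_v[v] * p_e.get((u, v), 0.0)
            for u, v in combinations(nodes, 2))
    m = len(nodes) * (len(nodes) - 1) / 2
    return s / m if m else 0.0

def cdun_sketch(adj, p_v, p_e, core_thresh=0.4, overlap_thresh=2 / 3):
    """Simplified CDUN on one uncertain subnetwork.
    adj: dict protein -> set of neighbor proteins.
    p_v / p_e: existence probabilities (edge keys are sorted tuples)."""
    # Phase 1a: every edge whose expected density reaches Core_thresh seeds a core.
    seeds = [e for e in p_e if _ed(set(e), p_v, p_e) >= core_thresh]
    cores = set()
    for seed in seeds:
        core = set(seed)
        grew = True
        while grew:                      # Phase 1b: greedy core growth
            grew = False
            for cand in set().union(*(adj[v] for v in core)) - core:
                if _ed(core | {cand}, p_v, p_e) >= core_thresh:
                    core.add(cand)
                    grew = True
        cores.add(frozenset(core))
    # Phase 2: rank candidates by ED and drop highly overlapped ones.
    ranked = sorted(cores, key=lambda c: -_ed(c, p_v, p_e))
    final = []
    for c in ranked:
        if all(len(c & f) ** 2 / (len(c) * len(f)) <= overlap_thresh for f in final):
            final.append(c)
    return final
```

On two disjoint triangles with all probabilities equal to 1, the sketch returns exactly the two triangles as complexes, which matches the intuition that dense reliable modules survive both phases.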
Datasets
The PPI datasets used in our experiments are the DIP [26], MIPS [27] and STRING [28] datasets. The PPI data of the STRING dataset come from biomedical literature, high-throughput data, genomic context data and co-expression data. Table 1 lists the statistics of the datasets used in our experiments.
We download the gene expression data GSE3431 [29] from the Gene Expression Omnibus, which involves 36 different time intervals. GSE3431 consists of 3 cycles and each cycle spans 12 time intervals. We calculate the average value at the 12 active time points for each gene. To evaluate the protein complexes identified by our method, we use CYC2008 [30] as the gold standard. The CYC2008 benchmark consists of 408 protein complexes, including some complexes of size 2. In some cases, it is hard to evaluate the performance of the methods using the complexes of size 2. Therefore, we use the 236 complexes of size greater than 2 in CYC2008 to evaluate the complexes identified in the experiments.
Evaluation metrics
Overall, most complex identification methods use two types of evaluation metrics to evaluate the performance of complex prediction [19]. One type comprises precision, recall and F-score. The other comprises sensitivity (Sn), positive predictive value (PPV) and accuracy.
Let P(V_P, E_P) be an identified complex and B(V_B, E_B) be a known complex. The neighborhood affinity score NA(P,B) between P(V_P, E_P) and B(V_B, E_B) is defined as NA(P,B) = |V_P ∩ V_B|² / (|V_P| × |V_B|). In most studies of complex prediction, P(V_P, E_P) is considered to match B(V_B, E_B) if NA(P,B) is larger than 0.2 [16]. In our experiments, we use the same threshold for NA(P,B).
Precision, recall and F-score are used to evaluate our experimental results. N_ci denotes the number of identified complexes that match at least one known complex, and N_cb denotes the number of known complexes matched by at least one identified complex; precision = N_ci/|Identified_Set|, recall = N_cb/|Benchmark_Set|, and F-score is the harmonic mean of the two. Identified_Set and Benchmark_Set denote the set of complexes identified by our method and the gold standard dataset, respectively. In addition, we also report Sn, PPV and accuracy in our experiments. The definitions of Sn, PPV and accuracy are described in the study [16].
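These definitions translate directly into code. The neighborhood affinity below uses the standard form NA(P,B) = |V_P ∩ V_B|² / (|V_P|·|V_B|); the helper names are hypothetical, but the counting of N_ci and N_cb follows the text:

```python
def na_score(p, b):
    """Neighborhood affinity between an identified complex p and a known
    complex b, both given as sets of proteins."""
    inter = len(p & b)
    return inter * inter / (len(p) * len(b))

def precision_recall_fscore(identified, benchmark, thresh=0.2):
    """N_ci: identified complexes matching some known complex; N_cb: known
    complexes matched by some identified complex (match: NA > thresh)."""
    n_ci = sum(any(na_score(p, b) > thresh for b in benchmark) for p in identified)
    n_cb = sum(any(na_score(p, b) > thresh for p in identified) for b in benchmark)
    precision = n_ci / len(identified)
    recall = n_cb / len(benchmark)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

Note that precision and recall count different things (identified complexes versus known complexes), so one prediction can match several references and vice versa.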
The effect of Core_thresh
In this experiment, we evaluate the effect of the threshold parameter Core_thresh on the performance of CDUN.
The Core_thresh determines not only the number of seeds in Seed_set, but also the expected density of the core structures generated from the seeds. We use DUPN_DIP to evaluate the effect of Core_thresh. Table 2 shows the detailed experimental results for Core_thresh ranging from 0 to 1. It can be seen that as Core_thresh increases from 0 to 1, the number of complexes identified by our method decreases steadily. When Core_thresh = 0, CDUN identifies 763 protein complexes on DUPN_DIP, indicating that too many seeds are generated because the value of Core_thresh is too small. When Core_thresh = 1.0, CDUN cannot identify any complexes on DUPN_DIP, indicating that no seeds can be generated because the value of Core_thresh is too large. Overall, with the increase of Core_thresh, the precision and PPV increase, while the recall, Sn and accuracy decrease. The F-score of CDUN ranges from 0.246 to 0.575. When Core_thresh is set to 0.4, the major metric, F-score, achieves its highest value of 0.575.
The '#Complexes' refers to the number of identified complexes with different Core_thresh. The highest value in each row is in bold
Comparison with other methods
We then compare CDUN with other complex identification methods: CSO [13], ClusterONE [12], COAN [17], CMC [8], COACH [11], HUNTER [31], MCODE [6], Transitivity Clustering (TransClust) [32] and Spectral Clustering (SpecClust) [33]. We test these methods on the three static PPI networks DIP, MIPS and STRING, respectively, and choose the optimal parameters. CDUN is run on DUPN_DIP, DUPN_MIPS and DUPN_STRING, respectively. Table 3 lists the comparison results using CYC2008 as the benchmark. Firstly, we use the DIP dataset to compare the performance of the complex detection methods. From Table 3, it can be seen that CDUN, CSO and COAN achieve F-scores of 0.575, 0.553 and 0.486, respectively, which significantly outperform the other methods. Both CSO and COAN exploit GO data, which contain much valuable information related to protein complexes curated by experts. However, CDUN achieves the highest F-score of 0.575 without integrating GO annotation data. HUNTER achieves the highest precision of 0.852. TransClust achieves the highest recall of 0.674, Sn of 0.622, PPV of 0.725 and accuracy of 0.672. But the precision of TransClust is only 0.13, which leads to a low F-score of 0.218.
Secondly, we use the MIPS dataset to compare these methods. On the MIPS dataset, CDUN achieves the highest F-score of 0.377, which is superior to the other methods. HUNTER achieves the highest precision of 0.538. TransClust achieves the highest recall of 0.623, Sn of 0.544, PPV of 0.71 and accuracy of 0.621.
Thirdly, we use the STRING dataset to compare these methods. The STRING dataset is much larger than the other two datasets, which makes protein complex identification more difficult on STRING than on the other two. From Table 3, we can see that CDUN performs well on the STRING dataset, while some methods degrade considerably there. For instance, ClusterONE achieves a very low F-score of 0.188 on the STRING dataset, much lower than on the other datasets. This is mainly because the STRING PPI network is much more complex than the PPI networks constructed from the other datasets. In addition, the STRING dataset integrates PPIs not only from high-throughput experiments, but also from biomedical literature, co-expression data and genomic context data. These multiple data sources generally introduce more noise into the STRING dataset, and this noise also affects the performance of protein complex identification methods. Compared with the other methods, CDUN integrates gene expression data and the STRING dataset to construct DUPN_STRING, which consists of 12 uncertain PPI subnetworks, {UG_T1, UG_T2, …, UG_T12}. Then, CDUN identifies the complexes from these uncertain PPI subnetworks. Eventually, CDUN achieves a high F-score of 0.537 on the STRING dataset. We also note that CDUN does not achieve high recall and accuracy in some cases. For instance, CDUN only achieves accuracy of 0.526 and 0.387 on the DIP and MIPS datasets, respectively. In future work, we will try to further improve the recall and accuracy of our method.
In addition, we compare CDUN with DCU [21] on the DIP dataset. In the study [21], the DCU method was evaluated using all 408 complexes in CYC2008. Therefore, we also compare CDUN with DCU using all 408 complexes of CYC2008. The comparison results are listed in Table 4. It can be seen that CDUN achieves higher precision and F-score than DCU on the DIP dataset.
The significance of the identified complexes
In this experiment, we use GO data to evaluate the biological significance of the identified complexes. The GO classifies gene product functions along biological process, molecular function and cellular component. SGD's GO::TermFinder [34] is used to calculate the p-value of an identified complex with respect to GO data in our experiment. If the p-value is less than 0.01, we consider the identified complex to be statistically significant. In Table 5, we report the proportion of identified protein complexes with p-value less than 0.01 on the three PPI datasets.
A study of Cdc28-cyclin complexes identified by CDUN

Our method can identify many protein complexes, as well as their active time points. Cellular systems are highly dynamic and responsive to cues from the environment. These dynamic complex results are more valuable for revealing cellular function and organization than static complex results. In Fig. 2, we present an example to illustrate this.
The PPI network comprising the 10 proteins is extracted from the MIPS dataset in Fig. 2. The PPI network does not contain YGR109C and YPR120C, because there are no PPIs between these two proteins and the other eight proteins in the MIPS dataset. It is very difficult to identify the multiple Cdc28-cyclin complexes based only on the topology structure of the PPI network. Our method can use gene expression data to calculate the dynamic information of these proteins, which is also listed in Fig. 2. From the protein dynamic information, we can see that these proteins are mainly active at T2, T9, T10 and T11. For instance, YBR160W, YGR108W, YDL155W, YMR199W and YLR210W are active together at T2. Our method then constructs DUPN_MIPS based on the PPI network and the protein dynamic information. Eventually, Cdc28-cyclin complexes 1, 2 and 3 are identified from UG_T2, UG_T9, UG_T10 and UG_T11 by CDUN, all of which are matched in the CYC2008 dataset.
From Fig. 2, we can see that the three different protein complexes overlap with each other in the static PPI network. Since our method constructs the DUPN, CDUN can effectively identify the three Cdc28-cyclin complexes. Furthermore, our method can identify the active time points of the three Cdc28-cyclin complexes.
Cdc28-cyclin complexes 1 and 2 are associated with T2 and T9, respectively. Cdc28-cyclin complex 3 is associated with T10 and T11. The experimental results reveal the dynamic property of the Cdc28-cyclin complexes in cellular systems. Firstly, the kinase catalytic subunit YBR160W associates with YGR108W, YDL155W, YMR199W and YLR210W to form Cdc28-cyclin complex 1 at T2. Then, YBR160W associates with YAL040C to form Cdc28-cyclin complex 2 at T9. Finally, YBR160W associates with YPL256C and YMR199W to form Cdc28-cyclin complex 3 at T10 and T11.
Kubo-Martin-Schwinger relation for an interacting mobile impurity
In this work we study the Kubo-Martin-Schwinger (KMS) relation in the Yang-Gaudin model of an interacting mobile impurity. We use the integrability of the model to compute the dynamic injection and ejection Green's functions at finite temperatures. We show that, due to the separability of the Hilbert space with an impurity, the ejection Green's function in a canonical ensemble cannot be reduced to a single expectation value as per the microcanonical picture. Instead, it involves a thermal average over contributions from different subspaces of the Hilbert space which, due to the integrability, are resolved using the so-called spin rapidity. It is then natural to consider the injection and ejection Green's functions within each subspace. We rigorously prove, by reformulating the refined KMS condition as a Riemann-Hilbert problem, and then verify numerically, that such Green's functions obey a refined KMS relation from which the original one naturally follows.
The notion of thermal equilibrium is one of the fundamental concepts of physics. In quantum many-body systems, thermal states are described by the density matrix ρ = e^{−βĤ}/Z with tr ρ = 1, where Ĥ is the Hamiltonian of the system. The same operator Ĥ is responsible for the time evolution of the system. This double role of the Hamiltonian is crucially important in establishing the Kubo-Martin-Schwinger [1][2][3] (KMS) condition between the Green's functions, tr(ρ A(t − iβ) B(0)) = tr(ρ B(0) A(t)), (1) where A, B are operators and A(t) follows the Heisenberg evolution A(t) = e^{iĤt} A e^{−iĤt}. The KMS condition involves analytic continuation of the Green's function tr(ρ A(t) B(0)) to imaginary times in the strip 0 < Im t < β.
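For a finite-dimensional system, the KMS condition can be verified directly, with the analytic continuation t → t − iβ implemented through an eigendecomposition of the Hamiltonian. The sketch below uses a random 4×4 Hermitian Hamiltonian and is purely illustrative:

```python
import numpy as np

def heisenberg(A, H, t):
    """A(t) = e^{iHt} A e^{-iHt}; t may be complex (imaginary time)."""
    E, U = np.linalg.eigh(H)
    Ad = U.conj().T @ A @ U
    M = np.exp(1j * E * t)[:, None] * Ad * np.exp(-1j * E * t)[None, :]
    return U @ M @ U.conj().T

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)); H = (H + H.T) / 2              # Hermitian Hamiltonian
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # arbitrary operators
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
beta, t = 0.7, 1.3
E, U = np.linalg.eigh(H)
w = np.exp(-beta * E)
rho = (U * w) @ U.conj().T / w.sum()                        # thermal density matrix
lhs = np.trace(rho @ heisenberg(A, H, t - 1j * beta) @ B)   # tr(rho A(t - i beta) B)
rhs = np.trace(rho @ B @ heisenberg(A, H, t))               # tr(rho B A(t))
assert abs(lhs - rhs) < 1e-8
```

The equality here follows from the cyclicity of the trace, exactly as stated in the text; the point of the paper is that verifying it without a trace, between individual sectors of an infinite-dimensional Hilbert space, is far harder.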
Such a relation is central to the concept of thermal equilibrium. For example, the thermal density matrix ρ can actually be defined as the one for which the relation holds [3][4][5]. Moreover, the KMS condition is an example of a detailed balance relation which guarantees stability of thermal equilibrium under fluctuations [6][7][8] and is also a foundation for fluctuation-dissipation relations as established in the original works [1,2]. Finally, the relation can also be promoted to higher-point functions [9]. Whereas the KMS condition simply follows from cyclicity of the trace, it is generally very difficult to show this relation explicitly by evaluating thermal two-point functions in interacting many-body systems and comparing both sides of the equality. The aim of our work is to demonstrate such computations for the impurity Green's functions.
The thermal expectation values appearing in (1) involve averaging over different eigenstates. In a thermodynamically large system and under the equivalence between the grand canonical (GCE) and microcanonical (MCE) ensembles, they can be computed over a single eigenstate, tr(ρ A(t) B(0)) = ⟨ρ|A(t)B(0)|ρ⟩, where the eigenstate |ρ⟩ is chosen such that its energy equals the average energy in the canonical ensemble. Our results show that, unexpectedly, the thermal average reduces to a single expectation value on one side of the KMS relation but not on the other. This makes the relation itself highly non-trivial. Without the trace, there is no simple derivation of it and it must rely on a more subtle structure of the thermal expectation values, which we unravel.
The two important correlation functions for impurity problems are the injection and ejection Green's functions (we define them precisely below). The two Green's functions are related by the KMS condition, see Fig. 1. However, as we show, the equivalence between GCE and MCE holds only for the injection Green's function and is broken for the ejection one. We attribute this breaking to the Hilbert space separability: the Hilbert space of states with a single impurity separates into different sectors, with each sector contributing a different value to the thermal correlator. The latter then cannot be reconstructed from a single expectation value. Interestingly, we can define the injection and ejection Green's functions between individual sectors of the Hilbert space. As we will show later, these refined Green's functions obey a refined KMS relation.
The separability of the Hilbert space in the Yang-Gaudin model is ultimately related to its integrability. However, a similar mechanism called Hilbert space fragmentation [31,32] appears more generically and leads to breaking of the Eigenstate Thermalization Hypothesis describing the thermalization of closed quantum many-body systems [33][34][35][36][37][38][39].
The model and its correlation functions: The Yang-Gaudin model [40] describes a 1d gas of spinful fermions with spin-polarized interactions. The Hamiltonian density is given in Eq. (3) and we consider repulsive interactions, g > 0. The total number of particles of each spin, N_σ = ∫dx ψ†_σ(x)ψ_σ(x), is a constant of motion and the Hamiltonian can be diagonalized in a subspace with fixed numbers of particles of the two kinds. The subspaces relevant for the impurity problem are those with N spin-up particles and 0 or 1 spin-down particles. We denote them by H_0 and H_1 respectively. The dynamics in H_0 is then that of a free spinless Fermi gas; in H_1 it is that of the free spinless Fermi gas with a single impurity.
We define two equilibrium dynamic Green's functions of the impurity, Eqs. (4) and (5), where the trace is over either H_0 or H_1. We note that these are not normalized Green's functions [41]. To restore proper normalization one needs to divide them by the partition function Z_i = tr_i e^{−β(H−µN_↑)}. The two (normalized) Green's functions describe the response of the system to injection (in) and ejection (ej) of the impurity respectively and can be measured in spectroscopy experiments [42][43][44][45][46]. The KMS relation between the two Green's functions was derived in [47,48] and in our notation takes the form of Eq. (6). The extra factor appearing there can be seen as the ratio Z_0/Z_1 defined for the Green's functions (4), (5).
After this introduction we can now state our results. These confirm the microcanonical picture for the injection function and disprove it for the ejection function. In the former case we find that the thermal average reduces to a single matrix element, with |ρ⟩ a representative state of the thermal equilibrium of the free fermionic gas. Instead, for the ejection function there is a single remaining degree of freedom, the rapidity Λ, which is related to the way the impurity and the gas share the total momentum in the system, with the free energy F(Λ) associated with varying Λ, the corresponding partition function Z_ej, and the Λ-resolved ejection function G_ej(x, t, Λ). Whereas the microcanonical picture does not work for the whole correlator, once the value of Λ is fixed, the thermal average reduces to a single expectation value, with |ρ_Λ⟩ denoting a representative thermal eigenstate with a fixed value of Λ. This is similar to the generalized ETH [49,50] appearing in the equilibration processes of integrable models. There, the expectation value can be represented by a single eigenstate once all the conserved charges are fixed. Here, it is sufficient to fix a single additional degree of freedom, Λ. This observation is formalized by realizing the Hilbert space with a single impurity as a direct sum of subspaces. In a finite system, the values of Λ are quantized and infinite in number. Denoting the possible values by Λ_m with m ∈ Z we can formally write the decomposition (10). This structure allows us to formally define the projection operator onto the different subspaces, denoted H_1(Λ). The projection operator can then be used to define a Λ-resolved injection function. The two Λ-resolved correlation functions obey a refined KMS condition, Eq. (14). The proof of this relation is the main result of this work. This new KMS relation implies that Λ acquires a thermodynamic meaning: it refines the concept of thermal equilibrium to be specified not only by the temperature and the chemical potential µ but also by the rapidity Λ. The Λ-resolved KMS relation then implies the stability of this generalized equilibrium state. Integrating (14) over Λ provides an alternative to the derivation of the KMS relation (6) given in [47,48].
Bethe ansatz solution to the McGuire model:
We now present the relevant ingredients of the Bethe ansatz solution to the Yang-Gaudin model with a single spin-down particle and refer to [51] for details. This special case is known as the McGuire model [52,53]. The system's eigenstates |{k_j}, Λ⟩ are specified by a set of (N+1) rapidities {k_j} together with an extra rapidity Λ. For a system of length L with periodic boundary conditions, the rapidities k_j are solutions to the Bethe equations involving an interaction-induced phase shift, where the quantum numbers n_j are integers and obey the Pauli principle. The allowed values of the rapidity Λ are obtained by requiring that the total momentum is quantized. Therefore, the state of the system is characterized by a choice of quantum numbers {n_j} and I, and the rapidities Λ and {k_j} follow from the Bethe equations. Because the choice of the quantum number I is independent of the other quantum numbers, the whole Hilbert space separates into subspaces with a fixed value of I, or equivalently, with a fixed value of Λ, as anticipated in (10). In a large system there is then a density of states a(Λ) associated with a given Λ, where σ(k) = 1/(1 + e^{β(k²/2−µ)}) is the Fermi-Dirac distribution. The density of states which is expected to appear in (8) and (13) is absorbed in the definition of the Λ-resolved Green's functions. Finally, the impurity free energy F(Λ) is given in [54,55].

Impurity Green's functions: The program of computing impurity Green's functions was initiated in [56], where the zero-temperature static injection Green's function was computed. This was subsequently generalized to dynamic correlators [57] and to finite temperatures [58]. Similar techniques were later applied to determine the momentum distribution function of the impurity in a zero-temperature polaron state [59]. On the other hand, the static finite-temperature ejection Green's function was computed in [55]. This can then be generalized to the time-dependent Green's function, as we show in the Supplementary Materials [60]. This approach culminates in the
following expressions for the two Green's functions: Here det_σ(1 + K) denotes a Fredholm determinant of the kernel K(q, q′) with a measure given by the Fermi-Dirac distribution σ(k) [61]. The kernels appearing above take the following universal form

V(q, q′) = \frac{1}{\pi} \frac{e_+(q) e_-(q′) − e_+(q′) e_-(q)}{q − q′},  (22)

W_±(q, q′) = ± \frac{1}{\pi} e_±(q) e_±(q′),  (23)

where e_+(q) = e(q) e_-(q). The functions e_-(q) and e(q) have different expressions for the injection and the ejection. Namely, for the injection we have

e^{in}_-(q) = e^{itq²/4 − ixq/2},  (24)

together with a corresponding expression for e^{in}(q). For a real argument we specify how we deform a contour to pass over or under the real line by ±i0 shifts.
For the ejection the formulas read analogously. The expressions for the two Green's functions, while sharing a similar structure, in the end involve different functions, and a priori any simple relation between G_in(x, t, Λ) and G_ej(x, t, Λ) is unexpected. It is clear from the formula for G_in(x, t, Λ) that it depends on the spin rapidity Λ in a non-trivial way, which is a sufficient condition for the breaking of the equivalence between the GCE and MCE.
Refined KMS relation and the Riemann-Hilbert problem: We now sketch a derivation of the refined detailed balance relation [62]. The idea of the proof relies on interpreting the two Green's functions as solutions to a Riemann-Hilbert problem (RHP). The RHP in its simplest formulation is the problem of determining a function which is analytic everywhere but along the real line and which asymptotically approaches 1. The non-analytic behaviour along the real line is characterized by the jump condition: χ_+(x) = χ_-(x)G(x), where χ_±(x) = lim_{ϵ→0} χ(x ± iϵ). For our application we need a corresponding generalization to matrix-valued functions [63,64].
Representing the Green's functions through a solution to the RHP corresponds to identifying the jump matrices G_in,ej. We then perform a series of manipulations of the jump matrices which demonstrates that they can be made equal after an appropriate change of coordinate and time. This allows us to conclude that, with a space-time-independent function, W(β, Λ) can be found by considering t = 0 and taking the x → ∞ limit, where the x-asymptote of the Fredholm determinants can be evaluated with the effective form-factors approach [65][66][67]. The result is W(β, Λ) = exp(βF(Λ)), thus finishing the proof of the refined KMS relation [68]. The Green's function can be evaluated numerically [69]. This requires a numerical evaluation of the Fredholm determinant and can be done by the quadrature method [70]. In Fig. 2 we show the KMS relation between the injection and ejection Green's functions. This constitutes a numerical proof of this relation.
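The quadrature evaluation of a Fredholm determinant mentioned above can be sketched as follows. This is a standard Nyström-type discretization, det(1 + K) ≈ det(δ_jk + √w_j K(x_j, x_k) √w_k), with Gauss-Legendre nodes and an optional measure such as σ(k); the rank-one Gaussian kernel in the usage example is a hypothetical stand-in, not the paper's V or W kernels:

```python
import numpy as np

def fredholm_det(kernel, a, b, n=100, weight=None):
    """Nystrom approximation of det(1 + K) on [a, b], optionally with a
    measure weight(x) (e.g. a Fermi-Dirac factor sigma(k)).
    kernel(x, y) must accept arrays and broadcast."""
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [a, b]
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    if weight is not None:
        w = w * weight(x)
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(n) + sw[:, None] * K * sw[None, :])
```

For a rank-one kernel K(x, y) = f(x)f(y) the determinant reduces to 1 + ∫ f(x)² dx, which gives a simple correctness check.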
In the limit of vanishing impurity-gas interactions, the Green's functions become those of a free particle with its dispersion controlled by Λ [71], and additionally for the ejection weighted by the thermal gas distribution. In momentum space the correlators are then proportional to δ(k ± Λ): in the non-interacting limit the spin rapidity Λ can be identified with the momentum k. This shows that the Λ degree of freedom emerges from the interacting nature of the system. It is an open question how to observe this effect for non-integrable models, for example using the variational method [47,48].
Conclusions:
In this work we studied exact thermal Green's functions of an interacting impurity model. We have shown that due to the interactions the theory acquires a new thermodynamic parameter, the spin rapidity Λ of the nested Bethe ansatz. As a result, the ejection Green's function involves a sum over an ensemble of systems weighted with Λ-dependent free energies. The injection Green's function can also be resolved into contributions from different Λ's. This appears as a result of quantum-mechanical averaging rather than thermal averaging. Despite this, the two Λ-resolved Green's functions obey the KMS relation. The existence of such a relation effectively promotes Λ to a thermodynamic parameter describing the equilibrium state of the system. On the other hand, the KMS relation is a statement about the analytic continuation of correlation functions. Proving it requires non-perturbative control over the correlators, which is provided within our mathematical framework. We then employed the Riemann-Hilbert problem to prove the KMS relation and further verified it numerically.
The methods developed here apply beyond thermal equilibrium. For example, the expressions for the Λ-resolved correlation functions are valid for any distribution σ(k), not necessarily thermal. An interesting question in this direction concerns the existence of a KMS relation beyond thermal equilibrium. Closely related statements of detailed balance were found in similar circumstances for the 1d interacting Bose gas [72,73].
S1 Yang-Gaudin model and the impurity problem
In this section we review the Bethe ansatz solution of the Yang-Gaudin model. We focus on the sectors of the theory with zero or one spin down particle in a thermodynamically large sea of spin up particles. These sectors are relevant for the impurity Green's function. In our presentation we follow [1,2]. We also discuss the excitation spectrum of the theory; specifically, we show a degeneracy of excitations over a finite-temperature state. This degeneracy can be traced back as the microscopic foundation of the refined detailed balance.
The Hamiltonian of the Yang-Gaudin model commutes with the number operators of spin up and spin down particles and with the total momentum operator, so the particle number of each type is a conserved quantity. The eigenstates of the system can then be characterized by the total number of particles N = N↑ + N↓ and the number of spin down particles M = N↓. We call the tuple (N, M) a sector.
In each sector (N, M ) the Hamiltonian can be written in the first quantized form In the following we set ℏ = 1 and m = 1.
The Bethe ansatz solution
The eigenstates in the sector (N, M) are characterized by two sets of rapidities (quasi-momenta), {k} = k_1, …, k_N and {λ} = λ_1, …, λ_M, which are real numbers and obey the Pauli principle within each set separately. The basis of the Fock space of the model is constructed in the standard way by defining a vacuum state |0⟩. The eigenstates then take the following form, with wave function Ψ^{{σ}}_{N,M}({k}, {λ}|{x}) and set of positions {x} = x_1, …, x_N. For the wave function in sector (N, M) to be non-zero, the set of spins {σ} = σ_1, …, σ_N must be such that exactly M elements among them are spin down. Finally, the rapidities are solutions to the nested Bethe equations. The momentum and energy of an eigenstate are given in (S1.9). The wave function Ψ^{{σ}}_{N,M}({k}, {λ}|{x}) takes the typical structure of Bethe ansatz solvable models: it consists of a superposition of plane waves with rapidity-dependent amplitudes. In the following we focus on sectors (N, 0) and (N + 1, 1), which are relevant for the impurity problem.
In sector (N, 0) the system is a free Fermi gas of N spin up particles. The second set of rapidities is empty, {λ} = ∅, and the Bethe equations reduce to the standard quantization conditions, with the wave function given by a Slater determinant of an N × N matrix (S1.11). In sector (N + 1, 1) the wave function can also be represented by a determinant. To achieve this one changes the coordinate system to the impurity's (the spin down particle's) rest frame, a transformation known as the Lee-Low-Pines transformation [3], applied to the McGuire model in [4,5]. Here we follow [2]. The resulting wave function is a determinant.
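The free-fermion Slater determinant of sector (N, 0) can be sketched directly. The plane-wave matrix exp(i k_j x_l) is from the text; the normalization 1/√(N! Lᴺ) is an assumption here, since the paper's Eq. (S1.11) fixing conventions is not reproduced in this extraction:

```python
import numpy as np
from math import factorial

def slater_wavefunction(k, x, L):
    """Free-fermion wave function in sector (N, 0): a Slater determinant
    of plane waves, Psi = det[exp(i k_j x_l)] / sqrt(N! L^N).
    (Normalization assumed for illustration.)"""
    k = np.asarray(k, dtype=float)
    x = np.asarray(x, dtype=float)
    N = len(k)
    M = np.exp(1j * np.outer(k, x))   # rows: rapidities, columns: positions
    return np.linalg.det(M) / np.sqrt(factorial(N) * L ** N)
```

The determinant structure makes the fermionic properties manifest: the wave function is antisymmetric under exchange of two positions and vanishes when two rapidities coincide (the Pauli principle).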
with the normalization factor, and it is related to the original wave function through two relations, where by ↓_j we abbreviate a set of spins {σ} with a single spin down at position j in the sea of spin up particles. The rapidities {k_j} and the single spin rapidity Λ obey the Bethe equations. The second equation can be understood as the quantization condition of the total momentum. Indeed, taking the product over all l of the first equation and then using the second equation we find
Bethe equations and excited states
The Bethe equations in the logarithmic form are written with quantum numbers n_j, which are integers and obey the Pauli principle, and with the phase shift defined accordingly. The spin rapidity Λ can be fixed by specifying values of other integrals of motion in this model. Traditionally, we require that the total momentum, given by (S1.23), is fixed, i.e., P({k_j}, Λ) = Q. The Λ dependence in (S1.23) is implicit through the k_j as solutions to the Bethe equations. Therefore, for given Q and the set of integers one can resolve condition (S1.23) and thus solve the Bethe equations. The Bethe equations for the rapidities {k_j} can be rewritten in terms of a single function obeying a transcendental equation. The function f depends only on the parameters of the system, α and L, but not on the rapidities. The expression for k_j gives a bound on the total momentum of the state. Using that arctan(x) is a bounded function we find the bound. The maximal and minimal values of the rapidity correspond to Λ = ±∞. In that case the rapidities can be computed analytically. For the other cases Λ is finite and the equations have to be solved numerically. Degeneracy of states: Consider a set of quantum numbers {n_j} and corresponding Λ such that the Bethe equations are fulfilled, and denote the rapidities by k_j. We can now build another state related to it by a parity operation. Choose ñ_j = −n_j + 1 and Λ̃ = −Λ. The corresponding rapidities are k̃_j = −k_j and obey the Bethe equations. The two states have the same energy and opposite momenta. In the last step of the derivation we used the symmetry property of f(x) and that arctan(x) is an odd function. This can also be derived directly from the original formulation.
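The numerical solution at finite Λ can be sketched by fixed-point iteration on the logarithmic Bethe equations. The phase shift used below, θ(u) = 2 arctan(u/α), is an illustrative assumption (the paper's actual phase shift is in its elided equations); only the iteration strategy is the point here:

```python
import numpy as np

def solve_bethe(n, Lam, alpha, L, tol=1e-12, max_iter=10000):
    """Iteratively solve logarithmic Bethe equations of the assumed form
        k_j = (2*pi*n_j - theta(k_j - Lam)) / L,
    with an illustrative phase shift theta(u) = 2*arctan(u/alpha).
    For large L the map is a contraction, so plain iteration converges."""
    n = np.asarray(n, dtype=float)
    k = 2.0 * np.pi * n / L                     # free-fermion starting guess
    for _ in range(max_iter):
        k_new = (2.0 * np.pi * n - 2.0 * np.arctan((k - Lam) / alpha)) / L
        if np.max(np.abs(k_new - k)) < tol:
            return k_new
        k = k_new
    return k
```

Since the derivative of the iteration map is bounded by 2/(αL), convergence is fast for any reasonably large system size L.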
Excitations far from the ground state: In a thermodynamically large system, a state can be described by the distribution of rapidities σ(k) and the value of the rapidity Λ. With respect to such a state we consider excitations. These take the form of modifying some subset of the quantum numbers n_j and/or adding or removing the impurity.
Consider a state without the impurity described by quantum numbers {n_j}, j = 1, …, N, and a state with the impurity described by quantum numbers {n′_j}, j = 1, …, N+1, and Λ. We assume that the two sets share most of the quantum numbers, with m quantum numbers different such that m/L → 0 in the thermodynamic limit. Then the two states are described by the same distribution n(k). Consider corrections to the values of the rapidities that are subleading in the system size. For j = 1, …, N, unless n′_j − n_j ∼ L, the difference between the rapidities is of the order 1/L. For the quantum numbers that were not modified we may replace the quantum number 2πn′_j/L by the rapidity k_j, as the error is of the order 1/L and therefore gives a subleading contribution to k_j − k′_j. Consider now the difference in momenta between the two states; similar computations give the energy difference, and these excitations can be summarized accordingly. We can also consider an opposite type of excitation, which involves annihilating the impurity, with its own momentum and energy. We now show that there exists a symmetry between the two types of excitations. This symmetry underlies the refined detailed balance. Choosing Λ′ = −Λ, p′_j = h_j and h′_j = p_j, the two excited states carry exactly opposite momenta and energies with respect to their background states if n(k) is a symmetric function of k. Reformulating this: for every excited state in which we create an impurity there exists an excited state in which we annihilate the impurity such that the two excitations have exactly opposite energies, momenta and Λ's.
S2 Impurity Green's functions at finite temperatures
In this Section we derive the expression for the ejection Green's function at finite temperature and show that it breaks the Eigenstate Thermalization Hypothesis. We also derive explicit expressions for the finite-temperature ejection and injection Green's functions. For the ejection Green's function we generalize the static finite-temperature result of [6]. For the injection, the finite-temperature dynamic Green's function was computed in [2].
We consider now the evaluation of the ejection Green's function in a finite system. In thermal equilibrium it is given by a trace with the thermal density matrix. Denoting eigenstates in the finite system by |{k}, Λ⟩ we write the correlator as a double sum, with the partition function Z having an analogous representation. The building block of the finite-temperature correlation function is an expectation value in a single eigenstate. The normalized form-factors ⟨{q}|Ψ(0)|{k}, Λ⟩ were computed in [1]. Here the derivative of k_j over Λ is formal. Notice that in the sum in the denominator one can ignore 1/L corrections. The result of [6], here generalized to the dynamic correlation function, is that in the thermodynamic limit the expectation value ⟨{k}, Λ|Ψ†(x, t)Ψ(0, 0)|{k}, Λ⟩ depends only on the underlying distribution of {k_j} and on the spin rapidity Λ,

⟨σ(k), Λ|Ψ†(x, t)Ψ(0, 0)|σ(k), Λ⟩ = lim_td ⟨{k}, Λ|Ψ†(x, t)Ψ(0, 0)|{k}, Λ⟩.  (S2.9)

This implies that when performing the thermal averaging one can use the saddle-point argument to localize the sum over {k_j} at the configuration that maximizes the free energy. The free energy of the system takes the form derived in [6], where F_th(σ(k)) corresponds to the free-fermion energy and hence does not depend on the rapidity Λ. However, the subleading contribution F(σ(k), Λ) does depend on it. Therefore, the thermal sum in the numerator of the ejection Green's function evaluates in the thermodynamic limit with constant factors coming from the saddle-point evaluation. The same factors appear in the evaluation of the partition function and therefore cancel in the final expression for the Green's function (S2.12). Rewriting the sum over Λ through integrals we obtain an expression with a(Λ) defined in (S2.8). Finally, because the saddle-point configuration comes from the extremum of the free energy of the non-interacting Fermi gas, the distribution σ(k) takes the
Fermi-Dirac form. The final formula for G_ej(x, t, Λ) can be obtained as a generalization of the static Green's function derived in [6]. The kernels are

V(q, q′) = \frac{e_+(q) e_-(q′) − e_-(q) e_+(q′)}{q − q′},   Ŵ_-(q, q′) = −\frac{1}{\pi} e_-(q) e_-(q′),  (S2.17)

with functions e_±(q) given by

e_+(q) = \frac{1}{\pi} e^{iqx/2 + iδ(q)},   e_-(q) = e^{−iqx/2} \sin δ(q).  (S2.18)

We also note that G_ej(x, Λ) is a complex function such that G_ej(x, −Λ) = G*_ej(x, Λ). Therefore the resulting one-body function G_ej(x) is a real function.
To include the time dependence we include the energy contribution in the functions e_±(q). Notice that for x > 0 and t = 0 the integral in e_+(q) vanishes. In this way we obtain the formula for the ejection Green's function G_ej(x, t; σ, Λ) reported in the main text. Injection Green's function: A similar analysis can be performed for the injection Green's function. We find that there the ETH works directly at the level of the full correlator, reducing the thermal averaging to a single expectation value. However, as first investigated in [2], it is possible to truncate the internal sum to states with a fixed value of Λ, thus in practice realizing the projection operator P_Λ. The formula for the Λ-resolved correlation function was derived in [2]. Here det_{[−Q,Q]}(1 + K) denotes a Fredholm determinant of the kernel K(q, q′) with a measure given by the Fermi-Dirac distribution σ(k), with Q the Fermi rapidity. The kernels appearing above involve e_+(q) = e(q) e_-(q), with the functions e_-(q) and e(q) given by

e_-(q) = e^{itq²/4 − ixq/2},  (S2.22)

and additionally H = 1 − F(κ_+)/α + F(κ_-)/α, where κ_± = (Λ ± i)/α. This function can be expressed in terms of the error function. For a real argument we specify how we deform a contour to pass over or under the real line by ±i0 shifts.
S3 Riemann-Hilbert problem and the KMS relation
In this section, we reformulate the computation of the Fredholm determinants in terms of a Riemann-Hilbert problem (RHP). We perform a transformation of the RHP for the injection and ejection cases to derive the refined detailed balance, mostly following Ref. [7]. Let us start by recalling the statement of the matrix Riemann-Hilbert problem.
Let Σ be an oriented contour in the complex plane and let G be a matrix-valued function defined on Σ. The task is to find a matrix-valued function χ(z) which is holomorphic in the complement of Σ. Additionally, let us denote by χ_±(z) the value of χ(z) when it approaches Σ from one of its two sides. We require χ(z) to fulfill two conditions:
• the limiting values are related through the jump matrix G, i.e. χ_+ = χ_−(1 + 2πiG);
• the function χ(z) approaches the identity matrix when |z| → ∞ in the complement of the contour Σ.
In our case, the Σ contour will be simply the real line.
Our strategy to prove the refined detailed balance is the following. We will reformulate the computation of the two Green's functions as Riemann-Hilbert problems. This amounts to specifying the contour Σ (which in both cases is the real line) and the jump matrix. We will then perform some transformations of the RHP to arrive at the same jump matrix. From this, assuming the uniqueness of the solution to the RHP, we derive a relation between the two Green's functions. Importantly, from the RHP we can infer that the ratio of the determinants is independent of x and t. Therefore it can be evaluated by taking the suitable limit of t = 0 and x → ∞. Computing the ratio then amounts to extracting the asymptotic behavior of the two determinants. This can be achieved with the effective form-factors approach, thus proving the refined detailed balance relation.
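Schematically, the chain of statements above can be summarized as follows (the precise change of coordinate and time x′, t′ is fixed by the elided equations; W is the space-time-independent factor identified in the main text):

```latex
\begin{align*}
  G_{\mathrm{in}}(x,t,\Lambda) &= W(\beta,\Lambda)\, G_{\mathrm{ej}}(x',t',\Lambda),
  \qquad W(\beta,\Lambda) = e^{\beta \mathcal{F}(\Lambda)} .
\end{align*}
```

Together with the Fourier-space form of the standard KMS relation, G_in(k, ω) = e^{−βω} G_ej(k, ω), this makes explicit that the refined relation holds separately in each Λ sub-ensemble.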
We start with the injection case and rewrite the kernel V_in in vector notation, where we have introduced the "bra" and "ket" vectors

|E_in(q)⟩ = (e_+(q), e_-(q))^T.  (S3.4)

In these notations we can immediately recognize the kernel as a generalized sine kernel [8]. Notice also that in the original kernels for both injection and ejection, Eqs. (S2.17) and (S2.21), we can replace the factors responsible for the time dynamics, e^{itq²/2} → e^{itε(q)}, with the shifted energy ε(q) = q²/2 − µ, which results in the rescalings of the components e_±(q) → e_±(q) e^{∓iµt/2} and of the total correlation functions. Because of the special (integrable) form of the operator V, its resolvent R also has an integrable form. Here, similarly to (S3.4), we have put the components into vectors. The components satisfy linear integral equations, where we remind that the action of the operator is given by the convolution with its kernel. This system of linear integral equations can be reformulated as a Riemann-Hilbert problem. To do so we formally introduce 2 × 2 matrices and verify that

|F_in(q)⟩ = χ_in(q)|E_in(q)⟩,  ⟨F_in(q)| = ⟨E_in(q)| χ̃_in(q),  χ_in(q) χ̃_in(q) = 1.
(S3.11) The matrix function χ is analytic everywhere except the real line, where it experiences a jump with

G_in(q) = |E_in(q)⟩⟨E_in(q)| = \begin{pmatrix} e_-(q) e_+(q) & −(e_+(q))^2 \\ (e_-(q))^2 & −e_-(q) e_+(q) \end{pmatrix}.  (S3.12)

Therefore χ_in(q) is a solution to the Riemann-Hilbert problem. The formulation of χ(q) as a solution to the RHP allows us to infer its asymptotics. Asymptotically, the solution to the RHP takes the following form, where using the notations (S3.4) and (S3.8) we can write it explicitly. These expressions are also usually referred to as potentials. The symmetry of the kernel, V(q, q′) = V(q′, q), is reflected in the relation B_{+−} = B_{−+}. One can now easily express the Green's function via the potentials [7]. In particular, using relation (S3.9), for the injection Green's function we obtain an expression where we used that W(q, q′) = e_+(q) e_+(q′) is rank one and introduced the τ-function as the determinant of the operator V̂_in. Now let us consider a different RH problem for the matrix ϕ_in. The corresponding jump Ḡ_in of ϕ_in(q) across the real line is

Ḡ_in = \frac{1}{\pi} \begin{pmatrix} σ(q) s_− & σ(q) e^{−i(tε(q)−qx)} s_− s_+ \\ (1 − σ(q)) e^{i(tε(q)−qx)} & −σ(q) s_+ \end{pmatrix}.  (S3.18)

To derive this expression we have extensively used that s_− − s_+ = 2i s_− s_+ and

\frac{φ_+(q) − φ_-(q)}{2πi} = \frac{e^{−itε(q)+iqx}}{π} s_− s_+,   (e_+ − φ_± e_-)(q) = σ(q) s_±(q) \frac{e^{itε(q)/2 − iqx/2}}{π}.
(S3.19) The jump matrix is valid for any distribution σ(q). For the thermal distribution it simplifies, with prefactor Ḡ_in = \frac{1}{π(e^{βε(q)} + 1)} (…). For q → ∞, ϕ(q) has the same expansion as χ(q) but with modified potentials b_ij and c_ij. Moreover, from Eq. (S3.17) we conclude the relation between the old and new potentials. For the Green's function we then obtain an expression of the injection Green's function through the potential b_{++} and the τ-function. The potential b_{++} appears from a solution to the RHP with the specific jump matrix Ḡ_in. We will now find a similar representation for the ejection Green's function. We denote the corresponding RHP matrix as χ_ej and the potentials with the subscripts ej. Following the same procedure as for the injection, we perform again the conjugation. The jump matrix for ϕ_ej reads

Ḡ_ej = \frac{1}{\pi} \begin{pmatrix} −s_+ σ(q) & s_− s_+ σ(q) e^{i(tε(q)−qx)} \\ (1 − σ(q)) e^{i(tε(q)−qx)} & σ(q) s_− \end{pmatrix}.  (S3.26)

This jump matrix is structurally similar to Ḡ_in from Eq. (S3.18) but with s_+ and s_− interchanged. This can be fixed by performing a further conjugation, ϕ_ej = σ_1 ψ_ej σ_1, where the matrix σ_1 appears on both sides to ensure the same asymptotic behavior of ϕ_ej and ψ_ej. Finally, specifying σ(q) to be a thermal distribution, we conclude the corresponding jump matrix for ψ_ej. Comparing this expression with the jump matrix (S3.20) leads to an identity; by comparing the asymptotic expansion of both of its sides we conclude a relation between the potentials, and hence for the ratio of the Green's functions. We will now prove that the ratio of the determinants is independent of x and t. To this end consider derivatives of the kernels V_in and V_ej. For the injection, taking into account the specific dependence of the kernel, we find
S4 Fredholm determinant representation of the Green's functions
In this Section we recall the definition of the Fredholm determinant and provide details on its numerical evaluation as used in this work.
The kernels are of sine type, of the form (f(x) − f(x′))/(x − x′). To avoid the problematic point x = x′ we discretize x and x′ on grids shifted with respect to each other, such that x_j ≠ x′_k for any j and k. To establish the convergence of the results we evaluate the determinants on grids with different numbers of points. To quantify the convergence we define

conv = \frac{G^{(N_2)}(x, t, Λ) − G^{(N_1)}(x, t, Λ)}{G^{(N_1)}(x, t, Λ)},  (S4.5)

where G^{(N)}(x, t, Λ) is the Green's function (either injection or ejection) computed on a grid consisting of N points.
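The shifted-grid discretization and the convergence measure (S4.5) can be sketched as follows; the grid offsets (quarter-spacing shifts) are an illustrative choice, and any grid-based evaluator G(n) can be plugged into the convergence check:

```python
import numpy as np

def sine_type_kernel_matrix(f, a, b, n):
    """Discretize a sine-type kernel K(x, x') = (f(x) - f(x'))/(x - x')
    on two grids shifted by half a spacing, so that x_j != x'_k and the
    diagonal x = x' is never sampled."""
    h = (b - a) / n
    x = a + h * (np.arange(n) + 0.25)    # first grid
    xp = a + h * (np.arange(n) + 0.75)   # shifted grid
    return (f(x)[:, None] - f(xp)[None, :]) / (x[:, None] - xp[None, :])

def convergence(G, n1, n2):
    """Convergence measure (S4.5): conv = |G(N2) - G(N1)| / |G(N1)|,
    where G(n) evaluates the quantity of interest on an n-point grid."""
    g1, g2 = G(n1), G(n2)
    return abs(g2 - g1) / abs(g1)
```

For f = sin the kernel entries are bounded by 1 (mean value theorem), which provides a quick sanity check on the discretization.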
The results for grids of N = 100, 200, 500, 1000 points and for both correlators are shown in Fig. S1. We have verified that the fast convergence rate holds also for other values of the parameters of the system.
FIG. 1. a) The standard KMS relation implies the detailed balance condition G_in(k, ω) = e^{−βω} G_ej(k, ω) between the Fourier transforms of the injection and ejection Green's functions, which means that the probabilities for fluctuations creating or destroying the impurity (denoted by a red circle) are the same in thermal equilibrium. b) The thermal state with an impurity is an ensemble of different states enumerated by Λ_n. c) The refined KMS relation guarantees the stability of each sub-ensemble under the fluctuations creating or destroying the impurity, described by G_in,ej(k, ω, Λ).
Kubo-Martin-Schwinger relation for an interacting mobile impurity

Contents
S1. Yang-Gaudin model and the impurity problem
S2. Impurity Green's functions at finite temperatures
S3. Riemann-Hilbert problem and the KMS relation
S4. Fredholm determinant representation of the Green's functions
References

FIG. S1: Plots of the convergence measure (S4.5). We plot real and imaginary parts of the two Green's functions for the parameters of the system shown above the plots. The results show that increasing the size of the grid from N = 500 to N = 1000 points changes the values of the two functions by at most 0.1%.
Nonadiabatic dynamics: The SHARC approach
We review the Surface Hopping including ARbitrary Couplings (SHARC) approach for excited-state nonadiabatic dynamics simulations. As a generalization of the popular surface hopping method, SHARC allows simulating the full-dimensional dynamics of molecules including any type of coupling terms beyond nonadiabatic couplings. Examples of these arbitrary couplings include spin-orbit couplings or dipole moment-laser field couplings, such that SHARC can describe ultrafast internal conversion, intersystem crossing, and radiative processes. The key step of the SHARC approach consists of a diagonalization of the Hamiltonian including these couplings, such that the nuclear dynamics is carried out on potential energy surfaces including the effects of the couplings—this is critical in any applications considering, for example, transition metal complexes or strong laser fields. We also give an overview over the new SHARC2.0 dynamics software package, released under the GNU General Public License, which implements the SHARC approach and several analysis tools. The review closes with a brief survey of applications where SHARC was employed to study the nonadiabatic dynamics of a wide range of molecular systems.
This article is categorized under:
Theoretical and Physical Chemistry > Reaction Dynamics and Kinetics
Software > Simulation Methods
Software > Quantum Chemistry
KEYWORDS
Ab initio molecular dynamics, excited states, nonadiabatic dynamics, surface hopping, SHARC
| INTRODUCTION
Nonadiabatic dynamics in molecules involves processes in which the nuclear motion is affected by more than one electronic state. These processes can, for example, take place when a molecule is irradiated by light. In such a situation, nuclear motion cannot be described anymore in the frame of the Born-Oppenheimer approximation, which assumes that only one electronic state affects the nuclei. Instead, whenever two or more electronic states have similar energies and state-to-state couplings are sufficiently large, population transfer from one state to another will take place. Depending on the type of electronic states involved and the type of the state-to-state couplings, nonadiabatic processes can be classified into internal conversion (IC) and intersystem crossing (ISC). In IC, states of the same spin multiplicity (e.g., two singlet states) interact with each other via the so-called nonadiabatic couplings (NACs). In ISC, states of different spin multiplicity (e.g., a singlet and a triplet) interact via the relativistic spin-orbit couplings (SOCs), while in the frame of nonrelativistic quantum chemistry, ISC is naturally forbidden by spin symmetry.
Nonadiabatic phenomena are relevant in many photophysical and photochemical processes, including some fundamental biochemical phenomena such as visual perception, photosynthesis, bioluminescence, DNA photodamage and repair, or vitamin D synthesis. Therefore, a number of computational methods have been developed in the last decades to simulate nonadiabatic dynamics, with trajectory surface hopping (SH) being one of the most popular (Barbatti, 2011).
In this work, we focus on a generalized version of trajectory SH, coined SHARC (Surface Hopping including ARbitrary Couplings), as it can deal with any type of couplings on the same footing, for example, NACs and SOCs. SHARC was first developed in 2011 with the aim of performing nonadiabatic dynamics simulations including SOCs and field-matter interactions in systems with many degrees of freedom (Bajo et al., 2012; Richter, Marquetand, González-Vázquez, Sola, & González, 2012a). However, SHARC can also be used in a general way to study IC only, either within singlet states or within states of other multiplicities, for example, triplets, in which case SOCs are also required. In 2014, after a major overhaul, the first version of the SHARC dynamics suite was made publicly available; details of the early software implementations have been reported elsewhere (Mai, Marquetand, & González, 2015; Mai, Plasser, Marquetand, & González, 2018; Mai, Richter, Marquetand, & González, 2015b). Here, we present an up-to-date overview of the SHARC approach and its newest implementation, the new dynamics suite SHARC2.0 and its capabilities, as well as a brief survey of some of the applications that the excited-state dynamics community carried out using SHARC. In the following, we first briefly describe the main ideas behind SH, before introducing the particularities of the SHARC approach.
| What is surface hopping?
A full-dimensional quantum mechanical treatment of the nuclear motion of a large, polyatomic molecule is nowadays unfeasible due to the exponential scaling of the computational effort with the number of dimensions-this is called the dimensionality bottleneck of quantum mechanical methods (Meyer, Gatti, & Worth, 2009). This bottleneck spurred the development of mixed quantum-classical methods that only treat the electrons quantum mechanically, while the nuclear motion is treated classically (Doltsinis, 2006;Marx & Hutter, 2000;Tully, 1998). Among them, SH (Barbatti, 2011;Doltsinis, 2006;Subotnik et al., 2016;Tully, 1990;Tully & Preston, 1971; L. Wang, Akimov, & Prezhdo, 2016) is probably one of the most prominent methods, and it is the basis for SHARC. The advantages of SH (Barbatti, 2011)-which are the main reason for its popularity-are simplicity (which aids both development of SH methods and interpretation of results), practicality (allowing on-the-fly implementations and trivial parallelization), and the ability to include all nuclear degrees of freedom at feasible computational cost. The disadvantage of SH (Barbatti, 2011;Tully, 1990) is that it naturally misses truly quantum effects, such as a correct description of the zero-point energy, tunneling, or nuclear interferences.
In SH, the quantum and the classical parts are described as follows. The electrons are represented by a time-dependent electronic wave function |Φ_el(t)⟩, written as a linear combination of electronic basis states, where α runs over all basis states, c_α(t) are the time-dependent coefficients, and |Ψ_α(t)⟩ are the basis states. As the choice of the set of these basis states is not irrelevant, it will be discussed in a separate section below. The temporal evolution of this wave function follows the time-dependent Schrödinger equation and is affected by the classical nuclear coordinates R(t) through the parametric dependence of the electronic Hamiltonian on the vector R(t). Additionally, each nucleus A obeys the classical equation of motion, where the classical force on nucleus A is the negative gradient of the electronic energy. In this way, the nuclei follow classical trajectories (defined by the positions R of all nuclei changing in time), which are influenced by the quantum-mechanically treated electrons. As can be seen, the classical nuclear evolution and the quantum-mechanical electronic evolution are intimately coupled. Unlike a quantum wave packet, the classical nuclei can only follow one particular force in each instant of time, and in SH, this force is given by the gradient of the active electronic state. In order to determine the active state, there exists quite a large number of prescriptions, which give rise to many different SH variants (e.g., the works of Herman, 2005; Webster, Wang, Rossky, & Friesner, 1994). As reviewing all these variants is beyond the scope of this work, here we focus on the idea of "fewest switches" SH (Hammes-Schiffer & Tully, 1994; Tully, 1990), where the composition of the time-dependent electronic wave function is monitored through the population in each electronic state, |c_α(t)|².
When the population of the current active state decreases (and only then according to the fewest switches criterion), one computes the probabilities to switch the active state to any other state. Based on such probabilities, a stochastic algorithm chooses the new active state (Tully, 1990). If the active state is changed, a so-called surface hop is performed, which gave the method its name. From this instant of time, the trajectory continues on the new active state, possibly hopping again at a later time.
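To make the stochastic selection concrete, the hop decision described above can be sketched in a few lines of Python. This is a minimal illustration, not the SHARC implementation: the function name is ours, and the hopping probabilities are estimated here from the population gains of the other states rather than from Tully's exact flux expression.

```python
import numpy as np

def fewest_switches_hop(c_old, c_new, active, rng):
    """Stochastically choose a new active state following the fewest-switches idea.

    c_old, c_new : complex coefficient vectors at t and t + dt
    active       : index of the currently active state
    Returns the index of the (possibly unchanged) active state.
    """
    pop_old = abs(c_old[active]) ** 2
    pop_new = abs(c_new[active]) ** 2
    if pop_new >= pop_old:
        # Population of the active state grew: no hop ("fewest switches").
        return active
    # Distribute the population loss of the active state over the states that
    # gained population (a simplified, flux-based estimate of the probabilities).
    gain = np.maximum(np.abs(c_new) ** 2 - np.abs(c_old) ** 2, 0.0)
    gain[active] = 0.0
    prob = np.zeros_like(gain)
    if gain.sum() > 0.0:
        prob = (pop_old - pop_new) / pop_old * gain / gain.sum()
    # Stochastic selection: walk through the cumulative probabilities.
    r = rng.random()
    cum = 0.0
    for state, p in enumerate(prob):
        cum += p
        if r < cum:
            return state
    return active
```

If no random number below the cumulative probability is drawn, the trajectory simply stays on the active state, in line with the fewest-switches criterion.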
| Additional ingredients of surface hopping
Besides the main idea described above, several additional aspects are relevant in SH simulations.
| Kinetic energy adjustment
As the new active state after a hop will likely have a potential energy different from the former one, it is necessary to adjust the kinetic energy of the system so that the total energy is conserved during a hop. The original approach (Tully, 1990; Tully & Preston, 1971) is to rescale the component of the velocity vector parallel to the NAC vector between the old and new active state. This approach is rigorous (Coker & Xiao, 1995; Hack, Jasper, Volobuev, Schwenke, & Truhlar, 1999; Herman, 1984) and size-consistent, but requires the computation of NAC vectors. If these are not available, the usual approach is to rescale the whole velocity vector v (Fabiano, Keal, & Thiel, 2008; Müller & Stock, 1997; Tapavicza, Tavernelli, & Rothlisberger, 2007), although Hack et al. (1999) also suggested rescaling along the gradient difference vector. It can also happen that the kinetic energy cannot be adjusted such that the total energy remains conserved, for example, because the kinetic energy available along the NAC vector is smaller than the potential energy difference of the hop. This is called a "frustrated" hop, which is rejected so that the active state does not change. However, some authors (Hammes-Schiffer & Tully, 1994) suggest inverting the component of the velocity vector parallel to the relevant NAC vector whenever a frustrated hop occurs.
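The rescaling along the NAC vector, including the frustrated-hop test, can be sketched as follows. This is a minimal illustration under the ansatz v' = v − γ·K/m (function name and array layout are ours, not SHARC's); energy conservation then reduces to a quadratic equation for the scaling factor γ.

```python
import numpy as np

def rescale_along_nac(v, masses, nac, dE):
    """Rescale the velocity component parallel to the NAC vector after a hop,
    so that total energy is conserved; a frustrated hop is signaled if the
    kinetic energy available along the NAC direction is insufficient.

    v, nac : (natoms, 3) arrays; masses : (natoms,); dE = E_new - E_old.
    Returns (new_velocities, hopped).
    """
    # Ansatz v' = v - gamma * nac / m; energy conservation gives the quadratic
    # equation a*gamma**2 - b*gamma + dE = 0 for the scaling factor gamma.
    a = 0.5 * np.sum(nac * nac / masses[:, None])
    b = np.sum(v * nac)
    disc = b * b - 4.0 * a * dE
    if disc < 0.0:
        return v, False          # frustrated hop: reject, keep velocities
    g1 = (b + np.sqrt(disc)) / (2.0 * a)
    g2 = (b - np.sqrt(disc)) / (2.0 * a)
    gamma = g1 if abs(g1) < abs(g2) else g2   # smallest velocity change
    return v - gamma * nac / masses[:, None], True
```

Choosing the root with the smaller |γ| corresponds to the usual convention of perturbing the velocities as little as possible.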
The situation is slightly more complicated in the presence of a laser field, because in this case the Hamiltonian is time-dependent and thus the total energy is not necessarily conserved. In such a case, the algorithm needs to distinguish between nuclear-motion-induced hops and laser-induced hops (or needs to interpolate between these two limiting cases). A simple approach to this is to check whether the energy change during the hop is compatible with the laser frequency (Mai, Richter, Heindl, et al., 2018). Alternatively, one can inspect the origin of the Hamiltonian matrix elements which induced the hop (Bajo, Granucci, & Persico, 2014; Thachuk, Ivanov, & Wardlaw, 1998).
| Decoherence
In the evolution of the coefficients c_α(t) in SH (which will be discussed in more detail below), an artificial system with complete coherence is assumed (Granucci, Persico, & Zoccante, 2010; Jaeger, Fischer, & Prezhdo, 2012; Subotnik et al., 2016; Subotnik, Ouyang, & Landry, 2013). This means that for each trajectory, the electronic population situated on all states follows the gradient of the active state, whereas in quantum mechanics, the population of each state follows the gradient of its respective state. As a consequence, SH is usually overcoherent, and a decoherence correction scheme needs to be applied to obtain reasonable results. These correction schemes typically reduce or collapse the population of the inactive states according to an estimate of the quantum-mechanical decoherence rate. The decoherence rate is usually estimated according to some semiclassical approximation, for example, based on phenomenological arguments (Zhu, Nangia, Jasper, & Truhlar, 2004) or based on the assumption of frozen Gaussians which move apart because the different states have different energies (Granucci et al., 2010) or different gradients (Jain, Alguire, & Subotnik, 2016).
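A minimal sketch of such a correction, in the spirit of the energy-gap-based damping discussed by Granucci et al. (2010), is shown below. The function name and the parameter values are illustrative, and the sketch assumes nonzero population on the active state.

```python
import numpy as np

def energy_based_decoherence(c, active, E, Ekin, dt, C=0.1):
    """Damp inactive-state coefficients with an energy-gap-based decoherence
    time estimate; all quantities in atomic units, C is an empirical constant
    (often 0.1 hartree). Assumes the active state carries nonzero population."""
    c = np.array(c, dtype=complex)
    for a in range(len(c)):
        if a == active:
            continue
        tau = (1.0 + C / Ekin) / abs(E[a] - E[active])  # decoherence time estimate
        c[a] *= np.exp(-dt / tau)                        # damp inactive population
    # Rescale the active coefficient so the total norm stays 1.
    pop_inactive = np.sum(np.abs(c) ** 2) - np.abs(c[active]) ** 2
    c[active] *= np.sqrt(1.0 - pop_inactive) / np.abs(c[active])
    return c
```

The damping moves population toward the active state while preserving the norm, mimicking the collapse of the electronic wave function onto the classically followed state.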
| Ensemble of trajectories
Because of the stochastic process involved in hopping, and also because a single classical trajectory cannot reproduce the branching of a wave packet into different reaction channels, SH requires an ensemble of trajectories, each starting from a different initial condition (Tully, 1990). The size of the ensemble should be large enough that the initial wave packet (e.g., the vibrational ground state of the electronic ground state) is well characterized. Depending on the excitation procedure and the density of electronic excited states, it might also be necessary to include in the ensemble a distribution over different initial states, such that the excitation process is well described (Barbatti, 2011). Due to the stochastic nature of the hopping procedure, in principle one should also run multiple trajectories with identical initial conditions and different random number sequences, such that the encountered hopping situations are well sampled (Barbatti, 2011; Tully, 1990). In practice, however, to keep the computational cost manageable, the actual number of trajectories is usually restricted, particularly if the underlying on-the-fly method (see below) is computationally demanding. Even at the expense of statistical significance, quality is most often preferred over quantity: few trajectories with rather expensive but accurate potentials are preferable to many trajectories with cheap but unrealistic potentials.
| Generation of initial conditions
As hinted above, SH simulations require defining initial conditions, in terms of initial positions R(0), initial velocities v(0), and initial electronic states. Most often, generating initial conditions involves first sampling a large set of (R(0), v(0)) pairs from the relevant vibrational state in the electronic ground state. In practice, this involves either the computation of a Wigner distribution of a harmonic oscillator model around the ground state minimum, or carrying out a molecular dynamics simulation on the ground state potential energy surface (PES). After the (R(0), v(0)) pairs are found, the initial electronic state is defined based on vertical excitation energies and oscillator strengths, as well as on some assumptions about the excitation process.
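For the harmonic-oscillator Wigner sampling mentioned above, a minimal one-dimensional sketch might look as follows (atomic units assumed; the function name is ours). The ground-state Wigner function is a product of Gaussians in position and momentum.

```python
import numpy as np

def wigner_sample_1d(omega, mass, nsamples, rng, hbar=1.0):
    """Draw (position, velocity) initial conditions from the Wigner distribution
    of the vibrational ground state of a 1D harmonic oscillator.

    The ground-state Wigner function is a Gaussian in both x and p with widths
    sigma_x = sqrt(hbar / (2 m omega)) and sigma_p = sqrt(hbar m omega / 2).
    """
    sigma_x = np.sqrt(hbar / (2.0 * mass * omega))
    sigma_p = np.sqrt(hbar * mass * omega / 2.0)
    x = rng.normal(0.0, sigma_x, nsamples)
    p = rng.normal(0.0, sigma_p, nsamples)
    return x, p / mass  # velocities v = p / m
```

In a polyatomic molecule, the same sampling would be performed independently for each normal mode and the results transformed back to Cartesian coordinates.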
| Choice of the electronic structure method
During a SH simulation, several electronic quantities are required at every simulation time step: energies, gradients, state-to-state couplings, (transition) dipole moments, and so on. In most cases, these quantities are directly computed along the simulation, which is often referred to as "on-the-fly" or "direct" dynamics (Helgaker, Uggerud, & Jensen, 1990). The quantities can be obtained with any electronic structure method that provides electronic excited states as well as their gradients and couplings. Those include many ab initio and density functional theory methods, but semiempirical methods (Thiel, 2014) or density functional tight binding (DFTB; Seifert & Joswig, 2012) can also be used. For large systems that can be partitioned, hybrid quantum mechanics/molecular mechanics (QM/MM) methods (Senn & Thiel, 2009) could also be applied. Alternatively, instead of relying on on-the-fly calculations, it is also possible to employ analytical model functions to describe the required quantities. Naturally, the chosen method affects the shape of the PESs and thus has a large impact on the reliability of the dynamical results and their interpretation. The electronic structure methods available within the SHARC2.0 package will be discussed below.
| Basis functions representations
The SHARC approach is a generalization of the SH method to the case of any arbitrary state-to-state couplings. The description of these couplings strongly depends on the definition of the electronic Hamiltonian, as well as on the choice of the basis functions |Ψ_α(t)⟩ for the electronic wave function |Φ_el(t)⟩, cf. Equation (1). Therefore, in the following, we will briefly introduce the relevant theory and show how this choice affects the basic equations of SH within SHARC.
The standard electronic Hamiltonian in most quantum chemistry calculations is the molecular Coulomb Hamiltonian (MCH), which can be written (in a.u.) as:

Ĥ_MCH = −(1/2) Σ_i ∇_i² + Σ_{i<j} 1/|r_i − r_j| − Σ_{i,A} Z_A/|r_i − R_A| + Σ_{A<B} Z_A Z_B/|R_A − R_B|.    (3)

As the name suggests, this definition of the Hamiltonian considers neither fields external to the molecule nor any interactions beyond the Coulomb one. In order to incorporate arbitrary couplings into SH, one needs to extend this restrictive Hamiltonian, leading to the total Hamiltonian:

Ĥ_total = Ĥ_MCH + Ĥ_additional.    (4)

Note that with Ĥ_total, we refer to the total electronic Hamiltonian, since the nuclear kinetic energy is always treated classically in SH. An example for a term in Ĥ_additional is SOC, which is described by the Breit-Pauli Hamiltonian (Marian, 2012; Pauli Jr., 1927), or by the different mean-field approximations to it (Heß, Marian, Wahlgren, & Gropen, 1996; Marian, 2012; Neese, 2005), and is necessary to simulate ISC. Another additional term could be the electric field-dipole moment coupling, which is necessary to describe light-matter interactions and thus to simulate explicitly all light-induced processes, such as excitation, stimulated emission, or Stark effects (Marquetand, Richter, et al.). By inserting Equation (1) and left-multiplying with ⟨Ψ_β(t)|, one can derive the equation of motion for the electronic wave function coefficients (Doltsinis, 2006):

∂c_β/∂t = −Σ_α (i H_βα + T_βα) c_α,    (6)

where we used δ_βα = ⟨Ψ_β|Ψ_α⟩, H_βα = ⟨Ψ_β|Ĥ_total|Ψ_α⟩, and T_βα = ⟨Ψ_β|∂/∂t|Ψ_α⟩. The time-derivative coupling T_βα is usually computed as T_βα = v·K_βα, where K_βα = ⟨Ψ_β|∂/∂R|Ψ_α⟩ is the NAC vector. All quantities in the equation of motion for the electronic propagation (Equation (6)), as well as the electronic energy in Equation (2), depend on the choice of the set of electronic basis states {|Ψ_α⟩}. Within this work, we refer to these different possible choices of basis state sets as "representations." As long as the basis set is complete, the choice of representation does not matter in a fully quantum-mechanical calculation.
However, in the case of classical dynamics and more specifically in SH, the choice of representation does matter. For example, the representation affects the form of the PESs, such that in one representation a classically forbidden barrier appears, whereas in another representation the barrier can be surmounted by the classical nuclei. Moreover, the representation affects how localized or delocalized the state-to-state couplings in H or in K (Equation (6)) are, which in turn affects the number of hops in SH. For ISC dynamics, it should also be noted that the representation influences the population transfer involving the different components of multiplets (Granucci, Persico, & Spighi, 2012).
In principle, different representations could be used in SH, and there is a significant body of literature on the topic. Tully (1998) already stated that the adiabatic representation should be superior for SH, compared to any diabatic representation. Moreover, Subotnik et al. (2013, 2016) and Kapral (2016) showed that SH is related to the mixed quantum-classical Liouville equation and (among other results) found that surface hops should optimally only occur in small regions of configuration space with large couplings. This means that the optimal basis for SH is the one where the state-to-state couplings are very localized. Abedi, Agostini, Suzuki, and Gross (2013) and Fiedlschuster, Handt, Gross, and Schmidt (2017) have shown that SH using the adiabatic representation reproduces very well the results of the exact factorization formalism, whereas other representations deliver unphysical results. This fully agrees with our earlier findings (Bajo et al., 2012), where different representations were compared in the presence of strong laser fields. Based on this body of literature, most authors agree that the best representation for SH should be the adiabatic basis, that is, the basis composed of the eigenstates of the total electronic Hamiltonian, as defined in Equation (4).
In principle, the eigenstates of the total electronic Hamiltonian should be obtained with quantum chemistry software.
Hence, if we can use quantum chemistry software to compute the eigenstates of Ĥ_total, together with energies, gradients, and NACs, we can apply the standard SH formalism without any modifications. However, most quantum chemistry software used nowadays is primarily intended to find eigenstates of the MCH. Considering the additional coupling terms in Ĥ_additional during the wave function computation makes the quantum chemistry calculations much more involved. For example, if Ĥ_additional = Ĥ_SOC (e.g., the Breit-Pauli Hamiltonian or an effective one-electron spin-orbit operator), one is in the realm of relativistic quantum chemistry, which is considerably more complicated than nonrelativistic quantum chemistry. This is due to the need for at least two-component wave functions, significantly larger basis sets and configuration interaction (CI) expansions, and a larger number of states to compute (since multiplet components need to be converged separately). Furthermore, quantities such as NAC vectors are not routinely available from relativistic quantum chemistry. Alternatively, if Ĥ_additional includes electric field-dipole moment couplings, then the quantum chemistry calculation needs to include an electric field, which is relatively easy to do. However, since the electric field varies quickly, it might be necessary to perform the quantum chemistry calculations with very short time steps. For example, at 400 nm, the electric field changes from zero to maximum amplitude in 0.25 fs, making time steps of 0.1 fs or smaller desirable and thus strongly increasing the computational effort.
| The basic idea behind SHARC
As we have seen, we are faced with the predicament that, in practice, quantum chemistry does not deliver the eigenstates of the total electronic Hamiltonian and all related quantities. In order to circumvent this problem, it is possible to find approximate eigenstates of the total Hamiltonian by applying quasi-degenerate perturbation theory (QDPT; Vallet, Maron, Teichteil, & Flament, 2000; F. Wang & Ziegler, 2005). In this approach, one computes first a suitable set of eigenstates of the MCH {|Ψ_α^MCH⟩}, for example, the few lowest singlet and/or triplet states. In the following, we shall call this set of eigenstates of the MCH the "MCH representation." Note that other authors refer to it as the "adiabatic spin-diabatic" or "field-free" (Mitrić, Petersen, & Bonačić-Koutecký, 2009) representation. In the basis of these states, we compute the matrix elements of Ĥ_total, which form the total Hamiltonian matrix in the MCH representation, H^MCH. It is then possible to diagonalize this matrix to obtain approximate eigenenergies (the diagonal elements of H^diag) and eigenstates of the total Hamiltonian:

H^diag = U† H^MCH U.    (7)

Here, we call the set of eigenstates of Ĥ_total the "diagonal representation," with other authors referring to it as the "spin-adiabatic" representation or "field-dressed" representation (Thachuk, Ivanov, & Wardlaw, 1996). The approximation inherent in this QDPT approach is that all couplings with higher MCH states than the ones computed are neglected. In a nutshell, the basic paradigm of the SHARC approach is to perform SH on the approximate eigenstates obtained by applying QDPT to a set of MCH states. As a consequence, in general, two representations are of prime relevance in SHARC simulations: the one in which the quantum chemistry is executed (the MCH representation) and the one in which the nuclear propagation is carried out (the diagonal one).
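The QDPT construction, building the total Hamiltonian matrix from MCH energies plus coupling elements and diagonalizing it, can be illustrated with NumPy on an invented three-state example (all numbers below are made up for demonstration; this is not SHARC code):

```python
import numpy as np

# Toy MCH Hamiltonian: one singlet at 4.0 eV and two triplet components at
# 3.9 eV, coupled to the singlet by purely imaginary SOC elements of 0.05 eV.
H_mch = np.diag([4.0, 3.9, 3.9]).astype(complex)
soc = 0.05j
H_mch[0, 1] = H_mch[0, 2] = soc
H_mch[1, 0] = H_mch[2, 0] = np.conj(soc)   # keep the matrix Hermitian

# Diagonal representation: diagonalize to obtain approximate eigenstates of
# the total Hamiltonian, with H_diag = U^dagger @ H_mch @ U.
E_diag, U = np.linalg.eigh(H_mch)
H_diag = U.conj().T @ H_mch @ U
```

The level repulsion induced by the SOC pushes the lowest diagonal state below the lowest MCH state; the columns of U are the transformation coefficients used throughout the SHARC machinery.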
Additional representations might be beneficial for the a posteriori analysis of the results. An example would be a "diabatic" or "crude adiabatic" basis (Domcke, Yarkony, & Köppel, 2004), where the electronic wave function of each state does not change along the nuclear coordinates. Such a basis cannot be rigorously defined for a polyatomic molecule (Kendrick, Mead, & Truhlar, 2000; Yarkony, 2004), but experimental observables from spectroscopy are often discussed in terms of such states, and transforming the results to such a basis can therefore be advantageous for the interpretation of the simulations. In the context of SHARC simulations, we label this representation the "spectroscopic" representation.
The three types of representations mentioned above-the MCH, diagonal, and spectroscopic ones-are exemplified in Figure 1. In this example, in the MCH representation, two states of the same multiplicity (S 1 and S 2 ) form an avoided crossing, where localized NACs mediate population transfer, whereas states of different multiplicity (S 2 and T 1 ) can freely cross and are coupled by delocalized SOCs; multiplet components are exactly degenerate. In the diagonal representation, no states cross (although they might touch at a conical intersection), multiplet components are split up, and all couplings are localized. In contrast, in a diabatic representation, all states can freely cross, and all couplings are delocalized.
| Practical implementations of the SHARC approach
In order to make the underlying idea of SHARC (SH on diagonal states obtained through QDPT) practical, several aspects of the SH algorithm need to be adjusted to make it numerically stable and accurate. First, it is necessary to propagate the electronic wave function coefficients c^diag(t) using only quantities in the MCH representation. The most straightforward procedure would be to apply the basis transformation analogous to Equation (7) to the equation of motion (6), which yields in matrix notation:

∂c^diag/∂t = −[i H^diag + U† T^MCH U + U† ∂U/∂t] c^diag.    (9)

In order to propagate the coefficients from one time step t to the next step t + Δt, this equation can be directly integrated by any suitable method, for example, Runge-Kutta/Butcher algorithms, or short-time matrix exponentials.
However, the derivative U† ∂U/∂t is numerically very difficult to handle, because U is not uniquely determined by Equation (7) (see below) and because U might change very rapidly in the vicinity of near-degeneracy points. This situation is not rare in ISC dynamics, because in all molecules there will be state pairs with small mutual SOCs, and whenever these states cross (a type of "trivial crossing"), the matrix U changes rapidly, which leads to a highly probable hop in the diagonal representation. Hence, it is advisable to exclude U† ∂U/∂t from the integration of Equation (9). This can be achieved with the so-called three-step propagator approach (Mai, Marquetand, & González, 2015), where the computation of c^diag(t + Δt) is split into three matrix-vector products:

c^diag(t + Δt) = U†(t + Δt) P^MCH(t + Δt, t) U(t) c^diag(t).    (10)

As can be seen, this equation describes first a transformation from c^diag(t) to c^MCH(t), second a propagation from c^MCH(t) to c^MCH(t + Δt), and third a transformation from c^MCH(t + Δt) to c^diag(t + Δt). Since the second step (multiplication by P^MCH(t + Δt, t)) is the electronic propagation in the MCH basis and involves only MCH quantities, no U matrix is involved and numerical problems are avoided. The propagator matrix P^MCH(t + Δt, t) can be obtained by integrating over H^MCH and v·K^MCH, for example, with Runge-Kutta/Butcher algorithms or short-time matrix exponentials (details and equations can be found in Mai, Marquetand, and González (2015) or in the SHARC manual as given in "Further Reading"). The advantage of using the three-step propagator instead of a "one-step" propagator (i.e., directly integrating Equation (9)) is that the coefficients can be smoothly propagated with relatively long time steps, even in the presence of state crossings with very small off-diagonal couplings (called "trivial crossings" by other authors; Fernandez-Alberti, Roitberg, Nelson, & Tretiak, 2012; Granucci, Persico, & Toniolo, 2001; Plasser et al., 2012).
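A toy version of the three-step propagation might look as follows, assuming for simplicity a constant Hermitian H^MCH over the step and neglecting the time-derivative couplings (function name and simplifications are ours, not the actual SHARC propagator):

```python
import numpy as np

def three_step_propagate(c_diag, U_t, U_tdt, H_mch, dt):
    """Three-step propagation sketch: transform to the MCH basis with U(t),
    propagate there, and transform back with U(t + dt). The MCH propagator is
    a short-time exponential of a constant Hermitian H_mch; time-derivative
    couplings are neglected in this toy version."""
    c_mch = U_t @ c_diag                          # step 1: diag -> MCH at time t
    e, V = np.linalg.eigh(H_mch)                  # unitary exp(-i H dt) via eigh
    P_mch = V @ np.diag(np.exp(-1j * e * dt)) @ V.conj().T
    c_mch = P_mch @ c_mch                         # step 2: propagate in MCH basis
    return U_tdt.conj().T @ c_mch                 # step 3: MCH -> diag at t + dt
```

Because every factor is unitary, the norm of the coefficient vector is conserved exactly, which is one of the practical benefits of formulating the propagation this way.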
Instead of computing the propagator P^MCH(t + Δt, t) from H^MCH and v·K^MCH, it can also be computed with the local diabatization procedure (Granucci et al., 2001). This procedure does not require the NACs K^MCH but instead employs the overlap matrix S^MCH between states of subsequent time steps (i.e., at different geometries and with different orbitals), with elements:

S_αβ^MCH = ⟨Ψ_α^MCH(t)|Ψ_β^MCH(t + Δt)⟩.    (11)

The local diabatization algorithm is very stable in the case of trivial crossings (where NACs are extremely narrow and large; Plasser et al., 2012), and the fact that no NAC vectors need to be computed is attractive, as it makes more quantum chemical methods amenable to SH. The local diabatization algorithm was actually the original inspiration for the three-step propagator in SHARC (Mai, Marquetand, & González, 2015) and is the de facto standard way to propagate the electronic wave function in SHARC. To compute the required overlaps efficiently, SHARC2.0 comes with the WFOVERLAP program (Plasser et al., 2016), which makes extensive use of recurring intermediates, very efficient wave function truncation, and parallelization. This procedure was shown (Plasser et al., 2016) to be several orders of magnitude faster than a widely used previous implementation of such overlaps (Pittner, Lischka, & Barbatti, 2009).
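The Löwdin orthonormalization at the heart of the local diabatization procedure, and a simplified propagator built from it, can be sketched as follows. The averaging of the Hamiltonian over the step is an illustrative simplification of our own, not the exact formula of Granucci et al. (2001):

```python
import numpy as np

def loewdin(S):
    """Loewdin orthonormalization: the closest unitary matrix to S, obtained
    from the singular value decomposition as U @ Vh (polar decomposition)."""
    U, _, Vh = np.linalg.svd(S)
    return U @ Vh

def local_diabatization_propagator(S, H_t, H_tdt, dt):
    """Sketch of a local-diabatization-style propagator: the orthonormalized
    overlap matrix T carries the basis change between time steps, and the
    Hamiltonian is averaged over the step (simplified for illustration)."""
    T = loewdin(S)
    H_avg = 0.5 * (H_t + T @ H_tdt @ T.conj().T)  # averaged H, basis of step t
    e, V = np.linalg.eigh(H_avg)
    U_prop = V @ np.diag(np.exp(-1j * e * dt)) @ V.conj().T
    return T.conj().T @ U_prop                    # propagate, then change basis
```

Since the orthonormalized overlap and the matrix exponential are both unitary, the resulting propagator is exactly unitary even when the raw overlap matrix S is only approximately so.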
The original fewest-switches SH formula proposed by Tully (1990) involves the right-hand side of the equation of motion (6). Within SHARC, one needs instead the right-hand side of Equation (9), which contains the problematic derivative U† ∂U/∂t. As the computation of this derivative should be avoided, another equation that adheres to the fewest-switches principle is needed. In SHARC, the following equation, again inspired by the local diabatization method (Granucci et al., 2001), is used by default:

P_{β→α} = (1 − |c_β(t + Δt)|²/|c_β(t)|²) · Re[c_α(t + Δt) P*_{αβ} c*_β(t)] / (|c_β(t)|² − Re[c_β(t + Δt) P*_{ββ} c*_β(t)]),    (12)

where β is the active state and P is the total propagator of Equation (10). Alternatively, in SHARC2.0 hopping probabilities can also be computed with the "global flux SH" formula (Lisinetskaya & Mitrić, 2011; L. Wang, Trivedi, & Prezhdo, 2014), which might be more advantageous if "super-exchange" mechanisms are present (i.e., if two states are only coupled via a classically forbidden third state).
In order to perform the dynamics simulations on the PESs of the diagonal states, it is necessary to compute the gradients corresponding to these states, based on the knowledge of the gradients of the MCH states and the transformation matrix. Similar to the gradient used in Ehrenfest dynamics (Doltsinis, 2006), the gradients of the diagonal states can be written as

g_α^diag = Σ_μν U*_μα U_να [δ_μν g_μ^MCH + (E_ν^MCH − E_μ^MCH) K_μν^MCH].    (13)

It can be seen that the full transformed gradient is a linear combination of several MCH gradients (first term), plus a contribution of the energy-difference-scaled NAC vectors (second term). The second term contributes significantly if the NAC vector K_μν^MCH is large, which usually occurs close to conical intersections. However, the second term is negligible if all factors U*_μα U_να (μ ≠ ν) vanish, which happens if in each column and row of U only one value is nonzero; this is the case if SOCs are small and no significant state mixing occurs. If the NAC vectors are not available from quantum chemistry or too expensive to compute, then it is often a good approximation to neglect the second term in the gradient, especially in systems without large SOCs (hundreds of cm⁻¹). The quality of this approximation can be checked by monitoring the total energy conservation in the trajectories. Beyond the terms in Equation (13), it might be necessary to also include Hellmann-Feynman terms in the gradient transformation (for example, if very heavy atoms dissociate from a molecule or if strong laser fields are present), although they tend to be difficult to obtain with electronic structure codes.

Another very important aspect which needs to be considered in SHARC simulations is the tracking of the absolute phase of all parts of the electronic wave function. This is important for two mathematical objects: one is the electronic basis functions computed in the quantum chemistry calculations, and the other is the transformation matrix U.
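The structure of this gradient transformation, a U-weighted combination of MCH gradients plus energy-difference-scaled NAC vectors, can be sketched as follows (function name, array layout, and sign conventions are illustrative and not necessarily those of the SHARC implementation):

```python
import numpy as np

def diag_state_gradient(U, alpha, grads_mch, E_mch, nacs):
    """Gradient of diagonal state alpha from MCH quantities.

    grads_mch : (nstates, natoms, 3) gradients of the MCH states
    nacs      : (nstates, nstates, natoms, 3) NAC vectors K_mu_nu
    """
    n = len(E_mch)
    g = np.zeros_like(grads_mch[0])
    for mu in range(n):
        # First term: MCH gradients weighted by the squared U elements.
        g += (abs(U[mu, alpha]) ** 2) * grads_mch[mu]
        # Second term: energy-difference-scaled NAC vectors (mu != nu).
        for nu in range(n):
            if mu != nu:
                g += (np.conj(U[mu, alpha]) * U[nu, alpha]
                      * (E_mch[nu] - E_mch[mu])).real * nacs[mu, nu]
    return g
```

When U is the identity (no state mixing), the expression collapses to the plain MCH gradient of the selected state, which is exactly the limit discussed in the text.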
Regarding the basis functions, it is known that, in general, wave functions computed with any quantum chemistry program have a (usually) random sign. This does not affect energies or gradients, but any off-diagonal element ⟨Ψ_1|Ô|Ψ_2⟩ will change sign if one of the two state wave functions changes sign from one time step to the next. Within SHARC, these random sign changes are efficiently tracked through the computation of the state overlap matrix S (Equation (11)) and automatically removed. This always works as long as all matrix elements from the quantum chemistry software are based on the same wave functions and therefore any sign changes are consistent in all matrix quantities. The current interfaces in SHARC2.0 all provide this consistency, but one should nonetheless check the trajectories for random sign fluctuations in the involved quantities.
Regarding the transformation matrix U, each column of this matrix can be multiplied by a complex phase factor e^{iθ} and still be an eigenvector of H^MCH. Moreover, degenerate eigenvectors can freely mix, adding an arbitrary mixing angle to the set of undetermined parameters. This means that each numerical diagonalization during the dynamics could yield different, random phase factors, which only depend on the implementation details of the diagonalization routine. These random phase factors would make U nonsmooth, a fact which is very detrimental to the electronic propagation and any subsequent analysis of the coefficients. Fortunately, in the above-described three-step propagator, the random phase factors cancel out during the propagation, so that tracking of the phase factors is not a big issue. Still, in our experience (Mai, Marquetand, & González, 2015), and as was pointed out by Pederzoli and Pittner (2017), it is preferable to perform phase tracking. The reason is that uncontrolled phase factors with the three-step propagator can lead to random population transfer among the components of a multiplet, possibly leading to unnecessary random hops between the components (Pederzoli & Pittner, 2017). The original phase tracking algorithm in SHARC (Mai, Marquetand, & González, 2015) was based on the overlap matrix between the U matrices of two subsequent time steps (U†(t) U_untracked(t + Δt)) and computes the phase-tracked U matrix at t + Δt as:

U(t + Δt) = U_untracked(t + Δt) Ô{Ĉ[U†_untracked(t + Δt) U(t)]},    (14)

where Ô Löwdin-orthonormalizes the matrix it acts on, and Ĉ makes the matrix in square brackets commute with H^diag(t + Δt), which is achieved by setting all matrix elements to zero if they correspond to nondegenerate eigenstates in H^diag(t + Δt) (Mai, Marquetand, & González, 2015). This algorithm was claimed to fail in some cases where multiple states cross simultaneously between two time steps (Pederzoli & Pittner, 2017), because the algorithm would assign the phases to the wrong states.
Although such a situation is unlikely to occur often in molecular systems, in SHARC2.0 (Mai, Richter, Heindl, et al., 2018) the algorithm has been improved by using the state overlap matrix S (Equation (11)) to locally diabatize and thus transform away any state crossings before applying the algorithm. This is possible because the state overlap matrix is real and thus does not affect the complex phases which need to be tracked. With this change, the algorithm can be stated as

U(t + Δt) = U_untracked(t + Δt) Ô{Ĉ[U†_untracked(t + Δt) S† U(t)]}.    (15)

This approach fully corrects the algorithm for the (already rare) cases where the old one failed. In the case that S is not computed, SHARC2.0 falls back to the old algorithm. Yet, all interfaces currently available within SHARC2.0 allow computing the overlap matrix S.
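A stripped-down version of per-column phase tracking, handling only the nondegenerate case and omitting the Ĉ, Ô, and S machinery of the full algorithm, might look like this (the function name is ours):

```python
import numpy as np

def track_column_phases(U_prev, U_new):
    """Remove the arbitrary per-column phase of a freshly diagonalized U matrix
    by rotating each column to have maximal real overlap with the corresponding
    column of the previous time step (degenerate-block mixing not handled)."""
    U_fixed = U_new.copy()
    for col in range(U_fixed.shape[1]):
        overlap = np.vdot(U_prev[:, col], U_fixed[:, col])
        if abs(overlap) > 1e-12:
            # Multiply by conj(overlap)/|overlap| so the overlap becomes real
            # and positive, i.e., the column phase matches the previous step.
            U_fixed[:, col] *= np.conj(overlap) / abs(overlap)
    return U_fixed
```

The full SHARC algorithm additionally mixes within degenerate blocks and locally diabatizes with S, but the per-column rotation above conveys the basic idea of continuity between time steps.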
The effect of the new tracking algorithm is exemplified in Figure 2, where we repeated the computations of Pederzoli and Pittner (2017). The computation is based on a model of two coupled harmonic oscillators which cross with an uncoupled set of four degenerate states. Pederzoli and Pittner (2017) showed that in this model, the old tracking algorithm of SHARC failed, as shown in the middle panel. Conversely, the new tracking algorithm (right panel) produces the expected result of full population transfer when the harmonic oscillator states cross with the multiplet. We note, however, that the shown model (fully uncoupled multiplet with nonzero population) is very different from typical SHARC applications, where uncoupled states are typically omitted for efficiency reasons, and where population in uncoupled states is quickly collapsed by the decoherence correction schemes.
In the second approach, SOCs are properly incorporated into the electronic equation of motion. This can lead to population flux from one multiplicity to another, possibly inducing surface hops. However, in this approach, the PESs are not modified by the SOCs, which means that the dynamics is carried out on the surfaces corresponding to the MCH states. Some authors refer to this approach as SH on "adiabatic/spin-diabatic" surfaces. The fact that the PESs are not affected by SOCs has the advantage that the SH algorithm requires minimal changes: the only difference to regular SH is that in Equation (6) the Hamiltonian matrix elements H_αβ do not form a diagonal matrix. This approach has been implemented in several SH codes published in recent years. For example, Franco de Carvalho and Tavernelli (2015) reported SH simulations with ISC for SO2 using CPMD (2017) and LR-TDDFT electronic structure. As one variant of the "spin-diabatic" approach, they treat each multiplet as only one effective state, that is, they do not consider separate coefficients c_α(t) (Equation (1)) for all the multiplet components. The employed "effective" SOC matrix elements are obtained as the sum of all SOC elements between the involved multiplet components. According to Granucci et al. (2012) and in our experience, this effective SOC approach should be treated with caution, as the electronic propagation is significantly different from a propagation including all components. This might be why, for SO2, they find significant ISC to all three relevant triplet states (T1 to T3), whereas other works show that only one of these triplets is populated due to symmetry (Lévêque, Taïeb, & Köppel, 2014; Xie, Hu, Zhou, Xie, & Guo, 2013). The same general approach (SH in the "spin-diabatic" basis with merged multiplet components) is also implemented in the generalized trajectory SH method of Cui and Thiel (2014).
Other related work using a "spin-diabatic" basis was reported by Habenicht and Prezhdo (2012) to study ISC in carbon nanotubes. Moreover, some earlier works on collision reactions performed SH including ISC on fully diabatic potentials (Fu, Shepler, & Bowman, 2011; Han & Zheng, 2011). We also note that the "spin-diabatic" basis is used in a number of more recent methods as well, like generalized ab initio multiple spawning (Curchod, Rauer, Marquetand, González, & Martínez, 2016; Fedorov, Pruitt, Keipert, Gordon, & Varganov, 2016) or direct-dynamics vibrational multiconfigurational Gaussian (Richings et al., 2015). However, the latter methods include more quantum mechanical effects and hence might not show the same dependence on representation as SH does. As discussed above, the optimal basis for SH is the diagonal basis, where the SOCs directly affect the shape of the PESs. Hence, the third general approach for SH including ISC is to use the diagonal basis, which some authors refer to as the "spin-adiabatic" basis. Besides SHARC and some early applications to scattering reactions (Maiti, Schatz, & Lendvay, 2004), some of the earliest followers of this approach were Persico and coworkers. They comprehensively showed that the spin-diabatic approach is incorrect because the effective SOC elements ignore the direction of the SOC vectors (i.e., containing the SOCs between all components of the involved multiplets), which become important if more than one singlet and one triplet are considered. Alternatively, if in the spin-diabatic approach the multiplet components are treated explicitly, then the approach does not guarantee rotational invariance of the results. Despite its clear superiority, the spin-adiabatic/diagonal approach is not yet widespread. SHARC has employed the diagonal basis since its birth in 2011, but SH in the diagonal basis was implemented in the otherwise long-established Newton-X only in 2017 by Pederzoli and Pittner (2017).
These authors have also introduced two new propagators in an effort to solve the problems with arbitrary phases in the matrix U. However, for these propagators, the employed modified U matrices do not diagonalize the Hamiltonian, a fact which could be problematic if hopping probabilities in the nondiagonalizing basis are coupled with nuclear gradients in the diagonal basis, because in this way the electronic wave function and the nuclear potentials could become inconsistent. Conversely, if gradients in the nondiagonalizing basis are employed, the algorithm loses the abovementioned benefits of the diagonal basis. Hence, the locally diabatic phase tracking of diagonal states in SHARC2.0 (Equation (15) and Figure 2) should be a safer solution to the arbitrary phase problem.
On a side note, we recall that SHARC can also be used to study molecules under the influence of electric fields. Analogously to the case of ISC dynamics, there are three general approaches which can be used for SH in this case. In the first, the dipole couplings can be simply added to the electronic Hamiltonian in the equation of motion (6), but with the nuclear dynamics evolving on field-free potentials. This approach, the counterpart of the "spin-diabatic" approach for ISC, is conceptually simple (Jones, Acocella, & Zerbetto, 2008) and it has been popularized by the field-induced SH method of Mitrić et al. (2009) and Mitrić, Petersen, Wohlgemuth, Werner, and Bonačić-Koutecký (2011), followed by a number of related implementations (Bajo et al., 2014; Tavernelli, Curchod, & Rothlisberger, 2010). The same approach also forms the basis of the external-field ab initio multiple spawning method (Mignolet, Curchod, & Martínez, 2016).
A second approach is to include the effect of dipole couplings on the PES. This idea was already proposed in the 1990s (Dietrich, Ivanov, Ilkov, & Corkum, 1996; Kelkensberg, Sansone, Ivanov, & Vrakking, 2011; Thachuk et al., 1996, 1998), is in principle equivalent to the "spin-adiabatic" approach, and is the one implemented in SHARC and used in the early laser field applications. Other authors refer to this approach as SH on field-dressed states (Thachuk et al., 1996, 1998). The third approach for SH including laser fields is to perform SH on PES obtained after diagonalizing the electronic Hamiltonian in the Floquet picture (Bajo et al., 2012; Fiedlschuster, Handt, & Schmidt, 2016; Fiedlschuster et al., 2017). The advantage is that in this picture, the PES do not change as rapidly as in the field-dressed SH approach, where they change depending on the field frequency. On the contrary, in the Floquet picture the potentials change only depending on the envelope function of the pulse. Recently, it was shown that Floquet-based SH delivers much better results than field-free or field-dressed SH when compared to exact dynamics (Fiedlschuster et al., 2016, 2017). Although the Floquet picture was used in one early application of the SHARC approach (Bajo et al., 2012), it is not currently implemented in SHARC2.0. Moreover, Floquet-based SH can only be applied to laser fields where the Floquet treatment is appropriate. Rigorously, this is only the case if the laser field is strictly time periodic, but the approach is still approximately correct for fields with constant central frequency and slowly varying envelope. The SHARC2.0 program suite (Mai, Richter, Heindl, et al., 2018) provides an implementation of the SHARC approach together with a large set of auxiliary programs for various tasks such as setup, analysis, or quantum chemistry interfacing.
The core program of SHARC2.0-the dynamics driver sharc.x-is written in Fortran 90, whereas most auxiliary programs are written in Python 2. The wave function overlap program WFOVERLAP (Plasser et al., 2016), which is essential for most ab initio dynamics simulations using SHARC2.0, is also written in Fortran 90.
The different parts of SHARC2.0 are presented in Figure 3 together with the general work flow during a full dynamics simulation project. The three columns in the figure show the work flow on three levels: (a) the ensemble level, where multiple trajectories are prepared, run, and analyzed; (b) the trajectory level, where nuclei and electrons are propagated from time step to time step; and (c) the time step level, where the quantum chemistry interfaces drive the electronic structure calculations. On the left of the figure, the different programs in the SHARC2.0 suite are given, next to the task they perform.
| Generation of initial conditions
The generation of initial conditions in SHARC2.0 consists of two general steps. In the first, a large number of initial geometries and corresponding initial velocities are generated. For rather small, rigid molecules in the gas phase, the preferred approach is to sample geometries and velocities from a Wigner distribution of the ground state harmonic oscillator (Barbatti & Sen, 2016; Dahl & Springborg, 1988; Schinke, 1995). The effect of high temperature can be included by sampling from a Boltzmann-weighted combination of different vibrational states of the harmonic oscillator. This approach usually produces distributions of coordinates and energy which are close to the actual quantum distributions (Barbatti & Sen, 2016). Unfortunately, Wigner sampling cannot be applied to large and flexible systems with many degrees of freedom, such as those containing long alkane chains or flexible groups, biopolymers, or simply molecules in solution. The reason is that these systems possess a large number of local minima in the ground state PES as well as many anharmonic, nonlinear vibrational modes such as torsions or solvent diffusion, making the linear harmonic oscillator approximation invalid for these systems. Initial conditions for such systems can be prepared by running sufficiently long molecular dynamics simulations in the ground state and extracting snapshots from the trajectory (Garavelli et al., 2001). Within the SHARC2.0 suite, one can currently convert the results of AMBER (Case et al., 2017) simulations to the native SHARC format to create such initial conditions. Alternatively, initial conditions can be sampled from previous SHARC trajectories. The latter is not only useful to obtain initial conditions, but can also be used to restart excited-state trajectories with modified settings, for example, reducing the number of states after initial relaxation, switching level of theory, or following ground state dynamics after relaxation with single-reference methods.
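For a single harmonic mode, the ground-state Wigner distribution factorizes into two Gaussians in position and momentum, so sampling it is straightforward. The following sketch (not SHARC's actual implementation; mass and frequency values are merely illustrative) shows this for one mode in atomic units; a molecular implementation would loop over normal modes and transform back to Cartesian coordinates.

```python
import numpy as np

def wigner_sample(omega, mass, n, hbar=1.0, rng=None):
    """Sample n (q, p) pairs from the Wigner distribution of the
    harmonic-oscillator ground state, which factorizes into two
    Gaussians with widths sigma_q = sqrt(hbar/(2*m*omega)) and
    sigma_p = sqrt(m*omega*hbar/2)."""
    rng = rng or np.random.default_rng(0)
    q = rng.normal(0.0, np.sqrt(hbar / (2 * mass * omega)), n)
    p = rng.normal(0.0, np.sqrt(mass * omega * hbar / 2), n)
    return q, p

# The mean total energy of the sampled ensemble approaches the
# zero-point energy hbar*omega/2 of the mode.
omega, mass = 0.01, 1822.0   # illustrative atomic-unit values
q, p = wigner_sample(omega, mass, 100000)
energy = (0.5 * p**2 / mass + 0.5 * mass * omega**2 * q**2).mean()
print(energy)        # close to 0.5 * omega = 0.005
```

Boltzmann weighting of vibrationally excited states, as mentioned above, would replace the single ground-state Gaussians by a thermal mixture of excited-state Wigner functions.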
The second step of preparing initial conditions in SHARC2.0 is to assign, for each initial geometry, the corresponding initial electronic state. This state can be specified manually by the user, using either the MCH or the diagonal basis, or in a quasi-diabatic basis obtained through overlap computations between the initial geometry and a reference geometry with known electronic states. However, the more common procedure is to perform a single point calculation for each initial geometry and select the initial state stochastically (Barbatti et al., 2007), based on the obtained excitation energies, oscillator strengths, and some assumptions about the excitation process, for example, coming from experimental setups. Within SHARC, this stochastic selection process can either be carried out in the diagonal or the MCH basis, although for the employed delta pulse approximation only the MCH basis is rigorously correct.
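A minimal sketch of such a stochastic selection is given below. The window limits, the excitation data, and the f/E² weighting are assumptions for illustration (a common delta-pulse-style choice; the exact expression used by a given code should be checked against its documentation), not a transcription of SHARC's algorithm.

```python
import random

def select_initial_state(excitations, emin, emax, rng=None):
    """Stochastically pick an initial excited state for one geometry.
    `excitations` is a list of (excitation_energy, oscillator_strength)
    tuples from a single-point calculation. States inside the energy
    window [emin, emax] are chosen with probability proportional to
    f/E^2 (an assumed delta-pulse-style weighting). Returns the chosen
    state index, or None if no state lies in the window."""
    rng = rng or random.Random(42)
    idx = [i for i, (e, f) in enumerate(excitations) if emin <= e <= emax]
    if not idx:
        return None
    weights = [excitations[i][1] / excitations[i][0]**2 for i in idx]
    return rng.choices(idx, weights=weights, k=1)[0]

# Hypothetical excitation energies (eV) and oscillator strengths:
excitations = [(4.0, 0.1), (5.0, 0.4), (8.0, 0.2)]
rng = random.Random(1)
counts = [0, 0, 0]
for _ in range(10000):
    counts[select_initial_state(excitations, 3.5, 6.0, rng)] += 1
print(counts)  # the bright state 1 dominates; state 2 is outside the window
```

In practice such a selection is repeated over the whole set of sampled geometries, and geometries for which no state is selected are discarded.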
| Dynamics driver
After preparation of the initial conditions, the SHARC trajectories are ready to be executed. The SHARC2.0 dynamics driver offers several popular algorithms for the coupled propagation of nuclei and electrons. The nuclei are generally propagated with the velocity-Verlet algorithm (Verlet, 1967). It is possible to propagate the nuclei on either MCH or diagonal PESs, although the latter is generally recommended. In that case, the nuclear gradients are computed by a transformation of the MCH gradients, as given in Equation (13). The contribution of the NACs to the gradient can optionally be neglected, if NACs are not available or to speed up the calculations.
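The velocity-Verlet scheme mentioned above can be sketched in a few lines. This is a generic textbook implementation, not SHARC's Fortran code; the harmonic-potential check at the end uses arbitrary illustrative values.

```python
import numpy as np

def velocity_verlet_step(x, v, grad, mass, dt):
    """One velocity-Verlet step; `grad(x)` returns the gradient of the
    active-state potential (i.e., minus the force). Returns the new
    positions and velocities."""
    g0 = grad(x)
    x_new = x + v * dt - 0.5 * (g0 / mass) * dt**2
    v_new = v - 0.5 * (g0 + grad(x_new)) / mass * dt
    return x_new, v_new

# Sanity check on a harmonic potential V = x^2/2: the symplectic
# integrator keeps the total energy bounded over many steps.
grad = lambda x: x
x, v, m, dt = np.array([1.0]), np.array([0.0]), 1.0, 0.01
e0 = 0.5 * m * (v @ v) + 0.5 * (x @ x)
for _ in range(1000):
    x, v = velocity_verlet_step(x, v, grad, m, dt)
e1 = 0.5 * m * (v @ v) + 0.5 * (x @ x)
print(abs(e1 - e0))  # small bounded energy drift
```

In a surface-hopping run, `grad` would be the gradient of the currently active diagonal PES, obtained by transformation of the MCH gradients as described above.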
The electronic wave function is propagated using the three-step propagator approach (Equation (10)). The MCH propagator P MCH (t + Δt, t) is computed as a product of matrix exponentials, using shorter time steps than in the nuclear propagation and linear interpolation of all quantities (Mai, Marquetand, & González, 2015). One can either employ the standard approach, which includes the NAC contribution v ÁK MCH , or the local diabatization approach (Granucci et al., 2001), which works with the wave function overlap matrix S (Equation (11); Plasser et al., 2016). Wave function and transformation matrix phases are always automatically tracked, as explained above.
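The substep propagation with interpolated Hamiltonians can be illustrated as follows. This is a simplified sketch under stated assumptions: the NAC term v·K and the three-step basis transformations are omitted, the Hamiltonian is taken as Hermitian, and the matrix values are invented for the demonstration; it only shows the "product of matrix exponentials over substeps with linear interpolation" idea.

```python
import numpy as np

def propagate_coefficients(c, H0, H1, dt, nsub=25):
    """Propagate electronic coefficients c over one nuclear time step by
    splitting it into nsub substeps with linearly interpolated
    Hamiltonians (atomic units; NAC term omitted in this sketch).
    Each substep applies exp(-i*H*dt_sub), built from the
    eigendecomposition of the Hermitian interpolated H."""
    dts = dt / nsub
    for k in range(nsub):
        frac = (k + 0.5) / nsub           # midpoint interpolation weight
        H = (1.0 - frac) * H0 + frac * H1
        evals, V = np.linalg.eigh(H)
        U = V @ np.diag(np.exp(-1j * evals * dts)) @ V.conj().T
        c = U @ c
    return c

# Norm conservation: the propagator is unitary for Hermitian H.
H0 = np.array([[0.00, 0.01], [0.01, 0.10]])   # illustrative values
H1 = np.array([[0.02, 0.02], [0.02, 0.08]])
c = np.array([1.0 + 0j, 0.0 + 0j])
c = propagate_coefficients(c, H0, H1, dt=20.0)
print(abs(np.vdot(c, c)))   # stays at 1 to machine precision
```

With the local diabatization variant mentioned above, the matrix exponentials would instead be built from the wave function overlap matrix S.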
The dynamics driver also carries out all steps related to the SH. Hopping probabilities can either be computed with Equation (12) (Mai, Marquetand, & González, 2015) or with global flux SH. Two decoherence correction schemes are available: the energy-based correction suggested by Granucci and Persico (2007) and the augmented SH algorithm put forward by Jain et al. (2016), which is based on propagating auxiliary trajectories for the nonactive states. Kinetic energy adjustment after a hop can either be omitted, or done parallel to the full velocity vector or the relevant NAC vector. In either case, frustrated hops can be treated with or without reflection. For QM/MM calculations, it is also possible to consider the kinetic energy of only a subset of atoms for decoherence correction and kinetic energy adjustments.
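Two of these ingredients can be sketched compactly. The hopping expression below is a standard Tully-style fewest-switches formula (SHARC's Equation (12) differs in detail), and the decoherence routine follows the published energy-based correction of Granucci and Persico (2007) with its usual default constant C = 0.1 hartree; all numerical values in the demonstration are invented.

```python
import numpy as np

def hop_probabilities(c, active, coup, dt):
    """Fewest-switches hopping probabilities out of the active state
    (generic Tully-style expression, not SHARC's exact Equation (12)).
    coup[a, b] is the scalar nonadiabatic coupling v.K_ab."""
    a = active
    probs = np.zeros(len(c))
    for b in range(len(c)):
        if b != a:
            probs[b] = max(0.0, 2.0 * dt * coup[a, b]
                           * np.real(np.conj(c[a]) * c[b]) / abs(c[a])**2)
    return probs

def decoherence_edc(c, active, ekin, E, dt, C=0.1):
    """Energy-based decoherence correction (Granucci & Persico, 2007):
    nonactive amplitudes decay with a lifetime set by the energy gap
    and kinetic energy; the active amplitude is rescaled to restore
    the norm. All quantities in atomic units."""
    c = np.array(c, dtype=complex)
    for b in range(len(c)):
        if b != active:
            tau = (1.0 + C / ekin) / abs(E[b] - E[active])
            c[b] *= np.exp(-dt / tau)
    others = sum(abs(c[b])**2 for b in range(len(c)) if b != active)
    c[active] *= np.sqrt(1.0 - others) / abs(c[active])
    return c

c = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)
coup = np.array([[0.0, 0.002], [0.002, 0.0]])
probs = hop_probabilities(c, 0, coup, dt=0.5)
c2 = decoherence_edc(c, 0, ekin=0.05, E=np.array([0.0, 0.1]), dt=0.5)
print(probs[1])              # small positive hopping probability
print(abs(np.vdot(c2, c2)))  # 1.0: norm restored after damping
```

In a full driver, the hop itself would then be decided by comparing the cumulative probabilities with a uniform random number, followed by the kinetic energy adjustment described above.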
In case of computationally demanding systems, some interfaces take advantage of the parallel computing capabilities of the quantum chemistry programs (ADF, GAUSSIAN, and TURBOMOLE). Additionally, most interfaces can automatically schedule and distribute independent parts of the calculations to different CPUs, for example, wave function computations for several multiplicities, gradient computations for several states, or displacements for numerical gradients, in order to save wall clock time.
| Trajectory and ensemble analysis
The analysis of the simulation results can be performed in two general ways, either by manually analyzing individual trajectories or by statistical analysis of the ensemble. Although the latter is arguably more important, both approaches have their value. Individual analysis of trajectories is required to verify that all considered trajectories are physically sound and is a good basis for hypothesis building. Ensemble analysis can then be used to test these hypotheses and will provide most chemically interesting conclusions.
The quantities that can be analyzed in SHARC are divided into two groups (Tully, 1990). The first group are physical observables. The most prominent examples are quantum yields, which can be defined either through the population of specific electronic states-ground state, long-lived triplet states, ionic states-or through nuclear coordinates, like in rearrangement reactions. The time evolution of certain quantum yields can be compared to experimentally measured lifetimes. For dissociation and scattering reactions, it is also possible to obtain velocity or kinetic energy distributions from the simulations. Other observables that can be obtained from the trajectories are different kinds of transient signals, such as transient absorption spectra (Berera, Grondelle, & Kennis, 2009), time-resolved photoelectron spectra (Stolow, Bragg, & Neumark, 2004), time-resolved infrared spectra (Nibbering, Fidder, & Pines, 2005), or time-dependent nuclear distribution functions (Bressler & Chergui, 2010; Sciaini & Miller, 2011). The second group of quantities that can be analyzed are descriptors (Tully, 1990). These are not physical observables but are very useful for formulating reaction mechanisms, generalizing to classes of molecules, or comparing to other computational simulations. Examples are the character of electronic wave functions or internal nuclear coordinates.
The SHARC2.0 suite contains a number of tools which aid in the analysis of individual trajectories, in the detection of problems occurring in the simulations, and in the ensemble analysis. Electronic populations can be computed with a number of protocols, for example, by (a) summing up the diagonal quantum amplitudes |c_α^diag(t)|^2 over all trajectories or by (b) counting the numbers of trajectories in each diagonal state. By transforming the quantum amplitudes into the MCH representation, |c_α^MCH(t)|^2, the sum of quantum amplitudes can also be computed for the spin-free MCH states (c), which is usually very helpful in interpreting the populations. Using this transformation to obtain the number of trajectories in each MCH state (d) can only be done in an approximate way. One can also (e) compute quasi-diabatic populations, or (f) use histogram binning (Mai, Marquetand, Richter, González-Vázquez, & González, 2013), for example, to compute the number of trajectories whose oscillator strength is above 0.1. The population plots, together with monitoring population flux between the states, allow proposing kinetic models for the observed photoreactions. Additional tools allow fitting these kinetic models to the population data and computing errors for the obtained kinetic parameters. These computed errors can be used to verify that a sufficient number of trajectories were computed for the employed population protocol and kinetic model. The evolution of the electronic wave function can also be monitored on-the-fly through charge transfer numbers computed with the TheoDORE package (Plasser, Wormit, & Dreuw, 2014). The nuclear evolution can be analyzed through internal coordinates (bond lengths, angles, and dihedrals), through normal mode coordinates (Kurtz, Hofmann, & de Vivie-Riedle, 2001; Plasser, 2009), or through essential dynamics analysis (Amadei, Linssen, & Berendsen, 1993).
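The difference between population protocols (a) and (b) can be made concrete with a small sketch. The three "trajectories" below are hypothetical, and the functions are illustrative rather than SHARC's analysis scripts; the point is that amplitude-based and trajectory-count-based populations need not agree.

```python
import numpy as np

def ensemble_populations(trajs, nstates):
    """Two common population protocols at a given time step. Each
    trajectory is a (coefficients, active_state) pair.
    Protocol (a): average the quantum populations |c_alpha|^2.
    Protocol (b): fraction of trajectories with active state alpha."""
    pop_amp = np.zeros(nstates)
    pop_cls = np.zeros(nstates)
    for c, active in trajs:
        pop_amp += np.abs(np.asarray(c))**2
        pop_cls[active] += 1.0
    return pop_amp / len(trajs), pop_cls / len(trajs)

# Three hypothetical trajectories, all currently active in state 0:
trajs = [([np.sqrt(0.7), np.sqrt(0.3)], 0),
         ([np.sqrt(0.4), np.sqrt(0.6)], 0),
         ([np.sqrt(0.9), np.sqrt(0.1)], 0)]
amp, cls = ensemble_populations(trajs, 2)
print(amp)  # ~[0.667, 0.333]: amplitude-based populations
print(cls)  # [1.0, 0.0]: the two protocols can disagree
```

For internally consistent SH ensembles the two protocols converge with the number of trajectories, so large discrepancies can themselves flag problems such as missing decoherence.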
Furthermore, it is possible to extract automatically the geometries where hops between specific state pairs occur. Naturally, as the trajectory analysis is the most important and most specific step, users might need to carry out specific analysis procedures depending on the application, which might not be catered for in the general tools of SHARC2.0. However, all results in SHARC2.0 are stored in human-readable text files, allowing easy access to all raw data. A posteriori investigations of the PES for the interpretation of mechanisms are assisted by tools that allow optimization of minima and crossing points for all excited states encountered during the SHARC simulations. This is facilitated by an interface between the ORCA (Neese, 2012) external optimizer and the SHARC2.0 suite, which delivers necessary energies and gradients (Bearpark, Robb, & Schlegel, 1994;Levine, Coe, & Martínez, 2008) to ORCA. This interface allows optimizations that often are not possible within other quantum chemistry programs. In this way it is, for example, possible to optimize crossing points using gradients from ADF or TURBOMOLE, or from GAUSSIAN with TD-DFT.
| SHARC APPLICATION EXAMPLES
In the first publication of the SHARC method, both the influence of field-dipole couplings and SOCs were tested in an analytical model of the IBr molecule. Further tests included strong, off-resonant laser interactions via the so-called nonresonant dynamic Stark effect in the same model. Ideally, such interactions are treated in the Floquet picture, where the PESs of relevant states are replicated as many times as photons are considered to interact with the molecule in order to include multiphoton processes. This approach was exemplified in an analytical model of the Na2 system (Bajo et al., 2012).
The first application of the SHARC method in an on-the-fly framework was devoted to the investigation of the excited-state dynamics of the nucleobase keto-amino cytosine in the gas phase (Mai et al., 2013; Richter, Marquetand, González-Vázquez, Sola, & González, 2012b). It was found that ISC to the triplet states can take place on a femtosecond timescale, competing with the well-known ultrafast IC pathways to the electronic ground state (Barbatti, Borin, & Ullrich, 2015; Crespo-Hernández, Cohen, Hare, & Kohler, 2004; Middleton et al., 2009). Despite rather weak SOCs (typically about 20-40 cm−1), as expected for second-row atoms, the very small energetic separation between the singlet and triplet states led to efficient ISC on an ultrafast time scale. In Mai et al. (2013), the enol-amino tautomer of cytosine was also investigated. In comparison to the keto-amino tautomer, the enol-amino tautomer shows only negligible ISC. Other relaxation processes, such as IC, also happen on different time scales in the two tautomeric forms, and a rather complex picture of the excited-state dynamics is obtained. This intricate dynamics can lead to enormous complications in attributing experimentally found time scales to the calculated molecular processes, as detailed by Ruckenbauer, Mai, Marquetand, and González (2016).
Related works using SHARC for nucleobases focused on uracil (Richter & Fingerhut, 2016) and thymine (Mai, Richter, Marquetand, & González, 2015a), where a low but nonnegligible population of triplet states was also observed in the gas phase. Generally speaking, these works show that in the isolated pyrimidine nucleobases the triplet states can potentially be relevant to understand their excited-state relaxation dynamics, and that they should not be neglected a priori. Driven by our interest in nucleobases and their reaction to ultraviolet irradiation, the role of triplet states in thymine dimer formation-one of the most abundant DNA photolesions-was investigated using SHARC. Interestingly, the nonadiabatic simulations performed (Rauer, Nogueira, Marquetand, & González, 2016) showed that triplet states remain unpopulated along the reaction pathway on an ultrafast timescale. In contrast to the direct formation of the thymine dimers, the role of triplet states is well documented when a photosensitizer, which prepares the system containing two thymines directly in a triplet state, is employed. Using SHARC in conjunction with other methods, it could be elucidated that the thymine dimer formation in the triplet manifold is a stepwise reaction mechanism, where a long-lived triplet biradical intermediate is traversed before a bifurcation on the ground-state PES leads to the cyclobutane photoproduct with low yield (Rauer, Nogueira, Marquetand, & González, 2018).
SHARC was also applied to a large number of modified nucleobases, whose chemical formulae differ only slightly from the canonical nucleobases but whose excited-state dynamics can be dramatically different. In this regard, thio-substituted nucleobases (thiobases)-bearing a sulfur atom instead of an oxygen atom-are probably among the most interesting systems. Unlike their canonical counterparts, thiobases show exceptionally high ISC yields, usually in the range of 90-100%. With SHARC, the excited-state dynamics of 2-thiouracil and 2-thiocytosine (Mai, Pollum, et al., 2016) were investigated. Figure 4, which is adapted from that work, shows an example of the time evolution of the singlet and triplet states in 2-thiouracil. The figure also presents the kinetic model assumed for the shown fit, including the obtained time constants for the IC and ISC processes. On the right panel, the figure also depicts the temporal evolution of two important internal coordinates, which are very helpful in analyzing the dynamics in the two triplet minima of 2-thiouracil (Koyama, Milner, & Orr-Ewing, 2017; Sanchez-Rodriguez et al., 2017). Both mentioned thiobases showed ISC in the time range of a few hundred femtoseconds, consistent with experimental results on these two and several other thiobases (Koyama et al., 2017; Mai, Pollum, et al., 2016; Martínez-Fernández, Corral, Granucci, & Persico, 2014; Pollum, Jockusch, & Crespo-Hernández, 2014; Pollum, Ortiz-Rodríguez, Jockusch, & Crespo-Hernández, 2016; Sanchez-Rodriguez et al., 2017). Based on the SHARC results, a general explanation for this behavior of thiobases was put forward (Mai, Pollum, et al., 2016), stating that in thiobases the excited-state minima are stabilized by thionation, whereas the S1/S0 conical intersections retain the same energies as in the canonical bases.
As a consequence, there is a very large barrier for ground state relaxation, making ISC the only viable deactivation route in these molecules and explaining the exceptionally high ISC yields. Other nucleobase analogues, such as purine, 6-azacytosine (Borin, Mai, Marquetand, & González, 2017), or 5-bromouracil (Peccati, Mai, & González, 2017), were also investigated with SHARC, showing that purine and 6-azacytosine do not exhibit ultrafast ISC.
SHARC has also been used to study the excited-state relaxation of the SO2 molecule, a system that has raised a lot of attention in recent years (Franco de Carvalho & Tavernelli, 2015; Wilkinson et al., 2014; Xie et al., 2013), and the results agree nicely with independently published (Lévêque et al., 2014) exact quantum dynamics simulations on potentials of slightly higher accuracy. In particular, out of the three low-energy triplets of SO2, only one is significantly populated due to symmetry reasons, and this is nicely reproduced in the SHARC simulations. Furthermore, the release of singlet oxygen from cyclohexadiene endoperoxide (Martínez-Fernández, González-Vázquez, González, & Corral, 2014) was investigated, and it was found that among the two competing pathways, cycloreversion and O-O homolysis, the latter is the dominant one with remarkable ISC efficiency. The mechanism of other photosensitizers was also investigated with SHARC, for example, the prototypical photosensitizer benzophenone (Marazzi et al., 2016), which can be used to promote thymine dimerization. That dynamical study showed that the two discussed ISC mechanisms, involving the two lowest triplet states, coexist in a kinetic equilibrium.
FIGURE 4 Overview of results obtained from SHARC simulations for 2-thiouracil using the MS-CASPT2(12,9)/cc-pVDZ method. In (a), the time-dependent populations (thin lines) and kinetic model fits (thick lines). In (b), the assumed kinetic model with the obtained fit parameters and errors. In (c) and (d), the temporal evolution of two key geometric parameters (C=C bond length and thiocarbonyl pyramidalization angle). (Reprinted with permission. Copyright 2016 ACS, published under CC-BY license)
In the thiophene molecule (Schnappinger et al., 2017), photoexcitation leads to both ring puckering and ring opening, followed by an interplay of IC and ISC due to the near degeneracy of several states. Furthermore, SHARC was also used to shed light on the ultrafast ISC pathways of 2-nitronaphthalene (Zobel, Nogueira, & González, 2018), rationalizing the high ISC efficiency by virtue of small electronic and nuclear alterations of the chromophore when going from the singlet to the triplet manifold. Quite recently, SHARC was interfaced to ADF (Baerends et al., 2017), which is one of the few density functional theory codes that can perform perturbational spin-orbit computations, and hence is ideally suited to study ISC phenomena. With SHARC and ADF, Atkins and González (2017) recently investigated the ultrafast dynamics of [Ru(bpy)3]2+, a prototypical transition metal complex widely utilized as a photosensitizer in photovoltaic and other photonic applications. These were the first trajectories using SOCs on-the-fly for a transition metal complex. They showed that the ultrafast ISC, taking place on a 25-fs time scale, is not only due to the high density of states and the large SOCs (>400 cm−1), but requires nuclear relaxation involving Ru-N bond vibrations, among other degrees of freedom (Atkins & González, 2017). In general, but particularly in transition metal complexes with their high density of states, it is extremely beneficial to follow the character of the electronic wave function on-the-fly. It is in this respect that the automatic characterization of charge transfer numbers using the TheoDORE code (Plasser et al., 2014) can be extremely revealing.
Figure 5 illustrates for one exemplary trajectory of [Re(CO)3(im)(phen)]+ (im = imidazole, phen = phenanthroline) in water the evolution of the electronic wave function from predominantly Re(CO)3 → phen (metal-to-ligand charge transfer) to im → phen (ligand-to-ligand charge transfer).
SHARC has a growing number of users, as documented by various publications from other research groups. Corrales et al. (2014) studied bond breaking times for alkyl iodides with alkyl chains of different lengths. They observed a linear relationship between the reduced mass of the chain and the bond breaking time, using a modified version of SHARC. A subset of the same authors employed the same approach to investigate the photodissociation of chloroiodomethane (Murillo-Sánchez et al., 2017). Cao, Xie, and Yu (2016) ruled out the participation of previously proposed triplet intermediates in the N, O rearrangement reaction of oxazole and instead proposed singlet pathways. Cao (2018) also investigated the role of ring puckering and ring opening in the photorelaxation of thiazole and isothiazole using a modified SHARC-MOLPRO interface. Banerjee, Halder, Ganguly, and Paul (2016) studied the electron-catalyzed photofragmentation of 5-phenyl-2H-tetrazole, in which upon photoexcitation, an electron is injected from one part of the molecule into another part, where bond dissociation takes place, and afterward the electron returns to its originating part of the molecule. Bellshaw et al. (2017) showed that the dynamics of the CS2 molecule is strongly affected by SOCs, as the dissociation barrier is much smaller in the triplet states than in the singlet ones. Siouri, Boldissar, Berenbeim, and de Vries (2017) used SHARC to identify ISC pathways in the photorelaxation of 6-thioguanine tautomers. Pederzoli and Pittner (2017) investigated ISC processes in thiophene, as mentioned in the SHARC section above. Squibb et al. (2018) found that, according to SHARC calculations based on CASSCF electronic structure properties, triplet states play a role in the photofragmentation of acetylacetone.
| CONCLUSIONS
We have presented the SHARC approach, as it is implemented in the SHARC2.0 program package (Mai, Richter, Heindl, et al., 2018). The SHARC approach is an extension of the popular SH method, which allows simulating the full-dimensional excited-state dynamics of molecules including IC. With the SHARC approach, it is possible to incorporate arbitrary coupling terms in the electronic Hamiltonian, opening up the possibility to also treat processes beyond IC, such as ISC or laser-induced excitation. The central idea of SHARC is that SH should be performed on the PESs of the eigenstates of the total electronic Hamiltonian, in contrast to many other SH approaches, where the eigenstates of the MCH are used. The eigenstates of the total electronic Hamiltonian are computed by diagonalization of the Hamiltonian matrix obtained from quantum chemistry. This diagonalization step makes it necessary to perform a number of basis transformations, which affect most of the working equations in SH. The working equations in SHARC are designed for optimal numerical accuracy and stability, which is one of the biggest achievements of the SHARC approach.
We have also provided a brief overview of the SHARC2.0 package, which was released at the beginning of 2018. The core program of the new version of the SHARC package is the SHARC2.0 dynamics driver, which is currently interfaced to six quantum chemistry packages-MOLPRO, MOLCAS (Aquilante et al., 2015), COLUMBUS (Lischka et al., 2011), ADF (Baerends et al., 2017), GAUSSIAN (Frisch et al., 2016), and TURBOMOLE (Furche et al., 2014)-enabling dynamics simulations based on many popular electronic structure methods. The SHARC2.0 package also contains a large number of auxiliary programs, which automatize all steps in the preparation of the simulations and provide a wide array of analysis tools.
Finally, we have shown that the SHARC approach (in its previous implementation, Mai, Richter, et al., 2014) has been very successful in describing many excited-state phenomena in a variety of molecular systems. Some highlights include the work on nucleobases and nucleobase analogues, the simulation of ISC in transition metal complexes such as [Ru(bpy)3]2+, and diverse works on small inorganic and organic chromophores.
As mentioned above, one of the most important ingredients for any SHARC simulation is an appropriate and efficient electronic structure method, which can facilitate accurate simulations over sufficiently long time scales and statistically meaningful numbers of trajectories. Hence, a constant focus of the ongoing SHARC development efforts is to broaden the support for further efficient quantum chemistry codes. For example, the simulation of very large chromophores can be made feasible with graphics processing unit accelerated electronic structure codes, as the inspiring work of Penfold (2017) recently showed for the direct-dynamics variational multiconfigurational Gaussian method. Moreover, for the treatment of chromophores embedded in complex biological environments, SHARC will benefit from further development of interfaces to hybrid QM/MM methods. On the other extreme, small systems will profit from very accurate electronic structure methods with analytical gradients (Celani & Werner, 2003; MacLeod & Shiozaki, 2015). An entirely different possibility is offered by using machine learning potentials (Behler, 2017; Gastegger, Behler, & Marquetand, 2017; Hase, Valleau, Pyzer-Knapp, & Aspuru-Guzik, 2016; Ramakrishnan & von Lilienfeld, 2017) and extending them for the treatment of nonadiabatic dynamics.
FURTHER READING
The SHARC2.0 package and the WFOVERLAP program, as well as comprehensive documentation and tutorials, can be obtained at http://sharc-md.org/
"Physics"
] |
Trends in modeling Biomedical Complex Systems
In this paper we provide an introduction to the techniques for modeling multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, and informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts in mathematical modeling methodologies and statistical inference, bioinformatics, and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented.
Introduction
New "omics" technologies applied to molecular genetics analysis are producing huge amounts of raw data. Biomedical research laboratories are moving towards an environment, created through the sharing of resources, in which heterogeneous and complex health related data, such as molecular data (e.g. genomics, proteomics), cellular data (e.g. pathways), tissue data, population data (e.g. genotyping, SNP, epidemiology), as well as data generated by large scale analysis (e.g. Simulation data, Modelling, Systems Biology), are all taken into account as shown in Figure 1.
The future of biomedical scientific research will be to use massive data-crunching applications and data grids for distributed storage of large amounts of data, and to develop new approaches to the study of genome-enabled medical science. Microarrays, NMR, mass spectrometry, protein chips, gel electrophoresis data, yeast two-hybrid assays, QTL mapping, gene silencing, and knockout experiments are all examples of technologies that capture thousands of measures, often in single experiments.
In this review we introduce the term Biomedical Complex System, together with some examples, to characterize the complexity of current models for biological processes involved in normal and pathological states. These models make full use of the currently available high-throughput data, and their potential applications are highlighted.
Complex system application in human diseases
Human diseases result from abnormalities in an extremely complex system of molecular processes. In these processes, virtually no molecular entity acts in isolation, and complexity is caused by the vast number of dependencies between molecular and phenotypic features. It is a very intuitive concept to represent such complex information as networks. The field of network theory has progressed rapidly over the last few years (see [1] for a review of recent results and references) and, not surprisingly, this representation of complex information has found its way into medical research [2,3]. It has been suggested that a systems-based approach using network analysis could offer a means to combine disease-related information and to identify the most important factors for the phenotype of the disease. In particular, it has been stressed that the combination of genomic, proteomic, metabolomic and environmental factors may provide insights into pathogenic mechanisms and lead to novel therapeutic targets. Post-genomic approaches have already contributed to the understanding of specific aspects of the disease process and to the development of diagnostic and prognostic clinical applications. Cardiovascular disease, obesity, diabetes, autoimmune diseases, and neurodegenerative disorders are some of the disease areas that have benefited from these types of data. Such diseases are the result of disturbances, at different scales, in several molecular interactions and processes, which contribute to an increased susceptibility to aging, morbidity and mortality. For such diseases, a vast amount of data originating from different sources is typically available, but in common clinical practice different types of data are interpreted in isolation. It is therefore poorly understood how different factors act in synergy to cause a complex disease phenotype.
The patterns of dependencies between these factors may be effectively reflected in different, connected networks that associate patients with clinical and molecular abnormalities as well as environmental determinants. This process of data integration will make it possible to better understand the disease phenotype and to assign patients to specific disease subtypes [4,5]. We foresee that complex diseases will prompt the development of classifiers and kernel-based approaches for clinical decision support, in which many genome-wide data sources are combined with physiological parameters within the patient domain, making use of novel modeling methodologies.
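The network representation described above can be sketched in a few lines of Python. All patient identifiers and molecular features below are hypothetical, and the degree count is only the simplest possible network statistic for spotting factors shared across patients:

```python
from collections import defaultdict

# Hypothetical bipartite network linking patients to observed
# molecular/clinical abnormalities (illustrative data only).
edges = [
    ("patient1", "SNP_rs123"), ("patient1", "high_IL6"),
    ("patient2", "high_IL6"),  ("patient2", "obesity"),
    ("patient3", "high_IL6"),  ("patient3", "SNP_rs123"),
]

def degree(edges):
    """Count how many patients share each abnormality."""
    deg = defaultdict(int)
    for _patient, feature in edges:
        deg[feature] += 1
    return dict(deg)

def hub_features(edges, min_degree=2):
    """Features shared by at least `min_degree` patients are
    candidate 'hub' factors for the disease phenotype."""
    return sorted(f for f, d in degree(edges).items() if d >= min_degree)
```

In practice such networks contain thousands of nodes and richer statistics (centrality, modules), but the integration principle is the same: shared neighbourhoods suggest shared aetiology.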
A great challenge for contemporary Molecular Medicine is the modeling, description and ultimately the comprehension of the origins of complex and multifactorial pathologies within a Systems Biology framework. The terms 'multifactorial' and 'polygenic' express the idea that multiple genes act in combination with lifestyle and environmental factors. Inheritance of polygenic traits and diseases does not fit simple patterns as in a pure Mendelian case, but also has a strong environmental component. Many common traits, such as skin colour, height, and even intelligence, are inherently multifactorial, and many common diseases, such as type-2 diabetes, obesity, asthma, cancers, mental retardation, aging-related diseases and cardiovascular diseases, also tend to be multifactorial.
As an example of complex pathology, we can consider human aging. The ageing process is caused by the progressive lifetime accumulation of damage to macromolecules and cells. The capability of the body to set up a variety of molecular and cellular strategies to cope with and neutralize this damage is considered a key feature of longevity. The aging process can be influenced by several variables such as lifestyle, environment, genetics and intrinsic stochasticity. For example, transcriptional noise, measured in young and old cardiomyocytes by global mRNA amplification as well as by quantification of mRNA levels in a panel of housekeeping and heart-specific genes, increases in old age compared with young age [6]. The understanding of the aging process raises the question of the stability over time of biological functions (antagonistic pleiotropy) and of the discrimination between biological and chronological age. Novel strategies may help to identify new molecular targets that can be addressed to prolong the lifespan and to improve the quality of life during aging.
Psychiatric disorders seem to particularly lend themselves to a systems-based analysis approach. It is well known that schizophrenia has a strong genetic component, with concordance rates in monozygotic twins reaching approximately 50%. This increased risk is conferred by a multitude of different genes, with the most important genetic polymorphisms accounting for about 1% of the increased risk. It seems likely that the disease is ultimately precipitated by a complex interplay of genetic predisposition and a broad spectrum of environmental and nutritional factors (see [3] for updated references on schizophrenia and its identification as a complex network disease). In this context, epidemiological factors such as urbanicity, geographical distribution, and migration behaviour, but also maternal risk factors (such as infections, malnutrition, adverse life events during pregnancy or season of birth), have been suggested to be associated with the risk of schizophrenia onset. The relationship between these factors and the interplay with genetic determinants remains unknown, and integrated, systems-based investigations seem to be a promising approach to obtain deeper insights into the disease aetiology.
Metabolic syndrome is a combination of medical disorders that increase the risk of developing atherosclerosis, cardiovascular diseases, diabetes and other pathologies. It affects a significant part of the population in western countries, and its prevalence increases with age. The exact patho-physiological mechanisms of metabolic syndrome are not yet completely elucidated, due to the number of factors involved and to the complexity of their interactions. The most important factors are: weight, genetics, aging, and lifestyle, i.e., low physical activity and excess caloric intake. There is debate regarding whether obesity or insulin resistance (IR, i.e. the condition in which normal amounts of insulin are inadequate to produce a normal insulin response) is the cause of the metabolic syndrome, or whether they are consequences of a more far-reaching metabolic derangement. A number of markers of systemic inflammation, including C-reactive protein, are often increased, as are fibrinogen, interleukin 6 (IL-6), Tumor necrosis factor-alpha (TNFα) and others. Some have pointed to a variety of causes, including increased uric acid levels caused by dietary fructose. In vivo and in vitro studies of the insulin signalling network have provided insights into how insulin resistance can develop in some pathways whereas insulin sensitivity is maintained in others. In a Systems Biology perspective, this phenomenon can be modelled as a form of adaptation, with a consequent switch between stable phenotypes. This model is supported by experimental observations showing that the pathways leading to IR contain several phosphorylation steps, and this can be sufficient to support multistability and switching among phenotypes [7].
An emerging field of Medicine is so-called Ecological Medicine, which tries to define the health state in terms of biological community abundance, composition and type. Recent studies on the gut microbiota (the heterogeneous population of intestinal bacteria) show that its composition may change with pathological state and ageing. Since it is also modulated by the Immune System, it can be seen as a crucial node for determining the interactions between the environment (food) and the internal machinery (the Immune and Metabolic systems), especially for those pathologies related to both factors (see for example [8]).
Multidisciplinary complex system theory
A number of physico-mathematical theories deal with systems characterized by a high number of degrees of freedom, non-linear relations between parameters, high variability and stochastic behaviour. The science of Complex Biological Systems (ranging from Biochemistry, Physics, Biology and Medicine to the Social Sciences) is trying to understand global behaviour and "emergent properties" (such as self-organization, robustness, formation of memory patterns, etc.) on the basis of microscopic factors like interacting molecules, complexes, organelles or whole cells, depending on the scale of the system under study. The unifying framework is that biological systems are constituted by a very high number of mutually interacting elements that organize themselves in functional and dynamic networks, at different levels of complexity. The fundamental unit of living organisms is the cell (which constitutes a complex system in itself), representing the building block of higher levels of organization, such as tissues, organs and whole organisms. Different organisms organize themselves in societies and ecological systems, in which hundreds or even thousands of different species coexist in a dynamic equilibrium. The evolutionary history of biological systems, but also the history of single organisms, entails a series of constraints that can influence their structure and functional capacities: the roles of evolution and environment can thus provide useful information about how to treat a specific problem (e.g. a disease). Whereas a thermodynamic approach (in the limit of the number of system elements going to infinity) is suitable for macroscopic systems, the role of stochastic fluctuations has recently received renewed interest, since the focus has moved to mesoscopic scales in which the number of interacting elements is quite small, the noise features are non-trivial (i.e. non-Gaussian) and noise can drive the system towards unexpected behaviour [9,10].
Recent studies [11][12][13] based on fluorescence measurements on the genome of simple bacteria (like E. coli) have shown that biological noise can be classified into intrinsic and extrinsic noise. Extrinsic fluctuations are those that affect gene expression equally in a given cell, such as variations in the number of RNA polymerases or ribosomes, which can make cell activity diverge in an initially uniform population. Intrinsic fluctuations are instead those due to the randomness inherent to transcription and translation; they should affect each element of the same network independently (e.g. gene or protein levels in the same cell), adding uncorrelated variations in the overall levels of cellular activity. A deeper understanding of the role of such noise could help in explaining the different responses of organisms to the same stressogenic or pathological input.
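The intrinsic/extrinsic distinction can be illustrated with a minimal simulation in the spirit of the dual-reporter experiments cited above. The Gaussian model and all noise magnitudes below are arbitrary simplifications of the real kinetics, chosen only to make the decomposition visible:

```python
import random

random.seed(0)

def simulate_cells(n_cells, ext_sd=20.0, int_sd=10.0, mean=100.0):
    """Dual-reporter thought experiment: two identical reporter genes
    per cell. Extrinsic noise is shared by both reporters in a cell;
    intrinsic noise hits each reporter independently."""
    cells = []
    for _ in range(n_cells):
        ext = random.gauss(0.0, ext_sd)                # shared per cell
        g1 = mean + ext + random.gauss(0.0, int_sd)    # reporter 1
        g2 = mean + ext + random.gauss(0.0, int_sd)    # reporter 2
        cells.append((g1, g2))
    return cells

def noise_decomposition(cells):
    """Intrinsic variance ~ half the mean squared reporter difference;
    extrinsic variance ~ covariance of the two reporters."""
    n = len(cells)
    m1 = sum(g1 for g1, _ in cells) / n
    m2 = sum(g2 for _, g2 in cells) / n
    intrinsic = sum((g1 - g2) ** 2 for g1, g2 in cells) / (2 * n)
    extrinsic = sum((g1 - m1) * (g2 - m2) for g1, g2 in cells) / n
    return intrinsic, extrinsic
```

Because the extrinsic factor is shared within a cell, it appears in the reporter covariance, while the independent intrinsic terms only widen the difference between reporters.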
Measurement and data analysis
The inherent complexity of biological systems requires suitable experimental, statistical, and computational strategies: generally speaking, biological experiments show fundamental differences from physical experiments, such as a higher and non-trivial (non-gaussian) variability, a lower number of available measurements (such as the number of points in a kinetic experiment, or simply the number of experimental repetitions) and the lack or poor availability of small-scale (single-molecule) experiments. During the last decade a large class of in vivo and in vitro measurements has been developed and/or improved to fill this gap, such as quantitative mass-spectrometry, high-throughput sequencing, and proteomics, genomics and metabolomics measurements. Imaging techniques, such as microscopy, ultrasound, CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and PET (Positron Emission Tomography), using molecular probes such as quantum dots and nanoshells, are capable of producing quantitative and model-confirmatory data in a wide range of spatial and temporal intervals, from cells to organs or individuals, and from microseconds to hours. A central feature of all these imaging techniques is the ability to produce "in vivo" molecular data in a dynamic way (see [14] for a review).
A common denominator of these methodologies is the need for powerful computational analysis and sophisticated statistical elaboration. As an example, high-throughput gene expression experiments (microarrays) have posed new classes of statistical problems, due to the huge number of statistical tests to be performed simultaneously over widely heterogeneous data, for which an accurate control of the false positive/negative rate is a crucial issue. This problem has been tackled in several ways, for example by developing post-hoc correction methods for the significance threshold [15,16], or by including "a priori" biological information. The observation that gene expression measurements follow a highly skewed, fat-tailed distribution has raised the question of reconstructing the underlying network of interactions able to describe such observations [17,18]. Significance analysis at the single-gene level may suffer from the limited number of samples and high experimental noise, which can severely limit the power of the chosen statistical test. This problem is typically approached by applying generalized null models [19] to control the false discovery rate, or by taking into account prior biological knowledge. Pathway or gene ontology analysis can provide an alternative way to reinforce single-gene statistical significance, with the advantage of suggesting a clearer biological interpretation. The use of "a priori" biological knowledge, as coded in pathways or ontologies, may help to detect relationships at multiple scales, grouping single-gene analyses into clusters (pathways, ontologies) and super-clusters (networks of pathways, higher-order ontologies) with precise biological functions.
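As a concrete example of a post-hoc correction for simultaneous tests, the Benjamini-Hochberg step-up procedure (one common way of controlling the false discovery rate, sketched here rather than taken from any specific package) fits in a few lines:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: controls the false
    discovery rate over m simultaneous tests. Sort the p-values,
    find the largest rank k with p_(k) <= k*alpha/m, and reject
    the k smallest. Returns the indices of rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])
```

Unlike a Bonferroni threshold of alpha/m applied uniformly, the step-up rule adapts the threshold to the rank of each p-value, which matters when thousands of genes are tested at once.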
Among the different approaches that have been proposed to identify significant gene groups, a large number are based on lists of differentially expressed genes, such as GOstat [20], which compares the occurrences of each GO (Gene Ontology) term in a given list of genes with its occurrence in a reference group on the array. In the context of pathway analysis, a similar approach is used by Pathway Miner [21], which ranks pathways by means of a one-sided Fisher exact test. Other methods allow investigators to define their own gene-grouping schemes. For example, the Global Test package [22] applies a generalized linear model to determine if a user-defined group of genes is significantly related to a clinical outcome. With Gene Set Enrichment Analysis (GSEA [23]), an investigator can test whether the members of a defined gene set tend to occur towards the top or the bottom of a ranked significance list obtained from differential expression analysis. Other methods combine pathway information with the Fisher exact test for 2-by-2 contingency tables and its variations [24], allowing dimensionality reduction of the problem (from 10^4 probes to 10^2 pathways) and increasing the biological interpretability of the studied processes. By these methods, it is possible to consider single-gene relevance at different levels of biological organization: groups of genes as provided by several ontology classes, pathways and meta-pathways. A further direction is to integrate different kinds of biological knowledge (protein-protein interaction, transcription factor networks, biochemical reaction networks, as well as clinical and aetiological information about the samples) into a unified framework.
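The over-representation logic shared by tools such as GOstat and Pathway Miner reduces to a one-sided Fisher exact test, i.e. a hypergeometric tail probability. A minimal, package-free sketch (not the implementation of any of the cited tools):

```python
from math import comb

def hypergeom_enrichment_p(k, n, K, N):
    """One-sided Fisher exact test (hypergeometric upper tail), as
    used to ask whether a GO term or pathway is over-represented:
    N genes on the array, K of them annotated to the term,
    n differentially expressed, k of those n annotated.
    Returns P(X >= k) under random sampling without replacement."""
    total = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / total
```

A small p-value means the annotated genes appear among the differentially expressed ones more often than chance sampling would predict; in practice this p-value would itself be fed into a multiple-testing correction across all terms.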
From high-throughput data to modelling
Nowadays, an important area of investigation focuses on using statistical inference for mechanistic models of partially observed dynamic systems. This area represents the challenging task of combining statistical methods with models of dynamical systems. Dynamic models, usually written in the form of differential equations (DEs), describe the rate of change of a process. They are widely used in medicine, engineering, ecology and a host of other applications. One central and difficult problem is how to estimate DE parameters from noisy data. Direct approaches (such as least squares) give rise to difficulties, partly because of the intrinsic definition of the mathematical model. A formal approach to specifying uncertainty in systems of differential equations within a statistical inferential framework is something that mathematicians have only very recently started to consider. There is a great motivation, within the area of Computational Systems Biology, to fully define and propagate all sources of uncertainty in model-based reasoning, with reference to the genetic, biochemical and cellular mechanisms initiating and regulating fundamental biological processes. These systems are non-linear and non-steady-state, and contain many unknown parameters. A single nonlinear differential equation model can describe a wide variety of behaviours, including oscillations, steady states and exponential growth and decay, with relatively few parameters. Notably, many DEs do not have an analytic solution, implying that a likelihood centred on the solution to the DE is full of local maxima, ridges, ripples, flat sections, and other difficult topologies. If only parts of a genetic, biochemical and cellular network can be observed directly, structural non-identifiability may then arise, manifesting itself in functionally related model parameters which cannot be estimated uniquely.
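A deliberately small version of the parameter-estimation problem: fitting the rate constant of the simplest DE, dx/dt = -kx, to noisy observations. The data are simulated rather than experimental, and the brute-force grid search below only stands in for the optimizers and likelihood surfaces discussed in the text:

```python
import math
import random

random.seed(1)

def solve_decay(x0, k, times):
    """Analytic solution of dx/dt = -k*x, x(0) = x0."""
    return [x0 * math.exp(-k * t) for t in times]

# Hypothetical noisy observations of a decay process with true k = 0.5
times = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
true = solve_decay(10.0, 0.5, times)
data = [x + random.gauss(0.0, 0.2) for x in true]

def fit_rate(times, data, x0=10.0):
    """Least-squares estimate of k by scanning a parameter grid --
    a brute-force stand-in for the optimizers used in practice."""
    best_k, best_sse = None, float("inf")
    for i in range(1, 201):
        k = i * 0.01                      # grid over (0, 2]
        model = solve_decay(x0, k, times)
        sse = sum((m - d) ** 2 for m, d in zip(model, data))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

Even for this one-parameter model the sum of squared errors must be searched numerically; for multi-parameter networks without analytic solutions the surface acquires the ridges and local maxima described above.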
The challenge in implementing robust predictive analyses is that integrals over high-dimensional parameter spaces are usually involved that can neither be evaluated analytically, nor numerically in a straightforward way. Although inference techniques, such as Maximum Likelihood, are relatively easy to implement, they suffer from drawbacks, such as not fully exploring the entire parameter space. As a solution to this problem, the generalized profiling method [25] was proposed, in which DE solutions are approximated by nonparametric functions, which are estimated by penalized smoothing with a DE-defined penalty. The existing inference methods have substantial limitations upon the form of models that can be fitted and, hence, upon the nature of the scientific hypotheses that can be made and the data that can be used to evaluate them. Instead, the so-called plug-and-play methods require only simulations from a model and are thus free from such restrictions. Plug-and-play inference is extremely useful when one wishes to entertain multiple working hypotheses translated into multiple mechanistic models [26]. The Bayesian methodology provides one such inferential framework; however, whilst beautifully elegant in principle, the computational challenges associated with its practical instantiation are formidable [27], due to a combination of non-linear, non-steady-state differential equations containing many parameters in conjunction with a limited amount of data. Some currently useful computational tools are: Laplace's method of asymptotic approximation and Markov Chain Monte Carlo (MCMC) methods, including multi-level Metropolis-Hastings algorithms with tempering, the Gibbs sampler and the Hybrid Monte Carlo algorithm. Particle filter algorithms are useful for sequential Bayesian state estimation when the Kalman filter is not applicable because of non-linear dynamics and/or non-Gaussian probability models.
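The Metropolis-Hastings algorithm mentioned above can be sketched in its simplest random-walk form. The standard-normal target below is only a stand-in for a real posterior over DE parameters, which would be known, as here, only up to a normalizing constant:

```python
import math
import random

random.seed(42)

def metropolis_hastings(log_target, x0, n_steps, step=0.5):
    """Random-walk Metropolis-Hastings: draws correlated samples from
    a density known only up to a normalizing constant -- the workhorse
    behind Bayesian inference for mechanistic models."""
    samples, x = [], x0
    log_p = log_target(x)
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)        # symmetric proposal
        log_p_prop = log_target(prop)
        # Accept with probability min(1, p(prop)/p(x))
        if math.log(random.random()) < log_p_prop - log_p:
            x, log_p = prop, log_p_prop
        samples.append(x)
    return samples

# Toy target: standard normal, log-density up to an additive constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

The early samples depend on the starting point and are usually discarded as burn-in; tempering, Gibbs updates and Hybrid Monte Carlo are elaborations of this same accept/reject core for harder posteriors.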
Methodologies for modelling techniques
What does modelling a complex system mean? From a strictly physico-mathematical point of view, it means reproducing the main features of a real system (like phase transitions and bifurcations, or parametric and stochastic resonance phenomena) by means of a model with as few parameters as possible, in order to get a completely controllable system that can possibly be treated analytically. Usually, the aim is to learn as much as possible from such a simplified problem, with the hope that the original system can be seen as a "small perturbation" of it (i.e. we hope to have captured the peculiar features). For a complex system, it is expected that the number of parameters and basic elements cannot be too small, since, very often, the level of complexity is given by a high number of agents (composing the system) that interact in a non-trivial way, so that a "mean field" approach is not suitable. The search for such simplified models is, without any doubt, very useful, especially from a theoretical biology point of view, but this may not be the case for an applied biomedical problem. First, the essential ingredient of simplification, that is, passing from the specific case to the more general one, is to discard as many details as possible. But in a biomedical problem we may be more interested in a particular solution of our problem, one that deals with as many details of the system as possible (e.g. the features of a specific pathway involved, or the past history of the sample). The multiplicity of subclasses into which the parameter space associated with our system can be divided is the goal of our modelling, rather than something to "average out" as in a classical thermodynamic approach to a physical system. Similarly, we might be more interested in a multiparametric model that, even if analytically intractable, can be repeatedly simulated and "tuned" to a real situation, rather than in an idealized toy model that has lost any relationship with reality.
Examples that go in this direction are the so-called flux balance analysis (FBA), or flux optimization, in which the parameters (e.g. reaction rates) of a real system are changed in order to study how the reaction yields are affected [28][29][30]. In this sense, the paths to complex system modelling, from a theoretical and a more applied point of view, may run together in the beginning, but they may possibly divide along the road and take different directions. The trade-off between the search for a model as simple as possible and an adequate description of the original complex system is more markedly present in the fields of biology and medicine (rather than in physics), because a beautiful theoretical model may be of no help if it cannot be brought back to reality. One practical task of modelling may be to help in the generalization of the results obtained from simpler organisms (which can be massively tested by experiments) to humans, by following the analogies connecting them. This actually is a common practice for inferring protein functions and interactions by looking at their (structural) similarities with proteins from simpler (and more studied) organisms. Thus, related to this task, deepening the knowledge about such analogies and their limits is of fundamental importance for fruitful biomedical achievements. This can be pursued by cleverly scanning databases, repositories and ontologies in search of common modules and structures, and a good modelling effort must provide hints and tools for adequate simulations of these different complex systems (e.g. in their dynamics of response to stimuli [31,32]).
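A deliberately tiny illustration of the flux-balance idea: in a hypothetical network where two routes feed a metabolite that is drained into biomass, the steady-state constraint S·v = 0 reduces the optimization to a capacity calculation. Real FBA problems are solved by linear programming over large stoichiometric matrices; this toy only shows why bounds and mass balance determine the optimum:

```python
def max_biomass(branch_caps, export_cap):
    """Toy flux balance: fluxes v1, v2 feed metabolite B through two
    parallel branches, and v3 drains B into biomass.
    Steady state for B requires v1 + v2 = v3, with each flux bounded
    above by its capacity, so:
        max v3 = min(export_cap, sum(branch_caps))."""
    return min(export_cap, sum(branch_caps))
```

For example, with branch capacities 3 and 4 the biomass flux is capped at 7 no matter how large the export capacity is; changing one capacity and re-optimizing is the toy analogue of the parameter scans described above.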
A classical case study in biophysics is the induction and maintenance of memory in biological systems (from small genetic circuits to whole cells such as neurons). The mathematical formulation of this problem is related to the concept of bistability, both in a deterministic and in a stochastic formulation. Deterministic bistability is typically governed by feedback, auto-catalysis and non-linear interactions, and can be appreciated by stability and robustness analysis. Stochastic bistability is more subtle, being crucially related to the noise level as well as to the real size of the system (e.g. the number of proteins participating in a reaction). An approach that has received renewed attention is based on the so-called Chemical Master Equation (CME), which describes the temporal evolution of the probability of having a given number of molecules for each chemical species involved. The discrete probabilistic approach, as with the CME, is attractive because it ensures the correct physical interpretation of fluctuations in the presence of a small number of reacting elements (as compared to continuum approaches such as the Langevin and Fokker-Planck formalisms [33]) and because it provides a unitary formulation for many biological processes, from chemical reactions to ion channel kinetics. The CME theory can be related to predictions of the noise levels in selected biological processes, for example during transcription and translation [34,8]. In particular, the observation that mRNA is produced in bursts varying in size and time has led to the development of new models capable of better explaining the distributions of synthesized products [35].
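The CME for the simplest transcription model (zeroth-order mRNA production, first-order degradation) can be sampled exactly with Gillespie's stochastic simulation algorithm; the rates below are arbitrary, and the model deliberately omits the bursting discussed above:

```python
import random

random.seed(7)

def gillespie_birth_death(k_prod, k_deg, t_max, n0=0):
    """Gillespie stochastic simulation of the simplest CME:
    mRNA produced at constant rate k_prod, degraded at rate k_deg * n.
    Draws exponential waiting times and picks the next reaction in
    proportion to its propensity; returns the copy number at t_max."""
    t, n = 0.0, n0
    while True:
        a_prod, a_deg = k_prod, k_deg * n     # reaction propensities
        a_total = a_prod + a_deg
        t += random.expovariate(a_total)      # time to next event
        if t >= t_max:
            return n
        if random.random() * a_total < a_prod:
            n += 1       # transcription event
        else:
            n -= 1       # degradation event
```

For this birth-death process the stationary CME solution is a Poisson distribution with mean k_prod/k_deg; repeated runs of the simulator recover that mean, and replacing the constant production term with a bursty one reproduces the wider product distributions mentioned in the text.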
Computing and standards for large scale simulation
Due to the large data sets, and the accompanying large number of parameters, produced by high-throughput techniques such as next-generation sequencing, which accelerate the entire process from sample preparation to data analysis, there is a growing usage of high-performance computers based on clustering technologies and high-performance distributed platforms. A first approach to scalable computer infrastructure has been the use of large supercomputer clusters, followed by the introduction of grid computing; a more recent one is cloud computing. Grid infrastructures are based on a distributed computing model where easy access to large geographical computing and data management resources is provided to large multidisciplinary Virtual Organizations (VOs). Distributed High Performance Computing (HPC) is considered the way to realize the concept of virtual places where scientists and researchers work together to solve complex problems, despite their geographic and organizational boundaries. Cloud computing is defined and characterized by massive scalability and Internet-driven economics, realised as a pool of virtualized computer resources. A Cloud Computing platform supports redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures, and monitors resource use in real time to provide physical and virtual servers on which the applications can run. In a Cloud Computing platform, software is migrating from the desktop into the "clouds" of the Internet, promising users anytime, anywhere access to their software and data.
Characteristics of biological data sources
A huge amount of biological and medical information is now publicly available. Emerging knowledge domains tightly linked to systems biology, like interaction networks and metabolic pathways, are contributing even larger amounts of data. Information in secondary databases represents an essential resource for researchers, since these databases target special research interests. Many databanks are created and maintained by small groups or even by single researchers. As a result of this diffused and uncoordinated development, data is spread over hundreds of Internet sites and included in a high number of heterogeneous databases, the majority of which are of great interest for systems biology, where it is stored using different database management systems and data structures. There are few common information sets, and the semantics of data, i.e. the actual meaning associated with each piece of data, is left to the developers. It can therefore be different, even when using the same or similar names, thus leading to potential confusion. User interfaces and query methods are also different, and searching, retrieving and integrating information may become very difficult.
Data integration
One of the main issues in systems biology data management is data integration, needed to represent a global view of biological information. Data management involves the retrieval of information from multiple databases and the execution of large-scale data analyses. Data integration can best be achieved when the information and the desired analyses are stable in time and based on standardized data models and formats. In biology the domain's knowledge changes very quickly, and the complexity of the information makes it difficult to design complex data models. Integrating biological information in a distributed, heterogeneous environment requires expandable and adaptable technologies and tools that are able, at the same time, to cope with the heterogeneity of data sources and to select and manage properly the right information, i.e. by recognizing its semantics.
Among current Information and Communication Technologies (ICT), the eXtensible Markup Language (XML) [36] together with XML based biologically oriented languages, and Semantic tools, like ontologies, are the most interesting ones in view of the achievement of a standardized environment for systems biology. A Markup Language (ML) is a mechanism aimed at defining parts of a document (i.e. data) by surrounding it with a start and an end tag. XML specification defines a way to add markup (tags) to documents and thus assign meanings to data explicitly. A set of tags and their relationships defines an XML language and constitutes a namespace, the context where those tags are valid. XML languages are defined by using Document Type Definitions (DTDs) or XML Schemas [37].
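A minimal sketch of the tagging idea: the XML fragment below is invented (it is not a fragment of any real biological markup language), but it shows how tags plus a namespace give each datum an explicit meaning that a program can recover:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML fragment in the spirit of biological markup
# languages: the tags assign meanings to the data explicitly.
DOC = """
<experiment xmlns="http://example.org/toy-bio-ml">
  <gene id="IL6"><expression>2.4</expression></gene>
  <gene id="TNF"><expression>0.7</expression></gene>
</experiment>
"""

def read_expression(xml_text):
    """Parse the fragment and return {gene id: expression level}.
    The namespace plays exactly the role described in the text:
    it is the context in which these tags are valid."""
    ns = {"b": "http://example.org/toy-bio-ml"}
    root = ET.fromstring(xml_text)
    return {g.attrib["id"]: float(g.find("b:expression", ns).text)
            for g in root.findall("b:gene", ns)}
```

A DTD or XML Schema would additionally constrain which tags may appear and in what order, which is what makes such languages exchangeable between tools.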
Many XML languages have been created for biology, more than can be reviewed here. For the reasons behind their adoption and a short list, see [38][39][40]. They range from basic ones, e.g. for the storage of databank information in alternative formats that can improve traditional flat-file management, and for the description and archiving of the results of the main analysis tools, to the most complex, like those used in specialized knowledge domains (e.g., the Polymorphism Markup Language (PML) [41], which has been developed as a common data exchange format to overcome the heterogeneity of SNP databases).
XML languages can also support data interchange. In order to simplify interoperation between bioinformatics tools, the HOBIT XML schemas, which cover some common bioinformatics data types (sequence, RNA structure and alignment), and the BioDOM software library for their management, were developed [42].
An ontology is the "specification of conceptualization" in a given domain of interest. It consists of a set of concepts in that specific domain, expressed by using a controlled vocabulary, and of the relationships among these concepts. Ontologies can add semantic metadata to the resources, improve data accessibility and support integrated searches. Many biomedical ontologies have been, or are being, developed, mainly in the context of the Open Biomedical Ontologies (OBO) initiative [43].
Standards for systems biology
Many reviews have already been published on standardization of data and tools in support of systems biology development and research. We here refer to them, due to their completeness and authoritativeness.
Brazma et al. published an accurate review in 2006 [44]. They pointed out the main objectives of standardization in the life sciences, gave a classification of existing standards and produced extended and accurate lists of acronyms, definitions and URLs. Their classification is based on a table where rows represent three areas of systems biology (biological knowledge, evidence produced by technologies, general frameworks), and columns represent the four steps of standardization that they define in the review, namely informal semantics, formal semantics, formal syntax and tools.
Strömbäck et al. [45] and Wierling et al. [46] published reviews whose focus was on tools, data standards, the role of XML languages for data exchange, and how ontologies are used to develop new formats, thus constituting an essential component of standardization.
The Systems Biology Markup Language (SBML) [47,48] is defined as "a computer-readable format for representing models of biochemical reaction networks". The same objectives are driving the development of the Cell System Markup Language (CSML) [49] with the aimed to visualizing, modelling and simulating biopathways. The most used data standards are summarize in Table 1.
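As an illustration of what a computer-readable model format looks like in practice, the sketch below parses a minimal SBML-like fragment using only Python's standard library; the two-species model is invented for illustration and is not a complete, valid SBML document.

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative SBML-like fragment (invented; not a complete,
# valid SBML document): one reaction converting species S1 into S2.
SBML_FRAGMENT = """<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
  <model id="toy">
    <listOfSpecies>
      <species id="S1" initialAmount="100"/>
      <species id="S2" initialAmount="0"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="conversion">
        <listOfReactants><speciesReference species="S1"/></listOfReactants>
        <listOfProducts><speciesReference species="S2"/></listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

NS = {"sbml": "http://www.sbml.org/sbml/level3/version1/core"}

def list_species(xml_text):
    """Return the ids of all species declared in the document."""
    root = ET.fromstring(xml_text)
    return [s.get("id") for s in root.findall(".//sbml:species", NS)]
```

Because the format is machine-readable, any tool that understands the schema can extract the same model components without custom parsing.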
Databases supporting systems biology research
Systems biology research depends on the availability of well-structured data sources. Moreover, data is rarely integrated in databases that alone can support research in even the smallest biological domains. Eils et al suggest an "integrative database for systems biology", defined as a "data warehouse system" supporting all activities of a systems biology project [50]. The system would consist of three modules, one for each of the main data subsets involved: i) experimental data, ii) components and reactions of biological systems, and iii) mathematical models. Both functional models and simulations are stored using the SBML format, emphasizing the role and relevance of standardization in this field.
BMC Bioinformatics 2009, 10(Suppl 12):I1 http://www.biomedcentral.com/1471-2105/10/S12/I1
Many databases are relevant for systems biology; a more extended list, with emphasis on databases of models and pathways, is presented in Table 2. Implicitly, most of the primary databases, from gene to protein interaction, are also of interest for systems biology.
BioModels is a database of annotated computational models [51], manually curated at the European Bioinformatics Institute (EBI) as part of the broader BioModels.net initiative [52]. The BioCyc Database Collection [53] is a collection of Pathway/Genome Databases, either derived from literature or computed, covering more than 500 species, mainly bacteria, but also including Homo sapiens and Mus musculus. MetaCyc is instead a database of non-redundant, experimentally elucidated metabolic pathways [54].
A methodology for the development of new tools for systems biology
The following methodology for the development of new tools for systems biology is meant to implement ways of sharing data models and definitions based on common data interchange formats [55]:
• XML schemas can be used for the creation of common models of biological information,
• XML-based languages can be adopted for data storage, representation and exchange,
• Web Services can be made available for the interoperability of software,
• ontologies can semantically support Web Services discovery, selection and interoperation,
• Workflow Management Systems can then be used to implement automated processes.
Although this methodology may seem very difficult to implement, the Microarray Gene Expression Data Group (MGED) [56] initiative, led along the above lines, can be seen as a success story. MGED is an international society of biologists, computer scientists, and data analysts that aims to facilitate the sharing of microarray data. The initiative was devoted to the creation of a common data structure for communicating microarray-based gene-expression (MAGE) data. This activity started by defining the Minimum Information About a Microarray Experiment (MIAME) data set. MIAME describes the data needed to interpret unambiguously the results of any experiment and potentially reproduce it [57]. MIAME includes raw and normalised data for each hybridisation in the study, annotations of the sample and of the array, and other related information. In order to improve the specification of MIAME information, and therefore its accessibility, a data exchange model (MAGE-OM) and related data formats were then defined. The formats are specified as spreadsheets (MAGE-TAB) and as an XML language (MAGE-ML). In addition, the MGED Ontology was developed for the description of key concepts. A software toolkit (MAGE-STK) was finally developed to facilitate the adoption of MAGE-OM and MAGE-ML.
Along these lines, many 'Minimum Information' data sets have been defined in other biological domains. Currently, the Minimum Information for Biological and Biomedical Investigations (MIBBI) initiative lists some 30 such specifications [58]. This is an extremely good starting point towards widespread adoption of the above methodology.
Methodology for the description of complex biochemical systems
In the last decade, different computing paradigms and modelling frameworks for the description and simulation of biochemical systems have been introduced to describe complex biological systems. Parameter values are generally unknown or uncertain, owing to the lack of measurements, experimental errors and biological variability, and this is a major problem in the development of cellular models [59]. In the following paragraphs we introduce some of the most important methodologies used to describe complex biochemical systems.
• Nonlinear ordinary differential equations (ODE)
Nonlinear ordinary differential equations represent an important approach for describing cellular dynamical properties when component diffusion can be considered instantaneous and concentrations are sufficiently high [60]. Considering the cell cycle, ODEs are widely used to describe the dynamics of its regulation, and all the related published models are stored in the Cell Cycle Database [61]. Moreover, a specific set of parameter values describes a model in a particular physiological state and in a particular species. The core regulation of the cell cycle is widely conserved across species, but time scales can vary from minutes to hours, and the interactions of the key elements involved in these processes typically vary across species [62].
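As a minimal illustration of the ODE approach (a toy negative-feedback system of our own construction, not one of the curated cell-cycle models), the following sketch integrates dx/dt = a − b·x·y, dy/dt = c·x − d·y with forward Euler:

```python
def simulate(a=1.0, b=1.0, c=1.0, d=0.5, x0=0.0, y0=0.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the toy negative-feedback system
        dx/dt = a - b*x*y,   dy/dt = c*x - d*y,
    returning the full trajectory as a list of (x, y) pairs."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        dx = a - b * x * y
        dy = c * x - d * y
        x, y = x + dt * dx, y + dt * dy
        traj.append((x, y))
    return traj
```

With these (arbitrary) parameters the system settles, after damped oscillations, to the steady state x* = sqrt(a·d/(b·c)), y* = c·x*/d; production-grade simulators use adaptive stiff solvers rather than fixed-step Euler.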
• Membrane systems (P systems)
Membrane systems, also known as P systems, were introduced in [63]. They are computation models: a class of unconventional, distributed, parallel and nondeterministic computing devices, inspired by the compartmental structure and the functioning of living cells.
In order to define a basic P system, three main parts need to be introduced: the membrane structure, the objects and the rules. For a complete and extensive overview of P systems, we refer the reader to [64] and to the P Systems Web Page http://ppage.psystems.eu.
• Tissue systems (tP systems)
tP systems have been introduced to describe a tissue-like architecture, where cells are placed in the nodes of a directed graph and objects are communicated along the edges of the graph; these communication channels are called synapses. Objects can be communicated in either a replicative or a non-replicative manner, that is, sent to all adjacent cells or to only one adjacent cell, respectively. The variants of tP systems considered in the literature differ essentially in the mechanisms used to communicate objects between cells [64,65].
• Stochastic simulation technique (τ-DD)
The stochastic simulation technique called τ-DD [66] associates probabilities with the rules, following the method introduced by Gillespie in [67]. The aim of τ-DD is to extend the single-volume tau-leaping algorithm [68] in order to simulate multi-volume systems, in which the distinct volumes are arranged according to a specified hierarchy. The τ-DD approach shares a common time increment among all the membranes, used to accurately select the rules that will be executed in each compartment at each step. This is achieved by running, inside each membrane of τ-DD, a modified tau-leaping algorithm, which makes it possible to simulate the time evolution of every volume as well as that of the entire system.
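As background for the stochastic approach, the following sketch implements Gillespie's direct method — the exact single-volume algorithm that tau-leaping (and hence τ-DD) approximates — for a single decay reaction A → B; the reaction and rate constant are illustrative assumptions of ours:

```python
import random

def gillespie_decay(n_a=100, k=1.0, t_end=50.0, seed=0):
    """Gillespie's direct method for the single reaction A -> B with
    propensity k * A: an exact, single-volume stochastic simulation
    (tau-leaping, and hence tau-DD, approximate this algorithm)."""
    rng = random.Random(seed)
    t, a, b = 0.0, n_a, 0
    while a > 0:
        t += rng.expovariate(k * a)  # exponential waiting time to next firing
        if t > t_end:
            break
        a, b = a - 1, b + 1          # fire A -> B
    return a, b
```

With only one reaction channel the direct method reduces to drawing exponential waiting times; tau-leaping instead fires many reactions per fixed time increment, which is what makes the multi-volume extension tractable.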
Conclusion
In this paper we reviewed some of the principal concepts that, in our opinion, will characterize the near future of Systems Biology and of interdisciplinary research in biomedicine. A key point emerging from this review is the characterization and definition of complex biomedical systems. The notion of complexity, although still elusive, is providing new tools for the interpretation of Biology and Medicine. A new interface between Medicine and Biology is emerging with the contribution of other sciences such as Physics, Engineering and Mathematics. As a consequence, new conceptual frameworks are taking shape in biomedicine, and it is becoming clear that it is no longer possible to neglect properties of biomedical systems arising from small-scale elements, like noise and fluctuations, or global properties, such as integrated responses at the whole-organism level. Within this scenario, a central role is played by new techniques for producing and analyzing data, which are giving a detailed picture of the mechanisms governing biological systems, including humans, and which support the goal of personalized medicine. Advanced computer simulation techniques are increasingly widespread in the biomedical disciplines and provide new methodologies for predicting system behaviour. These methodologies range from deterministic to stochastic algorithms, and are supported by new generations of hardware offering huge data storage capability and computational power. The availability of data storage and standard communication protocols has fuelled the appearance of a series of public databases and Web Services from which a wide class of biological information can be retrieved. Finally, the conceptual framework of Systems Biology and the definition of Complex Biomedical System are giving a new interpretation of complex pathologies and therapeutic approaches.
These modeling approaches aim to bridge the 'translational gap' between basic and clinical research towards translational medicine.
Glossary
Bifurcation: a change in the properties of the stable states of a system described by a mathematical function, e.g. when the average value of a protein passes from a single possible value (one stable state) to a high or low possible value (two stable states).
Emergent properties: global properties of a system resulting from simpler interaction of its elements, rather than being specifically encoded in it.
False discovery rate: statistical method to control (or at least to estimate) the expected proportion of false positives (e.g. cases called different from the null hypothesis when they are not) when applying multiple testing (i.e. many statistical tests in parallel, e.g. while checking for statistically significant differential expression of thousands of genes).
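A standard way to control the FDR is the Benjamini-Hochberg step-up procedure; a minimal sketch (our illustration, not tied to any specific dataset):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices of
    the hypotheses rejected while controlling the FDR at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k_max = rank  # largest rank whose p-value passes its threshold
    return sorted(order[:k_max])
```

The i-th smallest p-value is compared with i·alpha/m, and everything up to the largest passing rank is rejected; this is less conservative than a Bonferroni correction while still bounding the expected proportion of false discoveries.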
Fisher's exact test: a statistical test to check for nonrandom association between two variables.
Generalized linear model: a generalization of linear regression between two groups of variables that may allow for nonlinear relationships.
Maximum Likelihood (ML): The likelihood (L_H) of a hypothesis (H) is equal to the probability of observing the data if that hypothesis were correct. The statistical method of maximum likelihood (ML) chooses amongst hypotheses by selecting the one which maximizes the likelihood; that is, which renders the data the most plausible.
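As a toy numerical illustration of this definition (our construction, not from the text): for i.i.d. Poisson counts the ML estimate of the rate is the sample mean, which can be checked by scanning the log-likelihood over a grid:

```python
import math

def poisson_loglik(lam, data):
    """Poisson log-likelihood up to the additive constant -sum(log(x_i!)),
    which does not depend on lam."""
    return sum(data) * math.log(lam) - len(data) * lam

data = [2, 3, 1, 4, 0, 2]
# Scan a grid of candidate rates; by concavity the grid maximizer is the
# grid point closest to the analytic ML estimate, the sample mean.
grid = [0.1 * k for k in range(1, 100)]
lam_hat = max(grid, key=lambda lam: poisson_loglik(lam, data))
```

In practice one maximizes analytically or with a numerical optimizer; the grid scan simply makes the "choose the hypothesis that renders the data most plausible" idea concrete.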
Markov process: A mathematical model of infrequent changes of (discrete) states over time, in which future events occur by chance and depend only on the current state, and not on the history of how that state was reached.
Null model: the "null hypothesis" for a statistical test describes the typical background properties of the model that should be contradicted in case of significant deviations from it.
Markov Chain Monte Carlo (MCMC): standard MCMC uses a Markov chain in which a new state is proposed and then, with some probability, the proposed state is accepted or the previous state is maintained. After continuing this process for a long time, (under some conditions) the states visited by the Markov chain approximate a sample from the posterior density of the model parameters given the data.
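A minimal random-walk Metropolis sketch (our illustration): the target here is the standard normal density, whereas in Bayesian applications the target would be the posterior density of the parameters.

```python
import math
import random
import statistics

def metropolis_normal(n=20000, step=1.0, seed=1):
    """Random-walk Metropolis sampler targeting the standard normal
    density pi(z) proportional to exp(-z**2 / 2)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # Accept with probability min(1, pi(prop) / pi(x)).
        if rng.random() < min(1.0, math.exp((x * x - prop * prop) / 2.0)):
            x = prop
        out.append(x)
    return out

samples = metropolis_normal()
```

The empirical mean and standard deviation of the visited states approach 0 and 1, the moments of the target distribution, as the chain grows.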
Plug-and-play statistics: statistical methods are plug-and-play if they require simulation from a dynamic model but not explicit likelihood ratios.
Phase transition: a macroscopic change in system properties resulting in discontinuous variations of some observed variables (e.g. the change from liquid to gas as a function of temperature and pressure changes).
Stochastic fluctuations: changes in time (and space) of an observed variable (e.g. the expression of a protein) due to random perturbations (e.g. in the degradation or synthesis processes).
Thermodynamic approach: a physical approach to the study of large systems in which the properties of the single elements (in the limit of the number of elements going to infinity) are averaged out but macroscopic features are kept (e.g. the average pressure of a gas obtained as an average of the single molecular impacts on the container surface).
Minimum penalized Hellinger distance for model selection in small samples
In the statistical modeling area, the Akaike information criterion (AIC) is a widely known and extensively used tool for model choice. The φ-divergence test statistic is a recently developed tool for statistical model selection. The popularity of divergence criteria is, however, tempered by their known lack of robustness in small samples. In this paper, penalized minimum Hellinger distance type statistics are considered and some of their properties are established. The limit laws of the estimates and test statistics are given under both the null and the alternative hypotheses, and approximations of the power functions are deduced. A model selection criterion relative to these divergence measures is developed for parametric inference. Our interest is in the problem of testing for the choice between two models using informational type statistics, when an independent sample is drawn from a discrete population. We discuss the asymptotic properties and the performance of the new test procedures and investigate their small-sample behavior.
Introduction
Comprehensive surveys of Pearson chi-square type statistics have been provided by many authors, such as Cochran (1952), Watson (1956) and Moore (1978, 1986), in particular on quadratic forms in the cell frequencies. Recently, Andrews (1988a, 1988b) extended the Pearson chi-square testing method to non-dynamic parametric models, i.e., to models with covariates. Because Pearson chi-square statistics provide natural measures of the discrepancy between the observed data and a specific parametric model, they have also been used for discriminating among competing models. Such a situation is frequent in the social sciences, where many competing models are proposed to fit a given sample. A well-known difficulty is that each chi-square statistic tends to become large without an increase in its degrees of freedom as the sample size increases. As a consequence, goodness-of-fit tests based on Pearson type chi-square statistics will generally reject the correct specification of every competing model. To circumvent this difficulty, a popular method for model selection, similar in spirit to the Akaike (1973) Information Criterion (AIC), consists in considering that the lower the chi-square statistic, the better the model. This selection rule, however, does not take into account the random variations inherent in the values of the statistics.
We propose here a procedure for taking into account the stochastic nature of these differences so as to assess their significance; addressing this issue is the main purpose of this paper. We shall propose some convenient, asymptotically standard normal tests for model selection based on φ-divergence type statistics. Following Vuong (1989, 1993), the procedures considered here test the null hypothesis that the competing models are equally close to the data generating process (DGP) against the alternative hypothesis that one model is closer to the DGP, where closeness of a model is measured according to the discrepancy implicit in the φ-divergence type statistic used. Thus the outcomes of our tests provide information on the strength of the statistical evidence for the choice of a model based on its goodness-of-fit. The model selection approach proposed here differs from those of Cox (1961, 1962) and Akaike (1974) for non-nested hypotheses: the present approach is based on the discrepancy implicit in the divergence type statistic used, while those other approaches, like Vuong's (1989) tests for model selection, rely on the Kullback-Leibler (1951) information criterion (KLIC). Beran (1977) showed that by using the minimum Hellinger distance estimator one can simultaneously obtain asymptotic efficiency and robustness in the presence of outliers. The works of Simpson (1989) and Lindsay (1994) have shown that, in hypothesis testing, robust alternatives to the likelihood ratio test can be generated by using the Hellinger distance. We consider a general class of estimators that is very broad and contains most of the estimators currently used in practice when forming divergence type statistics. This covers the cases studied in Harris and Basu (1994), Basu et al. (1996) and Basu and Basu (1998), where the penalized Hellinger distance is used. The remainder of this paper is organized as follows.
Section 2 introduces the basic notations and definitions. Section 3 gives a short overview of divergence measures. Section 4 investigates the asymptotic distribution of the penalized Hellinger distance. In section 5, some applications for testing hypotheses are proposed. Section 6 presents some simulation results. Section 7 concludes the paper.
Definitions and notation
In this section, we briefly present the basic assumptions on the model and the parameter estimators, and we define our generalized divergence type statistics. We consider a discrete statistical model, i.e. X_1, X_2, ..., X_n, an independent random sample from a discrete population with support X = {1, ..., m}. Let P = (p_1, ..., p_m)^T be a probability vector, i.e. P ∈ Ω_m, where Ω_m is the simplex of probability m-vectors. We consider a parametric model P = {P_θ : θ ∈ Θ}, which may or may not contain the true distribution P, where Θ is a compact subset of k-dimensional Euclidean space (with k < m − 1). If P contains P, then there exists θ_0 ∈ Θ such that P_{θ_0} = P, and the model P is said to be correctly specified.
We are interested in testing H_0 : P ∈ P (with true parameter θ_0) versus H_1 : P ∈ Ω_m − P.
By ‖·‖ we denote the usual Euclidean norm, and we interpret probability distributions on X as row vectors from R^m. For simplicity we restrict ourselves to unknown true parameters θ_0 satisfying the classical regularity conditions given by Birch (1964): 1. The true θ_0 is an interior point of Θ, and p_{iθ_0} > 0 for i = 1, ..., m; thus P_{θ_0} is an interior point of the set Ω_m.
2. The mapping P : Θ → Ω_m is totally differentiable at θ_0, so that the partial derivatives of p_i with respect to each θ_j exist at θ_0 and p_i(θ) has a linear approximation at θ_0; the m × k matrix of partial derivatives (∂p_i(θ_0)/∂θ_j) is of full rank (i.e. of rank k, with k < m).
Under the hypothesis that P ∈ P, there exists an unknown parameter θ_0 such that P = P_{θ_0}, and the problem of point estimation appears in a natural way. Let n be the sample size. We can estimate the distribution P_{θ_0} = (p_1(θ_0), p_2(θ_0), ..., p_m(θ_0))^T by the vector of observed relative frequencies P̂ = (p̂_1, ..., p̂_m), a measurable mapping X^n → Ω_m. This nonparametric estimator is defined by p̂_i = N_i/n (2.1), where N_i denotes the number of sample observations equal to i. We can now define the class of φ-divergence type statistics considered in this paper.
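The nonparametric estimator above can be sketched as follows (a minimal illustration, assuming the support {1, ..., m}):

```python
from collections import Counter

def empirical_probs(sample, m):
    """Observed relative frequencies p_hat_i = N_i / n on the support
    {1, ..., m}, where N_i counts the observations equal to i."""
    n = len(sample)
    counts = Counter(sample)
    return [counts.get(i, 0) / n for i in range(1, m + 1)]
```

Cells never observed in the sample get p̂_i = 0, which is precisely the situation that motivates the penalized Hellinger distance discussed later.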
A brief review of φ-divergences
Many different measures quantifying the degree of discrimination between two probability distributions have been studied in the past, e.g. by Morales et al. (1997, 1998), Zografos (1994, 1998) and Bar-Hen (1996). Consider two populations X and Y which, according to some classification criterion, can be grouped into m classes x_1, x_2, ..., x_m and y_1, y_2, ..., y_m, with probabilities P = (p_1, p_2, ..., p_m) and Q = (q_1, q_2, ..., q_m) respectively. Then
D_φ(P, Q) = Σ_{i=1}^m q_i φ(p_i / q_i)   (3.2)
is the φ-divergence between P and Q (see Csiszár, 1967), for every φ in the set Φ of real convex functions defined on [0, ∞). The function φ is assumed to satisfy the following regularity conditions: φ : [0, +∞) → R ∪ {∞} is convex and continuous, with the conventions 0·φ(0/0) = 0 and 0·φ(p/0) = p·lim_{u→∞} φ(u)/u. Its restriction to (0, +∞) is finite and twice continuously differentiable in a neighborhood of u = 1, with φ(1) = φ'(1) = 0 and φ''(1) = 1 (cf. Liese and Vajda (1987)). We shall also be interested in parametric estimators of P_{θ_0}, which can be obtained by means of various point estimators of θ_0. It is convenient to measure the difference between the observed frequencies P̂ and the expected frequencies P_θ. A minimum divergence estimator of θ is a minimizer of D_φ(P̂, P_θ), where P̂ is a nonparametric distribution estimate; in our case, where data come from a discrete distribution, the empirical distribution defined in (2.1) can be used.
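A direct computational sketch of the φ-divergence (our illustration, assuming all q_i > 0 so that the boundary conventions are not needed), together with the Hellinger generator used below:

```python
import math

def phi_divergence(p, q, phi):
    """Csiszar phi-divergence D_phi(P, Q) = sum_i q_i * phi(p_i / q_i),
    assuming all q_i > 0 so the boundary conventions are not needed."""
    return sum(qi * phi(pi / qi) for pi, qi in zip(p, q))

def phi_hellinger(x):
    """phi_1(x) = -4 * (sqrt(x) - (x + 1) / 2), which turns D_phi into the
    Hellinger-type distance 2 * sum_i (sqrt(p_i) - sqrt(q_i)) ** 2."""
    return -4.0 * (math.sqrt(x) - (x + 1.0) / 2.0)
```

Note that phi_hellinger satisfies the regularity conditions above: it is convex, with φ(1) = φ'(1) = 0.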
In particular, taking φ_1(x) = −4[√x − (x + 1)/2] in (3.2), we get the Hellinger distance between the distributions P̂ and P_θ,
D_H(P̂, P_θ) = 2 Σ_{i=1}^m (√p̂_i − √p_{iθ})².   (3.4)
Liese and Vajda (1987), Lindsay (1994) and Morales et al. (1995) introduced the so-called minimum φ-divergence estimate, defined by θ̂_φ = arg min_{θ∈Θ} D_φ(P̂, P_θ). In particular, taking φ(x) = −log x + x − 1, we get the minimum KL_m estimate, where KL_m is the modified Kullback-Leibler divergence.
Beran (1977) first pointed out that the minimum Hellinger distance estimator (MHDE) of θ, defined by θ̂_H = arg min_{θ∈Θ} D_H(P̂, P_θ), has robustness properties. Further results were given by Tamura and Boos (1986), Simpson (1987, 1989), Donoho and Liu (1988) and Basu et al. (1997); see these references for more details on this method of estimation. Simpson, however, noted that the small-sample performance of the Hellinger deviance test at some discrete models, such as the Poisson, is somewhat unsatisfactory, in the sense that the test requires a very large sample size for the chi-square approximation to be useful (Simpson (1989), Table 3). One way to avoid this problem is to use the penalized Hellinger distance (see Harris and Basu (1994)). The penalized Hellinger distance family between the probability vectors P̂ and P_θ is defined by
D_H^h(P̂, P_θ) = 2 [ Σ_{i∈∇} (√p̂_i − √p_{iθ})² + h Σ_{i∈∇^c} p_{iθ} ],   (3.7)
where h is a real positive number, ∇ = {i : p̂_i ≠ 0} and ∇^c = {i : p̂_i = 0}. Note that h = 1 generates the ordinary Hellinger distance (Simpson, 1989). The suggestion to use the penalized Hellinger distance is motivated by the fact that a suitable choice of h may lead to an estimate more robust than the MLE. A model selection criterion can be designed to estimate an expected overall discrepancy, a quantity that reflects the degree of similarity between a fitted approximating model and the generating (true) model. Estimation of Kullback's information (see Kullback-Leibler (1951)) is the key to deriving the Akaike Information Criterion AIC (Akaike (1974)). Motivated by the above developments, we propose, by analogy with the approach introduced by Vuong (1993), a new information criterion related to φ-divergences. In our test, the null hypothesis is that the competing models are equally close to the data generating process (DGP), where closeness of a model is measured according to the discrepancy implicit in the penalized Hellinger divergence.
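The penalized distance can be sketched directly (our illustration): empty cells contribute h·p_iθ in place of the full p_iθ term that the ordinary Hellinger distance would assign them.

```python
import math

def penalized_hellinger(p_hat, p_theta, h=1.0):
    """Penalized Hellinger distance: nonempty cells contribute
    (sqrt(p_hat_i) - sqrt(p_theta_i))**2, while empty cells (p_hat_i == 0)
    contribute h * p_theta_i; h = 1 recovers the ordinary distance."""
    total = 0.0
    for ph, pt in zip(p_hat, p_theta):
        if ph > 0:
            total += (math.sqrt(ph) - math.sqrt(pt)) ** 2
        else:
            total += h * pt
    return 2.0 * total
```

Taking h < 1 down-weights the contribution of empty cells, which is what improves the small-sample behavior of the resulting deviance test at sparse discrete models.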
Asymptotic distribution of the penalized Hellinger distance
Hereafter, we focus on asymptotic results. We assume that the true parameter θ_0 and the mapping P : Θ → Ω_m satisfy conditions 1-6 of Birch (1964). We consider the m-vector P_θ = (p_{1θ}, ..., p_{mθ})^T and the m × k Jacobian matrix (∂p_{iθ}/∂θ_j). The above-defined matrices are considered at the points θ ∈ Θ where the derivatives exist and all the coordinates p_j(θ) are positive.
The stochastic convergence of random vectors X_n to a random vector X is denoted by X_n →_P X and X_n →_L X (convergence in probability and in law, respectively). Instead of c_n X_n →_P 0 for a sequence of positive numbers c_n, we may write X_n = o_p(c_n^{-1}).
We need the following result to prove Theorem 4.3. Let φ ∈ Φ, let P : Θ → Ω_m be twice continuously differentiable in a neighborhood of θ_0, and assume that conditions 1-5 of Section 2 hold. Suppose that I_{θ_0} is the k × k Fisher information matrix and that θ̂_PH satisfies (3.7); then, applying the Central Limit Theorem, √n(θ̂_PH − θ_0) has a limiting centered normal distribution. For simplicity, we write D_H^h(P̂, P_{θ̂_PH}) instead of PHD^h(P̂, P_{θ̂_PH}).
Theorem 4.3. Under the assumptions of the proposition above:
Proof. A first-order Taylor expansion gives the expansion of the statistic around θ_0. In the same way as in Morales et al. (1995), it can be established from (4.12) and (4.13) that the two statistics have the same asymptotic distribution, where I is the m × m identity matrix; furthermore, it is clear (applying the CLT) that the limit law is as stated. The case of interest to us here is testing the hypothesis H_0 : P ∈ P. Our proposal is based on the penalized divergence test statistic D_H^h(P̂, P_{θ̂_PH}), where P̂ and θ̂_PH were introduced in Theorem 4.3 and (3.7), respectively.
Using arguments similar to those developed by Basu et al. (1996), under the assumptions of Theorem 4.3 and the hypothesis H_0 : P = P_θ, the asymptotic distribution of 2n D_H^h(P̂, P_{θ̂_PH}) is chi-square with m − k − 1 degrees of freedom when h = 1. Since the other members of the penalized Hellinger distance family differ from the ordinary Hellinger distance test only at the empty cells, they too have the same asymptotic distribution (see Simpson 1989; Basu, Harris and Basu 1996, among others).
Consider now the case when the model is wrong, i.e. H_1 : P ≠ P_θ. We introduce the following regularity assumptions:
(A_1) there exists θ_1 = arg inf_{θ∈Θ} PHD^h(P, P_θ);
(A_2) the asymptotic covariance condition holds, with Λ_11 = Σ_p in (4.10).
Theorem 4.4. Under H_1 : P ≠ P_θ, and assuming that conditions (A_1) and (A_2) hold, √n (D_H^h(P̂, P_{θ̂_PH}) − D_H^h(P, P_{θ_1})) has a limiting centered normal distribution. From assumptions (A_1) and (A_2), the result follows.
Applications for testing hypothesis
The estimate D_H^h(P̂, P_{θ̂_PH}) can be used to perform statistical tests.
Test of goodness-of-fit
For completeness, we look at D_H^h(P̂, P_{θ̂_PH}) in the usual way, i.e., as a goodness-of-fit statistic. Recall that here θ̂_PH is the minimum penalized Hellinger distance estimator of θ. Since D_H^h(P̂, P_{θ̂_PH}) is a consistent estimator of D_H^h(P, P_θ), the null hypothesis when using this statistic is H_0 : D_H^h(P, P_θ) = 0. Hence, if H_0 is rejected, one can infer that the parametric model P_θ is misspecified. Since D_H^h(P, P_θ) is non-negative and takes the value zero only when P = P_θ, the tests are defined through the critical region {2n D_H^h(P̂, P_{θ̂_PH}) > q_{α,k}}. Remark 5.1. Theorem 4.4 can be used to give the following approximation to the power of the test of H_0 : D_H^h(P, P_θ) = 0.
The approximated power function involves q_{α,k}, the (1 − α)-quantile of the χ² distribution with m − k − 1 degrees of freedom, and F_n, a sequence of distribution functions tending uniformly to the standard normal distribution F(x). Note that if D_H^h(P, P_θ) > 0, then for any fixed size α the probability of rejecting H_0 : D_H^h(P, P_θ) = 0 with the rejection rule 2n D_H^h(P̂, P_{θ̂_PH}) > q_{α,k} tends to one as n → ∞.
Obtaining the approximate sample size n guaranteeing a power β for a given alternative P is an interesting application of formula (5.17). If we wish the power to be equal to β*, we must solve the equation β* = β_{n*}(P). It is not difficult to check that the sample size n* is the solution of a quadratic equation involving q_{α,k} and D_H^h(P, P_θ), and the required size is n_0 = [n*] + 1, where [·] denotes "integer part of".
Test for model selection
As mentioned above, when one chooses a particular φ-divergence type statistic D_H^h(P̂, P_{θ̂_PH}), with θ̂_PH the corresponding minimum penalized Hellinger distance estimator of θ, one actually evaluates the goodness-of-fit of the parametric model P_θ according to the discrepancy D_H^h(P, P_θ) between the true distribution P and the specified model P_θ. Thus it is natural to define the best model among a collection of competing models as the model closest to the true distribution according to the discrepancy D_H^h(P, P_θ).
In this paper we consider the problem of selecting between two models. Let G_μ = {G(· | μ); μ ∈ Γ} be another model, where Γ is a q-dimensional parameter space in R^q. In a similar way, we can define the minimum penalized Hellinger distance estimator μ̂_PH of μ and the corresponding discrepancy D_H^h(P, G_μ) for the model G_μ.
Our particular interest is the situation in which a researcher has two competing parametric models P_θ and G_μ and wishes to select the better of the two, based on their discrimination statistics between the observations and the models, defined respectively by D_H^h(P̂, P_{θ̂_PH}) and D_H^h(P̂, G_{μ̂_PH}). Consider the two competing parametric models P_θ and G_μ with the given discrepancy D_H^h(P, ·).
Definition 5.2
H_0^eq : D_H^h(P, P_θ) = D_H^h(P, G_μ) means that the two models are equivalent;
H_{P_θ} : D_H^h(P, P_θ) < D_H^h(P, G_μ) means that P_θ is better than G_μ;
H_{G_μ} : D_H^h(P, P_θ) > D_H^h(P, G_μ) means that P_θ is worse than G_μ.
Remark 5.3. 1) The definition does not require that the same divergence type statistic be used in forming D_H^h(P̂, P_{θ̂_PH}) and D_H^h(P̂, G_{μ̂_PH}). Choosing different discrepancies for evaluating competing models is, however, hardly justified.
2) This definition does not require that either of the competing models be correctly specified. On the other hand, a correctly specified model must be at least as good as any other model.
The indicator D_H^h(P, P_θ) − D_H^h(P, G_μ) is unknown, but, from the previous section, it can be estimated by the difference D_H^h(P̂, P_{θ̂_PH}) − D_H^h(P̂, G_{μ̂_PH}). This difference converges to zero under the null hypothesis H_0^eq, but converges to a strictly negative or positive constant when H_{P_θ} or H_{G_μ} holds. These properties justify the use of D_H^h(P̂, P_{θ̂_PH}) − D_H^h(P̂, G_{μ̂_PH}) as a model selection indicator, together with the common procedure of selecting the model with the highest goodness-of-fit. As argued in the introduction, however, it is important to take into account the random nature of this difference so as to assess its significance. To do so, we consider the asymptotic distribution of √n (D_H^h(P̂, P_{θ̂_PH}) − D_H^h(P̂, G_{μ̂_PH})) under H_0^eq. Our major task is to propose tests for model selection, i.e., for the null hypothesis H_0^eq against the alternatives H_{P_θ} or H_{G_μ}. We use the next lemma, with θ̂_PH and μ̂_PH the corresponding minimum penalized Hellinger distance estimators of θ and μ. Using P̂ and P_θ defined earlier, we consider the corresponding vector of statistics.
Proof. The result follows from a first-order Taylor expansion.
We define Q_θ, Q_μ and Λ*, which are consistently estimated by their sample analogues K̂_θ, K̂_μ, Q̂_θ, Q̂_μ and Λ̂*; hence Γ² is consistently estimated by Γ̂². Next we define the model selection statistic and derive its asymptotic distribution under the null and alternative hypotheses. Let HI^h denote the resulting standardized statistic, where HI^h stands for the penalized Hellinger indicator. The following theorem provides the limit distribution of HI^h under the null and alternative hypotheses.
Under H_0^eq : P_θ = G_μ and P_{θ̂_PH} = G_{μ̂_PH}, applying the Central Limit Theorem and assumptions (A1)-(A2), we immediately obtain HI^h →_L N(0, 1).
6 Computational results
Example
To illustrate the model selection procedure discussed in the preceding section, we consider an example. We need to define the competing models, the estimation method used for each competing model, and the penalized Hellinger type statistic measuring the departure of each proposed parametric model from the true data generating process. For our competing models, we consider the problem of choosing between the family of Poisson distributions and the family of geometric distributions. The Poisson distribution P(λ) is parameterized by λ and has density f(x, λ) = e^{−λ} λ^x / x! for x ∈ N and zero otherwise. The geometric distribution G(p) is parameterized by p and has density g(x, p) = (1 − p)^{x−1} × p for x ∈ N* and zero otherwise.
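Samples from these two families can be generated with the standard library alone (our illustration; the parameter values λ = 3.0 and p = 0.3 are assumptions, not the paper's settings):

```python
import math
import random
from statistics import fmean

def sample_poisson(lam, rng):
    """Knuth's product-of-uniforms method for Poisson(lam), fine for small lam."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def sample_geometric(p, rng):
    """Inversion sampler for the geometric law on {1, 2, ...}."""
    return int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1

rng = random.Random(42)
pois = [sample_poisson(3.0, rng) for _ in range(5000)]
geom = [sample_geometric(0.3, rng) for _ in range(5000)]
```

The empirical means approach λ and 1/p respectively, so binning such samples into cells gives the observed frequency vectors fed to the penalized Hellinger statistics.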
We use the minimum penalized Hellinger distance statistic to evaluate the discrepancy of the proposed model from the true data generating process.
We generate data from a mixture of these two distributions, where $\pi$ ($\pi \in [0,1]$) is a value specific to each set of experiments. In each set of experiments, several random samples are drawn from this mixture of distributions. The sample size varies from 20 to 300, and for each sample size the number of replications is 1000. In each set of experiments, we choose two values of the parameter, $h = 1$ and $h = 1/2$, where $h = 1$ corresponds to the classic Hellinger distance. The aim is to compare the accuracy of model selection depending on the parameter setting chosen. To allow a good fit by the proposed method, we note that, for the chosen parameters of these two distributions, most of the mass is concentrated between 0 and 10. Therefore, the chosen partition has eight cells, defined by $[C_i, C_{i+1}[ = [i, i+1[$ for $i = 0, \dots, 6$, while $[C_7, C_8[ = [7, +\infty[$ represents the last cell. We choose different values of $\pi$: 0.00, 0.25, 0.535, 0.75, 1.00. Although our proposed model selection procedure does not require that the data generating process belong to either of the competing models, we consider the two limiting cases $\pi = 1.00$ and $\pi = 0.00$ because they correspond to the correctly specified cases. To investigate the case where both competing models are misspecified but not at equal distance from the DGP, we consider the cases $\pi = 0.25$, $\pi = 0.75$ and $\pi = 0.535$. The first case corresponds to a DGP which is Poisson but slightly contaminated by a geometric distribution; the second is interpreted similarly as a geometric slightly contaminated by a Poisson distribution. In the last case, $\pi = 0.535$ is the value for which the Poisson $D_H^h(\hat P, P_{\hat\lambda^{PH}})$ and the geometric $D_H^h(\hat P, G_{\hat p^{PH}})$ families are approximately at equal distance to the mixture $m(\pi)$ according to the penalized Hellinger distance with the above cells. In the first two sets of experiments ($\pi = 0.00$ and $\pi = 1.00$), where one model is correctly specified, we use the labels 'correct', 'incorrect' and 'indecisive' when a choice is made. The first halves of Tables 1-5 confirm our asymptotic results.
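The Monte Carlo design above can be sketched in Python. This is a minimal illustration, not the authors' code: the penalized Hellinger distance is assumed to take the form of ordinary Hellinger terms on non-empty cells plus an $h$-weighted penalty on empty cells (with $h=1$ recovering the classic distance), and the parameter value $\lambda = 3$ is an illustrative choice, since the paper's exact distribution parameters are not given.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import geom, poisson

rng = np.random.default_rng(0)

# Eight cells: counts 0..6 individually, and [7, +inf) pooled, as in the text.
def empirical_cells(sample):
    d = np.array([(sample == i).mean() for i in range(7)])
    return np.append(d, (sample >= 7).mean())

def poisson_cells(lam):
    p = poisson.pmf(np.arange(7), lam)
    return np.append(p, max(1.0 - p.sum(), 0.0))

def geometric_cells(pr):
    g = geom.pmf(np.arange(7), pr)  # pmf(0) = 0: support is {1, 2, ...}
    return np.append(g, max(1.0 - g.sum(), 0.0))

def phd(d, f, h):
    # Assumed penalized form: Hellinger terms on non-empty cells,
    # h-weighted penalty on empty ones (h = 1: classic Hellinger).
    nz = d > 0
    return 2.0 * np.sum((np.sqrt(d[nz]) - np.sqrt(f[nz])) ** 2) + h * f[~nz].sum()

def fit(d, cells, lo, hi, h):
    # Minimum penalized Hellinger distance estimator for one parameter.
    return minimize_scalar(lambda t: phd(d, cells(t), h),
                           bounds=(lo, hi), method="bounded").x

# Correctly specified case (pi = 1): pure Poisson data, lambda = 3 (illustrative).
sample = rng.poisson(3.0, size=1000)
d = empirical_cells(sample)
h = 0.5
lam_hat = fit(d, poisson_cells, 0.1, 20.0, h)
p_hat = fit(d, geometric_cells, 0.01, 0.99, h)
dist_pois = phd(d, poisson_cells(lam_hat), h)
dist_geom = phd(d, geometric_cells(p_hat), h)
print(f"lambda_hat={lam_hat:.2f}, D_pois={dist_pois:.4f}, D_geom={dist_geom:.4f}")
```

Comparing `dist_pois` and `dist_geom` (or studentizing their difference as in the previous section) reproduces the 'correct'/'incorrect' decision counted in Tables 1-5.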
They all show that the minimum penalized Hellinger estimators $\hat\lambda^{PH}$ and $\hat p^{PH}$ converge to their pseudo-true values in the misspecified cases and to their true values in the correctly specified cases as the sample size increases. With respect to $HI^h$, it diverges to $-\infty$ or $+\infty$ at the approximate rate of $\sqrt{n}$, except in Table 5. In the latter case the $HI^h$ statistic converges, as expected, to zero, which is the mean of the asymptotic $N(0,1)$ distribution under our null hypothesis of equivalence.
With the exception of Tables 1 and 2, we observed a large percentage of incorrect decisions. This is because both models are now incorrectly specified. In contrast, turning to the second halves of Tables 1 and 2, we first note that the percentage of correct choices using the $HI^h$ statistic steadily increases and ultimately converges to 100%. The preceding comments for the second halves of Tables 1 and 2 also apply to the second halves of Tables 3 and 4. In all of Tables 1-4, the results confirm, in small samples, the relative dominance, in percentages of correct decisions, of the model selection procedure based on the penalized Hellinger statistic test ($h = 1/2$) over the one based on the classical Hellinger statistic test ($h = 1$). Table 5 also confirms our asymptotic results: as the sample size increases, the percentage of rejection of both models converges, as it should, to 100%.
In Figures 1, 3, 5, 7 and 9 we plot the histograms of the datasets and overlay the fitted geometric and Poisson distribution curves. When the DGP is correctly specified (Figure 1), the Poisson distribution has a reasonable chance of being distinguished from the geometric distribution.
Similarly, in Figure 3, as can be seen, the geometric distribution closely approximates the data sets. In Figures 5 and 7 the two distributions are close, but the geometric (Figure 5) and the Poisson (Figure 7) distributions appear to be much closer to the data sets. When $\pi = 0.535$ (Figure 9), the fitted Poisson and geometric distributions are similar, while being slightly symmetrical about the axis that passes through the mode of the data distribution. This follows from the fact that these two distributions are equidistant from the DGP and would be difficult to distinguish from data in practice.

Figure 9: Histogram of DGP = 0.465×Geom + 0.535×Pois with n = 50. Figure 10: Comparative barplot of $HI_n$ depending on n.
The preceding results in the tables and Theorem 5.5 confirm, in Figures 2, 4, 6 and 8, that the Hellinger indicator for the model selection procedure based on the penalized Hellinger divergence statistic with $h = 0.5$ (light bars) dominates the procedure obtained with $h = 1$ (dark bars), corresponding to the ordinary Hellinger distance. As expected, our divergence statistic $HI^h$ diverges to $-\infty$ (Figures 2 and 6) and to $+\infty$ (Figures 4 and 8) more rapidly when we use the penalized Hellinger distance test than the classical Hellinger distance test. Figure 10 allows a comparison with the asymptotic $N(0,1)$ approximation under our null hypothesis of equivalence: the indicator $HI^{1/2}$, based on the penalized Hellinger distance, is closer to the mean of $N(0,1)$ than is the indicator $HI^1$.
Conclusion
In this paper we investigated the problem of model selection using divergence type statistics. Specifically, we proposed asymptotically standard normal and chi-square tests for model selection based on divergence type statistics that use the corresponding minimum penalized Hellinger estimator. Our tests are based on testing whether the competing models are equally close to the true distribution, against the alternative hypotheses that one model is closer than the other, where closeness of a model is measured according to the discrepancy implicit in the divergence type statistic used. The penalized Hellinger divergence criterion outperforms the classical model selection criterion based on the ordinary Hellinger distance, especially in small samples; the difference is expected to be minimal for large sample sizes. Our work can be extended in several directions. One extension is to use random instead of fixed cells. Random cells arise when the boundaries of each cell $c_i$ depend on some unknown parameter vector $\gamma$, which is estimated. For various examples, see, e.g., Andrews (1988b). For instance, with appropriate random cells, the asymptotic distribution of a Pearson type statistic may become independent of the true parameter $\theta_0$ under correct specification. In view of this latter result, it is expected that our model selection tests based on penalized Hellinger divergence measures will remain asymptotically normally or chi-square distributed.
"Mathematics"
] |
Selective Detection of Nitrogen-Containing Compound Gases
N-containing gaseous compounds, such as trimethylamine (TMA), triethylamine (TEA), ammonia (NH3), nitrogen monoxide (NO), and nitrogen dioxide (NO2), exude irritating odors and are harmful to the human respiratory system at high concentrations. In this study, we investigated the sensing responses of five sensor materials—Al-doped ZnO (AZO) nanoparticles (NPs), Pt-loaded AZO NPs, a Pt-loaded WO3 (Pt-WO3) thin film, an Au-loaded WO3 (Au-WO3) thin film, and N-doped graphene—to the five aforementioned gases at a concentration of 10 parts per million (ppm). The ZnO- and WO3-based materials exhibited n-type semiconducting behavior, and their responses to the tertiary amines were significantly higher than their responses to the nitric oxides. The N-doped graphene exhibited p-type semiconducting behavior and responded only to the nitric oxides. The Au- and Pt-WO3 thin films exhibited extremely high responses of approximately 100,000 for 10 ppm of TEA and approximately −2700 for 10 ppm of NO2, respectively. These sensing responses are superior to those of previously reported sensors based on semiconducting metal oxides. On the basis of the sensing response results, we drew radar plots, which indicated that selective pattern recognition could be achieved by using the five sensing materials together. Thus, we demonstrated the possibility of distinguishing each type of gas by applying pattern-recognition techniques.
Introduction
Many N-containing gases exude irritating odors, such as ammonia (NH 3 ), trimethylamine (TMA), triethylamine (TEA), nitric oxide (NO), and nitrogen dioxide (NO 2 ). NH 3 mainly arises from natural sources through the decomposition of organic matter containing nitrogen. Exposure to high levels of NH 3 emitted from chemical plants, cultivated farmland (fertilizer), and motor vehicles can cause irritation and serious burns on the skin and in the mouth, throat, lungs, and eyes [1,2].
TMA is a colorless, hygroscopic, and flammable tertiary amine that has a strong fishy odor at low concentrations and an NH 3 -like odor at higher concentrations. Exposure to high levels of TMA can cause headaches, nausea, and irritation to the eyes and respiratory system. After marine fish die, bacterial or enzymatic actions rapidly convert trimethylamine oxide into TMA, a volatile base that is largely responsible for the characteristic odor of dead fish [3,4]. Accordingly, the detection of TMA is essential for evaluating the freshness of fish [5][6][7]. TEA is a colorless volatile liquid with a strong fishy odor, reminiscent of the smells of NH 3 and the hawthorn plant [8]. It is commonly utilized as a catalyst and an acid neutralizer for condensation reactions, and is useful as an intermediate for manufacturing medicines, pesticides, and other chemicals. It is also a decomposition product of the V-series nerve gas agents [9]. Short-term exposure to TEA can irritate the skin and mucous membranes of humans. Chronic (long-term) exposure of workers to TEA vapor can cause reversible corneal edema [10].
NO is a nonflammable, extremely toxic, oxidizing gas with a sharp sweet odor. NO can be released by the reaction of nitric acid with metals, e.g., in metal etching and pickling, and is a byproduct of the combustion of substances in fossil fuel plants and automobiles. NO is a skin, eye, and mucous membrane irritant, as moisture and O 2 convert nitric oxide into nitric and nitrous acids. The most hazardous effects of NO are on the lungs. Inhalation causes symptoms such as coughing and shortness of breath, along with a burning sensation in the throat and chest [11]. NO is spontaneously converted to NO 2 in air; thus, some NO 2 is likely to be present when nitric oxide is detected in air [12]. NO 2 has a strong harsh odor, similar to chlorine, and may exhibit a vivid orange color. The major source of NO 2 is the burning of coal, oil, and gas. Almost all NO 2 comes from motor-vehicle exhaust, metal refining, electricity generation from coal-fired power plants, and other manufacturing industries [13]. The reaction of NO 2 with chemicals produced by sunlight leads to the formation of nitric acid, which is a major constituent of acid rain [14]. NO 2 also reacts with sunlight, which leads to the formation of ozone and smog in air [15,16]. The main effect of breathing high levels of NO 2 is an increased risk of respiratory problems, such as asthma, wheezing, coughing, colds, the flu, and bronchitis [17,18]. The U.S. National Institute for Occupational Safety and Health (NIOSH) has established exposure limits for these gases, as shown in Table 1 [19].
In this study, we investigated the sensing properties of Al-doped ZnO (AZO) nanoparticles (NPs), Pt-loaded AZO (Pt-AZO) NPs, a Pt-loaded WO 3 (Pt-WO 3 ) thin film, a Au-loaded WO 3 (Au-WO 3 ) thin film, and N-doped graphene toward NH 3 , TMA, TEA, NO, and NO 2 . We found that each N-based hazardous gas reacted distinctively to the five types of sensing materials, producing different sensing patterns.
AZO NPs
AZO NPs were synthesized via a hydrothermal method [54,55]. Zinc acetate dihydrate (Zn(AC) 2 ·2H 2 O, 99%, Sigma-Aldrich, Seoul, Korea) and potassium hydroxide (KOH, 99%, Sigma-Aldrich) were dissolved in methanol with a molar ratio of 1:3. Aluminum acetate (99%, Sigma-Aldrich) was added to the zinc acetate solution to achieve 1.0 at% of doped Al. The KOH solution was mixed with the zinc acetate solution via stirring at 60 °C for 24 h. Then, the suspension was centrifuged and washed with methanol three times. The obtained samples were dried at 90 °C for 60 min and annealed at 350 °C for 30 min in a H 2 /N 2 atmosphere.
Pt-AZO NPs
For synthesizing Pt-AZO NPs, Pt NPs were coated on the surface of the as-synthesized AZO NPs at a deposition rate of 6−7 nm/min using a DC magnetron sputtering system in an agitated vessel [55]. In the agitated vessel, the powders were continuously stirred using a rotating impeller, and the Pt NPs were homogeneously loaded on the surface of the AZO NPs. The Pt-loaded samples were prepared with a deposition time of 2 min.
Pt-WO 3 and Au-WO 3 Thin Films
For synthesizing Pt-WO 3 and Au-WO 3 thin films, WO 3 thin films were prepared via dual ion beam sputtering [56]. A tungsten metal target of 99.99% purity was employed. The WO 3 was deposited onto an interdigitated Pt electrode formed on a Si/SiO 2 wafer via a photolithography process. The dual ion beam consisted of a primary ion beam applied to the target and a secondary ion beam with accelerated atoms to be deposited on the substrate. The tungsten target was sputtered under the following conditions: the power of the main ion gun was 90 W, the voltage of the anode was 50 V, and the voltage of the cathode was −50 V. O ions were applied under the following conditions: the power of the assistant ion gun was 120 W, the voltage of the anode was 1000 V, and the voltage of the cathode was 300 V. The thickness of the WO 3 thin film was 200 nm. Pt (2 nm) and Au (2 nm) were deposited on the WO 3 thin film via direct current (DC) magnetron sputtering as catalysts. The thickness of Au and Pt was adjusted to ~2 nm by controlling the deposition time, where the deposition rate was 0.67 nm/s for Au and 0.28 nm/s for Pt. The samples were heat-treated at 550 °C for 1 h.
N-Doped Graphene
N-doped graphene was synthesized via arc discharging. A hollow graphite rod with a diameter of 6 mm, bismuth oxide as a catalyst, and 4-aminobenzoic acid as a dopant were placed into the hole and discharged at a current of 150 A in a 550 Torr H 2 /He buffer atmosphere. The amount of N in the graphene was 2 wt% [57]. Table 2 presents the fabrication methods and specifications for the five aforementioned sensing materials used in the experiments.
Characterization
The morphology and shape of the as-synthesized sensing materials were investigated via field-emission scanning electron microscopy (FE-SEM, JEOL 7001F) and transmission electron microscopy (TEM, JEOL JEM-ARM200F).
Device Fabrication
For fabrication of the gas sensor, interdigitated Cr (20 nm) and Pt (100 nm) electrodes were deposited on the patterned SiO 2 substrate via DC magnetron sputtering [54,55]. The synthesized NPs (AZO and Pt-AZO NPs) were mixed with an α-terpineol binder and coated onto the interdigitated electrodes. The sensor was heat-treated at 300 °C for 1 h to remove the binder and annealed at 600 °C for 1 h. The Pt-WO 3 and Au-WO 3 thin films were sputtered directly onto the interdigitated electrodes. N-doped graphene was drop-coated onto the electrodes.
Gas Sensing Measurement
A device was mounted in a chamber of a tube furnace system and placed in a flow system equipped with gas cylinders and mass flow controllers (MFCs) to perform the gas sensing test. The working temperature of the sensor was controlled using the temperature controller of the tube furnace. With the application of controlled heat, the resistance of the sensing material was measured in the presence of synthetic air and then in the presence of air with a controlled amount of target gas. The amount of target gas was controlled to 10 parts per million (ppm) by varying the gas flow rates using the MFCs. All the gas sensing measurements were conducted at an operating temperature of 400 °C, except for N-doped graphene (room temperature). The sensing properties were measured using a combination of a current source (Keithley 6220) and a nanovoltmeter (Keithley 2182) with a constant current supply of 10 nA.

Results and Discussion

Figure 1 shows the size and morphology of the sensing materials. Figure 1a presents a TEM image of AZO NPs, which were spherical and had a diameter of ~25 nm. The isolated AZO NP represented the single crystallinity of a hexagonal wurtzite structure of ZnO with a lattice spacing of ~0.28 nm, which was confirmed by high-resolution TEM analysis with the electron diffraction pattern [54]. Figure 1b shows a TEM image of the as-synthesized Pt-AZO NPs. This indicates that the Pt NPs with a size of 2 nm were uniformly distributed on the surface of the AZO NPs, which was confirmed by analyses of the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) image and the energy dispersive X-ray spectroscopy (EDS) line profile [54,55]. In addition, the XRD patterns of AZO and Pt-AZO NPs revealed the crystal structure of a hexagonal wurtzite phase without any secondary or impurity phases [55]. The diffraction peaks of the face-centered cubic structure of the Pt crystals were observed for Pt-AZO NPs [55].
Figure 1c shows a TEM image of strip-shaped N-doped graphene with a diameter of ~10 nm. The basal planes were discontinuous and distorted, and some parts were wavy and turbostratic, indicating the presence of defects, which may have facilitated gas diffusion. Cross-sectional SEM images of the Au-WO 3 and Pt-WO 3 thin films are shown in Figure 1d,e, respectively. The thickness of the WO 3 thin film was ~200 nm. The thicknesses of the Pt and Au activator layers deposited on the WO 3 thin film were estimated to be ~2 nm. Figure 2 shows the response patterns of the sensing materials exposed to 10 ppm of NO, NO 2 , NH 3 , TMA, and TEA gases. Here, the sensing response is defined as (R a − R g )/R g for reducing gases and (R a − R g )/R a for oxidizing gases, where R g and R a represent the resistances of the sensing materials in the N-containing compound gas and in air, respectively. In this figure, the upward and downward directions of the graph correspond to the decrease and increase of the resistance, respectively. As shown in Figure 2, the responses of the metal oxides (AZO, Pt-AZO, Pt-WO 3 , Au-WO 3 ) became positive when they were exposed to reducing gases (NH 3 , TMA, TEA) and negative under exposure to oxidizing gases (NO, NO 2 ). This is because all the metal oxides tested in this experiment were n-type semiconductors. Positive and negative responses correspond to the decrease and increase, respectively, of the resistance of the sensing material in the target gas compared with that in air. In contrast, the responses of the N-doped graphene became positive when it was exposed to oxidizing gases (NO, NO 2 ), indicating that N-doped graphene is a p-type semiconductor. Figures 3-5 show graphical representations of the sensing responses (in Figure 2) of the sensors to the N-containing compound gases. Figure 3a presents the sensing responses of AZO NPs to the five N-containing compound gases.
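The signed response convention can be sketched as a small helper. The resistance values in the demo are hypothetical (chosen so the outputs match the magnitudes reported for AZO/TEA and Pt-WO 3 /NO 2 ), and the normalization by R a for oxidizing gases is an assumption inferred from the sign conventions in Table 3:

```python
def sensing_response(r_air: float, r_gas: float, reducing: bool) -> float:
    """Signed sensing response of an n-type metal-oxide sensor.

    Assumed convention: (Ra - Rg)/Rg for reducing gases and
    (Ra - Rg)/Ra for oxidizing gases, so reducing gases give positive
    values and oxidizing gases negative values on an n-type material.
    """
    if reducing:
        return (r_air - r_gas) / r_gas
    return (r_air - r_gas) / r_air

# Illustrative resistances (hypothetical, in arbitrary units):
print(sensing_response(1450.0, 10.0, reducing=True))   # -> 144.0 (TEA-like)
print(sensing_response(1.0, 2701.0, reducing=False))   # -> -2700.0 (NO2-like)
```

With this convention a response of −2700 simply means the resistance in the gas is 2701 times the resistance in air, which is consistent with the large negative NO 2 responses reported below.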
The AZO NPs exhibited a decrease in resistance when exposed to 10 ppm TEA, TMA, and NH 3 . Among these, the highest response level was 144 for TEA. The responses were 44 and 24 for TMA and NH 3 , respectively (see Figure 2a). When the AZO NPs were exposed to 10 ppm NO and NO 2 , an increase in resistance was observed, with response values of −0.06 and −0.07 in NO and NO 2 , respectively. Figure 3b presents the sensing responses of Pt-AZO NPs to the N-containing compound gases. The Pt-AZO NPs exhibited a higher overall response to the N-based hazardous gases than the AZO NPs (see Figure 2b). Additionally, the Pt-AZO NPs showed a reduced resistance when exposed to 10 ppm TEA, TMA, and NH 3 , with response values of 159, 73, and 23, respectively. When exposed to NO and NO 2 , the resistance increased, and the responses were −2.8 and −4.7, respectively. Compared with pure AZO, the Pt-AZO NPs exhibited almost no change in their response to NH 3 , whereas their responses to the tertiary amines and the nitric oxides increased slightly and significantly, respectively. The sensing response of sensing materials can be augmented via noble-metal loading [58]. Consequently, in various gas sensing applications, Pt is loaded as a catalytic additive for enhancing the sensing response [55,59]. In our case, the Pt loading was effective for improving the nitric-oxide sensing. Figure 4a presents the sensing responses of the Pt-WO 3 thin film to the N-containing compound gases. The Pt-WO 3 exhibited high sensitivity not only to tertiary amines but also to nitric oxides. The sensing responses of the Pt-WO 3 thin film to TEA, TMA, and NH 3 were 13,277, 3100, and 2489, respectively. The sensing responses to NO and NO 2 were −481 and −2638, respectively, indicating an increased resistance. Remarkably, the TEA sensing response exceeded ~13,000.
To our knowledge, all the responses of the Pt-WO 3 thin film to TEA, TMA, NH 3 , NO, and NO 2 are significantly higher than those of previously reported sensing materials based on semiconducting metal oxides (see Table 3). Figure 4b shows the sensing responses of the Au-WO 3 thin film to the N-containing compound gases. The sensing responses of the Au-WO 3 thin film to TEA, TMA, and NH 3 were 93,666, 9810, and 4821, respectively. The sensing responses to NO and NO 2 were −0.29 and −0.71, respectively. The Au-WO 3 thin film exhibited a much higher response to TEA, TMA, and NH 3 than the Pt-WO 3 thin film. In particular, the Au-WO 3 thin film exhibited an extremely high sensing response (~100,000) to TEA compared with the other gases. To our knowledge, these are the highest responses to TEA, TMA, and NH 3 reported so far for metal-oxide sensing materials (see Table 3).

Figure 5 shows the sensing responses of N-doped graphene to the N-containing compound gases. NH 3 and the tertiary amines were not detected, even at a relatively high concentration (10 ppm); only the nitric oxides were detected, with a low response of 0.1-0.7. Pure graphene is a p-type semiconductor in air, and exposure to oxidizing gases, such as NO 2 and O 2 , reduces its resistance by enhancing hole conduction [60]. Although Lu et al. reported that highly N-doped graphene exhibits n-type semiconducting behavior [61], the sensing response of our N-doped graphene indicated that the sample was a p-type semiconductor. If the doped N atoms efficiently replace the C atoms in the hexagonal ring of graphene (quaternary N), 2 wt% N in graphene is sufficient to make the material an n-type semiconductor. Thus, our results indicate that the direct substitutional doping was not efficient enough to make the material n-type. When N atoms are doped into graphene, three bonding configurations occur within the C lattice: quaternary N (direct substitution), pyridinic N, and pyrrolic N [57]. Only quaternary N yields n-type doping; the other two configurations promote p-type doping [62].
The XPS N 1s spectra of the N-doped graphene used in our experiments showed that the amount of pyridinic and pyrrolic N was larger than that of quaternary N [57]. As shown in Figure 5, the N-doped graphene exhibited good sensitivity to NO 2 . This was expected, as Shaik et al. reported NO 2 sensing with N-doped graphene, which was fabricated using a wet process and exhibited p-type behavior [63]. In contrast, the theoretical studies of Jappor et al. and Dai et al. [64,65], which focused on quaternary N-doping, indicated that NO 2 was weakly physisorbed onto the N-doped graphene surface. Clearly, the pyridinic and pyrrolic N-doping made graphene a good NO 2 gas sensor. Figure 6 shows the responses of the AZO NPs, Pt-AZO NPs, Au-WO 3 thin film, and Pt-WO 3 thin film at various concentrations of the five N-containing gases (0.1, 1, and 10 ppm) at 400 °C. The sensing response increases with increasing gas concentration. As shown in Figure 6, the ZnO- and WO 3 -based samples have a lower limit of 0.01 ppm for detecting those N-containing gases. A comparison of the responses of the five sensing materials to 10 ppm of the five N-containing compound gases is shown in Figure 7a-e in the form of radar plots. The radar plots of the sensing response show different patterns for the reducing gases (TEA, TMA, NH 3 ) and the oxidizing gases (NO, NO 2 ). The specific patterns of the radar plots represent several noteworthy features: (i) the sensing response of the WO 3 film-based sensors was superior to that of the AZO NP-based sensors for all five N-containing compound gases; (ii) the WO 3 film- and AZO NP-based sensors are more sensitive in detecting TEA compared to the other gases; (iii) the Au-WO 3 thin film exhibited the highest response for the detection of 10 ppm of TEA, TMA, and NH 3 ; (iv) the Pt-WO 3 thin film showed the best sensing performance for the detection of 10 ppm of NO and NO 2 .
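The pattern-recognition idea behind the radar plots can be sketched as a nearest-fingerprint classifier. This is an illustrative sketch, not the authors' method: the fingerprint matrix uses the 10-ppm responses reported above (the N-doped graphene entries for NO and NO2 are picks from the reported 0.1-0.7 range), and the signed-log scaling with cosine matching is our own assumption to cope with responses spanning five orders of magnitude.

```python
import numpy as np

# 10-ppm responses from the text. Columns: AZO, Pt-AZO, Pt-WO3, Au-WO3,
# N-doped graphene (graphene amine responses ~0; NO/NO2 values illustrative).
FINGERPRINTS = {
    "TEA": [144.0, 159.0, 13277.0, 93666.0, 0.0],
    "TMA": [44.0, 73.0, 3100.0, 9810.0, 0.0],
    "NH3": [24.0, 23.0, 2489.0, 4821.0, 0.0],
    "NO":  [-0.06, -2.8, -481.0, -0.29, 0.1],
    "NO2": [-0.07, -4.7, -2638.0, -0.71, 0.7],
}

def signed_log(x):
    # Compress sign-preserving magnitudes spanning many decades.
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.log10(1.0 + np.abs(x))

def classify(pattern):
    """Return the gas whose fingerprint is closest in signed-log
    cosine distance to the measured 5-sensor response pattern."""
    v = signed_log(pattern)
    def cos_dist(fp):
        w = signed_log(fp)
        return 1.0 - np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12)
    return min(FINGERPRINTS, key=lambda g: cos_dist(FINGERPRINTS[g]))

# A noisy measurement resembling the TEA pattern lands on TEA:
print(classify([150.0, 140.0, 12000.0, 90000.0, 0.0]))
```

In an e-nose the fixed fingerprint matching above would normally be replaced by a classifier trained on many measurements, but the principle is the same: the five-sensor vector, not any single sensor, identifies the gas.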
In particular, the sensing response of the WO 3 film and AZO NP-based sensors increases in the order of NH 3 , TMA, and TEA. The response is significantly higher in detecting TEA compared to the other gases. This can be attributed to an electron donating effect [25]. When the metal oxide sensor is exposed to a reducing gas, the gas reacts with the adsorbed oxygen ions and the trapped electrons are released back to the conduction band of the metal oxide. This leads to an increase in conductance and consequently an increase in response. At the working temperature of the WO 3 film and AZO NP sensors (400 °C), the O 2− ion species mainly interact with the gas molecules [55]: the complete oxidation of a TEA molecule consumes more adsorbed O 2− , and thus releases more electrons, than that of a TMA or NH 3 molecule. As a consequence, the number of released electrons increases in the order of NH 3 , TMA, and TEA. Therefore, the significantly enhanced sensing response to TEA is mainly attributed to the greater number of released electrons.
In addition, the responses of the Au-WO 3 thin film for sensing TEA, TMA, and NH 3 are remarkably better than those of the Pt-WO 3 thin film. To understand this result, we investigated the surface morphology and compositional distribution of those WO 3 thin films by using FE-SEM equipped with an energy-dispersive X-ray spectroscope (EDS). Figure 8 shows the top-view SEM images of the as-prepared Pt- and Au-WO 3 thin films. The images show that the Pt particles cover the surface of the WO 3 thin film (Figure 8a), while the Au islands are randomly distributed on its surface (Figure 8b). Figures 9 and 10 show the elemental distribution at the cross-sectional areas of the Pt- and Au-WO 3 thin films, respectively. The EDS elemental color mapping results show that the Pt covers the entire surface of the film, whereas the Au is sparsely distributed compared with Pt. According to the sensing mechanism, the sensing response of an n-type metal oxide gas sensor mainly depends upon the concentration of oxygen ion species (O − or O 2− ) adsorbed on the surface of the sensing material. Further, the loaded noble metals provide more active sites for the adsorption of oxygen ion species owing to a spill-over effect. Therefore, excessive Pt coverage of the film decreases the number of active sites available on the film surface, leading to a reduced response. Consequently, the Au-WO 3 thin film exhibits a better sensing response than the Pt-WO 3 thin film for the reducing gases TEA, TMA, and NH 3 . As a result, we find that a moderate amount of metal catalyst plays an important role in improving the sensing response. More importantly, the sensing responses of the Au-WO 3 thin film to TEA, TMA, and NH 3 and of the Pt-WO 3 thin film to NO and NO 2 are much higher than those of previously reported sensors based on metal-oxide sensing materials (see Table 3). The sensitivities of most metal-oxide sensors reported for the detection of TEA are very low.
In addition, there are few reports on TEA detection using WO 3 materials. For example, polyaniline-WO 3 nanocomposites exhibited a sensing response of 81 to 100 ppm TEA at room temperature [23]. In the case of TMA sensing, there are many reports showing good response results. Cho et al. reported a high response to 5 ppm TMA: 56.9 at 450 °C for a WO 3 hollow sphere [29] and 373.74 at 300 °C for MoO 3 nanoplates [30]. Sensing of N-containing compound gases, such as NH 3 and NO x , using Pt-WO 3 and Au-WO 3 has also been reported. For example, D'Arienzo et al. reported that the sensing response of Pt-WO 3 to 74 ppm NH 3 was 110 at 225 °C [34], and Srivastava and Jain studied the response of Pt-WO 3 to 4000 ppm NH 3 . Additionally, we evaluated the response times of the five sensing materials to 10 ppm of the five N-containing compound gases, as presented in the form of radar plots in Figure 11. The response time is defined as the time required to reach 90% of the saturation resistance upon exposure to the full-scale concentration of the gas. As shown in Figure 11, the Pt-WO 3 and Au-WO 3 thin films exhibited a fast response time (i.e., a very rapid reaction rate) for the detection of TEA, TMA, and NH 3 . In particular, the Pt-WO 3 thin film showed high responses to all the N-containing gases, as well as the fastest response (<20 s).
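The 90%-of-saturation definition of response time quoted above can be computed directly from a resistance trace. This is a generic sketch; the trace below is a synthetic first-order decay, not measured data:

```python
import numpy as np

def response_time(t, r, t_on):
    """Time after gas exposure starts at t_on for the resistance to
    cover 90% of its total change (the definition used in the text)."""
    mask = t >= t_on
    r0 = r[mask][0]        # resistance at exposure onset
    r_sat = r[mask][-1]    # saturation resistance
    crossed = np.abs(r[mask] - r0) >= 0.9 * np.abs(r_sat - r0)
    return t[mask][np.argmax(crossed)] - t_on

# Synthetic n-type reducing-gas trace: R = 1000 before exposure at t = 10 s,
# then exponential decay with an 8 s time constant.
t = np.linspace(0.0, 120.0, 1201)
r = np.where(t < 10.0, 1000.0, 1000.0 * np.exp(-(t - 10.0) / 8.0))
print(round(response_time(t, r, 10.0), 1))
```

For a pure exponential with time constant tau, the 90% response time is tau·ln(10), which is what the sketch recovers up to the sampling step of the trace.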
The higher and faster sensing responses of the Pt-WO 3 and Au-WO 3 thin films are attributed to the addition of an appropriate amount of metal additive to WO 3 , which promoted chemical reactions by reducing the activation energy between the film surface and the target gas. Cu-loaded WO 3 and Ag-loaded WO 3 have also been reported to detect N-containing compound gases with high sensitivity [27,42]. Furthermore, the outstanding sensing responses of the Pt-WO 3 and Au-WO 3 thin films are attributed to the deposition of high-quality thin films via the dual ion beam sputtering technique. The thin films deposited using this technique exhibited exact stoichiometry, and the dense film structure allowed high-quality films to be deposited at very small thicknesses. Furthermore, the five plot patterns are significantly different, indicating that the five sensor materials can be used for an e-nose to distinguish the five N-containing compound gases. Table 3. Comparison of the sensing properties of various types of metal-oxide-based sensors for the detection of the N-containing gaseous compounds (∆R ≡ (R a − R g ) or (R g − R a ); S*: sensitivity ≡ response/concentration).
Conclusions
We investigated the sensing properties of five types of sensing materials (AZO NPs, Pt-AZO NPs, a Pt-WO 3 thin film, a Au-WO 3 thin film, and N-doped graphene) for the detection of five hazardous N-containing compound gases (TEA, TMA, NH 3 , NO, and NO 2 ). Owing to the different reactivities of the gases, the sensing materials exhibited different sensing response patterns. The metal-oxide sensors of AZO, Pt-AZO, Pt-WO 3 , and Au-WO 3 showed positive responses to NH 3 , TMA, and TEA (reducing gases) and negative responses to NO and NO 2 (oxidizing gases), because all the metal oxides tested in the experiment were n-type semiconductors. In contrast, the N-doped graphene exhibited a positive response to NO and NO 2 owing to its p-type semiconducting property. The metal-oxide-based materials showed significantly higher sensing responses to the tertiary amines than to the nitric oxides, whereas the N-doped graphene reacted only to the nitric oxides. Among the sensing materials, the Au-WO 3 and Pt-WO 3 thin films exhibited the best sensing responses. More importantly, the sensing responses of the Au-WO 3 thin film to TEA, TMA, and NH 3 , and of the Pt-WO 3 thin film to NO and NO 2 , were much higher than those of previously reported sensors based on metal-oxide sensing materials. In particular, the Au-WO 3 and Pt-WO 3 thin films exhibited extremely high sensing responses of approximately 100,000 to 10 ppm of TEA and approximately −2700 to 10 ppm of NO 2 , respectively. Accordingly, our study indicates that the five N-containing compound gases can be distinctively detected using the five sensor elements by applying pattern-recognition techniques to their distinct sensing-response patterns.
In order to demonstrate the analytical applicability of the proposed method in a real application, future studies will be conducted to investigate whether the sensor array consisting of the five sensing materials selectively detects only one target gas when mixed with the five N-containing compound gases. In addition, the reproducibility, long-term stability, and humidity interference of the sensor array will be tested.
Conflicts of Interest:
The authors declare no conflict of interest.
Effect of Fabric Substrate and Introduction of Silk Fibroin on the Structural Color of Photonic Crystals
Monodispersed polystyrene (PS) particles were prepared and deposited onto various kinds of textile fabrics using a gravity sedimentation method. The monodispersed PS particles self-assembled on the fabrics to form photonic crystals, which show an iridescent structural color. The structural color of the fabrics was determined by the bandgaps of the photonic crystals. Moreover, the effect of the fabric substrate, including the raw material, base color, and fabric weave, on the structural color of the photonic crystals was studied. Scanning electron microscopy and UV-vis spectrometry were adopted to characterize the structure and optical performance of the photonic crystals. The results indicate that a silk fabric with a black base color and a satin weave contributes to a bright and pure textile structural color. In order to solve the problem of the low color fastness of the structural color on the fabric surface, silk fibroin (SF) was introduced into the PS microsphere solution. The results show that the addition of SF slightly affects the brightness of the structural color, while it has a certain reinforcing effect on the structural color fastness to rubbing and washing.
Introduction
Most textile colors are achieved by dyeing, a chemical method. In nature, however, many structural colors exist, such as in butterfly wings, shells, and peacock feathers. Structural color is generated by the interaction of light with a micro-/nanoperiodic structure [1], known as a photonic crystal, and has the characteristics of high saturation, high brightness, a polarization effect, and unfading iridescence [2]. Therefore, applying structural color to textiles has attracted scientists' attention, in the hope of partially or completely replacing traditional dyes, which are associated with heavy pollution and water consumption [3,4].
Photonic crystals are dielectric structural materials with photonic bandgaps, formed by the periodic arrangement in space of materials with different dielectric constants. They were independently proposed by John [5] and Yablonovitch [6] in 1987. Photonic crystals have many special physical properties and phenomena, such as a slow-photon effect [7], a photonic bandgap [8], photon localization [9], a superprism effect [10], a negative refraction effect [11], etc. According to the periodic spatial distribution of refractive index changes, they can be divided into three categories [12,13]: one-dimensional, two-dimensional, and three-dimensional. One-dimensional photonic crystals often produce structural coloration through film interference and grating diffraction. A two-dimensional photonic crystal is a material in which the dielectric constant of the medium is periodically arranged in two directions in the spatial plane and uniformly distributed in the direction perpendicular to that plane; it has a very small geometric size and photonic bandgap, with the characteristics of reduced transmission loss, a self-imaging effect, and a slow-light effect. A three-dimensional photonic crystal is a material in which the dielectric constant of the medium is periodically arranged in all three directions in space.
Materials
Fabrics made of different raw materials (cotton, silk, and wool), with different colors (white, red, and black) and different weaves (plain, twill, and satin), were purchased from the Shanghai Fabric Market (Shanghai, China). Bombyx mori silkworm cocoons were provided by Guangxi Sericulture Technology Co., Ltd. (Nanning, China). The monodispersed polystyrene (PS) microspheres were made in our laboratory. Styrene (St, analytically pure), potassium persulfate (KPS, analytically pure), sodium bicarbonate, and lithium bromide were purchased from Shanghai Aladdin Technology Co., Ltd. (Shanghai, China). High-purity deionized water (18.2 MΩ·cm), produced by a Millipore Milli-Q system (Burlington, MA, USA) with a 0.22 µm filter, was used for preparing buffers, which served as the solvents for the mixed solutions.
Synthesis of PS Microspheres
The soap-free emulsion polymerization of the PS microspheres was carried out in a four-necked, jacketed glass reactor equipped with a nitrogen bubbler, a top D-shaped mechanical agitator, and a condenser. Firstly, deionized water (100 mL) was introduced into the glass reactor and purged with nitrogen gas for 10 min. Then, styrene was added to the reactor, and the mixture was vigorously stirred under a nitrogen atmosphere at 330 rpm for 10 min while the temperature was raised to 70 °C. After that, potassium persulfate dissolved in a small amount of water was added, and the polymerization was carried out at 70 °C under nitrogen protection for 20 h. After cooling to room temperature, the filtrate was saved for the following experiments. PS microspheres with a particle size of 260 nm ± 5 nm were synthesized.
Self-Assembly of Photonic Crystals
The fabrics were ultrasonicated in deionized water and dried before use to ensure a clean surface. The gravity deposition method was used to form photonic crystals on the fabrics (Figure 1). The PS colloidal suspensions were dispersed ultrasonically for 50 min and then deposited dropwise onto the fabric surface. The photonic crystals self-assembled in a vacuum oven at 60 °C with a relative humidity of 50% for 5 h.
Preparation of Silk Fibroin Solutions
Silk fibers were degummed twice in a boiling aqueous solution of 0.5% (w/w) NaHCO 3 for 30 min with frequent stirring. After that, the degummed silks were washed with deionized water 5 times. The regenerated silk fibroin (SF) was acquired by dissolving the degummed silks in a 9.3 M LiBr solution for 4 h at 60 °C, and then removing LiBr from the SF solution via a dialysis cassette (Solarbio, molecular weight cut-off: 3500) for 2 days.
Fluorescein isothiocyanate (FITC)-labeled SF was synthesized based on the covalent conjugation of the isothiocyanate group of FITC with the amino group of SF. Briefly, 1 mL of Na 2 CO 3 solution (0.5 mM) was added to 10 mL of a 70 mg/mL SF solution, and then 7 mg of FITC in 1.4 mL of DMSO was added to the as-prepared SF solution. The mixed solution was slowly stirred for 2 h in a dark room at 25 °C. Finally, the FITC-labeled SF solution was obtained by dialysis in the dark for 6 h to remove unconjugated FITC. Afterwards, 200 µL of a 1% (w/v) PS particle suspension was centrifuged at 2500 rpm, and the PS particles were then re-dispersed in 1 mL of a 5 mg/mL FITC-labeled SF solution in the dark [35].
Color Fastness Measurements
The color fastness to rubbing was measured according to GB/T 3920-2008 [36]. The samples were cut into a 50 mm × 140 mm size and fixed at the lower testing position along the direction of the friction head's round-trip path. At the same time, about (9 ± 0.2) N of pressure was applied to the friction head, and the friction head movement was adjusted to (104 ± 3) mm, 1 cycle per second, for a total of 100 cycles. After rubbing, the reflective spectra of samples were measured.
The color fastness to washing was measured according to GB/T 3921-2008 [37]. The samples were cut into a 100 mm × 40 mm size, then washed in a 1 g/L soap solution in a 40 °C water bath, taken out, and dried in an oven. The reflective spectra of the samples were measured after soaping for 30 min, 60 min, 90 min, and 120 min, respectively.
Characterization
The microstructures of the photonic crystals were characterized by scanning electron microscopy (Hitachi SU70, Tokyo, Japan) at 1.0 kV; the samples were not coated with gold. The size distribution and monodispersity of the PS microspheres were characterized by DLS (NanoBrook Omni, Brookhaven, Schenectady, NY, USA). The reflective spectra of the photonic crystals were measured with a UV-vis spectrometer (USB-2000, Ocean Optics, Dunedin, FL, USA). The fluorescence images of the FITC-labeled, SF-incubated PS were obtained with a Leica TCS SP8 confocal laser scanning microscope (Wetzlar, Germany).

Figure 2 shows the PDI curve of the PS microspheres. The PDI of the synthesized PS microspheres was less than 0.1, which proves that their particle size was uniform.
Effect of Raw Materials

The self-assembled PS microspheres with a size of 260 nm and 300% owf were deposited on plain cotton, silk, and wool fabrics via the gravity sedimentation method. Figure 3a-c show the fabrics made from different raw materials, but with the same plain weave and black color. The fineness of the warp and weft filament was 40 s × 40 s, and the fabric density was 128/10 cm × 68/10 cm. The structural color on the surface of the silk fabric (Figure 3e) exhibited a bright and uniform color, and obvious iridescence, which can be proven by the reflection spectrum curves (Figure 3h).

In the spectrum curve, the peak position corresponding to the wavelength (nm) represents the coloring phase. In photonic crystals, the coloring phase is determined by the position of the bandgap, which can be adjusted by the size of the colloidal microspheres. The position of the reflection peak is considered the wavelength of the photonic bandgap, where a certain wavelength range of electromagnetic incident light is forbidden to propagate. The peak positions in Figure 3g-i were nearly the same (576 nm ± 5 nm, 578 nm ± 5 nm, 574 nm ± 5 nm), as the deposited microspheres had the same size (260 nm ± 5 nm). This means that the bandgap of the photonic crystal falls at 576 nm, and incident light of this specific wavelength cannot propagate but is reflected, which is why the fabrics seem to be green to blue, the color of visible light near 576 nm.
The peak height in the spectrum curve represents the brightness of the color. The higher the peak, the brighter the color appears. Photonic crystal on silk fabric had the brightest structural color, whose peak height was 29.312 ± 0.003 au, compared to that on cotton (23.556 ± 0.002) and wool (27.468 ± 0.004).
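The spectral descriptors used in this comparison (peak position for hue, peak height for brightness, and peak width for color purity) can be read off a reflectance curve programmatically. A sketch on a synthetic Gaussian peak (the curve and its numbers are illustrative, not the measured data):

```python
import numpy as np

def peak_metrics(wavelength_nm, reflectance):
    """Return (position, height, FWHM) of the dominant reflection peak."""
    i = int(np.argmax(reflectance))
    height = reflectance[i]
    position = wavelength_nm[i]
    half = height / 2.0
    above = np.where(reflectance >= half)[0]   # half-height crossings -> FWHM
    fwhm = wavelength_nm[above[-1]] - wavelength_nm[above[0]]
    return position, height, fwhm

# Synthetic reflectance curve peaking at 576 nm, height 30 au
wl = np.linspace(400, 800, 801)
refl = 30.0 * np.exp(-((wl - 576.0) / 60.0) ** 2)
pos, height, width = peak_metrics(wl, refl)
print(pos, round(height, 1), round(width))     # position 576.0, height 30.0
```

A taller peak at the same position means a brighter color, and a narrower FWHM means a purer one, matching the comparison of silk against cotton and wool above.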
The peak width (nm) represents the purity of the color: the smaller the peak width, the better the purity of the color. The photonic crystal on silk had the narrowest peak (125.452 ± 0.02), compared to that on cotton (189.364 ± 0.01) and wool (132.246 ± 0.02). Figure 4 shows SEM images of the photonic crystals self-assembled on fabrics with different raw materials. Figure 4a shows the macrostructure of silk fabric after the deposition of PS particles. It can be clearly seen that the surface of the fabric is very rough, which is due to the deposition of PS microspheres. The arrangement of microspheres on the silk fabric (in Figure 4b) is more regular than those on the cotton and wool fabrics (Figure 4c,d).
Despite having the same plain weave, due to the natural twisting structure of cotton fibers and the scale structure on the surface of wool fibers, the surfaces of the cotton and wool fabrics were not as flat as that of the silk fabric, resulting in many more defects in the photonic crystals fabricated on the cotton and wool fabrics. Such obvious cracks in the photonic crystal not only decrease the reflectivity of the light in the bandgap but also scatter nearby light outside the bandgap wavelength.
The phenomenon of photonic crystals with an ordered structure can be explained by Bragg's law. When the bandgap of a photonic crystal falls into the visible light region, the photonic crystal exhibits structural color. The different structural color effects were produced by the different self-assembled photonic crystals. It is well known that, for a perfect photonic crystal, the incident light is strongly reflected at the photonic bandgap, while light at other wavelengths has very high transmittance. The regular arrangement of microspheres on the silk fabric (Figure 4b) leads to a more obvious structural color effect, which means that the color may be brighter and much more angle-dependent.
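The Bragg's law argument can be made quantitative with the standard Bragg-Snell relation for the (111) planes of a face-centered-cubic opal. The sketch below is a textbook estimate, not a calculation from the paper; the PS refractive index (1.59) and the 74% close-packing fraction are assumed values:

```python
import math

def bragg_snell_peak(diameter_nm, n_particle=1.59, n_void=1.0,
                     fill=0.74, theta_deg=0.0):
    """Reflected-peak wavelength (nm) of an FCC opal via Bragg-Snell."""
    d111 = math.sqrt(2.0 / 3.0) * diameter_nm            # FCC (111) plane spacing
    n_eff_sq = fill * n_particle ** 2 + (1.0 - fill) * n_void ** 2
    sin_sq = math.sin(math.radians(theta_deg)) ** 2
    return 2.0 * d111 * math.sqrt(n_eff_sq - sin_sq)

normal = bragg_snell_peak(260.0)                  # viewing along the normal
oblique = bragg_snell_peak(260.0, theta_deg=33.0) # off-normal viewing
print(round(normal), round(oblique))              # peak blue-shifts off-normal
```

Under these assumptions, 260 nm spheres give a stop band near 620 nm at normal incidence that blue-shifts toward the measured ~576 nm peak at oblique viewing angles, consistent with the angle-dependent iridescence noted above.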
Effect of Fabric Base Colors

The colloidal PS microsphere solution (260 nm) was deposited on fabrics with 300% owf using the gravity sedimentation method, mainly considering black, red, and white fabric base colors. Figure 5a-c show the fabrics with different base colors. All the fabrics were made of silk with a satin weave; the fineness of the warp and weft filament was 40 s × 40 s, while the fabric density was 128/10 cm × 68/10 cm. Figure 5d-f show the structural color on silk fabrics with different base colors. The structural color effect on the black surface appears the best, followed by the red silk and then the white silk. To the naked eye, the photonic crystal deposited on the white silk fabric appears almost white, with no distinct structural color, although it is very bright, as shown in Figure 5d. The photonic crystal deposited on the red silk fabric appears orange, as shown in Figure 5e, and the photonic crystal deposited on the black silk fabric appears green, as shown in Figure 5f; this is the closest to the structural color of the photonic crystal bandgap.
The results of spectrum curves in Figure 5g-i are consistent with the apparent structural color effect.
In Figure 5g, the white silk fabric has a high reflectance of nearly 90% across the whole visible range, which indicates that most of the incident light is reflected and little is absorbed. However, there is still a small reflection peak near 578 nm, which is the bandgap of the photonic crystal fabricated from the PS colloidal microspheres on the white silk fabric.

For the photonic crystal on the red silk fabric (Figure 5h), the reflective curve shows two peaks, at 576 nm and 691 nm. The peak at 576 nm is produced by the bandgap of the photonic crystal, and the peak at 691 nm comes from the original base color of the silk fabric. When the red fabric was covered by the photonic crystal, whose bandgap is near the green-light wavelength, the fabric appeared orange, a mixture of red and green light.
The peak height of the photonic crystal on the black surface (Figure 5i) is the highest (31.554 ± 0.001), which means that the structural color is the brightest and the peak width is the narrowest (127.119 ± 0.03), which indicates that the purity of structural coloration on the black surface of the silk fabric is the best.
The same PS colloidal microspheres deposited on silk fabrics with different base colors show different structural colors. This is because the observed light not only contains the reflective wavelength caused by the photonic bandgap, but also the wavelengths beyond the photonic bandgap, which decrease the purity of the structural color [2]. When the base color of fabric is black, the black color will absorb the redundant transmitted and scattered light beyond the photonic bandgap, and hence improve the effect of the structural color corresponding to photonic crystals. Conversely, when the white silk fabric is used as a substrate material, it can not only reflect the certain wavelength of visible light that is forbidden to propagate by the photonic crystals but can also reflect the transmitted and scattered light beyond the photonic bandgap wavelength. This will dilute the structural color from the selective wavelength of the photonic crystal bandgap. That is why the same colloidal microspheres on different fabrics show different structural color effects and reflectance spectra.
Effect of Fabric Weaves
The colloidal microsphere solution (260 nm, 300% owf) was deposited on silk fabrics with different weaves, including plain weave, twill weave, and satin weave, using the gravity sedimentation method. Figure 6a-c show the black silk fabrics with different weaves of plain, twill, and satin, respectively. The fineness of warp and weft filament was 40 s × 40 s, and the fabric density was 128/10 cm × 68/10 cm.
The hues of structural colors on fabrics with different weaves appeared green, but the brightness of them was different, which can be seen from Figure 6d-f. To naked eyes, the effect of the structural color on the silk fabric with the satin weave appeared the best, being the brightest and purest.
From Figure 6g-i, it can be seen that the peak width of the satin fabric (127.119 ± 0.03) is smaller than those of the plain-weave and twill silk fabrics, which means the purity of the structural coloration on the surface of the satin silk fabric is the best. The peak height of the satin fabric (31.554 ± 0.001) is higher than those of the other fabrics, which means the satin fabric has the optimal brightness.
The texture points of the plain and twill fabrics are densely distributed, and the fabric is relatively stiff, while the satin fabric has fewer texture points, longer floating lines, and a smooth surface, making it more suitable for the optimum structural color effect. From the SEM images of photonic crystals on the different weaves, it can be seen that the photonic crystal self-assembled on the satin weave in Figure 6l has the most regular structure and the fewest defects. Relatively, the structures of the photonic crystals on the plain weave in Figure 6j and the twill weave in Figure 6k show more cracks and defects; this is due to the fabrics with plain and twill weaves having more interweaving points and undulations on the surface, which makes the self-assembly of photonic crystals more difficult.
Effect of Silk Fibroin on Color Fastness
In this study, solutions of PS microspheres and silk fibroin were mixed to form a complex solution. During the synthetic process, the PS microspheres were grafted using the carboxyl group from acrylic acid, while silk fibroin is a protein which contains the amino and carboxyl groups. When these two solutions were mixed, strong hydrogen bonds formed between the carboxyl group on the PS microspheres and the amino groups on the SF. This supplied high strength for combining these two molecules. Due to the carboxyl group on the surface, PS microspheres are hydrophilic. However, SF possesses both hydrophilic and hydrophobic groups and is amphiphilic. Therefore, the hydrophilic groups of SF connect with PS, and the hydrophobic ends coil inwards during the first stage. Later, the hydrophobic ends try to escape from the water and stretch to the solution-air interface to make the system more stable by decreasing the free energy. Simultaneously, the whole complex gets pulled together at the interface. Finally, the hydrophobic parts of SF spread out on the surface, and the hydrophilic parts accompanying the PS spheres assemble at the interface. Thus, the hydrophilic parts of the SF molecule form a glue-like material among the PS spheres [35].
For better validation of the combination of silk fibroin and PS spheres, confocal laser scanning microscopy ( Figure 7a) and SEM (Figure 7b) were adopted, and the grafting of SF was traced on the PS surfaces. Fluorescein isothiocyanate (FITC) showed covalent bonding with the SF and labeled specific molecules. After incubating the PS particles within the FITC-labeled SF solution for 30 min, they were harvested through centrifugation. The detectable SF molecules labeled by fluorescence were found to aggregate, surrounding the microspheres after incubation. Besides, according to the SEM images regarding PS subjected to incubation within SF solutions, SF molecules accumulated around the PS particles, and connected them, acting as glue.
The addition of SF to the PS solution may affect the structural color caused by the photonic crystals. Figure 8 shows the spectrum curves of photonic crystals on silk satin fabrics. Compared with that made from the pure PS solution, the peak height of the crystal from the PS-SF solution dropped by 13.8%. These results show that silk fibroin slightly affects the structural color of the photonic crystals: the addition of SF changed the shape and size of the microspheres, decreasing the regularity of the photonic crystal, and at the same time the fibroin protein covered the PS particles, affecting light scattering and the brightness of the structural color.
Compared with that made from the pure PS solution, the peak height of that from the PS-SF solution dropped by 13.8%. These results prove that silk fibroin would slightly affect the structural color of photonic crystals. The addition of SF changed the shape and size of the microspheres, and then the regularity of the photonic crystal decreased. At the same time, fibroin protein covered the PS particles, affecting light scattering and the brightness of the structural color. The spectrum curves of photonic crystals before and after rubbing 100 times, fabricated from the pure PS and PS-SF solutions, were measured and shown in Figure 9. For the photonic crystals made from pure PS spheres, the peak height of the spectrum curve dropped by 72.7% after rubbing, as shown in Figure 9a, while that of the photonic crystals made from the PS-SF mixed solution only dropped by 57.2%, as shown in Figure 9b. This indicates that the addition of SF has a certain reinforcing effect on the structural color fastness to rubbing on the fabric surface. This is due to the glue-like SF molecules between the fabric and the photonic crystals having a certain degree of viscosity, which makes the bonding of PS microspheres to the fabric surface much more firm. The spectrum curves of photonic crystals before and after rubbing 100 times, fabricated from the pure PS and PS-SF solutions, were measured and shown in Figure 9. For the photonic crystals made from pure PS spheres, the peak height of the spectrum curve dropped by 72.7% after rubbing, as shown in Figure 9a, while that of the photonic crystals made from the PS-SF mixed solution only dropped by 57.2%, as shown in Figure 9b. This indicates that the addition of SF has a certain reinforcing effect on the structural color fastness to rubbing on the fabric surface. 
This is due to the glue-like SF molecules between the fabric and the photonic crystals having a certain degree of viscosity, which makes the bonding of PS microspheres to the fabric surface much more firm. The spectrum curves of photonic crystals before and after washing for 30 mins, 60 mins, 90 mins, and 120 mins were measured and shown in Figure 10. Figure 10a shows the spectrum curve of the photonic crystals constructed from the pure PS microsphere solution. The peak height decreased by about 50% after 30 min of washing, and then continuously decreased with the increasing washing time. The biggest decrease occurred in the first 30 min. Figure 10b shows that for the photonic crystals constructed from the PS-SF solution, the peak height only decreased by 15% after washing for 30 mins, and The spectrum curves of photonic crystals before and after washing for 30 min, 60 min, 90 min, and 120 min were measured and shown in Figure 10. Figure 10a shows the spectrum curve of the photonic crystals constructed from the pure PS microsphere solution.
The peak height decreased by about 50% after 30 min of washing, and then continuously decreased with the increasing washing time. The biggest decrease occurred in the first 30 min. Figure 10b shows that for the photonic crystals constructed from the PS-SF solution, the peak height only decreased by 15% after washing for 30 min, and dropped by 50% after 120 min of washing. This indicates that the addition of SF significantly improves the structural color fastness to washing. The reason for this is that the swelling and adhesion effect of SF in water makes the bonding between photonic crystals and the fabrics more tight. The spectrum curves of photonic crystals before and after washing for 30 mins, 60 mins, 90 mins, and 120 mins were measured and shown in Figure 10. Figure 10a shows the spectrum curve of the photonic crystals constructed from the pure PS microsphere solution. The peak height decreased by about 50% after 30 min of washing, and then continuously decreased with the increasing washing time. The biggest decrease occurred in the first 30 min. Figure 10b shows that for the photonic crystals constructed from the PS-SF solution, the peak height only decreased by 15% after washing for 30 mins, and dropped by 50% after 120 min of washing. This indicates that the addition of SF significantly improves the structural color fastness to washing. The reason for this is that the swelling and adhesion effect of SF in water makes the bonding between photonic crystals and the fabrics more tight.
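As a rough illustration of how the reported peak-height drops could be extracted from measured reflectance spectra, the sketch below computes the fractional drop between a "before" and "after" spectrum. The data and function name are hypothetical; the paper does not describe its analysis code, and numpy is assumed.

```python
import numpy as np

def peak_retention(spectrum_before, spectrum_after):
    """Fractional peak-height retention between two reflectance spectra.

    Each spectrum is a 1-D array of reflectance values sampled on the
    same wavelength grid; the peak height is taken as the maximum value.
    """
    h0 = float(np.max(spectrum_before))
    h1 = float(np.max(spectrum_after))
    return h1 / h0

# Illustrative numbers only: a peak dropping from 1.000 to 0.428
# mirrors the ~57.2% decrease reported for the PS-SF sample.
before = np.array([0.10, 0.40, 1.000, 0.50, 0.20])
after = np.array([0.05, 0.20, 0.428, 0.25, 0.10])
drop = 1.0 - peak_retention(before, after)
print(f"peak height dropped by {drop * 100:.1f}%")  # → 57.2%
```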
Conclusions
Monodispersed polystyrene particles were prepared using soap-free emulsion polymerization and deposited onto various kinds of textile fabrics using the gravity sedimentation method. The monodispersed PS particles were self-assembled on the fabrics to form photonic crystals, which have an iridescent structural color. The structural color of the photonic crystals on fabrics is determined based on the bandgaps and can be affected by the fabric surface. Scanning electron microscopy (SEM) observation and UV-vis spectrometry results indicated that fabrics made from silk, with a black base color and a satin weave, contribute to a bright and colorful structural color.
Silk fibroin was introduced to the PS microsphere solution to solve the problem of low color fastness of the structural color to the fabric surface. The addition of SF slightly affected the structural color of photonic crystals, while it had a certain reinforcing effect on the structural color fastness to rubbing, and significantly improved the structural color fastness to washing.
The application of structural color to textiles could take the place of chemical dyes. Future research should address the large-scale industrialization of the self-assembly of photonic crystals and the further enhancement of their binding fastness to textiles.
Molecular dynamics for linear polymer melts in bulk and confined systems under shear flow
In this work, we analyzed the individual chain dynamics for linear polymer melts under shear flow for bulk and confined systems using atomistic nonequilibrium molecular dynamics simulations of unentangled (C50H102) and slightly entangled (C178H358) polyethylene melts. While a certain similarity appears for the bulk and confined systems for the dynamic mechanisms of polymer chains in response to the imposed flow field, the interfacial chain dynamics near the boundary solid walls in the confined system are significantly different from the corresponding bulk chain dynamics. Detailed molecular-level analysis of the individual chain motions in a wide range of flow strengths are carried out to characterize the intrinsic molecular mechanisms of the bulk and interfacial chains in three flow regimes (weak, intermediate, and strong). These mechanisms essentially underlie various macroscopic structural and rheological properties of polymer systems, such as the mean-square chain end-to-end distance, probability distribution of the chain end-to-end distance, viscosity, and the first normal stress coefficient. Further analysis based on the mesoscopic Brightness method provides additional structural information about the polymer chains in association with their molecular mechanisms.
Polymers undergo a variety of processing conditions in practical polymer processes, such as the plastic extrusion process. It is crucial to understand the structural and dynamical behaviors of polymer molecules under various external conditions to economically manufacture high-quality products in such processes. Accordingly, numerous experimental and theoretical research efforts have explored the fundamental aspects behind the macroscopic rheological responses of dense polymeric fluids 1, 2 , which have enormously advanced our knowledge and enabled us to predict the material properties of polymers under specific conditions in a variety of practical applications. However, many unresolved rheological issues remain (especially from the microscopic viewpoint) for polymeric materials in bulk or confined systems, e.g., fundamental mechanisms underlying stress overshoot, interfacial slip, and melt instability for polymer melts under shear flow [2][3][4][5][6][7][8] . To systematically control such rheological phenomena, it is essential to comprehend the intrinsic molecular dynamics of individual polymer chains separately in bulk and confined situations and how they compare to each other; such an understanding would greatly help to build general knowledge to accurately capture the physical aspects that underlie such complex macroscopic responses of polymer systems and tune the material properties in response to an arbitrary external flow field.
In this work, we performed an in-depth analysis on the fundamental molecular mechanisms and dynamic characteristics of bulk and confined polymer melts under shear flow using atomistic nonequilibrium molecular dynamics (NEMD) simulations of unentangled (C 50 H 102 ) and weakly entangled (C 178 H 358 ) linear polyethylene (PE) melts. This work is in addition to various advanced experimental 9-13 and numerical [14][15][16][17][18][19][20][21][22] studies to reveal the individual chain dynamics in polymer solutions or melts under an external flow field. This molecular-level information attained by directly tracking down individual chain motions is applied to understand the rheological behaviors of representative mesoscopic and macroscopic structural and dynamical properties in response to the applied flow field in a wide range of flow strengths. We analyze the similarities and differences between the bulk and confined melt systems in the characteristic molecular mechanisms and rheological responses under various flow regimes.
Method
Canonical NEMD simulations of monodispersed unentangled (C 50 H 102 ) and entangled (C 178 H 358 ) linear PE melts for bulk and confined systems under shear flow were conducted using the p-SLLOD algorithm 23 implemented with a Nosé-Hoover thermostat 24,25 . For both bulk and confined systems, we employed the Siepmann-Karaboni-Smit united-atom model 26 (wherein the original rigid bond was replaced by a flexible bond with a harmonic spring), which has been most commonly applied to the simulations of PE melts 14,15,[17][18][19] . The set of evolution equations for each system 14,19 was numerically integrated with the reversible reference system propagator algorithm (r-RESPA) 27 using multiple time scales: a short time scale (0.47 fs) for bonded (bond-stretching, bond-bending, and bond-torsional) interactions and a long time scale (2.35 fs) for the nonbonded Lennard-Jones interactions, thermostat, and external flow field. Both bulk and confined systems were subjected to a simple shear flow for which only the xy-component of the velocity gradient tensor was non-zero, with x, y, and z in Cartesian coordinates representing the flow, velocity gradient, and neutral directions, respectively. The system conditions for all bulk and confined PE melts studied in this work correspond to a constant temperature T = 450 K and pressure P = 1 atm. Specifically, for bulk systems, the simulations were executed at densities of ρ = 0.743 g/cm 3 and ρ = 0.782 g/cm 3 for the C 50 PE melt and C 178 PE melt, respectively. The simulation box dimensions for the bulk systems were set as (93.02 Å × 45.00 Å × 45.00 Å) (x × y × z) with a total of 120 molecules for the C 50 PE melt, and [(65.89, 131.78, and 263.55) Å × 65.89 Å × 65.89 Å] (enlarged in the flow (x-)direction depending on the applied shear rate to avoid system-size effects) with a total of 54, 108, and 216 molecules, respectively, for the C 178 PE melt. 
For confined systems where PE melts are confined by the two-layered rigid simple-cubic lattice walls in the velocity gradient (y-)direction, the simulations were carried out at ρ = 0.763 g/cm 3 and ρ = 0.789 g/cm 3 for the C 50 PE melt and C 178 PE melt, respectively. The walls were composed of 544 atoms with simulation box dimensions of (93.02 Å × 49.51 Å × 45.00 Å) for the C 50 PE melt containing 120 molecules, and 676, 1352, and 2704 atoms with the box dimension of [(65.89, 131.78, and 263.55) Å × 70.51 Å × 65.89 Å] containing 54, 108, and 216 molecules, respectively, for the C 178 PE melt. The lattice parameter of the simple cubic wall was set to σ w = 5.227 Å (=1.33 σ CH2 ). The surface energy parameter of the wall atoms was set as ɛ w /k B = 939 K ( = 20 ε CH2 /k B ), corresponding to a mica surface (~200-400 mJ/m 2 ) 28,29 . The wall atoms were fixed in their lattice sites during the simulations. For the confined systems, the shear flow field was generated by moving the upper wall at a constant velocity, V, in the flowing direction and fixing the bottom wall (the readers are referred to the Supplementary Information for further methodological details).
A wide range of flow strengths, from the linear to the highly nonlinear viscoelastic regimes, was applied to the bulk and confined C50 and C178 PE melts: 0.02 ≤ Wi ≤ 200 for C50 bulk PE, 0.05 ≤ Wi ≤ 500 for C50 confined PE, 0.39 ≤ Wi ≤ 7000 for C178 bulk PE, and 0.68 ≤ Wi ≤ 5600 for C178 confined PE. (The Weissenberg number is defined as Wi = τγ̇, the product of the longest relaxation time τ of the system and the applied shear rate γ̇.) The characteristic relaxation time τ for each system, evaluated as the integral under the stretched-exponential curve describing the decay of the time autocorrelation function of the unit chain end-to-end vector, was τ = 0.5 ± 0.05 ns for C50 bulk PE, τ = 1.2 ± 0.1 ns for C50 confined PE, τ = 15.6 ± 1.0 ns for C178 bulk PE, and τ = 26.7 ± 2.0 ns for C178 confined PE. (Before collecting data, each system was fully equilibrated for a long time (i.e., several times longer than its longest relaxation time τ) at its target state point. After equilibration, a production run more than 8-10 times longer than τ was carried out to evaluate statistically reliable physical properties for each system; i.e., 5 ns and 15 ns for the C50 bulk and confined PE melts, respectively, and 100 ns and 200 ns for the C178 bulk and confined PE melts, respectively.) Figure 1A illustrates the bulk and confined systems studied in this work. For confined systems, if the center-of-mass position of a chain is located within 2.5σ (σ = 3.93 Å) of a wall surface, it is considered an interfacial chain. Under equilibrium conditions, in contrast to the isotropic random-coil configurations displayed by chains in the bulk system, interfacial chains of confined systems possess mostly "L"- or "U"-shaped (rather extended) configurations on the wall surfaces via the combined effects of the intramolecular chain conformational entropy and the favorable energetic interactions between the polymer and the wall.
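The Weissenberg number defined above is a simple product of relaxation time and shear rate. A minimal sketch using the relaxation times reported for the C50 systems (the shear-rate value and function name here are illustrative, not from the paper):

```python
def weissenberg(tau_ns, shear_rate_per_ns):
    """Wi = tau * gamma_dot: longest relaxation time times applied shear rate."""
    return tau_ns * shear_rate_per_ns

tau_c50_bulk = 0.5   # ns, C50 bulk PE melt (reported value)
tau_c50_conf = 1.2   # ns, C50 confined PE melt (reported value)

gamma_dot = 10.0     # 1/ns, hypothetical applied shear rate
print(weissenberg(tau_c50_bulk, gamma_dot))  # Wi = 5.0
print(weissenberg(tau_c50_conf, gamma_dot))  # Wi = 12.0
```

The same shear rate thus probes different flow regimes in the bulk and confined systems, which is why the regime boundaries in the figures differ between them.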
That is, some parts of an interfacial chain are attached to the wall (i.e., adsorbed) and others are detached from it (i.e., non-adsorbed). While the adsorbed chain segments experience wall friction through direct interactions with the wall, the non-adsorbed segments experience intermolecular interactions with nearby surrounding chains in the bulk region. This dual interaction induces molecular mechanisms for interfacial chains under shear, at a given flow strength, that are distinct from those of chains in the bulk system or in the bulk region of the confined system.
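The interfacial-chain criterion described above (chain center of mass within 2.5σ of either wall, with σ = 3.93 Å) can be sketched as a simple mask; the array layout and wall positions below are assumptions for illustration, not the authors' code:

```python
import numpy as np

SIGMA = 3.93            # Å, united-atom CH2 Lennard-Jones diameter
CUTOFF = 2.5 * SIGMA    # interfacial-region thickness from each wall

def interfacial_mask(com_y, y_bottom, y_top):
    """Flag chains whose center of mass lies within 2.5*sigma of a wall.

    com_y    : array of chain center-of-mass y-coordinates (Å)
    y_bottom : y-position of the bottom wall surface (Å)
    y_top    : y-position of the top wall surface (Å)
    """
    com_y = np.asarray(com_y, dtype=float)
    return (com_y - y_bottom <= CUTOFF) | (y_top - com_y <= CUTOFF)

# Hypothetical slab of height 49.51 Å (the C50 confined box dimension):
mask = interfacial_mask([2.0, 25.0, 48.0], y_bottom=0.0, y_top=49.51)
print(mask)  # [ True False  True]
```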
Results and Discussion
First, for the bulk system (left panel of Fig. 1B), chains are mainly aligned to the flow (x-)direction at low shear rates without a significant structural deformation because the orientation is easier than stretching in response to the applied rotational shear field. At intermediate flow strengths, chains are substantially deformed (stretched) and nearly aligned to the flow direction, and furthermore begin to execute a whole-chain rotation and tumbling motion. In this flow regime, bulk chains mostly exhibit rather symmetrical S-shaped rotations and tumbling behaviors (Fig. 1B). This indicates that the head and tail portions of a chain almost equally (i.e., symmetrically) move in opposite directions relative to each other, as a result of the relative difference in their streaming velocity according to their different (higher and lower) positions in the velocity gradient direction of the applied shear field. In contrast, at high shear rates, chains mainly exhibit tumbling behaviors with hairpin-like configurations, rather than the symmetrical S-shaped configuration. This hairpin-like rotational characteristic occurs because a strong flow field does not allow a sufficient time for the head and tail portions of chains to symmetrically execute their respective movements during the rotational time span; i.e., either head or tail portion alone leads the overall chain tumbling motion quickly without waiting for the other portion to move in the opposite flowing direction.
(Figure 1 caption, panels C and D: Comparisons between the bulk and confined systems for (C) shear viscosity η and (D) the first normal stress coefficient Ψ1 as a function of Wi for the C50 PE melt under shear flow. The vertical dotted lines distinguish the three characteristic flow regimes for the bulk (black) and confined (green) systems, defined based on the variation of the mean-square end-to-end distance for the bulk systems and the variation of the degree of slip (see ref. 19 for details) for the confined systems. The error bars are smaller than the size of the symbols, unless otherwise specified. See Supplementary Fig. 1 for the corresponding plot of the confined system with the Wi number based on the real shear rate accounting for a non-zero slip at the wall.)

In comparison, for the confined system, chains near the wall exhibit distinctive characteristic molecular mechanisms with respect to the flow strength. In the weak flow regime, chains perform a z-to-x rotation (i.e., rotation from the z-direction to the x-direction) while still residing at the interface through the attractive polymer-wall interactions 19 . The alignment of interfacial chains with the flow (x-)direction via this in-plane chain rotation reduces the wall friction experienced by the chains during their movement along the flow direction through a decrease in the effective collision area between the chain and the wall 19 . In the intermediate flow regime, the majority of interfacial chains display the out-of-plane wagging mechanism 19 , wherein outer parts of interfacial chains exhibit a repetitive motion between detachment from and attachment to the wall (leading to an "L"-shaped configuration in the x-y plane) via the competitive effects of the applied flow field (inducing detachment) and the attractive polymer-wall interaction (enhancing attachment).
In this flow regime, chains outside the interfacial region are substantially deformed and aligned to the flow direction, which significantly affects the degree of interaction between interfacial and nearby surrounding bulk chains. With further increasing flow strength, interfacial chains experience strong dynamical collisions with the wall atoms, which gives rise to highly nonlinear rotational dynamics, i.e., irregular (chaotic) rotation and tumbling mechanisms at the wall, leading the interfacial chains to detach from the wall toward the bulk region 19 . Interfacial chains can only execute a hairpin-like tumbling motion while respecting the geometrical constraint imposed by the wall.
All these characteristic molecular mechanisms and dynamics underlie the macroscopic structural and dynamical properties for bulk and confined systems, respectively. Figure 1C presents the viscosity variation with respect to the Wi number for the bulk and confined C 50 PE melt systems. Both bulk and confined PE melts exhibit a typical shear-thinning behavior, and the degrees of shear thinning are similar. However, the viscosity of the confined system is larger than that of the bulk system in the entire flow regime, which is consistent with experiments 9 . This is attributed to the high degree of momentum transfer via collisions (friction) between the interfacial chains and the wall. A similar behavior is shown for the first normal stress coefficient in Fig. 1D; this is again ascribed to the large contribution of interfacial chains to the overall system elasticity, arising from their highly oriented and stretched conformations.
In Fig. 2A, we compare the mean-square chain end-to-end distances 〈R ete 2 〉 as a function of shear rate for the bulk and confined systems. For both the bulk and confined systems, the overall behavior of 〈R ete 2 〉 can be characterized by three distinct flow regimes. In the weak flow regime, 〈R ete 2 〉 displays a slight increase with shear rate, indicating that chains in this regime are mostly oriented to the flow direction without a significant structural deformation or stretch. In the intermediate regime, 〈R ete 2 〉 exhibits a dramatic increase with the applied flow strength and eventually reaches a maximum. With a further increase of the shear rate, 〈R ete 2 〉 shows a rather decreasing behavior, which was also observed in previous studies 14,15,18 for bulk PE melts under shear flow. The decrease is ascribed to strong intermolecular collisions together with intense chain rotation and tumbling dynamics at high shear rates. However, we should keep in mind that the characteristic molecular mechanism behind the variation of 〈R ete 2 〉 for each flow regime is not the same for the bulk and interfacial chains. The 〈R ete 2 〉 value is quantitatively quite similar for the bulk and confined systems with respect to Wi (a further quantitative similarity is seen for the confined system in terms of Wi number based on the real shear rate, which accounts for a non-zero slip at the wall ( Supplementary Fig. 2)). However, the average value of 〈R ete 2 〉 for only the interfacial chains appears to be somewhat larger than that of the whole confined system, indicative of a larger deformation of the interfacial chains (in association with a higher degree of molecular interactions with the wall) via the favorable polymer-wall interactions. There is also a relatively steeper increase of 〈R ete 2 〉 in the intermediate flow regime and a larger decrease of 〈R ete 2 〉 in the strong flow regime for interfacial chains than that for the bulk system. 
This phenomenon can be understood based on the molecular mechanisms of interfacial chains described in Fig. 1B. At flow strengths greater than that in the weak flow regime (where the characteristic molecular mechanism is the z-to-x in-plane chain rotation without a substantial structural deformation, i.e., only a slight variation of 〈R ete 2 〉), the chains exhibit the out-of-plane wagging mechanism with highly deformed structures (i.e., a large variation of 〈R ete 2 〉) in response to the applied flow. The steep increase of 〈R ete 2 〉 for the interfacial chains in the intermediate flow regime is associated with the repetitive chain attachment-detachment mechanism, because the interfacial chains mostly tend to be aligned and stretched in the flow direction without executing rotational or tumbling dynamics. In the strong flow regime, interfacial chains exhibit a chaotic (irregular) rotation and tumbling mechanism with strong dynamic collisions with the wall atoms; this considerably reduces the stretched chain conformation to a rather compact structure, leading to a significant decrease of 〈R ete 2 〉 with increasing flow strength. These stronger variations of 〈R ete 2 〉 for interfacial chains can be further understood by analyzing the probability distribution function of the chain end-to-end distance (Fig. 2B). Compared to the chains in the bulk system, the interfacial chains in the confined system exhibit a more pronounced stretch peak in the intermediate flow regime and a more distinctive rotation peak in the strong flow regime. In addition, the region between the rotation and stretch peaks displays a more pronounced curvature for interfacial chains compared to that for the corresponding bulk system. The interfacial chains exhibit two distinct peaks, even at equilibrium, indicative of a certain amount of chains with a rather extended conformation because of the energetically favorable polymer-wall interaction.
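Both quantities discussed above, the mean-square end-to-end distance 〈Rete2〉 and the probability distribution of the end-to-end distance, can be computed directly from per-chain bead coordinates. A minimal sketch, assuming unwrapped coordinates in a hypothetical (n_chains, n_beads, 3) array (the paper does not specify its analysis code):

```python
import numpy as np

def ete_vectors(positions):
    """End-to-end vectors for chains stored as (n_chains, n_beads, 3)."""
    return positions[:, -1, :] - positions[:, 0, :]

def mean_square_ete(positions):
    """Mean-square chain end-to-end distance <R_ete^2>."""
    ete = ete_vectors(positions)
    return float(np.mean(np.sum(ete**2, axis=1)))

def ete_distance_pdf(positions, bins=50):
    """Normalized histogram (PDF) of the chain end-to-end distance."""
    r = np.linalg.norm(ete_vectors(positions), axis=1)
    pdf, edges = np.histogram(r, bins=bins, range=(0.0, float(r.max())),
                              density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf

# Two toy 3-bead chains: one extended along x (|R| = 2), one folded
# (|R| = sqrt(2)), so <R_ete^2> = (4 + 2) / 2 = 3.
chains = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]],
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]],
])
print(mean_square_ete(chains))  # 3.0
```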
Further detailed structural information for the polymer chains under shear flow can be obtained from the Brightness method 15,30 , which categorizes the mesoscale chain structures into several configuration classes based on the monomer distribution along the chain. Figure 2C presents the probability distribution function (PDF) for six representative configurations (Coil, Fold, Kink, Dumbbell, Half-dumbbell, and Stretched) for the C50 PE melt system. (Note that the Brightness method focuses on the overall chain configuration or shape, without regard to the actual chain size.) We note that the rather short C50 PE chains produce higher portions of the Fold configuration than the Coil configuration at equilibrium. At low shear rates, the portions of the Half-dumbbell and Stretched configurations appear somewhat larger for interfacial chains than for the bulk system, indicative of more extended conformations of interfacial chains via attractive polymer-wall interactions. More interesting features are exposed in the intermediate flow regime where, with increasing shear rate, (i) the Fold portion exhibits a gradual decrease for the bulk system but a small increase for interfacial chains, (ii) the Half-dumbbell portion exhibits a small increase for the bulk system but a small decrease for interfacial chains, and (iii) the Stretched portion is considerably larger for interfacial chains than for the bulk system. These results can be understood by considering the characteristic molecular dynamics corresponding to the S-shaped tumbling mechanism for the bulk system and the L-shaped repetitive chain detachment-attachment for interfacial chains. In the strong flow regime, the Stretched portion decreases more significantly for interfacial chains than for the bulk system, as associated with the irregular (chaotic) chain rotation and tumbling mechanisms of interfacial chains. In addition, the dominant hairpin-like chain tumbling mechanism for the bulk system at high shear rates leads to a relatively larger increase in the Fold portion than that for the interfacial chains.

(Figure 2 caption: (A) 〈Rete2〉 as a function of Wi for the bulk and confined C50 PE melt systems. "Interfacial" represents the corresponding result for only the interfacial chains in the confined system. The interfacial regions of both the top and bottom walls of the confined systems are found to produce practically identical results (within the statistical uncertainties) for all the microscopic and macroscopic structural and dynamical properties, e.g., 〈Rete2〉 and its PDF, molecular conformations via the Brightness method, interfacial residence time, and so forth. The error bars are smaller than the size of the symbols unless otherwise specified. (B) Comparison of the bulk system and the interfacial region of the confined system for the C50 PE melt for the probability distribution function.)

Figure 3A presents the variation of the chain order parameter, λ, for the C50 PE melt with respect to the shear rate, which measures the degree of chain alignment in the flow direction in response to the applied field. The interfacial chains already exhibit a large degree of chain ordering in the weak flow regime, with a gradual increase of λ with the shear rate and an almost saturated chain ordering at the end. As such, the λ-value exhibits only a slight variation with shear rate in the intermediate flow regime. In contrast, the corresponding bulk system displays a rather small increase in λ with respect to the shear rate in the weak flow regime and a steep increase in the intermediate flow regime, followed by a plateau region in the strong flow regime.
We note that qualitatively similar behavior is observed for the confined system when λ is calculated over all the chains of bulk and interfacial regions; thus, the order parameter averaged over the entire confined system (which might be the case in typical experiments) may provide erroneous structural information for interfacial chains. Furthermore, in contrast to the plateau values of λ for bulk chains in the strong flow regime, the λ-value for the interfacial chains decreases considerably with increasing shear rate in this regime. This phenomenon is directly associated with the irregular (chaotic) chain rotation and tumbling mechanisms via strong molecular collisions of interfacial chains with the wall.
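The chain order parameter λ is commonly evaluated as the largest eigenvalue of the order tensor (3〈uu〉 − I)/2 built from the unit chain end-to-end vectors u, with I the second-rank unit tensor. Assuming that definition (the paper's figure caption is consistent with it, but this code is our sketch, not the authors'):

```python
import numpy as np

def chain_order_parameter(u):
    """Order parameter lambda: largest eigenvalue of (3<uu> - I)/2.

    u : array of shape (n_chains, 3) of unit chain end-to-end vectors.
    Returns ~1 for perfect flow alignment and ~0 for an isotropic melt.
    """
    u = np.asarray(u, dtype=float)
    S = (3.0 * np.einsum('ni,nj->ij', u, u) / len(u) - np.eye(3)) / 2.0
    return float(np.linalg.eigvalsh(S).max())

# Perfectly aligned chains (all along x) give lambda = 1:
aligned = np.tile([1.0, 0.0, 0.0], (100, 1))
print(chain_order_parameter(aligned))  # 1.0

# An isotropic set of axis-aligned unit vectors gives lambda = 0:
iso = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
print(chain_order_parameter(iso))  # 0.0
```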
Further information related to the chain orientation was obtained by investigating the PDF of the chain orientation angle with respect to the flow direction for the C50 PE system; for this purpose, six spatial orientation regions were chosen, as depicted in Fig. 3B. In the weak flow regime, with increasing shear rate, the PDFs for the bulk system show a decrease for Regions 1, 2, and 6, but a slight increase for Regions 4 and 5. As a noticeable difference between the bulk and confined systems, the PDFs for interfacial chains exhibit a significant increase for Regions 3 and 4 compared to those of the corresponding bulk system; this result directly shows a significant chain alignment to the flow direction for the interfacial chains at low shear rates, which is consistent with the result for the order parameter in Fig. 3A. In the intermediate flow regime, the bulk system shows a steep increase in the PDF of Region 4 but a decrease in the PDF of Region 5 with increasing shear rate. In addition, there is a slight increase in the PDF of Region 3, which is associated with an increase in the degree of chain tumbling dynamics. In contrast, the interfacial chains exhibit a less steep increase in the PDF of Region 4 and a nearly constant value for the PDF of Region 3 in this flow regime, as can be understood by considering the out-of-plane wagging (repetitive chain detachment-attachment) mechanism. In the strong flow regime, the bulk system exhibits a slightly increasing behavior for the PDFs of Regions 3 and 4 and a slightly decreasing (or plateau) behavior for those of the other regions. In contrast, the interfacial chains exhibit somewhat opposite trends, which are directly related to the intense irregular (chaotic) chain rotation and tumbling mechanisms of interfacial chains on the wall (see Supplementary Fig. 4 for the average tumbling time of chains in the bulk and confined C50 PE melts).

(Figure 3 caption: (A) Chain order parameter λ, defined as the largest eigenvalue of the tensor (3〈uu〉 − I)/2, where u denotes the unit chain end-to-end vector and I denotes the second-rank unit tensor, as a function of Wi for the bulk and confined C50 PE melt systems. "Interfacial" represents the corresponding result for only the interfacial chains in the confined system. The error bars are smaller than the size of the symbols, unless otherwise specified. See Supplementary Fig. 3 for the corresponding plot of the confined system with the Wi number based on the real shear rate accounting for a non-zero slip at the wall. (B) Probability distribution function (PDF) of the chain orientation angle (based on the chain end-to-end vector) with respect to the flow direction as a function of Wi, with a schematic illustration of the molecular mechanisms in conjunction with the PDF in the three respective flow regimes. The total space is divided into six angular regions for the bulk system and the interfacial region of the confined system. (C) PDF of the local entanglement density Z̃es along the velocity gradient (y-)direction for the entangled C178 PE melts of the bulk and confined systems. Note that Wi = 0 corresponds to the equilibrium condition; Wi = 65 for the bulk system and Wi = 110 for the confined system are in the intermediate flow regime, and Wi = 650 for the bulk system and Wi = 570 for the confined system are in the strong flow regime.)

Figure 3C presents the spatial distributions along the velocity gradient (y-)direction of the entanglement density Z̃es for the entangled C178 PE melts, which were obtained from a topological analysis of the entanglement network of the system using the Z1 code 31,32 .
(The average number of kinks (intermolecular entanglements) per chain directly obtained from the Z1-code analysis for the C178 PE melt system is approximately 6, which is roughly two times larger than the number of entanglements based on the experimental plateau modulus. Our various analyses show that the rheological characteristics of the entanglement network for the C178 PE melt under shear over a wide range of flow strengths are qualitatively very similar to those of the longer C400 PE melt for both bulk and confined systems. We therefore consider the present results on the rheological aspects of the entanglement network for the C178 PE melt to be qualitatively valid for practical entangled systems.) In contrast to the homogeneous distribution of Z̃es for the bulk system, Z̃es for the confined system exhibits a distinctive shoulder around each of the interfacial regions at equilibrium. This is mainly associated with a relatively higher polymer density 19 in the interfacial regions via the polymer-wall interaction. In addition, Z̃es is enhanced by entanglements between the detached chain segments of the L- or U-shaped interfacial chains and the nearby surrounding bulk chains. This feature can be associated with a relatively higher viscosity (and thus a higher viscous dissipation under flowing conditions) in the region near the interface. At high shear rates, the Z̃es-value decreases throughout the confined system due to disentanglement between chains via chain alignment and stretching. Further, the distribution becomes flattened, with less pronounced shoulders whose positions appear somewhat shifted toward the system center. This phenomenon can be understood by considering the frequent movement of interfacial chains into the bulk region and their mixing with bulk chains via strong dynamical collisions with the wall at high flow strengths.
Conclusion
Through a detailed analysis of individual chain dynamics using atomistic NEMD simulations for unentangled (C50H102) and weakly entangled (C178H358) linear PE melts under shear flow in bulk and confined systems, we revealed and contrasted the characteristic molecular mechanisms, with respect to the applied flow strength, between the bulk and interfacial chains. This molecular-level dynamical information is very useful for understanding the structural and rheological behavior of bulk and confined systems under shear as a function of flow strength, and should further benefit the general effort to predict and control the material properties of polymers under various flow conditions. The main features identified in this study are summarized below.
• Under equilibrium conditions, while polymer chains in the bulk system display random-coil configurations, the interfacial chains of the confined system possess "L"- or "U"-shaped configurations on the wall via the combined effects of the intramolecular entropy and the attractive polymer-wall interaction. The detached chain segments of the "L"- or "U"-shaped interfacial chains make entanglements with nearby surrounding bulk chains, enhancing the degree of chain entanglement (and thus the viscosity) around the interfacial regions.
• In the weak flow regime, while both bulk and interfacial chains are aligned to the flow (x-)direction without a significant structural deformation, the interfacial chains of the confined system perform the z-to-x in-plane rotation at the wall. In addition, compared to the bulk chains, the interfacial chains achieve a much larger degree of chain ordering in the weak flow regime.
• In the intermediate flow regime, both bulk and interfacial chains become nearly aligned to the flow direction with a highly deformed (extended) structure; however, in comparison to bulk chains, interfacial chains display a considerably steeper increase of ⟨R_ete²⟩ with respect to the shear rate and a more pronounced stretch peak in P̃(R_ete). As regards the characteristic molecular mechanism, the interfacial chains reveal the L-shaped out-of-plane wagging (repetitive chain detachment-attachment) mechanism, while chains in the bulk system exhibit a rather symmetrical S-shaped rotation and tumbling behavior. These distinct dynamic characteristics between the bulk and interfacial chains lead to significantly different results for the probability distribution with respect to the mesoscopic chain configurations in the Brightness method and the chain orientation angle as a function of the flow strength in this regime.
• In the strong flow regime, both the bulk and interfacial chains exhibit chain end-over-end tumbling behaviors.
Specifically, bulk chains exhibit a tumbling behavior mainly with a hairpin-like configuration instead of the symmetrical S-shaped one. In comparison, interfacial chains exhibit highly nonlinear rotational dynamics, such as irregular (chaotic) rotation and tumbling (strictly with a hairpin-like configuration) mechanisms at the wall, via strong dynamical collisions with the wall. These interfacial dynamics result in (i) a significant decrease of ⟨R_ete²⟩, (ii) a distinctive rotation peak in P̃(R_ete), and (iii) distinct behavior for the probability distribution of the mesoscopic chain configurations in the Brightness method, with respect to the applied flow strength in this regime. In addition, such strong collisions lead the chains to frequently move out of the interfacial region toward the bulk region.
(SCIENTIFIC REPORTS 7: 9004; DOI: 10.1038/s41598-017-08712-5)
The findings in this study should be carefully taken into account in theoretical modeling. For example, a naïve use of the order parameter and the mean-square chain end-to-end distance averaged over the whole system may lead to erroneous predictions of rheological properties (e.g., stress, anisotropic diffusion, flow birefringence) for confined systems, since the intrinsic structural and dynamical behaviors of the bulk and interfacial chains in response to the applied flow are quite dissimilar. In addition, an adequate description of the flow-induced crystallization of confined systems would require combined information on the distinct characteristic molecular dynamics in each flow regime for both the bulk and interfacial chains. Considering the rapid advance in experimental techniques (e.g., fluorescent video microscopy [10-13]), we also expect the present findings to be potentially useful in the experimental analysis and practical applications of bulk and confined dense polymeric materials undergoing shear flow in the near future.
Inclusive spin-momentum analysis and new physics at a polarized electron-positron collider
We consider the momentum distribution and the polarization of an inclusive heavy fermion in a process assumed to arise from standard-model (SM) $s$-channel exchange of a virtual $\gamma$ or $Z$ with a further contribution from physics beyond the standard model involving $s$-channel exchanges. The interference of the new-physics amplitude with the SM $\gamma$ or $Z$ exchange amplitude is expressed entirely in terms of the space-time signature of such new physics. Transverse as well as longitudinal polarizations of the electron and positron beams are taken into account. Similarly, we consider the polarization of the observed final-state fermion along the longitudinal and two transverse spin-quantization axes, which are required for a full reconstruction of the spin dependence of the process. We show how these model-independent distributions can be used to deduce some general properties of the nature of the interaction, and we compare them with results obtained in prior work which made use of spin-momentum correlations.
Introduction
The proposed International Linear Collider (ILC) [1], which could collide e+ and e− at a centre-of-mass energy of several hundred GeV, would, if built, serve as an instrument for precision measurements of various parameters underlying particle physics; the dedicated study has published a five-volume Technical Design Report (for the physics part, see ref. [2], and for the detector, see ref. [3]). The purpose of the ILC, and indeed of other proposed high-energy e+e− colliders, such as the Compact Linear Collider (CLIC) [4,5], the Future Circular Collider (FCC-ee) [6,7] and the Circular Electron Positron Collider (CEPC) [8], is to study the properties of the Standard Model (SM) at high precision in order to validate its predictions, to find deviations, if any, and to discover particles and interactions that lie Beyond the Standard Model (BSM). Deviations from SM predictions could arise from virtual loop effects of particles too heavy to be produced, or from new interactions which would give rise to terms in the low-energy effective action modifying interaction vertices. Amplitudes from such vertices could interfere with SM amplitudes and produce deviations from its predictions, and could possibly give rise to correlations that are forbidden by the symmetries of the SM when SM particles are observed in the detectors with high-precision measurements of their kinematic and other properties. A dedicated study on the benefits of a strong beam-polarization programme, of either or both beams, covering both transverse and longitudinal beam polarization, was carried out some years ago in the context of the ILC [9]. An important recent compendium of physics at the ILC is the review, ref. [10].
A useful approach that has been applied in the context of BSM physics searches at e+e− colliders relies on the classification of new physics in terms of its space-time transformation properties using, e.g., one-particle [11] inclusive distributions e+e− → h(p)X, where h denotes a particle that is detected, and p is its momentum. The new physics is lumped into 'structure functions' inspired by the analysis used in deep-inelastic scattering. The approach has also been extended to a two-particle inclusive process [12], e+e− → h1(p1)h2(p2)X, where h1 and h2 denote two particles that are detected, and p1 and p2 are their respective momenta; this is denoted as the basic process (I). The two-particle inclusive process is depicted in Fig. 1 (the basic process (I)), while the one-particle process can be considered a special case where only one of the particles is detected and the other is included in X. This approach is model independent, and is based only on Lorentz covariance for deriving the most general form of one-particle and two-particle kinematic distributions. It was found that the two-particle case provides more information than the single-particle case, as discussed in detail in ref. [12]; in principle, this could be extended to an n-particle inclusive framework, with a rapid rise in complexity. Our formalism is restricted to envisaging new physics only through an s-channel exchange, i.e., (i) the SM contribution is assumed to be through the tree-level exchange of a virtual photon and a virtual Z, and (ii) the BSM effects could arise through the exchange of a new particle like a new gauge boson Z′, or through the exchange of Z but with a BSM vertex or an SM loop producing the final state in question, or through a new scalar or tensor exchange in the s channel.
Our work is an extension of the work of Dass and Ross [13,14], which was performed in the context of γ contributing to the s-channel production, probing the then-undiscovered neutral current. As discussed extensively in refs. [11,12], our work in practice adds the Z in the s-channel, in addition to γ, and it is now BSM physics that we intend to fingerprint. Moreover, the results can be applied to a more general situation where the interference need not be between SM and BSM amplitudes, but between any two amplitudes, one of which is characterized by the exchange of a spin-1 particle, and the other characterized by scalar, pseudoscalar, vector, axial-vector or tensor interactions.
Many studies of such manifestations of BSM physics rely on the measurement of an exclusive final state for which there are definite predictions in the SM, and/or definite predictions within the framework of effective Lagrangians or effective BSM vertices. An early work in this regard in the context of the ILC is ref. [15], where it was shown that transverse polarization plays a key role in uncovering CP violation arising from BSM physics of scalar (S), pseudoscalar (P) and tensor (T) type, when no spins are measured. This work was inspired partly by even earlier work done for LEP energies; see ref. [16].
The question then arises as to how one may be able to probe BSM physics further with one-particle inclusive distributions, in the event that the particle's spin has been measured. Keeping in mind that spin measurement is actually performed through further decays of the particle in question, such a scenario is really a quasi-one-particle inclusive process. Nevertheless, the availability of a second final-state momentum vector, as in the two-particle inclusive case, is what renders it a more powerful probe. On the other hand, a single-particle inclusive measurement with the measurement of the spin of the particle along a specific quantization axis may provide a second vector and thereby play an important role in uncovering BSM physics. Whereas ref. [14] considered the possibility of measuring the spin of the particle in a one-particle inclusive measurement e+e− → h(p, s)X, where h denotes the SM particle that is detected, and p and s are its momentum and spin respectively, this had not been considered in ref. [12]. This is denoted as the basic process (II) and is depicted in Fig. 2. We also note here that in the recent past, numerous investigations have been made in the context of exclusive processes at the ILC, where it has been shown that the measurement of the final-state spin can also be an excellent probe of BSM physics. We had considered specific exclusive processes and had concluded that many types of BSM interactions reveal themselves only when the spin of one of the final-state particles is resolved. For instance, in order to separately resolve BSM contributions from scalar- and tensor-type couplings in e+e− collisions with transversely polarized beams, one has to resolve the spin of the top-quark in tt̄ production [17,18].
Footnote 1: Early work on the necessity of resolving the final-state spins, in the context of τ+τ− production at significantly lower energies, to probe the presence of anomalously large magnetic moments and possible electric dipole moments of the τ-lepton, can be found in refs. [22,23]. (For a general and interesting discussion, see ref. [24].) In the present work, for purposes of illustration, we introduce these sources of BSM physics to provide a concrete framework wherein we can make some remarks about the resulting structure functions derivable from such an exclusive process.
It is usual practice to study the dependence of a process on the spin of a produced particle by restricting to a single spin-quantization axis, typically the momentum direction of the particle. In this case, what is accessible is the probability of production of the particle with a definite helicity. However, this corresponds to only the diagonal element of the spin density matrix. In order to study the full spin structure of amplitudes, one also needs off-diagonal elements of the spin density matrix, or equivalently, the polarization information for two other mutually orthogonal spin-quantization axes. This approach has been advocated earlier, for example, in refs. [25,26] in the context of top-pair production at an e+e− collider and in ref. [27] for single-top production at the Large Hadron Collider. Single-top production is an interesting process in itself; for a review, see ref. [28]. Single-top production at CLIC has been studied in ref. [29].
Keeping these considerations in mind, for the purposes of this work we confine ourselves to an inclusive, massive spin-1/2 fermion, where we now employ the three suitably chosen axes explicitly. We note here that the considerations of ref. [14] remained general in the choice of the spin-quantization axis. In the present work, we present results for the three different quantization axes. In practice, this is made possible by the fact that the two types of processes are closely related: the single-particle inclusive process with spin measurement is closely related to the two-particle inclusive process with a suitable identification of the vectors entering the definition of the structure functions. Thus, by employing the standard techniques as in refs. [11,12,13,14], we can proceed with the analysis of the single-particle inclusive measurement with spin resolution. As in our earlier work, significant new features arise due to the presence of the axial-vector coupling of the Z to the electron, a feature missing in a vector theory like QED, as in the considerations of ref. [14], and an extensive discussion can be provided on the features of the correlations for the three specific quantization axes. In all considerations of top-quark spin resolution, at the LHC as in ref. [27] or at the ILC, as well as in τ-spin reconstruction as in the work discussed here, the spin can only be reconstructed from the distributions of the decay products, typically in the rest frame of the top-quark, by looking at the angular distribution of a decay product about the quantization axis. This, of course, is independent of the environment in which the top-quark is produced, whether at a hadron collider or an e+e− collider, and, at a hadron collider, whether it is pair- or singly-produced. Analogous considerations apply also to the τ-lepton. For reviews on approaches to these issues, see refs. [30,31,32,33,34].
As in the past, once a general discussion is provided for an inclusive final state, it may be readily applied to exclusive final states as well, thereby providing a framework for discussing several processes of interest. The expectations from our general model-independent analysis are compared, for some specific processes, with the results obtained earlier for those processes. Our approach would thus be useful for deriving general results for newer processes which fall within the framework described above, without requiring a detailed calculation for each individual process.
We also note that many of the considerations that have been spelt out for the ILC also apply to the other planned facilities, namely CLIC, FCC-ee and CEPC.
The structure of this paper is as follows: In the next section we include some preliminaries about the inclusive process, the kinematics and a discussion on the choice of spin-quantization axes. In Sec. 3 we present a computation of the spin-momentum correlations resulting from the presence of structure functions that characterize the new physics. Our results here arise from the computation of a trace that combines the leptonic tensor with a tensor, constructed out of the momenta of the observed final-state particles, that encodes the new physics (known as a 'hadronic' tensor for historical reasons, since the term arose at a time when the final state consisted largely of hadrons). The resulting tables provide, for the SM and new physics, the analogue of what was provided by Dass and Ross [14] for QED and neutral currents. In Sec. 4 we discuss the CP and T properties of correlations for different classes of inclusive and exclusive final states.
We provide a discussion on the polarization dependence of the correlations in different cases. In Sec. 5 we specialize to specific examples of processes into which our approach can give significant insight. In Sec. 6 we present our conclusions and discuss prospects for extending the present framework to account for classes of BSM interactions not presently covered.
The process and kinematics
We consider the two-particle inclusive process and the one-particle spin-resolved process, where h is the final-state particle whose momentum p and spin s are measured, and X is an inclusive state. The process is assumed to occur through an s-channel exchange of a photon and a Z in the SM, and through a new current whose coupling to e+e− can be of the type V, A, or S, P, or T. Since we deal with a general case without specifying the nature or couplings of h, we do not attempt to write the amplitude for the process (1). We only obtain the general form, for each case of the new coupling, of the contribution to the angular distribution of h from the interference of the SM amplitude with the new-physics amplitude. It might be clarified here that even though we use the term "inclusive", implying that no measurement is made on the state X, in practice it may be that the state X is restricted to a concrete one-particle or two-particle state which is detected. In such a case the sum is not over all possible states X. Nevertheless, the momenta of the few particles in the state X are assumed to be integrated over, so that there is a gain in statistics as compared to a completely exclusive measurement. The angular distributions we calculate hold also for such a case, except that the structure functions would depend on the states included in X.
The following symbols are used at various stages of the computations, and we present here a comprehensive list of these definitions. We define q = p_- + p_+ and K ≡ (p_- − p_+)/2 = E ẑ, where ẑ is a unit vector in the z-direction, E is the beam energy, and the transverse-polarization vectors s_± lie in the x-y plane.
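As a quick numerical illustration of these definitions (a sketch, not from the paper: the beam-energy value and the massless-electron approximation are assumptions made here), q and K can be checked for head-on beams along the z-axis:

```python
import numpy as np

E = 250.0  # beam energy (illustrative value, not from the text)

# Massless e- and e+ four-momenta (E, px, py, pz) for a head-on collision along z
p_minus = np.array([E, 0.0, 0.0,  E])   # electron
p_plus  = np.array([E, 0.0, 0.0, -E])   # positron

q = p_minus + p_plus          # total four-momentum: (2E, 0, 0, 0) in the c.m. frame
K = (p_minus - p_plus) / 2.0  # spatial part equals E * z-hat, as in the text

print(q, K)
```

The spatial part of K points along ẑ with magnitude E, confirming K = E ẑ in the centre-of-mass frame.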
We now turn to the important question of the choice of three linearly independent vectors which will define the quantization axes. Although the decay distributions of the top-quark are correlated to the spin in the top-quark rest frame, our choice of vectors is in the laboratory, or e + e − c.m. frame. It is assumed that all the kinematic information would be available which would allow one to construct any quantity of interest for the event sample. In particular, it may be noted that this choice would suffice for the full analysis of the top-quark polarization for which the SM would have definite predictions, and could also be used in other contexts such as anomalous couplings, or any kind of BSM physics.
In the e+e− centre-of-mass frame, the spin vectors have components expressed in terms of P = |p⃗| and E_p = √(P² + m²); their covariant forms are given in Eq. (6). Note that θ is the angle of h with respect to the beam direction, and φ is the azimuthal angle, where the x-axis is chosen as the direction of the transverse polarization of the e+ and e− beams. Note that the symbol h(p, s) is generic: the spin s could stand for the measurement of the spin along any one of the three quantization axes of choice.
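The covariant spin vectors dropped from the displayed equations can be written in a standard form (a sketch based on the usual helicity-axis spin four-vector, consistent with the definitions of P and E_p above, but not quoted verbatim from the paper):

```latex
s^\mu = \left(\frac{P}{m},\; \frac{E_p}{m}\,\hat{p}\right),
\qquad P = |\vec{p}\,|, \quad E_p = \sqrt{P^2 + m^2},
```

which satisfies $s\cdot p = 0$ and $s^2 = -1$. Since both transverse axes are purely spatial and orthogonal to $\vec{p}$, their covariant forms are simply $n^\mu = (0, \hat{n})$ and $t^\mu = (0, \hat{t})$ in this frame.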
The three-vectors n and t in the laboratory frame are actually quite simple: n is the unit normal to the production plane (perpendicular to both the beam axis and p), and t ≡ (−cos θ cos φ, −cos θ sin φ, sin θ).
n is along a direction perpendicular to both the momentum p of h and the beam direction, see for example ref. [35]. On the other hand, t is in the plane of the beam direction and p, though perpendicular to the latter. For ease of visualization, we have represented the vectors in Fig. 3.
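The geometry of these axes can be verified numerically. This is a minimal sketch: the explicit component form of n is an assumption here (one sign convention, n ∝ p̂ × ẑ; the text fixes n only up to the choice depicted in Fig. 3), while t is taken as given in the text:

```python
import numpy as np

theta, phi = 0.7, 1.2  # arbitrary production angles (radians)

# Unit vector along the momentum p of the detected particle h
p_hat = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])

# n: normal to the production plane; the sign convention (n ∝ p-hat x z-hat) is assumed
n = np.array([np.sin(phi), -np.cos(phi), 0.0])

# t: in the plane of the beam and p, perpendicular to p (component form from the text)
t = np.array([-np.cos(theta) * np.cos(phi),
              -np.cos(theta) * np.sin(phi),
              np.sin(theta)])

z_hat = np.array([0.0, 0.0, 1.0])

# n is perpendicular to both p and the beam; t is perpendicular to p but not to the beam;
# n and t are mutually orthogonal, so {p_hat, n, t} is an orthonormal triad
for a, b in [(n, p_hat), (n, z_hat), (t, p_hat), (n, t)]:
    assert abs(np.dot(a, b)) < 1e-12
print("orthogonality checks passed")
```

Orthogonality holds for any θ and φ, independently of the sign chosen for n.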
Figure 3: Representation of the momentum and spin vectors in the laboratory frame. The left panel depicts the electron and positron momenta, respectively p_− and p_+; their respective transverse spin vectors s_− and s_+, which lie along the positive and negative x-axis respectively; the momentum p of the detected particle; and the spin-quantization axis, denoted by s. The right panel depicts the three different spin-quantization axes s, n and t defined in the text. In both panels, the axes x′ and y′ denote axes obtained by rotating the x and y axes about the z axis through an angle φ.

As in the past, we calculate the relevant factor in the interference between the standard-model currents and the BSM currents, given in Eq. (9). Here g^e_V, g^e_A are the vector and axial-vector couplings of the photon or Z to the electron current, Γ_i is the corresponding coupling to the new-physics current, h_± are the helicities (in units of 1/2) of e^±, and s_± are respectively their transverse polarizations. For ease of comparison, we have sought to stay with the notation of refs. [13,14], with some exceptions which we spell out when necessary. We should of course add the contributions coming from photon exchange and Z exchange, with the appropriate propagator factors. However, we give
here the results for Z exchange, from which the case of photon exchange can be deduced as a special case. The tensor H_{iμ} stands for the interference between the couplings of the final state to the SM current and the new-physics current, summed over final-state polarizations and over the phase space of the unobserved particles X. It is only a function of the momenta q, p and s (or n or t). The implied summation over i corresponds to a sum over the forms V, A, S, P, T, together with any Lorentz indices that these may entail.
Computation of correlations
We now determine the forms of the matrices Γ_i and the tensors H_{iμ} in the various cases, using only Lorentz covariance properties. Our additional currents are as in refs. [13,14], except for the sign of g_A in the following. We explicitly note that in our convention ε_{0123} = +1. We set the electron mass to zero. Consider now the three cases where the BSM physics could be of the scalar and pseudoscalar type, of the vector and axial-vector type, or of the tensor type. Note that in each case, H_{iμ} can be independent of the spin vector (s, n or t), or linearly dependent on the spin vector. The linear dependence can arise either from the spin vector entering the tensor structure or from a simple multiplicative factor q·s (q·n and q·t being zero in the centre-of-mass frame). We explicitly include the tensors which involve the spin vector, but we do not show the spin vector entering through a factor of q·s, as this would give the same distribution as the spin-independent tensor. It is thus understood that in what follows, each spin-independent structure function, say F, should actually be replaced by F + F′(q·s), where F′ is another structure function.
Scalar and pseudoscalar case:
In this case, there is no free Lorentz index for the leptonic coupling; consequently, it can be written as a scalar combination of the couplings. The tensor H_{iμ} for this case has only one index, viz. μ. Hence the most general form for H^S_μ is a sum of terms F_r r_μ (see footnote 2), where r is chosen from p and the spin vectors s, n and t, corresponding respectively to longitudinal polarization, transverse polarization perpendicular to the production plane, and transverse polarization in the production plane. Here F_r denotes the relevant structure function, a function of the invariants q² and p·q. In fact, all the structure functions introduced above, as well as those to be introduced in the following, are functions of the same Lorentz invariants q² and p·q. The dependence of the functions on q² and p·q encodes the dynamics of the BSM interactions; in particular, they would contain propagators and form factors occurring in the BSM amplitudes. It may be noted that these definitions can result in an unconventional phase for the spin vectors, with implications for our analysis of spin-momentum distributions and their properties implied by the CPT theorem: relative to the momentum, the spin vectors would have required an additional factor of i in the definition of the structure functions in the usual correspondence between CPT = −1 distributions and the appearance of imaginary parts of these structure functions [36]. (For another useful review, see ref. [37].)
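Given that r runs over p, s, n and t, and given the alternative definition quoted in footnote 2, the dropped display can plausibly be reconstructed as (a sketch inferred from the surrounding text, not quoted verbatim from the paper):

```latex
H^{S}_{\mu} \;=\; \sum_{r \in \{p,\,s,\,n,\,t\}} F_{r}\, r_{\mu},
\qquad F_{r} = F_{r}\!\left(q^{2},\, p\cdot q\right),
```

with one independent structure function for each choice of the vector $r$.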
Vector and axial-vector case:
The leptonic coupling for this case can be written in terms of the vector and axial-vector couplings g_V and g_A.

Footnote 2: The form H^S_μ = (r_μ − q_μ (r·q)/q²) F_r, which is the definition adopted in ref. [14], is also permissible: when r = p, the combination p_μ − q_μ (p·q)/q² is current-conserving, and the second term does not contribute.
Note that we differ from Dass and Ross [13,14] in the sign of the g_A term, in order to be in line with the convention for the standard neutral-current coupling of the SM, which was established well after the work in refs. [13,14]. The tensor H for this case has two indices, and can be written in terms of the invariant functions W_1, W_2^{rw}, W_3^{uv} and W_4, where r, w can be chosen from p, s, n and t, and u, v can be chosen from p, q, s, n and t, with the condition that the tensor be at most linear in the spin vector. As compared to the one-particle inclusive case, there is an additional antisymmetric tensor structure with structure function W_4, which requires two vectors; the only non-zero contribution arises when the two vectors are p and n.
Tensor case:
In the tensor case, the leptonic coupling carries two Lorentz indices. The tensor H for this case can be written in terms of four invariant functions, where w is chosen from p, s, n and t; r from p and q; and u from p, q, s, n and t. These choices of vectors for r, w and u give a complete set of independent tensors; the use of vectors other than those covered by these choices would result in tensors which are combinations of the tensors described by eq. (15). Details can be found in [14]. We next substitute the leptonic vertices Γ and the respective tensors H_i in (9), and evaluate the trace in each case. We present the results in Tables 1-4. The structure functions accompanying the tensors which depend on spin (i.e. contain one of the vectors s, n and t) occur in the spin-dependent differential cross section with a factor λ_s, λ_n or λ_t, each taking the value +1 or −1 and denoting the spin projection along the respective spin vector s, n or t.
A superscript T on a vector denotes its component transverse to the e+e− beam direction; for example, r^T = r − (r·ẑ) ẑ, and similarly for the other vectors. Tables 1, 2, 3 and 4 are respectively for the cases of scalar-pseudoscalar couplings, vector-axial-vector couplings, and the real and imaginary parts of tensor couplings.
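The transverse-component notation can be made concrete; a minimal sketch of r^T = r − (r·ẑ) ẑ for an arbitrary three-vector (the numerical vector is purely illustrative):

```python
import numpy as np

z_hat = np.array([0.0, 0.0, 1.0])

def transverse(r):
    """Component of the three-vector r transverse to the e+e- beam (z) axis."""
    return r - np.dot(r, z_hat) * z_hat

r = np.array([3.0, -1.0, 4.0])
rT = transverse(r)
print(rT)  # the z-component is projected out, leaving (3, -1, 0)
```

By construction rT·ẑ = 0, so only the x and y components survive, exactly as the superscript-T notation intends.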
Since our present case is that of a single particle being measured, there is only one momentum, p. However, there is one more vector, viz. the spin vector, which may be taken along any of the three possible spin-quantization axes given by s^μ, n^μ and t^μ; a full evaluation of the resulting correlations is given in Tables 1-4.
[Table 1: Correlations, listed against the corresponding structure functions, due to scalar and pseudoscalar BSM physics in the interference of the SM amplitude with the BSM amplitude. An overall factor of 8 has been suppressed, to be consistent with published results, refs. [11,12]. Note the symmetry of the correlations under the simultaneous interchange Im↔Re, g^e_V↔g^e_A.]

[Table 2: As in Table 1, for vector and axial-vector BSM couplings.]

[Table 3: As in Table 1, for the real parts of the tensor couplings.]

[Table 4: As in Table 1, for the imaginary parts of the tensor couplings.]

In the absence of any further assumptions about the theory, it is not possible to draw very pointed conclusions. However, we can still deduce some useful points, very often related to what measurements are not possible even under the very minimal assumptions made by us. Examining the tables, one can make the following observations, many of which are similar to the case of one- and two-particle inclusive measurements without spin resolution [11,12]. These may be summarized in bullet form as follows:

• In the case of S, P and T couplings, all the entries in the corresponding tables vanish for unpolarized beams, or for longitudinally polarized beams.
• Thus at least one beam has to be transversely polarized to see the interference.
• In the case of V and A couplings, both beams have to be polarized, or else the effect of polarization vanishes. It is interesting to note that all the correlations in this case are symmetric under the interchange of s_+ and s_−.
• In the case of S, P and T couplings, to observe terms which correspond to combinations like (h_− s_+ ± h_+ s_−), it is necessary to have at least one beam longitudinally polarized and the other transversely polarized.
• It is only the coupling g^e_V which accompanies the imaginary parts of the structure functions in the case of S, P couplings, and g^e_A in the case of T couplings. Likewise, g^e_A and g^e_V occur with the real parts in these respective cases.
• In the case of vector and axial-vector BSM interactions the structure functions without final-state spin measurement which contribute when polarization is included are the same as the ones which contribute when beams are unpolarized, provided absorptive parts are neglected. We assume here that the final-state particles which are observed are themselves eigenstates of CP, in which case, the imaginary parts of the structure functions contain absorptive parts of the BSM amplitudes. In other words, no qualitatively new information is contained in the polarized distributions if we neglect the imaginary parts of the structure functions and do not make a final-state spin measurement. However, this situation is changed when structure functions dependent on final-state spin are included.
• Most BSM interactions are chirality conserving in the limit of massless electrons, and can therefore be cast in the form of vector and axial-vector couplings. Thus, in a large class of contexts and theories, it is possible to conclude that polarization does not give qualitatively new information, unless absorptive parts are involved. Again, the inclusion of spin measurement of the one-particle inclusive state changes this situation.
• It is possible to conclude that polarization can be used to get information on absorptive parts of structure functions of BSM interactions, which cannot be obtained with only unpolarized beams, and that final-state spin resolution can be used to obtain information even on the dispersive parts.
• In our case, if absorptive parts are included, there is a contribution from Im W_3^{uv}. Again, in this case, it is possible to predict the differential cross section for the polarized case if the unpolarized cross section is known.
• On the other hand, we see that Im W pp 2 contributes only for transversely polarized beams. Thus, to observe these structure functions, it is imperative to have transverse polarization of both beams.
• A further point to notice about the contributions of Im W rw 2 is that if g e V = g V and g e A = g A , the contribution vanishes. In other words, if the new physics contribution corresponds to the exchange of the same gauge boson as the SM contribution, so that the coupling at the e + e − vertex is the same, even though the final state may be produced through a new vertex, the contribution to the distribution is zero. Thus, in case of a neutral final state, where the SM contribution through a virtual photon vanishes at tree level, the observation of Im W rw 2 through transverse polarization could be used to determine the absorptive part of a loop contribution arising from γ exchange. In case of a charged-particle final state for which both Z and γ contribute, such a contribution would be sensitive to loop effects arising in both these exchanges.
The features mentioned here capture the main reasons for enhancing BSM physics in the presence of beam polarization, which is the essence of the studies of refs. [9,11,12].
CP and T properties of correlations
It is important to characterize the C, P and T properties of the various terms in the correlations, which would in turn depend on the corresponding properties of the structure functions which occur in them.
In this context we recall that a similar analysis was done for the one-particle inclusive case treated in [11]. In that case, we deduced the important result that when the final state consists of a particle and its anti-particle, it is not possible to have any CP-odd term in the case of V and A BSM interactions. This deduction depended on the property that in the centre-of-mass frame, the particle and anti-particle three-momenta are equal and opposite. In the case of two-particle inclusive distributions, even if the two particles observed are conjugates of each other, their momenta are not constrained. Thus it is possible to have CP-odd correlations even in the V, A case. In this section, we present an extension of those analyses to the case at hand, namely the one-particle inclusive case with spin resolution. It may be noted that the work of [11] is the simplest possible realization of this framework. The present work is a highly non-trivial extension of the work therein, based on the introduction of not just one but three different spin quantization axes. The results of this analysis cannot be anticipated from the earlier one, and the present work is thus an important extension. Furthermore, it brings into focus the requirement of a dedicated spin analysis of final-state particles at future e + e − colliders.
We now come to a more systematic analysis. We consider two important cases, one when the particle h in the final state in e + e − → hX is its own conjugate, and the other when it is not. We treat these two cases separately.
Case A: h^c = h
In this case, the particle h is its own conjugate; being a spin-half particle, it would be a Majorana fermion, and therefore uncharged. Then, if h is light (e.g., a Majorana neutrino), it would escape detection, leading to missing energy and momentum, and the state X would have to include a pair of charged particles to make a measurement of this state possible. On the other hand, if h is heavy (e.g., a heavy Majorana neutrino or a neutralino in a supersymmetric theory), it would decay, making it possible to measure its spin with the help of its decay products.
We first examine the case of scalar and pseudoscalar interactions. When the spin of h is not measured, the distributions in the first two rows of Table 1 being even under CP would be present if the structure function F p does not violate CP. On the other hand, the distributions in the third and fourth rows of Table 1, if seen, would measure possible CP violation in F p .
If the spin of h is measured, we have to keep in mind that the spin-dependent structure functions are multiplied by a factor λ depending on the spin-quantization axis. When the spin quantization axis is along the momentum direction, the dependence of the distributions on the CP properties of F s is opposite in sign to that for F p , since the spin projection along the momentum (which we denote by λ s ) and the spin itself have opposite P properties. Since the distributions are identical in the two cases, except for an additional factor of E p /(P m) in the former case, the distribution with spin measurement corresponds to CP opposite to that in the case without spin measurement.
Since under naive time reversal T momentum and spin have the same behaviour, viz., change of sign, the CPT property will follow the CP property. We remind the reader that T here denotes naive time reversal, i.e., reversal of the spin and momenta vectors, as opposed to genuine time reversal, in which initial and final states are also interchanged.
The distributions in the case when the spin quantization axis is n are very similar, and the additional factor in this case is cosec θ/P . However, in this case, the roles of the real and imaginary parts of the structure functions are reversed, and the distributions have an interchange g e V ↔ g e A relative to the ones for F p or F s . This has an important significance because numerically g e V ≪ g e A . Thus, the distributions which occur with a certain F p or F s will have widely different numerical values as compared to those occurring with an F n of the same magnitude. Moreover, the vector n as chosen has exactly opposite C, P and T properties as compared to s. Correspondingly, λ n also has opposite C, P and T properties as compared to λ s .
In case of the spin quantization axis of h being t, the distributions for F t are related to those of F p by a factor cot θ/P . This changes the CP property of the distribution, since cot θ is odd under CP. In addition, as t has C opposite to that of p, λ t has C=−1, whereas its P and T properties are the same as those of λ s , resulting in CP= +1, and T=+1. Thus, the CP properties of the distributions for F t remain opposite to those for the distributions for F p .
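The three spin quantization axes referred to in the preceding paragraphs can be summarized as follows. This is a standard construction consistent with the longitudinal/normal/transverse description used here; the overall signs, and the use of the e − momentum p − in defining the normal, are conventional choices rather than conventions taken from the text:

```latex
\hat{s} \;=\; \frac{\vec{p}}{|\vec{p}|}\,,\qquad
\hat{n} \;=\; \frac{\vec{p}_-\times\vec{p}}{|\vec{p}_-\times\vec{p}|}\,,\qquad
\hat{t} \;=\; \hat{n}\times\hat{s}\,,
```

with ŝ along the momentum of the observed particle, n̂ normal to the production plane, and t̂ transverse within it. Being a cross product of two momenta, n̂ does not change sign under P, opposite to ŝ and t̂; this is the origin of the opposite discrete-symmetry properties of λ n noted above.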
To see how a study of the CP properties would be affected by the experimental configuration, consider the case when the e − and e + beams have only transverse polarization, and whose directions are oppositely directed. This would be a natural scenario in case of circular colliders, where the e − and e + polarizations, because of synchrotron radiation via the well-known Sokolov-Ternov effect, are directed perpendicular to the plane of the trajectories of the particles, and anti-parallel to each other. In this case, s − = − s + . Then, CP-odd distributions arise in the case of spin-independent structure function F p for scalar couplings, and for spin-dependent structure functions F s , F n and F t for pseudoscalar couplings.
In the case of vector and axial-vector couplings, if the spin of h is not measured, CP violation can be seen in distributions associated with Im(g V W pq 3 ) and Im(g A W pq 3 ), because the factor cos θ occurring there is odd under CP. This CP violation results from absorptive parts of the structure function W pq 3 , and is consistent with what was already remarked in [11], namely that the observation of CP violation when the spin of h is not measured requires either an absorptive part to be present, or the use of transverse beam polarization. We find that this is not true when the spin of h is observed, because then there are more possibilities of observing CP violation, again because of the CP-odd factor cos θ (or cot θ), as in the distributions associated with W pt 2 , or because λ s or λ n is odd under CP, as in the case of W ps 2 , W pn 2 and W pn 4 . Thus, even in the absence of absorptive parts, CP violation is observable without transverse beam polarization for the structure function W pn 4 . An example of a suggestion for measurement of CP violation in neutralino production using the spin of the neutralino along the direction n normal to the production plane can be found in [38,39]. Another feature seen is a CP-violating contribution associated with the structure function W pn 2 which survives only in the presence of transverse beam polarization, but with an entirely different kinematic dependence compared to any other structure function.
In the tensor case, when spin is not observed, all F -type structure functions are CP even and all P F -type ones are CP odd. Of the spin-dependent structure functions, all F -type structure functions with one superscript s, n or t are CP odd, whereas all P F -type structure functions with one such superscript are CP even.
Again, if we restrict ourselves to the configuration where s + = − s − , the surviving CP-odd terms of the ones just listed are only those of the type F s,n,t 2 and P F pqp . It is interesting to notice that, with T as usual denoting naive time reversal, the distributions which correspond to CPT=+1 and CPT=−1 arise from the two opposite cases where the structure function does not have an absorptive part and where it does have an absorptive part. This follows from the CPT theorem. If the phases in the definitions of the structure functions are chosen appropriately, the former would be associated with the real part of the structure function and the latter with the imaginary part.
Case B: h^c ≠ h
In the case when h is not self-conjugate, the most interesting case would be the one where X ≡ h c , i.e., when only h and h c are pair-produced. In that case, ascribing the momentum p c to h c , we can define P = (1/2)( p − p c ) in the c.m. frame, so that under CP, P is invariant, as is cos θ.
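The CP invariance of P can be seen in one line: C interchanges h and h c (so p ↔ p c ), while P reverses both three-momenta, giving

```latex
\vec{P} \;=\; \tfrac{1}{2}\left(\vec{p}-\vec{p}_c\right)
\;\xrightarrow{\;CP\;}\;
\tfrac{1}{2}\left(-\vec{p}_c-(-\vec{p})\right)\;=\;\vec{P}\,,
```

and since in the c.m. frame p + = − p − , the beam axis is likewise unchanged, so cos θ is CP-even as stated.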
Looking at Tables 1, 3 and 4, it is clear that in the case of scalar, pseudoscalar and tensor interactions, when spin of h is not measured, the only CP-odd correlations are those which have a combination ( s + − s − ), which is C odd and P even, or the combination (h − s + + h + s − ), which is C even and P odd. Thus, for the configuration s + = − s − , only CP-odd correlations survive. In the scalar and pseudoscalar case, the CP-odd correlation is present for all structure functions with a pseudoscalar coupling to leptons.
In the case of vector and axial-vector couplings, there are no CP-odd correlations.
If the spin of h is measured, one cannot make definite statements about CP properties, because the spin of h c is not measured. More interesting situations are possible here: when h is not self-conjugate, and h and h c are not pair-produced, one can draw conclusions about the CP properties only by comparing the distributions for an inclusive process with final state h + X with those for the final state h c + X. We can construct special combinations ∆σ ± corresponding to the sum and difference of the partial cross sections ∆σ and ∆σ̄ for production of h and h c respectively. We could then make statements about the CP properties of ∆σ ± for various structure functions. A general discussion in this case is somewhat complicated. However, if we restrict ourselves to tree-level contributions, then, by unitarity, the effective Lagrangian can be taken to be Hermitian, which also avoids complications arising from absorptive parts generated by loops. Then the couplings and structure functions contributing to ∆σ̄ would be complex conjugates of those contributing to ∆σ. Thus, only the real parts of the couplings would contribute to ∆σ + and only the imaginary parts to ∆σ − . It would not be possible to make such a clear-cut separation if loop effects are included. In this scenario, it would be possible for terms in the distribution which would be CP even in the earlier case of h = h c to be CP odd in the combination ∆σ − ; this combination would occur with only the imaginary parts of the couplings. On the other hand, terms which were CP odd in the case h = h c would be CP odd in the combination ∆σ + , and the real parts of the couplings would contribute to it.
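The separation into ∆σ ± described above can be made explicit schematically. Suppose a given structure-function term enters the h cross section as Re(c)K 1 + Im(c)K 2 , with c a coupling and K 1,2 real kinematic factors (these symbols are illustrative, not taken from the text); tree-level Hermiticity then sends c → c * for the h c cross section, so that

```latex
\Delta\sigma \;\supset\; \mathrm{Re}(c)\,K_1 + \mathrm{Im}(c)\,K_2\,,\qquad
\overline{\Delta\sigma} \;\supset\; \mathrm{Re}(c)\,K_1 - \mathrm{Im}(c)\,K_2
\;\;\Longrightarrow\;\;
\Delta\sigma_+ = 2\,\mathrm{Re}(c)\,K_1\,,\qquad
\Delta\sigma_- = 2\,\mathrm{Im}(c)\,K_2\,.
```

Only the real parts of the couplings survive in the sum and only the imaginary parts in the difference; loop-induced absorptive parts would spoil the relation c → c * and hence this clean separation.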
Implications for specific processes
In this section we discuss the implications of our framework for specific processes. As in the preceding section, since we are specifically interested in the possibility of CP-violating signatures, we need to separately consider the cases of self-conjugate and non-self-conjugate exclusive final states.
Case A: h^c = h
The two important cases we consider here are the production of a pair of heavy neutrinos in a left-right symmetric model [40] and a pair of neutralinos in a supersymmetric model [38,39] at electron-positron colliders, both of which can occur in an appropriate extension of the SM.
Heavy neutrino production
Since in many theoretical scenarios neutrinos are massive Majorana fermions, our formalism will be applicable to inclusive neutrino production in most theoretical models. Moreover, many theories incorporate CP violation, which may be relevant for baryogenesis through leptogenesis. The neutrino needs to be heavy, so that it decays in the detector and its polarization can be measured. The neutrino may be accompanied by another neutrino or some other inclusive state. Since the process does not occur in the SM, we will apply our formalism to the interference of two amplitudes for the production, one of which would be through the s-channel exchange of the Z boson, non-vanishing only for an e − L e + R or e − R e + L combination of initial states, and the other through s-, t- or u-channel exchanges of massive particles. Since the cases we consider correspond to unpolarized beams, the interfering second amplitude would also have to have the same initial helicity combinations, viz., e − L e + R or e − R e + L , giving essentially the same results as would a vector-axial-vector interaction. Consequently, we can anticipate the resulting distributions from our Table 2. If the spin of the final state is not measured, the relevant rows in Table 2 which incorporate CP violation correspond to the combinations Im(g V W pq 3 ) and Im(g A W pq 3 ). In either case, the factor cos θ leads to a forward-backward asymmetry, which is discussed in [40].
It is conceivable that with polarized beams, and/or by studying the polarization of the neutrinos, more information can be obtained on the structure of a possible theory of Majorana neutrinos. While our tables can be used as an indication of what correlations may be useful, the corresponding structure functions would have to be worked out in the specific model being tested.
Neutralino production
Neutralinos are Majorana fermions and are relevant to supersymmetric extensions of the SM. Since neutralino production is again not an SM process, we consider the interference of two amplitudes for the production, one through the s-channel exchange of the Z boson, and the other through t- or u-channel exchanges of massive charged sleptons. In the absence of beam polarization, the terms corresponding to W pn 4 would indicate CP violation, which corresponds to neutralino polarization in a plane normal to the production plane, and is discussed in [38]. In [39] a CP-odd forward-backward asymmetry with transverse beam polarization and without the neutralino spin being measured is also presented. Our formalism misses this term because it arises on account of t- and u-channel contributions in the neutralino pair-production process, whereas we consider only s-channel processes.
Of related interest is the discussion in ref. [41], where the production of Dirac and Majorana particles in fermion-antifermion annihilation is considered in some generality. The main results relate to the symmetry or anti-symmetry of the cross section and the polarization of the observed final state when the e + e − beams are unpolarized or longitudinally polarized. These correspond to V, A types of interactions (S, P and T interactions require transverse beam polarization in order to be observable). Also, in the case of Dirac fermions, the results require observation of the spins of both the fermion and the anti-fermion. Hence we cannot make a comparison with the corresponding results, as our formalism concerns only spin measurement of one final-state fermion. In the case of Majorana fermions we have found agreement with the symmetry properties in all cases studied by them, with the exception of the symmetry property of polarization in one of the form factors we consider, viz., Re(W pt 2 ).
Case B: h^c ≠ h
In the case when the produced particle h is not self-conjugate, the simplest possibility is that h is produced in association with h c , its conjugate particle. We will restrict ourselves to this possibility, as a larger inclusive state is complicated to discuss in specific detail. We would like to emphasize that ours is a model-independent approach, and we would like to elicit general features in our formalism. We do not include here predictions from individual models for our structure functions. Nevertheless, we outline an intermediate step in such a calculation in case of the hh c final state.
Here there are two possibilities, one when the particle h is a particle in the SM spectrum, and the other when h is a new particle in an extension of the SM. Examples of the former are when h is a charged massive particle, like the top quark or the tau lepton. As for extensions of the SM, cases often studied in the literature are the production of excitations of quarks or leptons, charginos in supersymmetric theories, heavy quarks or leptons in a model with extra generations or an extended gauge group, and Kaluza-Klein partners of SM fermions in extra-dimension theories. In all these cases, corresponding to a number of different underlying theories, a unified model-independent approach can be used. There are two possibilities which we consider:
A. Loop-level BSM contribution to γhh c and Zhh c vertices
In this case, the amplitudes for hh c production through s-channel γ exchange and through s-channel Z exchange are parametrized in a general way in terms of vector, axial-vector and tensor couplings, with coefficients which are momentum-dependent form factors. Thus, the structure functions appearing in our formalism, corresponding to the interference between the SM γ and Z exchange contributions and an indirect loop-level BSM contribution, are represented, still in a relatively model-independent way, in terms of form factors. The assumption made is that the BSM contribution appears in the loop contribution to the γhh c and Zhh c vertices.
This approach can include the models mentioned above, for some of which form factors have been calculated in the past [42,43,44,45,46].
B. BSM contribution through effective e + e − hh c interactions
This case includes contributions which do not take place through γhh c and Zhh c vertices. Without explicit details of the production mechanism, the BSM contribution can be represented as general contact e + e − hh c interactions, which would include all tensor contributions in a model-independent way. Again, the interference between the SM contribution and the BSM contact-interaction contribution would result in the structure functions that we use in our analysis. The structure functions could be calculated in terms of the contact-interaction form factors. Contact interactions in this context have been studied in [47,48,49].
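Case B amounts, schematically, to supplementing the SM Lagrangian with contact terms of the form below; the tensor basis Γ i , the coefficients c i and the scale Λ are illustrative notation, not taken from the text:

```latex
\mathcal{L}_{\mathrm{contact}} \;=\; \sum_i \frac{c_i}{\Lambda^2}\,
\left[\bar{e}\,\Gamma_i\,e\right]\left[\bar{h}\,\Gamma_i\,h\right],
\qquad
\Gamma_i \in \{\,1,\;\gamma_5,\;\gamma^\mu,\;\gamma^\mu\gamma_5,\;\sigma^{\mu\nu}\,\}\,.
```

The interference of such terms with the SM γ/Z-exchange amplitude then populates the structure functions of the general framework.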
The issue of spin resolution has been studied in the context of tt̄ production in the presence of BSM physics due to an effective Lagrangian characterized exclusively by its Lorentz signature. In refs. [17,18] it was shown that the availability of helicity amplitudes for both the initial and final state particles allows one to obtain distributions and to construct suitable asymmetries to probe BSM physics in a manner that was not possible when the helicity was unresolved, in contrast to the work in ref. [15]. Furthermore, it was also demonstrated that a correspondence could be made with the one-particle inclusive distribution, relating the relevant structure functions to the parameters of the effective Lagrangian of the exclusive process [11]. This work was further extended to a scenario of measurements of the spin in the so-called beam-line and off-diagonal bases to enhance the sensitivity to BSM physics [18]; these bases were discussed in the context of tt̄ production in some detail in ref. [50].
We now study these cases in some more detail.
A. Loop-level BSM contribution to γhh c and Zhh c vertices
In this section we study the process e + e − → f f̄, where f is a quark or a lepton, a process which will dominate at the ILC. We also look at further decays of the final-state fermions when they are heavy, and at the momentum correlations amongst these as probes of BSM interactions. We concentrate on CP-odd correlations, which indicate CP violation and are therefore important to study. However, these are by no means the only interesting correlations. CP-even correlations could be used to study new CP-conserving interactions like magnetic dipole moments. Early work in this regard was the study by Couture [22,23] of e + e − → τ + τ − in the presence of dipole moments. Couture studied in detail the spin-spin correlations in τ + τ − production and the effects of possible electric and magnetic dipole moments on these, but at energies where the presence of the Z boson can be safely neglected. It turns out that many important effects show up only in the presence of the Z boson, due to its parity-violating properties, and at energies comparable to or significantly larger than its mass. Here we consider the process in the full electroweak theory, but with a sum over the spin of the f̄, giving rise therefore to a one-particle-inclusive-type distribution with spin resolution, wherein the effective structure functions are actually known in terms of the dipole strengths.
Consider the process e − (p − )e + (p + ) → f (p)f̄ (p̄), in the presence of anomalous magnetic dipole couplings κ γ and κ Z and electric dipole couplings d γ and d Z of the spin-half fermion f to γ and Z, respectively. The effective Lagrangian of the dipole couplings (V ≡ γ, Z) is given by The interference term between the SM amplitude and the amplitude with the anomalous couplings, summed over the spins of f̄, for the contribution of one vector boson V (γ or Z) is Here s is the spin four-vector of the fermion f , and it would represent the helicity four-vector. The calculation above is a generic one, and as a result the vector s can also be replaced by the vectors n and t to get the results for the cases when the spin quantization axes are in transverse directions. Comparing this expression with eqs. (9) and (13), we can immediately identify the various structure functions listed in eq. (13) for the present exclusive case. For this one has also to consider the s-dependent terms above with s replaced by n or t. Moreover, some terms in eq. (17) above do not find a place in eq. (13). To understand this, we note that the terms with the factors ǫ αβγµ p α q β s γ and ǫ αβγν p α q β s γ vanish, since in these, q has only the time component, and the space components of p and s are proportional to each other. When s in these ǫ tensors is replaced by n, ǫ αβγµ p α q β n γ reduces, apart from a scalar factor, to the transverse spin vector t µ , giving a term present in eq. (13). Similarly, ǫ αβγµ p α q β t γ is proportional to n µ . As stated earlier, since the spin of f̄ is not measured, it is not possible to get correlations which are explicitly even or odd under CP. However, it is clear that both the CP-even dipole couplings κ V as well as the CP-odd dipole couplings d V can be measured from a measurement of the structure functions through correlations listed in Table 2.
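For orientation, a conventional parametrization of such dipole couplings is the following (normalization factors differ between references, so the prefactor here is only one common choice, not necessarily that of the text):

```latex
\mathcal{L}_{\mathrm{dip}} \;=\; \sum_{V=\gamma,Z}
\frac{i}{2}\,\bar{f}\,\sigma_{\mu\nu}\left(\kappa_V + i\,d_V\,\gamma_5\right) f\;\partial^{\mu}V^{\nu}\,,
```

where the κ V (magnetic-type) terms are CP-even and the d V (electric-type) terms are CP-odd, consistent with the roles these couplings play in the surrounding discussion.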
In case of the photon, for which g f A = 0, the electric dipole coupling d f γ appears only in the structure functions W 2 and W 4 , and then only in their imaginary parts. However, as observed earlier, in case of Im W 2 , the coupling-constant combination multiplying the final distribution is g f V g e A − g f A g e V , which vanishes for photon couplings, as both g e A and g f A vanish. Hence the only possibility of measurement of the photon electric dipole coupling is through W 4 . In that case, both longitudinal polarization of the beams and measurement of the transverse polarization of the final-state fermion are required. As for the dipole coupling of f to the Z, the fact that the vector coupling g e V of the electron to the Z is numerically small (g e V /g e A ≈ 0.08) would play a role in deciding the approximate nature of the distributions. If f is a charged lepton like the τ , the corresponding vector coupling g f V to the Z is also small. In that case, neglecting the vector couplings of the e and τ , an examination of eq. (17) and Table 2 reveals that the measurement of the weak dipole moment would require longitudinal polarization of the e − or e + beam, preferably with both beams longitudinally polarized with helicities of opposite sign. These observations are in accordance with early work which pointed out that the availability of beam polarization would significantly enhance the sensitivity to the measurement of dipole moments [51,52], generalizing the results for unpolarized beams; see ref. [53].
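The numerical estimate g e V /g e A ≈ 0.08 quoted above follows directly from the SM tree-level neutral-current couplings; a quick check, using the common normalization g e V = −1/2 + 2 sin 2 θ W , g e A = −1/2 (these conventions, and the value sin 2 θ W ≈ 0.231, are standard inputs, not taken from the text):

```python
# Check of the quoted smallness of the electron's vector coupling to the Z.
# In the SM (one common normalization): g_V^e = -1/2 + 2 sin^2(theta_W),
# g_A^e = -1/2, so the ratio equals 1 - 4 sin^2(theta_W).
sin2_theta_w = 0.231  # measured effective weak mixing angle (approximate)

g_v = -0.5 + 2.0 * sin2_theta_w
g_a = -0.5
ratio = g_v / g_a  # = 1 - 4 sin^2(theta_W)

print(f"g_V^e / g_A^e = {ratio:.3f}")  # close to the 0.08 quoted in the text
```

The smallness of the ratio is the reason distributions proportional to g e V are numerically suppressed relative to those proportional to g e A .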
BSM physics with effective operators in e + e − → f f̄
In [18] we had obtained kinematic distributions for the process e + e − → tt̄ (which would be applicable in general to any inclusive state f f̄ with a heavy fermion f ) with transversely polarized beams and with the polarization of the top quark in the final state measured. In that work, four-fermion contact interactions were used, following the work of Grzadkowski [54]. It would seem natural that the present formalism could be applied to that exclusive process as a special case. It was observed in [18] that the distributions obtained by explicit evaluation of the amplitudes were consistent with those predicted by appropriate structures in our formalism. In the course of these investigations we found, on closer examination, that with the restriction of Hermiticity on the couplings of the contact interactions, the structure functions for the scalar, pseudoscalar and tensor interactions turn out to vanish. Therefore, it may appear that there is some kind of incompatibility between the current framework and exclusive processes with scalar or tensor contact interactions. Furthermore, although we did find schematic evidence that they can be mapped on to one another, the two schemes are actually distinct. Relating a 'current' of the present framework to the adopted contact-interaction framework imposes too stringent a requirement on the various couplings, thereby leading to the vanishing of the structure functions.
In the case of vector and axial-vector interactions, however, the formalism is compatible even with contact interactions. It is thus possible to predict the distributions in this case using our formalism. This case was not treated in [18].
One could also compare the results coming from more complicated exclusive states that could arise in popular extensions such as techni-pion models, for the process e + e − → tt̄π t [55]. Other general considerations of discrete symmetries in processes involving top and b quarks in the SM as well as in the MSSM, without spin resolution, may be found in refs. [56,57,58], to cite a few examples.
It may be fruitful to point out that our aim is to establish a model-independent approach. As such, it is not intended to take up specific models. Nevertheless, we have taken up some explicit categories of final states, both as inclusive states and as exclusive states, and with that categorization we have tried to construct general amplitudes characterized by form factors. Thus, within our model-independent approach, we have been as explicit as possible and have tried to illustrate the power of the approach. The work here is meant to be a set of consistency checks on specific BSM models, and not a full-fledged substitute for important and popular extensions of the SM.
Summary and Discussion
We now present a summary of the motivation, approach and the main results in this work and provide a discussion of these results.
The motivation for our work comes from the planned high-energy, high-performance e + e − machines now being seriously considered for construction. While there are several technical differences among the machines under discussion, the ILC, CLIC, FCC-ee and CEPC, in terms of their physical layout, technology, and detector and accelerator aspects, the underlying physics that is sought to be probed is the same. These machines are expected to provide a clean, high-statistics environment in which to study SM particles and their properties at high precision in the light of LHC discoveries. A great deal of work has been carried out to study the feasibility of improving the degree of polarization for both the electron and positron beams. It is therefore highly likely that longitudinally polarized beams will be commissioned at linear colliders. It is also possible to create transverse polarization out of longitudinally polarized beams using spin rotators. Less work has been done on feasibility studies for longitudinally polarized beams at circular colliders. While such studies are desirable from the standpoint of accelerator science, the essential features of the physics with polarized beams remain the same for both linear and circular colliders.
In order to build a serious case for high-precision SM and BSM studies, all valid approaches must be studied as diligently as possible. Our work is an important model-independent effort in this direction, intended to provide a no-frills approach to fingerprinting BSM physics at these machines.
A model-independent approach to the study of possible BSM physics is to represent such effects in terms of the most general vertices allowed by Lorentz invariance and gauge invariance in the case of various exclusive processes. It may be worth emphasizing that BSM models are characterized by alterations to effective vertices, arising in higher orders from integrating out the more massive states of the theory, which would make specific predictions for the structure functions of the inclusive framework. While we have provided some illustrations of such a correspondence, our framework has model-independence as an advantage, and also as its most important distinguishing feature.
An even more general approach, which is the one pursued here, is one where only one SM particle is observed, and the effects of all interactions are represented, assuming Lorentz invariance, in terms of the vectors on hand. These are primarily the momenta of the colliding particles in e + e − collisions, the polarizations of the incoming particles, and the momentum and the spin of the observed particle.
For ease of reading, we now itemize the important points of this work.
• In this work, we have started out by considering the most general terms that can occur in the interference between SM and BSM physics, constructing vertices involving the vectors on hand, namely the momenta and the directions of transverse polarization of the incoming particles and the momenta and spin quantization axes of the observed spin-1/2 particle, consistent with Lorentz covariance. Thus, we have computed the spin and momentum correlations, expressed as angular distributions, in e + e − collisions arising from the interference between the virtual γ and Z exchange SM amplitudes and BSM amplitudes characterized by their Lorentz signatures, with the unknown physics encoded into structure functions for a one-particle inclusive measurement with spin resolution. Transverse and longitudinal beam polarizations are explicitly included.
• The generalization from a simple one-particle to a two-particle inclusive measurement was also done some years ago, since the availability of a second vector gives rise to greater possibilities for structure functions. The availability of a second vector in the form of the spin quantization direction in turn gives rise to more intriguing possibilities, which are the subject of this work. Whereas it is natural to take the direction of motion of the detected particle as the quantization axis, the full reconstruction of the density matrix of a spin-1/2 particle requires three mutually orthogonal directions. We have employed three mutually orthogonal directions as quantization axes, corresponding to the longitudinal, normal and transverse directions relative to the plane of production.
• A large number of structure functions have to be introduced, which are taken to be complex, with definite implications for their properties under the discrete symmetries C, P and T for the dispersive (real) and absorptive (imaginary) parts, as dictated by the CPT theorem. The spin-quantization axes themselves can be expressed in terms of the momentum vectors on hand, and allow us to express the results for the spin-momentum correlations and spin-spin correlations in terms of economical tables, namely Tables 1-4. While we present the results for the general couplings g_A^e and g_V^e directly applicable to Z, those for the photon are obtained by simply setting g_V^e = e and g_A^e = 0.
• Some salient features of the entries in the tables were the following.
-Indeed, as in the case of the one- and two-particle inclusive studies with no spin resolution, with S, P and T type couplings, transverse polarization of at least one of the beams, or a hybrid of longitudinal and transverse polarization, is needed to uncover their presence at leading order. In the case of the imaginary parts of S and P type structure functions, the result is accompanied by g_V^e, while in the case of T type structure functions it is accompanied by g_A^e. In the case of the real parts, the vector and axial-vector couplings have to be swapped, in accordance with the symmetry of the tables under the simultaneous swap of vector and axial-vector couplings and of real and imaginary parts of the structure functions.
-The structure functions corresponding to V and A type BSM interactions lead to correlations that have distinctly different properties. In particular, without final-state spin resolution beam polarization does not lead to any qualitatively different information when the imaginary parts are disregarded. However, when spin resolution is included, this is no longer true, which is an important finding of the present investigation. In other words, the appearance of specific structure functions and combinations of initial beam polarizations which render some spin structure functions as being observable only with beam polarization is noteworthy.
-Analogously, absorptive parts of structure functions with spin resolution are qualitatively different from those without spin, which was not pointed out earlier. Thus beam polarization is crucial for uncovering interactions which cannot be done with unpolarized beams. Our analysis of the correlations also shows that there are special circumstances under which contributions may vanish as in the case of a new vector boson, which would imply equality of the vector-and axial-vector couplings of the electron and the observed particle, and new interactions would be visible only through loop-effects.
• As a sequel to the thorough analysis of the general results found in Tables 1-4, we have presented a discussion on the nature of the correlations and the deductions that can be made on their polarization dependence. We have also discussed the CP and CPT properties of certain structure functions. This was based on a systematic analysis under the rubric of (a) h^c = h, and (b) h^c ≠ h. The main features of this analysis may be summarized as follows:
-As one can see from the study without final-state spin being resolved, when h^c = h (Majorana fermions), CP violation cannot be observed in the absence of transverse beam polarization. However, when the final spin is measured, CP violation can indeed be observed without transverse polarization of the beams. Thus, in the absence of absorptive parts, CP violation is observable even without transverse beam polarization for the structure function W_4^{pn}, which corresponds to spin measurement of the final-state particle along the direction n, perpendicular to the production plane. A realization of this possibility can be found in [38]. It is possible that some other suggestions of different possibilities of CP-violating distributions discussed here will find realization in other practical situations.
-For the case when the unobserved state X is just h^c, that is, h and h^c are pair produced, in the case of scalar, pseudoscalar and tensor interactions, and when the spin of h is not measured, the only CP-odd correlations are those which contain the combination (s_+ − s_−), which is C odd and P even, or the combination (h_− s_+ + h_+ s_−), which is C even and P odd. Thus, for the configuration s_+ = −s_−, only CP-odd correlations survive. In the scalar and pseudoscalar case, the CP-odd correlation is present for all structure functions with a pseudoscalar coupling to leptons. In the case of vector and axial-vector couplings, there are no CP-odd correlations.
-For the case of h^c ≠ h, the interesting results that could be obtained when h and h^c are pair produced do not directly apply to the case when only one of them is produced. It is not possible to make definite statements for the case where h and h^c are produced and the spin of h is measured, because CP would relate the spin of h to the spin of h^c, and the latter is not measured. It is possible to envisage that one has separate samples with hX and h^c X and constructs suitable asymmetries to uncover CP violation.
• We have provided a discussion on the possible ways in which our work can be related to prior studies of exclusive fermion pair production. In particular, for purposes of illustration, we have considered the process in the presence of electric and magnetic dipole moments and evaluated the effective structure functions. The computation of these proves to be a useful illustration of the framework. The cases of self-conjugate fermions and otherwise have also been discussed.
• We have also examined our prior analysis of BSM physics in the form of certain four-fermion effective operators with spin resolution in the present framework. We find that in the scalar, pseudoscalar and tensor cases, our formalism cannot be applied unchanged to the contact-interaction framework that had been adopted, which proves to be too restrictive and leads to vanishing structure functions. A sufficiently general inclusive framework that is not based on 'currents' is yet to be developed, and may prove useful for studies where the observed particle is a boson, in contrast to the present framework where the observed particle is taken to be a fermion.
• The precise modification of the present framework to other spin bases (as for example beam-line and off-diagonal bases considered in the past) is yet to be analyzed and is work for the future, as also an extension to spin-spin correlations in a two-particle inclusive state.
• In contrast to specific models that go into BSM physics with exclusive final states, our present work remains very general and model independent, and could provide the simplest possible framework to study BSM physics to look for signals independent of assumptions of what lies at higher energy scales. Sufficiently precise data when gathered can be used to study if the structure functions so measured respect constraints that would be implied by specific extensions of the SM in exclusive particle production.
Many of the considerations that have been spelt out for the ILC also apply to the other planned facilities, namely CLIC, FCC-ee and CEPC. In particular, our work would also call for dedicated studies of the advantages of beam polarization at these facilities, realistic estimates of the transverse as well as longitudinal beam polarizations that could result for these configurations, and the impact of these on detector and accelerator design.
We also suggest that specific studies of such inclusive processes be implemented also at the level of detector simulations and event generators to study how departures from ideal detection can influence the outcome of the concepts put forward here.
Influence of Topographic Relief on Sand Transport in the Near-Surface Layer During Dust Storms in the Taklimakan Desert
Dust storms and dust aerosols seriously affect environmental variation and climate change at regional and global scales. Accordingly, these hazards are a current focus in studies related to Earth science. The near-surface layer is an important link for the upward transmission of dust aerosols. However, the difficulty associated with obtaining real-time observation data from this layer has markedly hindered the progress of related research. In sand source areas, the topographic relief of natural dunes is easily ignored, despite serving as an essential factor affecting wind-driven dust emission, transport, and deposition. In this study, we explored the similarities and differences in horizontal dust flux (Q) between Xiaotang and Tazhong using observation data. In Xiaotang, the variation in the Q value with height was found to fit a power function; however, in Tazhong, the Q value did not show a significant gradient change. Such phenomena are caused by the secondary sand source generated by the undulation of natural dunes. The median particle diameter of the dust lifted from the ground during dust storms was essentially the same between Xiaotang and Tazhong, ranging from 74 to 82 μm in Tazhong and from 53 to 81 μm in Xiaotang. The maximum wind speed in Xiaotang was greater than that in Tazhong, resulting in a larger Q value for each particle size range in Xiaotang. Coarse sediment grains were identified as the main factor controlling the vertical variation trend of Q, while fine particles were found to have a minor impact.
INTRODUCTION
Dust storms are serious meteorological hazards. In fact, the process of dust transport has exacerbated land desertification. Sand transport has had significant impacts on atmospheric radiation balance, climate change (Coakley et al., 1983;Sokolik and Toon, 1996;Ramanathan et al., 2001;Gautam et al., 2010;Spyrou et al., 2013), environment, air quality, and human health (Chen et al., 2004;Prospero et al., 2014;Viana et al., 2002) and has become an important part of the global biogeochemical cycle.
The Taklimakan Desert is the largest mobile desert in China and is one of the main sources of sand material transmission in China (Gong et al., 2003; Wang et al., 2005). Owing to the joint influence of the topography of the Tarim Basin and Tibetan Plateau (Zhang and Wang, 2008; Xu et al., 2014), dust storms lift dust aerosols into the air, resulting in a unique phenomenon known as persistent floating dust (He and Zhao, 1997; Zhang et al., 2007; Ma et al., 2007; Nan and Wang, 2018; Meng et al., 2019). Under the westerlies, dust aerosols diffuse to eastern China and other parts of East Asia (Liu et al., 2015; Chen et al., 2017). This diffusion has a significant impact on the climate and environment in East Asia and the entire world (Iwasaka et al., 1983; Huang et al., 2009). Long-range dust transport has been studied since the 1980s; Nickovic and Dobricic (1996) focused on the long-range transport of dust in the western Mediterranean and, for the first time, divided the dust transport process into two stages: dust mobilisation at the surface and dust lifting by turbulence. Genthon (1992) investigated the characteristics of dust storms and sea salt aerosols in Antarctica using an atmospheric circulation model. Notably, they found that the vertical distribution of atmospheric aerosols is an important parameter in numerical modelling and that the stability of the boundary layer has a considerable influence on the vertical distribution of near-surface aerosols. Based on studies performed by scholars worldwide, we sought to explore how the vertical uplift of dust aerosols in sand source areas can be realistically reflected and how the topographic relief of dunes affects the vertical transport of dust aerosols in the near-surface layer.
Since the beginning of this century, studies on dust storms and dust aerosols have become the focus of scientists in many countries; these studies have led to fruitful achievements (Zhang et al., 2009; Park and In, 2003; Tegen et al., 2002; Murayama et al., 2001; Tratt et al., 2001; Xuan et al., 2000; Che et al., 2005). With advancements in observation and analysis methods, significant progress has been made in studies on dust storms and dust aerosols from various aspects, such as synoptic analysis, climatic causes, numerical simulation, climatic effects, and environmental impacts (Zhou et al., 2002; Shen et al., 2003; Sun et al., 2003; Wang et al., 2003; Zhou and Zhang, 2003; Lei et al., 2005; Huang and Zheng, 2006; Yue et al., 2008; Chen et al., 2017; Zhou et al., 2017; Hu et al., 2019). Investigations regarding dust emission, transport, and deposition during dust storms mainly focus on: 1) estimating the amount of dust emission at the ground surface and analysing the mechanisms of dust emission and its influencing factors; and 2) simulating the processes and calculating the total amount of dust transport at high levels and dust deposition (Zhao et al., 2011). Besides, for the near-surface layer, which serves as an important link between dust mobilisation at the ground surface and dust transport within the boundary layer, the evolution patterns of the horizontal dust transport flux and the sediment particle size parameters during dust storms remain unknown. Unfortunately, related studies are rare, as observation data are difficult to collect. Thus, the variations in dust transport parameters in the near-surface layer under natural conditions are unclear. Little attention has been paid to the effects of topographic relief in the Taklimakan Desert on the vertical structure of the near-surface layer during dust storms owing to the existence of different research perspectives. However, these scientific problems must be urgently addressed.
Thus, this study sought to explore the influence of topographic relief on sand transport in the near-surface layer during dust storms. By employing a new perspective, this study aimed to provide new scientific information on the material exchange between ground and atmosphere affected by a non-flat, uniform underlying surface and a basis for improving numerical forecasting models of dust storms.
In this study, we innovatively designed an observational experiment to evaluate the variability of the near-surface vertical gradient of dust storms at two sites in the Taklimakan Desert. Accordingly, this study sought to reveal the vertical distribution characteristics of horizontal dust fluxes and particle size parameters under different topographic conditions based on the invaluable observation data. The rest of the paper is arranged as follows: Section 2 describes the observation area and the design of the observational experiment; Section 3 presents the characteristics of horizontal dust fluxes (Q) and grain-size components and the analysis of wind dynamics based on the experimental data; and Sections 4 and 5 discuss and conclude the study, respectively.
OBSERVATION SITES, INSTRUMENTS, AND DATA COLLECTION
2.1 Overview of the Field Observation Experiment
To study and demonstrate the influence of undulating dunes on the key dust particle parameters in the near-surface layer under natural conditions, we innovatively designed an observational experiment to evaluate the variability of the near-surface vertical gradient of dust storms at two sites in the Taklimakan Desert. The 80-m gradient observation system in Tazhong is at the centre point of the desert (hereinafter referred to as TZ), and the topography of its surrounding observation environment is dominated by naturally undulating dunes. The 100-m gradient system in Xiaotang is at the northern edge of the Taklimakan Desert (hereinafter referred to as XT), and its surrounding observation region has a nearly flat terrain, as shown in Figure 1.
Acquisition of the Experimental Parameters
Key observation parameters, such as the gradient wind speed, temperature, wind direction, and horizontal dust flux during dust storms, were obtained using the 80-m near-surface micrometeorological gradient observation system in TZ, the 100-m system in XT, and the BSNE sand collectors installed at different heights. Combined with the particle-size determination work performed in the laboratory, the particle size parameters of the dust samples, including the median particle diameter and grain-size component data, were obtained.
Details of the Experimental Design
Dust samples were collected between January 2018 and August 2018, the spring-to-summer dust storm season in the Taklimakan Desert. The flux observation system in TZ was 80 m high, while that in XT was 100 m high ( Figure 2).
The BSNE sand collectors that conform to international standards were adopted to measure the horizontal dust transport flux. The volume, appearance, and sand inlet size of these BSNE sand collectors were designed in accordance with international standards, and the sand inlet was 2 cm wide and 5 cm high. At the beginning of the experiment, professionals were hired to inspect the BSNE sand collection systems installed at different height levels and clean the sand collectors. On the day after each dust storm, the professionals retrieved the dust aerosol samples collected at different levels, placed the samples in sealed bags, and cleaned the sand collectors when the wind speed was less than 5 m/s (to ensure personnel safety). In the measurement step, in order to reduce error, we first measured and recorded the weight of the sealable bags. The samples were then weighed at the TZ and XT observation stations to avoid weighing errors caused by the wear and tear of sample bags during transportation and to ensure data accuracy (the sampling staff was qualified to work at the required heights).
Particle Size Determination in the Laboratory
In general, dust samples can be approximated as uniform particles (such as sand and dust particles), and their grain size can be measured using the dry method for convenience. However, the quantity of the dust samples collected using the gradient BSNE collection system was small. Therefore, the wet method was employed for the subsequent particle size tests in the laboratory to ensure measurement accuracy. The differences between the dry and wet methods were compared in a previous study (Huo et al., 2016), and the error range of the results was found to be minimal.
Near-Surface Distribution Patterns of Horizontal Dust in XT and TZ
Initially, our research team focused on the ground surface (e.g., Yang et al., 2011), while Dong et al. (2010) measured the flux of dust sediments in the near-surface layer. Huo et al. (2016) conducted a study on the characteristics of near-surface dust flux in the Taklimakan Desert. The sediment fluxes of ten dust storm events during the study period have been reported, and detailed information on the proportions of grain-size components comprising these fluxes has been provided. In this section, the variations in horizontal dust flux for all dust storm processes in TZ and XT during the observational experiment are presented. Figure 3 demonstrates the variations in horizontal dust flux with height for eleven dust storm events in XT and eight dust storm events in TZ. First, the Q value in XT gradually decreased with height, following a power function; this feature was shared by all eleven dust storm processes. Second, the Q value variation in TZ differed from that in XT, as described below. The Q value decreased with height at 2-8 m; Q increased with height in the range of 8-48 m; and the horizontal dust flux Q did not exhibit a significant variation trend with height at 48-60 m. All eight dust storm processes displayed these three characteristics.
It should be noted that, among the samples of all dust storms, five processes were widespread dust storms triggered by the same weather processes, occurring on March 3, April 2, April 27, May 24, and May 31 (as shown in Figure 3), which increases the comparability of the two observation sites. As shown in Figure 4, the eight dust storm processes in 2018 were compared with the ten processes in 2016 in TZ. These two sets of dust storm processes had almost the same characteristics, except that the near-surface (1-8 m) observations were absent for the 2016 events. The XT data for 2018 show a good power-function fitting relationship, with a coefficient of determination (R²) as high as 0.9434. Therefore, the differences between XT and TZ in the variation of horizontal dust flux Q with height during dust storm processes in the Taklimakan Desert are of particular interest.
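The power-function fit described above can be reproduced in a few lines of code. The sketch below fits Q = a·h^b by least squares in log-log space and reports a coefficient of determination; the heights and flux values are synthetic placeholders, not the XT measurements.

```python
import math

# Hypothetical heights (m) and horizontal dust fluxes Q (kg m^-2);
# these values are illustrative placeholders, not the XT measurements.
heights = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 63.0, 100.0]
q_flux = [2.5 * h ** -0.8 for h in heights]  # synthetic power-law profile

# Fit Q = a * h^b by least squares on log(Q) = log(a) + b * log(h).
xs = [math.log(h) for h in heights]
ys = [math.log(q) for q in q_flux]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_a = my - b * mx
a = math.exp(log_a)

# Coefficient of determination in log space.
ss_res = sum((y - (log_a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1.0 - ss_res / ss_tot
print(f"Q ~= {a:.3f} * h^({b:.3f}), R^2 = {r2:.4f}")
```

With real tower data, the same regression yields the fitted exponent and the R² value quoted for XT.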
Characteristics of Grain-Size Components in XT and TZ
The factors that influence the variation in horizontal dust flux during a dust storm include the physical properties of the underlying surface (grain-size components of the sand source) and the dynamic condition (wind speed). Thus, we opted to focus on these two critical factors. The three-dimensional variation patterns of the average median particle diameter and mean horizontal dust flux with height during multiple dust storms in XT and TZ are presented in Figure 5. First, in XT, the median particle diameter decreased with height, from 81 μm at 1 m to 53 μm at 100 m, and this trend was similar to the variation trend observed by Dong et al. (2010) in the flat sandy land in Minqin. Second, the median particle diameter at 1 m above ground was 82 μm in TZ and 81 μm in XT, indicating similar physical properties of their dust sources, consistent with the results of Huo et al. (2011). Third, no significant variation in the median particle diameter with height was found in TZ, with only minor fluctuations (74-82 μm) over the entire vertical profile of d(0.5). Here, d(0.5) denotes the particle size at which the cumulative particle size distribution of the sample reaches 50% (Huo et al., 2016). Evidently, the horizontal dust flux is influenced by horizontally moving dust particles. Further, greater dynamic support is required to transport the coarser and heavier dust grains as the height and particle weight increase.
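The d(0.5) statistic can be illustrated with a small interpolation routine. The size bins and cumulative percentages below are assumed for illustration; they are not laboratory values from this study.

```python
def median_diameter(sizes_um, cum_percent):
    """Return d(0.5): the diameter at which the cumulative
    grain-size distribution first reaches 50%, by linear interpolation."""
    for i in range(1, len(sizes_um)):
        if cum_percent[i] >= 50.0:
            d0, d1 = sizes_um[i - 1], sizes_um[i]
            p0, p1 = cum_percent[i - 1], cum_percent[i]
            return d0 + (50.0 - p0) * (d1 - d0) / (p1 - p0)
    return sizes_um[-1]

# Illustrative bin edges (um) and cumulative percentages, not measured data.
sizes = [1.0, 2.5, 10.0, 20.0, 50.0, 100.0, 200.0]
cum = [0.5, 1.2, 5.0, 12.0, 30.0, 62.0, 100.0]
print(f"d(0.5) = {median_diameter(sizes, cum):.1f} um")
```

Laser particle size analysers report this quantity directly, but the interpolation makes the definition concrete.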
As shown in Figure 5, the average median particle diameters d(0.5) of multiple dust storm processes in XT and TZ were extremely close. Thus, the dust sources of the samples collected during the dust storms can be concluded to be the same. Combined with the laboratory-determined particle sizes of the dust storm samples from a sequence of height levels, the coarse and fine-grained components of the dust samples were further investigated to obtain the percentages of particles with diameters less than 1, 2.5, 10, 20, 50, and 100 μm in the samples, respectively. Thereafter, the Q values for these particle size ranges can be calculated. Four additional large-scale dust storm processes that affected both TZ and XT were included to perform a thorough analysis (Figures 6, 7). Warm-colour spheres were used to represent the data for TZ, while cold-colour spheres represented the data for XT, to enable easy distinction of the results. It can be seen from Figures 6, 7 that the cold-colour spheres were larger than the warm-colour spheres in each of the particle size ranges (i.e., <1 μm, <2.5 μm, <10 μm, <20 μm, <50 μm, and <100 μm). Such findings indicate that the Q value obtained for each particle size range in XT is greater than that in TZ. Moreover, the vertical profile of the Q value for the particle size range close to d(0.5) (i.e., <100 μm) was consistent with the average results shown in Figures 3, 4, indicating relatively stable accumulation of the coarse particles with height. Coarse particles are the main factor controlling the variation in the Q value with height. The distribution of the fine particles is highly random, and the contribution of the fine particles to the variation trend of the Q value is minimal.
Frontiers in Environmental Science | www.frontiersin.org | June 2022 | Volume 10 | Article 931529
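Deriving per-size-range fluxes from a total flux and cumulative percentages is simple bookkeeping; the sketch below shows the idea. The total flux and percentages here are illustrative assumptions, not the measured values.

```python
# Split a total horizontal flux Q into the cumulative size ranges used
# above (<1, <2.5, <10, <20, <50, <100 um). All numbers are assumptions.
q_total = 0.50  # kg m^-2, hypothetical total flux at one height
cum_percent = {1.0: 0.4, 2.5: 1.0, 10.0: 4.5, 20.0: 11.0, 50.0: 42.0, 100.0: 95.0}

q_by_range = {d: q_total * p / 100.0 for d, p in cum_percent.items()}
for d, q in sorted(q_by_range.items()):
    print(f"Q(<{d:g} um) = {q:.4f} kg m^-2")
```

Repeating this at each collector height yields the per-range vertical profiles plotted in Figures 6 and 7.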
Analysis of Wind Dynamics (XT and TZ)
The curve in Figure 8 (Huo et al., 2016) has a very similar pattern to the fitted curve for the wind speed across the flux tower. Following the logarithmic profile normally expected for the atmospheric boundary layer, the wind speed exhibits a remarkable increase with height in the lower part of the surface boundary layer but does not display significant vertical variations at the upper levels. This pattern reflects the nature of a well-mixed middle and upper boundary layer. Accordingly, the average sediment fluxes share the same property as the wind speed, revealing the nature of wind-driven sand-dust transport during dust storms. As shown in Figure 9, the maximum wind speed values at the two observation sites (XT and TZ) during four dust storms were selected to explore the variation in the maximum wind speed with height. First, the maximum wind speed increased with height at the two observation sites, which is consistent with the universal wind profile. Second, the maximum wind speed values in XT were evidently higher than those in TZ during the four typical dust storm processes. Such findings indicate that XT had better wind dynamic conditions than TZ during the dust storms and that the dust particles had greater dynamic support in XT than in TZ. This also explains the greater Q value of each customised particle size range in XT than in TZ, as demonstrated in the previous section.
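The logarithmic wind profile referred to above can be sketched as follows; the friction velocity and roughness length are assumed values for illustration, not quantities fitted to the XT or TZ towers.

```python
import math

KAPPA = 0.4    # von Karman constant
U_STAR = 0.45  # friction velocity (m/s), assumed for illustration
Z0 = 1e-3      # aerodynamic roughness length (m), assumed for mobile sand

def wind_speed(z_m):
    """Neutral-stability log-law wind speed: u(z) = (u*/kappa) * ln(z/z0)."""
    return (U_STAR / KAPPA) * math.log(z_m / Z0)

# Growth slows with height: most of the shear sits in the lowest metres,
# matching the well-mixed upper levels described in the text.
for z in (2.0, 10.0, 47.0, 80.0):
    print(f"u({z:>4.0f} m) = {wind_speed(z):5.2f} m/s")
```

Inverting the same relation from two measured levels gives u* and z0, the usual way gradient-tower data are reduced.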
(The numbers after the legend entries represent the dates of the dust storms. For example, TZ0402 represents the dust storm at the TZ station on April 2, and XT0402 represents the dust storm at the XT station on April 2.) The Taklimakan Desert is one of the critical sand source regions for the upstream weather zones in China. However, the lack of a clear and systematic understanding of the dust transport conditions in the Taklimakan Desert due to scarce observation data has limited the localisation and effective application of numerical simulations of dust storms in this region. Further, owing to a lack of consideration and understanding of the topographic relief in the desert, the differences in dust transport parameters under terrain undulations are ignored, resulting in uncertainties regarding the dust transport parameters. Thus, the horizontal and vertical transport of dust particles during dust storm events must be quantified. During dust storms under natural conditions in the Taklimakan Desert, the horizontal dust flux decreases with height in XT; however, no significant change occurs with height in TZ. Notably, our observations in XT are highly consistent with those of Dong et al. (2010). The most remarkable feature leading to this commonality between the two studies is the flat sandy land employed as the observation area by Dong et al. (2010). This area has a topography similar to that of XT at the northern edge of the Taklimakan Desert (Figure 1). The main factors controlling the variation in Q include the wind dynamics W, the surface sand source (material basis) M, and the topography of the observation environment E. The M of XT was equivalent to that of TZ. The W of XT was greater than that of TZ; however, the variation trend of W with height in XT and TZ was consistent. The larger W value in XT leads to an increased proportion of coarse particles in the dust storm processes.
However, as shown in Figure 6, the d(0.5) value in XT decreases with height during multiple dust storms, while the d(0.5) value in TZ stabilises in the range of 74-82 μm. Meanwhile, the Q value in TZ increases with height at 8-48 m, and Q does not show significant variation with height at 48-60 m. The reasons for these contradictory phenomena may be related to the environmental factor E. As Huo et al. (2016) pointed out, grain size actually increases in the lower surface layer between 8 and 24 m, decreases in the middle levels, and slightly increases at the top of the tower. This pattern is partly caused by wind-driven sand-dust transport from the nearby natural dunes, a process called the "secondary sand source", which describes the sand-dust transport during dust storms in parts of the desert where large dunes and valleys exist. The previous work lacked comparability; in this study, however, combined with the observation results in XT, the conclusion that the observation results in TZ are caused by the "secondary sand source" is consistent with the 2016 results (Figures 10, 11). Earlier work also pointed out that, at the top of a dune, Q showed a significant decreasing trend with height. Figure 8 cites the 2016 results from the ten dust storm processes in TZ, with both Q and the maximum wind speed taken as averages over the seven observed heights of the ten dust storm processes. The wind speeds during four dust processes are used for comparative analysis in Figure 9. These four processes are systemic weather-induced dust storms that occurred in the same time period at the XT and TZ stations, so they are more representative.
FIGURE 9 | Variation in the maximum wind speed with height during four typical dust storm processes in XT and TZ.
Therefore, assuming that the tall tower were located at the top of a dune, the variation of Q with height might be similar to the results of our study in XT. Firstly, the TZ (50-80 m) observations show a slight decrease of Q with height. Secondly, if the top of the dune is considered a flat surface, the variation of Q would no longer be influenced by the "secondary sand source". On the contrary, if the tower were at the edge of a dune, Q below the dune height might still follow a uniform mode, while Q above the dune height would be influenced by the "secondary sand source". In the numerical modelling of dust storms, Q is an important parameter, and the vertical dust flux F is also calculated based on Q. We use tall-tower observations, unlike observations of Q near the ground or over ideal flat sand, to move beyond purely local effects. Therefore, our results can provide a new basis and reference for the calculation of parameterisation schemes in such models.
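As one example of how Q feeds the vertical flux F in emission schemes, the widely used Marticorena and Bergametti (1995) parameterisation relates the two through a sandblasting efficiency that depends on soil clay content. The clay fraction and saltation flux below are assumed values for illustration only.

```python
# Vertical-to-horizontal flux ratio after Marticorena & Bergametti (1995):
# alpha = F / G = 10^(0.134 * clay% - 6)  [cm^-1], for clay content < 20%.
def sandblasting_efficiency(clay_percent):
    return 10.0 ** (0.134 * clay_percent - 6.0)

G = 0.5     # horizontal (saltation) flux, hypothetical, g cm^-1 s^-1
clay = 5.0  # soil clay content in percent, assumed for illustration

F = sandblasting_efficiency(clay) * G  # vertical dust emission flux
print(f"alpha = {sandblasting_efficiency(clay):.2e} cm^-1, F = {F:.2e} g cm^-2 s^-1")
```

Tower-derived Q profiles such as ours constrain the left-hand side of this relation, which is one way the present data can inform scheme localisation.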
Of note, dust particles at the tops of dunes are coarser than those at the bottoms owing to long-term sorting (Lin et al., 2021; 2022). Due to the presence of the secondary sand source, more coarse particles are easily collected, which affects the variation in Q with height in TZ. The environmental factor E of undulating dunes is thus also a decisive element influencing the transport of dust particles. The secondary sand source leads to an increase in <20 μm fine particles; we calculated the mean Q values of the <20 μm fraction for multiple sandstorm samples at TZ and XT above 80 m height, obtaining 0.015 kg m⁻² for TZ and 0.080 kg m⁻² for XT. Clearly, XT collects more fine particles than TZ, so the topographic undulating conditions may affect the long-distance transmission of fine particles. This may also affect the internal cycle in the Taklimakan Desert and its surrounding areas; the effect on the long-range transport of fine sand particles needs to be corroborated by a combination of large-scale experiments and numerical models. At present, the parameterisation schemes of dust emission mainly consider the effects of wind speed, dust particle size, surface roughness length, soil moisture, and vegetation cover. Topographic relief is also one of the important factors affecting dust emission. However, the underlying surface is assumed to be a flat desert surface in the current parameterisation schemes of dust emission, and the impacts of an undulating underlying surface in a desert on the dust emission flux are ignored, leading to considerable uncertainty in the simulation results of these parameterisation schemes for the Taklimakan Desert (Marticorena and Bergametti, 1995; Shao et al., 1996; Shao et al., 2004; Shao et al., 2010; Lu and Shao, 1999; Ginoux et al., 2001; Shen et al., 2003; Klose and Shao, 2012).
Therefore, this study provides a good experimental basis and data support for the localisation improvement of the parameterisation schemes of dust emission, the accurate assessment of the regional and global contribution of local sand emission, and the development of dust storm forecasting.
The scarcity of observations in dust source areas constrains the development of dust storm models. However, we have made great efforts to record valuable observation data in environments with extremely harsh dust storms. Although the collected samples are not large enough, this task has been ongoing since 2016, and will continue. We wish to provide a good prospect for future experiments and observations and anticipate the performance of similar experiments in other deserts or dust source regions. In addition, we are delighted to share our observation data and analysis results for collaborative research on dust storm monitoring and modelling.
CONCLUSION
The variation in the horizontal dust flux with height during dust storms at the XT station at the edge of the desert was found to fit a power function. At the TZ station located at the centre of the desert, the Q value was found to increase with height at 8-48 m; however, no significant gradient change in the horizontal dust flux Q was found between 48 and 60 m. Such different distribution patterns are caused by the secondary sand source derived from the tall dunes nearby.
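The power-function fit of Q against height noted above can be reproduced, for synthetic data, by ordinary least squares in log-log space. The heights and coefficients below are hypothetical illustrations, not the observed values:

```python
import math

# Synthetic example (not the observed data): horizontal dust flux Q is
# assumed to follow a power function of height z, Q(z) = a * z**b.
z = [2.0, 4.0, 8.0, 16.0, 32.0, 48.0]       # measurement heights, m
a_true, b_true = 5.0, -0.8                  # hypothetical coefficients
Q = [a_true * h**b_true for h in z]

# A power law is linear in log-log space: ln Q = ln a + b * ln z,
# so a least-squares line through (ln z, ln Q) recovers a and b.
x = [math.log(h) for h in z]
y = [math.log(q) for q in Q]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b_fit = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
a_fit = math.exp(ybar - b_fit * xbar)
print(a_fit, b_fit)   # recovers approximately 5.0 and -0.8
```

With noise-free power-law data the fit recovers the coefficients exactly; for real tower observations the residuals of this regression indicate how well the power-function description holds at each station.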
The median particle diameter of the dust lifted from the ground was essentially the same between XT and TZ during large-scale dust storms. The vertical distribution of wind speed at the two observation sites conforms to the typical patterns of wind speed varying with height; however, the maximum wind speed in XT was greater than that in TZ, resulting in a larger Q value for each particle size range in XT than in TZ. The Q values for the particle size ranges of <1 μm, <2.5 μm, <10 μm, <20 μm, <50 μm, and <100 μm were provided herein. This information is crucial for evaluating the long-range transport of dust aerosols and their impacts on weather and climate (Shao, 2004; Alpert et al., 2006; Gong et al., 2006; Yue et al., 2010; Ghan et al., 2012; Kim et al., 2013; Feng et al., 2015). Coarse particles are the main factor controlling the variation trend of Q in the vertical direction, while the effect of fine particles is relatively minor. The trend of coarse particles with height (as in Figure 7) is similar to the total trend (as in Figure 5), for example, for particles <100 μm; in addition, fine particles <20 μm will be more involved in long-distance transport. Therefore, the contribution of fine particles to the trend is relatively small. The observation results in TZ show that the dust samples collected at observation levels with the same height as the nearby dunes were derived from multiple sand sources and contained a large proportion of coarse grains, which reflects the real variation pattern of the horizontal dust flux Q under natural conditions in the desert centre. Compared with ideal flat sand or 2 m observations close to the ground, our observations using a high tower can truly and objectively reflect the transmission pattern of Q under the undulating terrain conditions of dunes, which can improve the calculation of Q in sandstorm forecasting models, especially in important dust source areas.
Unsteady Mixed Convection Flow in the Stagnation Region of a Heated Vertical Plate Embedded in a Variable Porosity Medium with Thermal Dispersion Effects
Introduction
The mixed convection flow finds applications in several industrial and technical processes, such as nuclear reactors cooled during emergency shutdown, solar central receivers exposed to winds, electronic devices cooled by fans and heat exchangers placed in a low-velocity environment. The mixed convection flow becomes important when the buoyancy forces increase due to the temperature difference between the wall and the free stream. The mixed convection flow in the stagnation region of a vertical plate has been investigated by Ramachandra et al. [16]. When there is an impulsive change in the velocity field, the inviscid flow develops instantaneously, but the flow in the viscous layer near the wall develops slowly and becomes a fully developed steady flow after a while. For small times the flow is dominated by the viscous forces and the unsteady acceleration, but for large times it is dominated by the viscous forces, the pressure gradient and the convective acceleration. The unsteady mixed convection flow in the stagnation region of a heated vertical plate due to impulsive motion has been studied by Seshadri et al. [17]. The boundary layer flow development of a viscous fluid on a semi-infinite flat plate due to impulsive motion of the free stream has been investigated by Hall [5], Dennis [3] and Watkins [22]. The corresponding problem over a wedge has been studied by Smith [18], Nanbu [11] and Williams & Rhyne [23]. The problem of unsteady free convection flow in the stagnation-point region of a rotating sphere embedded in a porous medium has been analyzed by Hassanien et al. [7]. The unsteady flow and heat transfer of a viscous fluid in the stagnation region of a three-dimensional body embedded in a porous medium was investigated by Hassanien et al. [8].
The problem of thermal radiation and variable viscosity effects on unsteady mixed convection flow in the stagnation region on a vertical surface embedded in a porous medium with surface heat flux has been studied by Al-Arabi and Hassanien [6]. Motivated by all of the above referenced work and the significant possible applications of porous media in industry, it is of interest in this paper to consider the unsteady mixed convection flow in the stagnation region of a heated vertical plate embedded in a porous medium with a variable porosity distribution, in the presence of thermal dispersion and the effect of the buoyancy force. The unsteadiness in the flow field is caused by impulsively creating motion in the free stream and, at the same time, by a sudden increase in the surface temperature. The partial differential equations governing the flow and the heat transfer have been solved numerically using the finite difference scheme of Pereyra [14]. Particular cases of the present results are compared with previous numerical work by Ramachandra et al. [16] and Seshadri et al. [17]. The problem is formulated in such a way that it is governed by a Rayleigh-type equation at t = 0 and by a Hiemenz-type equation as t → ∞.
Mathematical analysis
Let us consider a semi-infinite vertical plate embedded in a variable porosity porous medium with thermal dispersion effects and uniform temperature T∞. At t = 0 the ambient fluid is impulsively set in motion with a velocity Ue and at the same time the surface temperature is suddenly raised. Figure (1) shows a flow field over a heated vertical surface where the upper half of the field is assisted by the buoyancy force, but the lower part is opposed by the buoyancy force. The surface of the plate is assumed to have an arbitrary temperature. All the physical properties of the fluid are assumed to be constant except the density variation in the buoyancy force term. Both the fluid and the porous medium are assumed to be in local thermal equilibrium. Under the above assumptions, along with the Boussinesq approximation, the unsteady laminar boundary layer equations governing the mixed convection flow are given by Vafai and Tien [20].
The initial conditions and the boundary conditions for t ≥ 0 are given accordingly. The indices n = 0 and n = 1 correspond to the constant surface temperature and the linear surface temperature, respectively. The variable x is measured along the surface and y is measured normal to it. The fluid velocity components u, v are in the x and y directions, respectively, as shown in figure (1). The fluid density, the dynamic viscosity, the gravitational acceleration, the thermal expansion coefficient and the temperature are denoted by ρ, μ, g, β and T, respectively. K(y) is the permeability of the porous medium, αc is the effective thermal diffusivity and ε is the porosity of the porous medium. Equations (1) through (3) are supplemented by constitutive equations for the variations of the porosity, permeability and thermal conductivity of the porous medium. It has been shown by Vafai [21] that the results obtained experimentally by Nithiarasu et al. [12] in their study on void fraction distribution in packed beds give the functional dependence of the porosity on the normal distance from the boundary, so the porosity can be represented by the exponential form ε(y) = ε₀(1 + b e^(−cy/d)), where ε₀ is the free-stream porosity, d is the particle diameter and b, c are empirical constants that depend on the ratio of the bed to particle diameter. The values of ε₀, b and c are chosen to be 0.38, 1, and 2, respectively. These values were found to give a good approximation to the variable porosity data given by Nithiarasu et al. [12] for a particle diameter d = 5 mm. The type of decay of porosity with increasing normal distance given by Equation (4) is well established and has been used extensively in studies on flow in porous media with variable porosity.
It is also established that the permeability K(y) varies with the porosity as K(y) = ε³d²/[175(1 − ε)²]. The effective thermal diffusivity of the porous medium is given by Al-Arabi and Hassanien [6] as αc = αm + γud, where αm and γ are the molecular thermal diffusivity and the mechanical dispersion coefficient, respectively. Equations (1) through (3) can be transformed into a set of ordinary differential equations by using the transformations given by Williams and Rhyne [23].
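The transformation equations of Williams and Rhyne [23] did not survive extraction here; for an impulsively started stagnation-point flow with free-stream velocity $U_e = ax$, their transformation is commonly written in the following form (a sketch from the literature — the stream-function and temperature scalings shown are standard assumptions, not this paper's exact notation):

```latex
\eta = \left(\frac{a}{\nu}\right)^{1/2} \xi^{-1/2}\, y, \qquad
\xi = 1 - e^{-\tau}, \qquad \tau = a\, t, \qquad
\psi = (a\nu)^{1/2}\, \xi^{1/2}\, x\, f(\eta,\xi), \qquad
\theta(\eta,\xi) = \frac{T - T_\infty}{T_w - T_\infty}.
```

With this scaling, ξ → 0 recovers a Rayleigh-type equation for the impulsive start and ξ → 1 (t → ∞) a Hiemenz-type steady stagnation-point equation, consistent with the limiting behaviour described in the Introduction.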
Method of solution
We now discuss the local non-similarity method used to solve equations (10) and (11). It was already shown by Pereyra [14] and Sparrow et al. [19] that, for coupled local non-similarity equations, truncation at the second level gives results whose accuracy is comparable with solutions from other methods. We therefore consider the local non-similar equations (10) and (11) only up to the second level of truncation. To do this, we introduce the new functions g = ∂f/∂ξ and φ = ∂θ/∂ξ. Introducing these functions into equations (10) and (11), differentiating the resulting equations with respect to ξ, and neglecting the terms involving the derivatives of g and φ with respect to ξ yields the second-level truncated system.
Results and discussion
In order to validate our numerical solutions, we have compared the surface shear stress f″(ξ, 0) and the surface heat transfer −θ′(ξ, 0) for the prescribed surface temperature with those of Ramachandra et al. [16] and Seshadri et al. [17]. The results are found to agree to a reasonable degree. The comparison is shown in Figures (2) and (3), which correspond to Figures (2) and (3) in Seshadri [17]. The profiles decrease as the Darcy and dispersion parameters increase for the two cases (the buoyancy-assisting flow (λ = 1) and the buoyancy-opposing flow (λ = −1)). It is also clear from these figures that the surface shear stress and heat transfer for the buoyancy-assisting flow are greater than those of the buoyancy-opposing flow. Also, the surface shear stress and the heat transfer rate increase with increasing Darcy parameter Da and dispersion parameter Ds.
Deep Learning-Inspired IoT-IDS Mechanism for Edge Computing Environments
The Internet of Things (IoT) technology has seen substantial research into Deep Learning (DL) techniques to detect cyberattacks. Critical Infrastructures (CIs) must be able to quickly detect cyberattacks close to edge devices in order to prevent service interruptions. DL approaches outperform shallow machine learning techniques in attack detection, making them a viable alternative for use in intrusion detection. However, because of the massive amount of IoT data and the computational requirements of DL models, transmission overheads prevent the successful deployment of DL models closer to the devices. As they were not trained on pertinent IoT data, current Intrusion Detection Systems (IDS) either use conventional techniques or are not intended for distributed edge–cloud deployment. A new edge–cloud-based IoT IDS is suggested to address these issues. It uses distributed processing to separate the dataset into subsets appropriate to different attack classes and performs attribute selection on time-series IoT data. Next, DL is used to train an attack detection model consisting of a Recurrent Neural Network (RNN) and a Bidirectional Long Short-Term Memory (Bi-LSTM) network. The high-dimensional BoT-IoT dataset, which replicates massive amounts of genuine IoT attack traffic, is used to test the proposed model. Despite an 85 percent reduction in dataset size made possible by attribute selection approaches, the attack detection capability was kept intact. The models built using the smaller dataset demonstrated a higher recall rate (98.25%), F1-measure (99.12%), accuracy (99.56%), and precision (99.45%), with no loss in class discrimination performance compared to models trained on the entire attribute set. With the smaller attribute space, neither the RNN nor the Bi-LSTM models experienced underfitting or overfitting.
The proposed DL-based IoT intrusion detection solution has the capability to scale efficiently in the face of large volumes of IoT data, thus making it an ideal candidate for edge–cloud deployment.
Introduction
The industrial sector has been revolutionized due to the widespread adoption of the Internet of Things (IoT). But, as a result of this expansion, attackers have become more vigilant in the pursuit of vulnerable entry points in IoT networks. Cyberattacks aimed at susceptible smart devices have increased dramatically in recent months [1]. When linked to Critical Infrastructure (CI), IoT-enabled networks are particularly vulnerable to a variety of attacks. The quality of service and security of CI systems may be negatively impacted by delays in supporting infrastructure, such as smart grids and manufacturing. Enhanced Intrusion Detection Systems (IDSs) depend on attack signatures but also employ network traffic characteristics to identify aberrant network connections, which are necessary to determine attacks over IoT devices. Many Deep Learning (DL)-inspired algorithms have been presented to provide a solution to the IDS security issue [2]. They outperform more conventional Machine Learning (ML) methods, like Support Vector Machines (SVMs), in terms of detection performance, including accuracy (96.65%), specificity (95.45%) and recall (96.68%). Because of their computational complexity and high performance, DL methods are implemented in shared or cloud-based infrastructures [3]. As IoT data are large, they must be aggregated to centralized nodes for training DL techniques [4]; this also causes delays in the detection process. Delay-sensitive CIs make DL techniques impractical for use in intrusion detection. Distributed edge-cloud frameworks are proposed as a countermeasure [5]. They can efficiently determine attacks over the edge platform, avoiding delay in the identification of malicious actions. It is possible to scale IoT devices in large domains with faster reaction times during attacks by using edge nodes to offload compute workloads from a centralized cloud node. For the edge layer to implement existing DL-based IDS approaches, either a high number of edge nodes is needed or edge traffic is moved to central nodes to train the DL algorithm [6], both of which might raise the communication overhead. The generated DL models have greater compute and memory requirements, making it impractical to deploy them on edge nodes for efficient intrusion detection because of the associated communication overhead. To deal with the high dimensionality of the IoT dataset's attributes, several solutions were suggested, using advanced DL techniques [7].
Backpropagation-based sophisticated attribute selection algorithms, although effective, may lengthen the training process and cause delays in deployment. Henceforth, a two-stage procedure is presented to streamline the construction of the DL-inspired security framework for edge computing, which will increase efficacy for the real-time platform. Notably, a multi-class issue is transformed into a binary-class problem by classifying attacks on the IoT network as time series. Figure 1 shows the IoT attack categories (Source: https://threatpost.com/half-iot-devices-vulnerable-severe-attacks/153609/ (accessed on 11 October 2023)). Then, basic attribute reduction is utilized to reduce the data needed for training the DL algorithms, including Mutual Information (MI), the Group Method of Data Handling (GMDH), and Chi-Square (Chi-Sqr). An optimized technique is used for IoT attack detection, after which the reduced datasets are uploaded to a cloud node for training the DL algorithm. These procedures provide the partitioning of DL tasks and cut down on both network lag and processing time. The BoT-IoT dataset [8] is used to test the proposed technique since it includes both benign and malicious IoT traffic. The primary contributions are as follows.
1. Detect attacks on latency-critical networks via an edge-cloud-distributed Intrusion Detection System.
2. Bifurcating temporal data into smaller subsets according to the type of attack allows for the distributed analysis of the large-scale BoT-IoT dataset.
3. With the help of attribute selection methods, the proposed technique can drastically decrease the size of the dataset without sacrificing accuracy in classifying observations.
4. Recurrent Neural Networks (Simple-RNN and Bidirectional LSTM) are used on the BoT-IoT dataset to identify attack traffic and determine the performance enhancement compared to state-of-the-art techniques.
The rest of the paper is organized as follows. The relevant literature on IoT threat detection is provided in Section 2. Section 3 details the proposed IDS framework, including the attribute selection methods and DL models used. Section 4 presents an experimental simulation for validation purposes. Section 5 concludes the paper with future research directions.
Literature Review
Spadaccino et al. [9] compiled a comprehensive overview of the usage of IDSs in IoT networks, including how edge computing is used to aid in IDS deployment. The authors identified novel problems that occur during IDS deployment in an edge situation and provided potential solutions. Particular attention was paid to anomaly-based IDSs, with a presentation of the primary anomaly-detection methods and a discussion of machine learning techniques and their application in the context of an IDS, including a description of the potential benefits and drawbacks of each. However, limitations include the limited evaluation on diverse edge computing environments and the lack of comparison with traditional intrusion detection methods. To protect the IoT from threats, Khater et al. [10] proposed a lightweight Host-Based IDS that employs the Modified Vector Space Representation (MVSR) N-gram and a Multilayer Perceptron (MLP) model. This system was implemented with the help of Fog Computing devices. The limiting aspects included the dataset used for evaluation lacking diversity and possibly not representing real-world scenarios; the proposed mechanism's performance on resource-constrained fog devices was also not thoroughly examined. To identify malicious activity in IoT networks, many methods have been suggested. Syed et al. [3] trained and classified attacks on a BoT-IoT dataset using a Feed-Forward Neural Network (FNN). The FNN model was able to identify many types of attacks with an accuracy of 97% and a high F1 score, which is used in classification tasks to evaluate the performance of a model as the harmonic mean of precision and recall. Several types of IoT attacks, however, have poorer accuracy and recall values when using the trained FNN model. An ensemble hybrid IDS was developed by Jasim et al. [11], which combines an attribute selection step based on information gain with an ensemble of ML algorithms. The experimental findings showed that the classification techniques perform much better as part of an ensemble of classifiers. As a comparison, the individual classifiers managed 91% and 89% accuracy, respectively, whereas the ensemble classifiers obtained 98.9% accuracy. Hybrid DL was introduced by Popoola et al. [7], for which the authors suggested using a Long Short-Term Memory Autoencoder (LSTMA) layer for dimensionality reduction and then cascading that layer with a Bi-LSTM layer. The suggested method used less memory and performed better than competing methods of attribute reduction. On the other hand, attribute selection techniques based on DL may be computationally intensive. To train and identify sequential network data in the cloud, Aljuhani et al. [12] developed a bi-directional LSTM DL system. When tested on IoT datasets, the suggested method achieved a high detection rate (between 91.2% and 97.9%) in identifying DoS and DDoS attack traffic. Reconnaissance attacks and data theft were among the non-DoS traffic types for which the model fared badly. To enhance the multi-layered approach that aids in identifying intrusions for IoT networks, a DL-based forensic model was suggested by Abd et al. [13]. Local and global representations of industrial IoT network traffic were captured by the proposed model's gated RNN unit. By using a gated RNN unit, the model can effectively capture both local patterns within shorter sequences of network traffic and global patterns that span longer sequences. This allows for a comprehensive understanding of the industrial IoT network traffic and enables the model to make informed predictions or decisions based on both local and global representations. Compared to centralized DL-IDS approaches, the experiments on edge nodes showed a considerable performance boost. Nevertheless, the suggested distributed approach needs a large number of edge nodes to significantly enhance detection accuracy, and its performance may degrade under heavy loads. Intrusion detection in edge-based IoT systems was suggested by Guo et al. [14]. In the first phase of the two-stage detection procedure, network traffic is binary classified using K-Nearest Neighbor (KNN) and a Deep Neural Network (DNN). Specifically, the input is classified as non-invasive or intrusive by the DNN model. Instances that the DNN fails to correctly classify are passed to KNN, which uses the Euclidean distance to determine similarity and assign a suitable class. The key caveat is that the approach has to be verified on IoT datasets before it can be fully adopted, as the suggested strategy has been tested only on non-IoT datasets. Moreover, the success of the KNN method is very sensitive to the selection of the k value. Song et al. [5] suggested an ensemble learning-based method for distributed anomaly identification in edge networks. The suggested method employs an IoT-edge-cloud architecture and Gaussian mixture-based correntropy to identify attacks on the IoT. The weaknesses include the need for attribute engineering, the use of shallow ML techniques, and the evaluation of the proposed model on datasets that are not related to the IoT. To identify cyberattacks in the Internet of Medical Things, Nayak et al. [15] suggested an edge-cloud architecture using an ensemble learning technique. In addition to requiring a time-consuming attribute engineering phase for training the algorithms, the method relies on three shallow ML algorithms for ensemble learning. Table 1 provides a summary of the literature proposing IDS approaches for IoT networks.
Research Challenges
There are several obstacles to overcome when using DL models for IoT intrusion detection. The key difficulties are the computational complexity, time delay, and bandwidth needs, as well as properly dispersing the detection duty to several worker nodes. Therefore, the multi-category IoT dataset is partitioned into distinct class datasets according to the time of arrival. To further minimize the size of the dataset, attribute selection is applied to each binary-class dataset separately. Finally, in the current paper, DL models are utilized that sort the data into normal or attack classes.
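The per-attack binary partitioning described above can be sketched in pure Python. The record layout and field names below ("ts", "category") are illustrative assumptions, not the actual BoT-IoT column names:

```python
from collections import defaultdict

# Toy flow records; "ts" is the arrival time, "category" the labelled class.
records = [
    {"ts": 1, "category": "Normal"},
    {"ts": 2, "category": "DDoS"},
    {"ts": 3, "category": "DDoS"},
    {"ts": 4, "category": "Normal"},
    {"ts": 5, "category": "Reconnaissance"},
]

def partition_binary(records):
    """Split a multi-class stream into per-attack binary datasets: each
    subset holds one attack class (label 1) plus all normal traffic
    (label 0), preserving arrival order for time-series modelling."""
    normal = [r for r in records if r["category"] == "Normal"]
    subsets = defaultdict(list)
    for r in records:
        if r["category"] != "Normal":
            subsets[r["category"]].append({**r, "label": 1})
    for attack, rows in subsets.items():
        rows.extend({**r, "label": 0} for r in normal)
        rows.sort(key=lambda r: r["ts"])     # keep the time-series order
    return dict(subsets)

parts = partition_binary(records)
print(sorted(parts))          # ['DDoS', 'Reconnaissance']
print(len(parts["DDoS"]))     # 2 attack rows + 2 normal rows = 4
```

Each resulting subset is a self-contained binary classification task, which is what allows the attribute selection and model training steps to be distributed across workers.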
Proposed Model
Figure 2 shows the proposed edge-cloud-based infrastructure for monitoring IoT traffic for anomalies. The pipeline covers data acquisition, pre-processing, attribute identification, and the training, validation, and testing of the IoT intrusion detection models, which are deployed after training using DL techniques. The proposed framework has three components: IoT devices, an edge layer, and a cloud layer. IoT devices function and communicate at the most fundamental level. Deployed at the edge layer, packet capture tools collect raw network traffic from these IoT devices. In addition, the packets are transformed into time series data at a pre-processing stage. The BoT-IoT dataset, having a wide variety of attack timestamps, is utilized. Analysis of the timestamps reveals that the vast majority of attacks have been recorded at discrete times, whereas normal data appear across the board. Henceforth, it is recommended to partition the dataset into attack-specific subsets. Therefore, training, testing, and validation on time-stamp-based dataset partitioning are performed. In addition to facilitating the distributed processing of datasets, the proposed approach employs an attribute selection methodology to drastically reduce dataset sizes before sending them to cloud instances for training. Then, the dataset is shrunk by attribute selection at the edge layer and transferred to the cloud. DL models may be trained and tested with the help of the plentiful computational resources offered by the cloud layer. At last, the effective DL models for intrusion detection are deployed in the edge layer. Moreover, the Group Method of Data Handling (GMDH) [16], Mutual Information (MI) [17], and the Chi-Sqr statistic techniques [18] are used to evaluate the effectiveness of dimensionality reduction algorithms. Furthermore, an RNN and a subclass of RNN called Bi-directional LSTM are incorporated to evaluate the attack classification performance. The detailed procedure is discussed ahead.
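The recurrent scoring idea above — an RNN reading a time series of flow features and emitting an attack probability — can be illustrated with a minimal, untrained forward pass. The feature choice and random weights are demonstration assumptions, not the trained Bi-LSTM of the proposed system:

```python
import math
import random

random.seed(0)

HIDDEN, FEATURES = 3, 2

# Randomly initialised weights -- a stand-in for a trained model.
Wx = [[random.uniform(-0.5, 0.5) for _ in range(FEATURES)] for _ in range(HIDDEN)]
Wh = [[random.uniform(-0.5, 0.5) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
b  = [0.0] * HIDDEN
Wo = [random.uniform(-0.5, 0.5) for _ in range(HIDDEN)]
bo = 0.0

def rnn_score(seq):
    """Run a toy single-layer RNN over a sequence of feature vectors
    and return an attack probability via a sigmoid output unit."""
    h = [0.0] * HIDDEN
    for x in seq:   # h_t = tanh(Wx x_t + Wh h_{t-1} + b)
        h = [math.tanh(sum(Wx[i][j] * x[j] for j in range(FEATURES))
                       + sum(Wh[i][k] * h[k] for k in range(HIDDEN))
                       + b[i])
             for i in range(HIDDEN)]
    logit = sum(Wo[i] * h[i] for i in range(HIDDEN)) + bo
    return 1.0 / (1.0 + math.exp(-logit))   # sigmoid

# Score a toy sequence of (packet_size, inter_arrival_time) features.
p = rnn_score([(0.2, 0.1), (0.9, 0.05), (0.8, 0.07)])
print(0.0 < p < 1.0)
```

A bidirectional LSTM differs from this sketch by gating the hidden state and running a second pass over the sequence in reverse, concatenating both hidden states before the output unit.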
Attribute Selection
Identifying the vital characteristics and discarding non-vital ones is a crucial part of developing a reliable framework for network intrusion detection. Using an attribute selection step before training a DL model can reduce the dataset size and the amount of computing power needed to train the model. The following sections provide further detail on the different attribute selection methods utilized in the current study.
Group Method of Data Handling (GMDH)
The GMDH [19] is an early example of a feedforward network for DL. It is part of a heuristic family of algorithms that, by identifying correlations between input attributes, may automatically create self-organizing models of optimal complexity. The algorithm then forms its internal structure without any external input. The Ivakhnenko polynomial [20] is used to describe the connection between the input variables y₁ and y₂:

z = b + c y₁ + d y₂

where each neuron layer has n variables, and b, c, and d are the respective weights. The GMDH algorithm uses inductive learning to infer the connections between variables with optimal complexity by mimicking the natural evolution process. Input connections are simplified so that the algorithm may derive more complicated relationships. Instead of the standard m input variables, the algorithm is given n(n − 1)/2 candidate pairs to predict z. In addition, the computational burden is decreased by eliminating variables or attributes that are not correlated with the output. For choosing the best attributes, GMDH considers all possible pairings of input attributes, with "best" referring to the most significant correlations between the input and output vectors. The following procedure is required to implement the GMDH algorithm:

Step 1: Two attributes are chosen at random and supplied into a single neuron.
Step 1.1: Define the pool of attributes.
Step 1.6: Repeat the process.
Step 2: The weights are estimated by comparing the training set to the current state of each neuron.
Step 3: Probabilities are computed using the training and validation datasets at each neuron.
Step 4: The most effective neurons are chosen according to some objective standard.
Step 5: Validation error and bias error are the criteria offered by the Python version of GMDH; in the current scenario, the validation error is selected.
Step 6: Users have the option of customizing the number of neurons for every layer or having it determined automatically based on the input variables.
Step 7: In the event of a validation error, reaching the maximum number of layers, or selecting a single neuron, the process stops.
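The pairwise candidate generation at the heart of the steps above (n(n − 1)/2 attribute pairs feeding candidate neurons, with the best-scoring pair kept) can be sketched as follows. The unit-weight neuron and the correlation-with-target score are simplifying assumptions for illustration, not the fitted Ivakhnenko weights:

```python
import math
from itertools import combinations

def corr(a, b):
    """Pearson correlation of two equal-length sequences (0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 0.0 if va == 0 or vb == 0 else cov / (va * vb)

# Toy attribute columns and a binary target (illustrative data only).
attrs = {
    "y1": [1.0, 2.0, 3.0, 4.0],
    "y2": [4.0, 3.0, 2.0, 1.0],
    "y3": [1.0, 1.0, 2.0, 2.0],
}
target = [0, 0, 1, 1]

# All n(n-1)/2 attribute pairs feed candidate neurons.
pairs = list(combinations(attrs, 2))

def neuron_output(p):
    # Illustrative linear neuron z = y_a + y_b (unit weights, no fitting).
    return [a + b for a, b in zip(attrs[p[0]], attrs[p[1]])]

# Keep the pair whose neuron output correlates best with the target.
best = max(pairs, key=lambda p: abs(corr(neuron_output(p), target)))
print(best)   # ('y1', 'y3')
```

In the full algorithm the surviving neurons of one layer become the inputs of the next, and the layer-wise selection criterion (here a bare correlation) would be the validation error mentioned in Step 5.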
In the current study, the GMDH technique is used to identify attributes, and those attributes are then fed into a DL model as inputs. The GMDH method is evaluated with both linear and covariance transfer functions. The linear function accumulates the variables linearly with the corresponding weights, z = w₀ + w₁y₁ + w₂y₂, while the covariance function additionally includes the cross term of the input variables, z = w₀ + w₁y₁ + w₂y₂ + w₃y₁y₂.
Mutual Information
One popular metric for selecting attributes based on their quality is Mutual Information (MI). To find the most useful subset of characteristics, attribute selection algorithms use this strategy. How much information an attribute gives about the outcome and how independent it is from other characteristics are the two factors in the goodness measure. MI is based on Information Theory concepts and measures the degree of dependency between two random variables. MI gives a measure of the information about Z rather than only detecting the linear connection between Y and Z, as would be the case with a simple linear regression analysis. Hence, the MI of Y and Z is defined as

I(Y; Z) = Σ_y Σ_z q(y, z) log[q(y, z) / (q(y) q(z))]

where the joint distribution of Y and Z is denoted by q(y, z), and q(y) and q(z) represent the marginal probability distributions of Y and Z, respectively. The definition in terms of the entropy G(·) is as follows:

I(Y; Z) = G(Y) − G(Y|Z) = G(Z) − G(Z|Y) = G(Y) + G(Z) − G(Y, Z)

where G(Y|Z) and G(Z|Y) are conditional entropies and G(Y, Z) is the joint entropy of Y and Z. The MI is zero when the two variables are independent.
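The MI definition above can be checked numerically with toy joint distributions (the distributions here are illustrative, not derived from the dataset):

```python
import math

def mutual_information(q):
    """I(Y; Z) = sum_{y,z} q(y,z) * log( q(y,z) / (q(y) q(z)) ).
    `q` maps (y, z) pairs to joint probabilities; log is natural (nats)."""
    qy, qz = {}, {}
    for (y, z), p in q.items():          # accumulate the marginals
        qy[y] = qy.get(y, 0.0) + p
        qz[z] = qz.get(z, 0.0) + p
    return sum(p * math.log(p / (qy[y] * qz[z]))
               for (y, z), p in q.items() if p > 0)

# Independent variables: q(y, z) = q(y) * q(z)  ->  MI is 0.
indep = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(indep))   # 0.0

# Perfectly dependent variables: MI equals the entropy, ln 2 nats.
dep = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(dep))     # ≈ 0.6931 = ln 2
```

The second case also matches the entropy identity above: G(Y) = G(Z) = ln 2, G(Y, Z) = ln 2, so I(Y; Z) = ln 2 + ln 2 − ln 2 = ln 2.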
Chi-Square

The independence of two occurrences or two traits may be determined with the use of statistical tests. This method of attribute selection calculates the observed and expected values of an attribute to determine whether it is effective in differentiating the target class characteristic, as illustrated ahead:

Chi-Sqr = Σ_j (P_j − F_j)^2 / F_j,

where P_j is the actual (observed) frequency of a property and F_j is the predicted (expected) frequency. The Chi-Sqr statistic quantifies the absence of independence between attribute g and output class d, where g and d are instances and attributes of a dataset, respectively. Variables such as the field length or size of attributes can be used to distinguish between benign and malicious traffic aimed at IoT devices. At times of attack, frame lengths are distributed differently than during regular traffic, which may help distinguish between malicious and benign IoT communications. The MI with the output (Z) may be calculated using heterogeneous length measures in the baseline and attack phases. Both MI and Chi-Sqr are examples of attribute selection algorithms. Attacks against IoT devices and applications may be detected by paying attention to other characteristics, including size value and header length. The reason is that, when creating an IoT application, the device is pre-programmed to deliver messages of a certain size. Hence, irregular connections in IoT settings may be detected by measuring packet and field lengths that deviate from the norm. Chi-Sqr attribute selection uses a hypothesis assessment to choose the most relevant characteristics with independence scores over a threshold. Hence, characteristics that do not rely on the target class contribute little to the classification of the instance and receive low Chi-Sqr scores as a result. In a similar vein, MI chooses the characteristics that tell us the most about the outcome variable y. The Scikit-learn [21] SelectKBest library was used to find the top K attributes that have a significant correlation with the final metric (MI and Chi-Sqr, respectively). To provide a level playing field with the GMDH technique, which chooses fewer than 15 attributes for all classes of the dataset, the value of K is set at 15.
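Assuming scikit-learn is installed, the SelectKBest procedure with K = 15 described above can be sketched as follows; the dataset here is synthetic and purely illustrative, not BoT-IoT.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a 30-attribute traffic dataset.
X, y = make_classification(n_samples=500, n_features=30,
                           n_informative=8, random_state=0)
X = MinMaxScaler().fit_transform(X)   # Chi-Sqr requires non-negative inputs

K = 15
X_mi  = SelectKBest(mutual_info_classif, k=K).fit_transform(X, y)
X_chi = SelectKBest(chi2, k=K).fit_transform(X, y)
print(X_mi.shape, X_chi.shape)
```

Both selectors keep the K highest-scoring columns, shrinking the attribute space from 30 to 15 before any DL training.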
Selection Validation
To find the best characteristics for DL, a four-stage experimental design was used. First, several types of sub-datasets are formed. Second, 20% of the dataset is sampled for ranking and selecting attributes; as computing time is expensive, this speeds up the process of determining which attributes are the most useful. Third, the sampled dataset is used as input for the proposed selection algorithms. The best characteristics of the dataset were analyzed using the GmdhPy package with its default settings; for both development and testing, the 65-35% split, which is GmdhPy's default, was used. Negative values were eliminated by performing min-max scaling on the input data for MI and Chi-Sqr. The MI algorithm in scikit-learn calculates entropy using a K-Nearest Neighbor technique, a nonparametric approach discussed in [22]. Fourth, after attribute selection is performed, the trimmed-down dataset is given to a DL algorithm to learn the boundaries between classes. The following sections provide further detail about the compared DL algorithms.
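The min-max scaling used to eliminate negative values before MI and Chi-Sqr can be written directly (a minimal per-column rescaling to [0, 1]; scikit-learn's MinMaxScaler does the equivalent):

```python
import numpy as np

def min_max_scale(X):
    """Rescale each column to [0, 1]; constant columns map to 0."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    return (X - lo) / span

X = np.array([[-3.0, 10.0],
              [ 1.0, 20.0],
              [ 5.0, 30.0]])
print(min_max_scale(X))   # [[0. 0.], [0.5 0.5], [1. 1.]]
```

After this transform every attribute is non-negative, which is a precondition of the Chi-Sqr score.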
RNN
By implementing information flow in loops from the present state to past states, an RNN enables the persistence of information. Because of their ability to take the historical status of the network traffic into account when predicting the present state, RNNs are well suited to the task of detecting network intrusions [23]. Each cell in an RNN has one input and one internal state, which propagate from one cell to the next at each time step. Certain data are passed from one time step to the next through the activation of the hidden layer. To fully process the input data, an RNN loops over T_n time steps, each of which involves computing the attributes of the previous time step. In a single RNN cell, x_t is the current data input at time t and g_{t−1} is the prior hidden state that stores the historical data. In Figure 3, the basic structure of a single RNN cell is presented; Figure 4 provides the generic RNN model, and Figure 5 represents the generic RNN framework and its associated variables. The hidden state g_t is calculated as

g_t = H(X_gx · x_t + X_gg · g_{t−1} + c_g),

where H is the activation function in the hidden layers, X denotes the associated weights, and c is the bias. Moreover, the prediction can be performed as z_t = softmax(X_z · g_t + c_z), over a combination of RNN cells unrolled across temporal steps.
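The hidden-state recurrence and softmax readout just described can be sketched as a plain NumPy forward pass. The weight names (X_gx, X_gg, X_z) follow the text; tanh is assumed for the activation H, and the random weights are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 8, 2

X_gx = rng.normal(scale=0.1, size=(n_hid, n_in))    # input-to-hidden weights
X_gg = rng.normal(scale=0.1, size=(n_hid, n_hid))   # hidden-to-hidden weights
X_z  = rng.normal(scale=0.1, size=(n_out, n_hid))   # hidden-to-output weights
c_g, c_z = np.zeros(n_hid), np.zeros(n_out)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rnn_forward(xs):
    g = np.zeros(n_hid)                    # g_{t-1}: prior hidden state
    for x_t in xs:                         # loop over T_n time steps
        g = np.tanh(X_gx @ x_t + X_gg @ g + c_g)
    return softmax(X_z @ g + c_z)          # z_t: class probabilities

z = rnn_forward(rng.normal(size=(5, n_in)))
print(z, z.sum())
```

The output z_t is a probability vector over the normal/attack classes, summing to one.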
Bi-LSTM
LSTM represents an enhanced DL technique that takes long-term dependencies into account to forecast the output class. Hence, LSTM has a greater capacity for long-term memory, allowing it to make more informed choices. RNNs have trouble learning long-term dependencies because of the time lag between receiving input and making a decision, a phenomenon known as the vanishing gradient issue. To resolve it, LSTM uses gates to forward data to the appropriate cells and to store context data for longer. Feeding input from both the forward and backward directions into an LSTM hidden layer (a bidirectional LSTM) is an effective way to boost its performance (Figure 6).
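The gating mechanism that lets an LSTM retain long-term context can be sketched as a single NumPy step with the standard forget/input/output gates; this is a generic illustration of the mechanism described above, not the Keras implementation, and the weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 3, 5

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# One weight matrix and bias per gate: forget, input, output, cell candidate.
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(W["f"] @ z + b["f"])                   # forget gate
    i = sigmoid(W["i"] @ z + b["i"])                   # input gate
    o = sigmoid(W["o"] @ z + b["o"])                   # output gate
    c_new = f * c + i * np.tanh(W["c"] @ z + b["c"])   # cell state keeps context
    h_new = o * np.tanh(c_new)
    return h_new, c_new

h = c = np.zeros(n_hid)
for x in rng.normal(size=(4, n_in)):   # forward pass over 4 time steps
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)
```

A bidirectional LSTM simply runs a second cell over the sequence in reverse and concatenates the two hidden states, which is the performance boost Figure 6 refers to.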
Completion Time
Various phases of detection were analyzed to obtain the delay of the suggested IDS method. Pre-processing, decomposing time series data, selecting attributes, and running the DL model are the primary procedures that need computational resources. The packet capture (PCAP) data are assumed to include N samples and O attributes. The complexity of the pre-processing step is O(N · O), since it must consider all N samples across all O characteristics. During the attribute selection phase, three competing techniques are evaluated with respect to their time complexity; the complexity of the ultimate detection method will vary depending on the chosen algorithm. For the GMDH algorithm, the time complexity is dominated by stages 2, 3, and 6. Probabilities are determined by combining the training and validation datasets, and an initial set of L candidate models is constructed from a training set of T samples (T < N); this step has a complexity of O(L^2). The procedure then applies an external selection criterion to the candidate models and ranks them accordingly; when P_l candidate models are created from the input characteristics, the complexity of this operation is O(P_l · log P_l). These two steps are repeated until a halting criterion is met. If the whole GMDH network has M layers, the best neurons/models from each layer are promoted to the next, so the maximum computational time of GMDH-specific attribute selection is O(M · P_l · log P_l), where M is the maximum number of layers and l indexes the initiating modules. The computational time of MI-specific attribute identification is O(N · O) for N samples and O attributes, due to the necessity of computing the joint entropy of the attribute-to-category mapping. The Chi-Sqr statistic has a time complexity of O(S · N^2), where S is the total number of random permutations and N is the total number of samples. The suggested method concludes with a study of the classification boundary between target classes using DL models. The temporal complexity of an LSTM network is O(X), where X is the number of edges in the network. RNN and LSTM networks have O inputs, Z outputs, and G hidden units, and each edge represents a weight of the network. The RNN weight-calculation complexity is therefore O(G · (O + G + Z)); since an LSTM cell maintains four gate-weight sets, the LSTM weight-calculation complexity is O(4 · G · (O + G) + G · Z). We simplify the DL process by using attribute identification and converting a multi-category output to a specific category. To further improve the performance of the DL models, hyper-attribute tuning is used to determine the optimal neuron count.
Experimental Implementation
This section provides the experimental simulation of the proposed technique for validation purposes. Moreover, the performance enhancement is estimated through a comparative analysis with state-of-the-art research works.
Conception of Experiments
Experiments were performed on a high-performance computing cluster equipped with 16 GTX 2160 Ti GPUs powered by Intel Gold CPUs at 3.10 GHz and 512 GB of memory. The RNN and bidirectional LSTM components were implemented using the TensorFlow and Keras libraries. A total of 65% of the data was set aside for training, 15% was used for validating the models, and the remaining 20% was used for testing; taking training and validation together, this follows the 80/20 division of the dataset, consistent with the Pareto principle. The validation set helps evaluate trained models objectively and gives guidance throughout the training process.
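The 65/15/20 partition can be sketched as a shuffle-then-slice helper (an illustrative implementation; the paper does not specify its exact splitting code):

```python
import numpy as np

def split_65_15_20(X, y, seed=0):
    """Shuffle, then split into 65% train, 15% validation, 20% test."""
    idx = np.random.default_rng(seed).permutation(len(X))
    n1, n2 = int(0.65 * len(X)), int(0.80 * len(X))
    tr, va, te = idx[:n1], idx[n1:n2], idx[n2:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

X, y = np.arange(200).reshape(100, 2), np.arange(100)
tr, va, te = split_65_15_20(X, y)
print(len(tr[0]), len(va[0]), len(te[0]))   # 65 15 20
```

The three index slices are disjoint and together cover the whole dataset, so no sample leaks between training, validation, and testing.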
Dataset
To estimate the effectiveness of the intrusion detection mechanism, the BoT-IoT dataset was used. The dataset was chosen because it includes both malicious and benign IoT traffic in large quantities. DDoS and DoS attacks across UDP, TCP, and HTTP, data exfiltration, reconnaissance traffic, and keylogging attacks are the attack types. In Table 2, the different types of attacks in the BoT-IoT dataset are shown, along with the times when they occurred. By splitting the cases into sub-classes comprising just one attack type plus regular traffic, the dataset is transformed into a set of binary classification problems. Raw data are translated into attributes at the packet level; IP, Frame, UDP, and HTTP fields were employed among the total of 30 fields used as attributes. To train the RNN DL system, the data collection next had to be transformed into temporal segments. The arrival time of each frame was determined by querying the epoch property. After obtaining the timestamps, the data underwent pre-processing, which included operations such as embedding to encode the categorical data; in particular, the categories existing in the dataset were used to encode the port variables. Sub-datasets, each including just attacks of a certain kind and normal occurrences as background traffic over the same period, were then created by separating and sorting individual attack instances based on packet timestamps. In summary: first, the BoT-IoT dataset is transformed into a CSV file with packet-level information. By taking the packet arrival timings into account, the dataset is then transformed into a time series. Sub-datasets are created for each attack type and organized by period, with typical occurrences in the same period included. The next phase is pre-processing, which involves the elimination of duplicates and the encoding of categorical data, such as the HTTP method. Finally, occurrences are normalized. Table 3 displays the number of cases and characteristics retrieved for each class across all periods. Binary-category data are gathered and incorporated for attribute identification, after which training, validation, and testing of the DL model are performed, one model for each attack category, with cases observed throughout its time period.
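The time-series transformation above amounts to sliding a fixed-size window over the packet stream; a minimal sketch (window size and data here are toy values, and labelling each window by its last time step is one common convention, assumed here):

```python
import numpy as np

def make_windows(series, labels, size):
    """Turn a (T, F) time series into overlapping windows of `size` steps;
    each window is labelled by its last time step."""
    Xw = np.stack([series[i:i + size] for i in range(len(series) - size + 1)])
    yw = labels[size - 1:]
    return Xw, yw

T, F = 10, 3
series = np.arange(T * F, dtype=float).reshape(T, F)   # toy packet features
labels = np.array([0] * 6 + [1] * 4)                   # normal then attack
Xw, yw = make_windows(series, labels, size=4)
print(Xw.shape, yw.shape)   # (7, 4, 3) (7,)
```

The resulting (windows, steps, features) array is the input shape recurrent models such as the RNN and Bi-LSTM expect.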
Model Formulation
We utilized the best-ranked attributes to construct a model in Keras' RNN implementation. As the input shape is affected by the attack category, the attributes for the proposed framework are determined accordingly. For the service attack category, 92 attributes with a 3 s window and 512 neurons result in 23,698 training attributes. Using the GMDH attribute selection approach, the training attributes are reduced in the initial RNN layer. The Bi-LSTM framework was developed with the Keras DL package. The activation functions tanh and softmax were selected for the hidden and dense layers of the RNN and Bi-LSTM, respectively. Using the Adam optimizer, accuracy was chosen as the evaluation measure and sparse categorical cross-entropy as the loss value. The proposed framework is depicted in Figure 4. First, the BoT-IoT temporal data were transformed into segments, which use a predetermined number of temporal instances to successively cover the complete time series; the size option determines how many time samples are used to create a window. The input characteristics are collected by an input layer in the architecture. Next, the calculation is carried out by a hidden layer made up of many neurons, and the output layer is responsible for sorting occurrences into normal and attack categories. The interconnecting weights were computed and fine-tuned across the three layers of the DL network during training. The backpropagation technique is used to update and choose the interconnection weights that result in the lowest possible loss.
Adjusting Hyper-Attributes
Experiments with different values of the RNN hyper-attributes determined which settings were most suitable for the model. Hidden layers, learning rate, dropout rate, batch size, neuron count, epochs, and window size were all taken into account during tuning. The starting point was an RNN model with three dense layers and 512 neurons in the first two dense layers, trained using the Adam optimizer's default learning rate of 0.002. Table 4 displays the range of hyper-attributes that was tested. The results show that adding more hidden layers improves the RNN model's performance, but that adding more than three hidden layers has no noticeable effect; hence, three hidden layers were used here. The optimal performance across all classes was achieved with 512 neurons. No further tweaking was performed for the dropout rate, since it did not influence the model's performance. The Adam optimizer's learning rate had a significant effect on performance, with a value of 0.0002 producing the best results; this was the value used for model training. Most runs terminated before 19 epochs; therefore, increasing the number of epochs did not improve the model's performance. To avoid overfitting, we included an early stopping condition with a patience of four iterations, which terminates the training procedure if the validation loss remains fixed or rises over subsequent iterations. The batch size affects the learning speed when working with huge datasets; testing showed that the accuracy of the RNN was enhanced by increasing the batch size to 64. To generate window data segments, the size of the window must be specified; the results indicate that the optimal window size is 4. Specifically, simulations were performed for window sizes 2, 4, 8, and 16, and window size 4 was selected based on the accuracy acquired: window size 2 registered an accuracy of 82.65%, window size 8 registered 81.01%, and window size 16 registered 78.15%. For the Bi-LSTM, similar hyper-attribute settings were used, since they resulted in better model performance, except that a batch size of 60 was used. The RNN network was designed with three hidden layers and a dense layer, whereas the Bi-LSTM network had two hidden layers and a dense layer.
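The patience-4 early-stopping rule described above can be captured in a few lines of plain Python (a sketch of the logic, standing in for Keras' EarlyStopping callback; the loss sequence is invented):

```python
def train_with_early_stopping(val_losses, patience=4):
    """Stop when validation loss fails to improve for `patience` epochs."""
    best, bad, stopped_at = float("inf"), 0, len(val_losses)
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, bad = loss, 0          # improvement: reset the counter
        else:
            bad += 1                     # no improvement this epoch
            if bad >= patience:
                stopped_at = epoch
                break
    return stopped_at, best

losses = [0.9, 0.5, 0.4, 0.41, 0.42, 0.40, 0.43]
print(train_with_early_stopping(losses))   # (7, 0.4)
```

After four consecutive epochs without a new best validation loss, training halts, which is exactly the overfitting guard the tuning procedure relies on.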
Evaluation of Outcomes
We present the evaluation findings, which detail how well the RNN and bidirectional LSTM models performed, as well as the characteristics those algorithms chose to use. Each attribute selection algorithm's findings are displayed, together with the amount of data reduction achieved thanks to the best-picked attributes, allowing a thorough assessment of the attribute selection outcomes. Confusion matrices are used to evaluate how well a model can distinguish between the normal and attack classes in a given subset of the network traffic dataset. In addition, detection performance was measured in terms of accuracy, precision, recall, F1 score, and area-under-curve metrics.
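The four scalar metrics derive directly from the entries of a binary confusion matrix; as a concrete reminder (the counts below are made up):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from a 2x2 confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)   # harmonic mean of precision and recall
    return acc, prec, rec, f1

print(binary_metrics(tp=90, fp=10, fn=5, tn=95))
```

Accuracy alone can mask weak recall on the attack class, which is why all four measures are reported side by side.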
Data Reduction and Attribute Selection
Table 5 shows the dramatic decrease in data size that resulted from attribute selection using the three techniques. The service scan category had the greatest decrease in data size, measured in megabytes (MB): in this subcategory, only the reduced data storage space is used for GMDH, MI, and Chi-Sqr, whereas the entire dataset required 1022 MB of storage space. For data theft, DDoS-HTTP, keylogging, and DoS-HTTP, the GMDH-lr and GMDH-lr-cov approaches reduced the data size by over 85%. The data size was reduced by between 85% and 95% after selecting 10 characteristics for both MI and Chi-Sqr. Our findings suggest that, by filtering out irrelevant and uncorrelated information, attribute selection approaches may significantly cut down the quantity of data needed to train and assess DL models. Table 5 also lists the top ten characteristics chosen by each algorithm. The abductive network training technique GMDH chooses the attribute sets' best predictors as the model complexity rises. Using this method, high-dimensional BoT-IoT datasets may be represented in a low-dimensional attribute space that nevertheless captures their essential characteristics. Compared with the non-linear covariance function employed during GMDH classifier training, the attribute selection findings suggest that the linear function selects fewer attributes, allowing for more data reduction. This could be because of the relatively linear connection between the dataset's input variables and its output class variables. Low-dimensional representations of the original input attribute space are also sought by attribute extraction methods, like Principal Component Analysis and embedding approaches, but these methods are difficult to interpret. On the other hand, GMDH creates candidate models and picks intermediate models of increasing complexity depending on preset selection criteria, like validation or bias error, number of neurons, and the maximum number of layers. By doing so, GMDH can objectively choose attributes, automatically picking the characteristics that have the most bearing on the target objective based on the finest examples of those attributes. The effectiveness of both full-attribute and attribute-selected RNN and Bi-LSTM models was compared over the BoT-IoT and NSL-KDD datasets. The NSL-KDD dataset (source: https://www.unb.ca/cic/datasets/nsl.html (accessed on 11 October 2023)) is a more recent version of the well-known KDD Cup 1999 dataset; it is a benchmark dataset for network-based Intrusion Detection Systems and includes various types of attacks as well as normal network traffic. RNN studies on different attack traffic subsets reveal that attribute-selected models outperform or are on par with complete attribute-based models. Experimental results show that, compared to models trained on the whole attribute set, those trained on the limited set of attributes achieved greater recall rates. Nevertheless, models trained on characteristics picked by the GMDH approach have reduced accuracy compared to other models in a few categories of attacks, including OS fingerprinting, keylogging, and DoS-HTTP. For the classification task comparison, the DL models were also compared to other machine learning models. The models were compared using a binary classification on the complete dataset without any time-based data partitioning or attribute selection. According to Table 6 (BoT-IoT) and Table 7 (NSL-KDD), the RNN and Bi-LSTM models outperformed other popular algorithms, including Naive Bayes (NB), Random Forest (RF), and Support Vector Machines (SVMs). In addition, even training using limited subsets of data points significantly increases the time required for SVM to converge and finish the training process. Table 8 shows the comparative analysis of the suggested method against various IDS frameworks. According to the findings, certain frameworks may be more accurate overall but at the expense of precision and recall. The accuracy, F1 score, and recall rates for the suggested
framework presented in [24] are excellent. To achieve quicker and more accurate edge detection, however, our work focuses on reducing the number of attributes in DL models and creating class-based sub-datasets for distributed processing. Both the full-attribute and attribute-selected Bi-LSTM models produced consistent results. Experimental results show that, compared to the model trained with all available attributes, the performance metrics of models trained with a subset of attributes are better. The Bi-LSTM loss throughout training and validation is shown in Figures 7 and 8. Both figures show that, for the vast majority of sub-datasets, training and validation lasted for more than 25 epochs but less than 60 epochs. By the third epoch, the training loss had decreased below 0.1 across all attack sub-categories. Moreover, the validation loss decreased to below 0.05 across all categories, ranging from 0 to 0.02 for DoS-HTTP and DoS-UDP, respectively. As it can take long-term temporal dependencies into account while generating choices, the Bi-LSTM model also exhibited overall superior performance to the RNN models. Overfitting and underfitting may be problematic in intrusion detection tasks, but the models trained on the reduced attribute space demonstrated resistance to both.
Training and validation loss for both the whole and the restricted attribute space of a single kind of attack are compared in Figures 9 and 10. Models trained with a smaller attribute space did not overfit or underfit, as measured by the training and validation losses, which stayed between 0 and 0.01 after the first 13 epochs.
Comparative Analysis
The findings of the deep blockchain framework (DBF) presented in [25], which uses Bi-LSTM to categorize attack traffic in the BoT-IoT dataset, were compared with those of our suggested method. Figure 11 gives a comparison of the recall rate for several attack sub-categories. We found that our strategy for identifying attack traffic had a greater recall rate than theirs for several sub-categories, such as service scanning, operating system fingerprinting, data exfiltration, and keylogging. The attribute selection procedure enhances the efficacy of DL models in identifying IoT-based threats, as shown by the results of attribute selection and DL-based categorization. Using an attribute selection step with DL has the primary benefit of drastically shrinking the dataset without sacrificing any of the useful class-discriminating information between the input and output variables. Compared with the other attribute selection algorithms, the MI algorithm's choices resulted in the greatest performance gains across several types of attacks. The area-under-the-curve metrics for each of the sub-categories are shown in Figures 12 and 13. A higher AUC suggests that a model does a better job of predicting which category each data point belongs to; for simplicity, we only provide the total AUC score when there are numerous types of attacks. Using attribute selection also helps the deployed model consume fewer computing resources. Using the stored DL models, the number of Floating Point Operations Per Second (FLOPS) needed was estimated to compare the computational needs of the models with full and reduced attribute sets. Multiplications, additions, and other batch normalization and activation function operations all contribute to the FLOPS produced. Table 9 details the FLOPS demands of the created models for different types of attacks. Empirical findings show that, across all categories, models trained with a subset of attributes need around 0.20 million fewer FLOPS. Hardware devices used at the
end-device and edge layers, which generally allow 1-99 and 100-999 million FLOPS respectively, are likewise compatible with the FLOPS recorded for the proposed technique. Moreover, following attribute selection, all models trained on the smaller dataset recorded fewer than 0.62 million model attributes, using less than 2 MB of memory. Nevertheless, low-powered IoT devices may not be able to accommodate such memory utilization due to memory limits on the order of kilobytes. In comparison to micro-controllers, the memory and processing power of most devices at the edge layer, such as access points, tiny servers, routers, gateways, and so on, is far greater. As a result, the proposed method is well suited for DL-based intrusion detection applications at the edge layer, since its low FLOPS and memory needs allow for faster detection times. To improve generalization performance, the suggested method may also be used in settings where attack detection is dispersed across a few processing nodes in the edge layer; as a result, fewer worker edge nodes and fewer computational resources are needed for intrusion detection. In addition, a cloud-edge intrusion detection framework is provided to correctly identify attacks on IoT devices by reducing the dataset size through attribute selection and outsourcing the time-consuming and difficult model training activities to the cloud nodes. The process of creating IDS may be considerably improved by adding an attribute selection phase before the DL layer, because it identifies which characteristics are crucial for spotting attacks. One of the disadvantages of using a subset of the dataset is that the accuracy value for certain categories is lower than it would be using the whole dataset. To get around this restriction, an ensemble of classifiers trained on carefully chosen attributes may be constructed.
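A rough back-of-the-envelope FLOP count for the dense portions of such models illustrates why attribute selection cuts compute (the layer sizes below are illustrative, using the paper's 512-neuron layers and a 30-to-15 attribute reduction; the 2·in·out multiply-add convention is a common estimate, not the paper's exact tooling):

```python
def dense_flops(layer_sizes):
    """Rough FLOPs per forward pass through fully connected layers:
    each layer costs ~2*in*out multiply-adds plus `out` bias additions."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += 2 * n_in * n_out + n_out
    return total

full    = dense_flops([30, 512, 512, 512, 2])   # all 30 attributes
reduced = dense_flops([15, 512, 512, 512, 2])   # after selecting 15 attributes
print(full - reduced)   # 15360 FLOPs saved in the first layer alone
```

Only the first layer shrinks, but on low-powered edge hardware even such first-layer savings, multiplied across windows and streams, matter.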
Conclusions
DL-inspired IDSs have been found to successfully recognize attack patterns. DL-based IDSs need to be placed closer to the IoT devices to decrease the time delay in detecting sophisticated attacks targeting the IoT paradigm. The adoption of DL-based IDSs near the edge IoT devices may be hampered, however, by issues such as the vast amount of IoT data and the sophisticated computing needs of DL approaches. In conclusion, we present a DL-based IDS framework for the IoT that may be efficiently implemented using an edge-cloud design. The suggested system partitions the dataset based on the time of arrival of the attack flow. The high-dimensional BoT-IoT dataset is further reduced in size by selecting the characteristics of interest. Hybrid RNN algorithms are trained and tested on the reduced dataset to categorize the occurrences as attack or regular traffic. Our findings demonstrate the usefulness of the suggested framework and its resistance to over- and underfitting. The dataset size was decreased by 85% by the attribute selection phase, which means that vast amounts of IoT data may be sent to the cloud network for DL tasks with far less impact on the network's latency. Both the RNN and Bi-LSTM models, when trained on a smaller attribute set, outperformed conventional techniques. In addition, the attribute selection process minimized the time and space needed to train the DL models. The proposed DL-based edge-cloud IDS method is well suited to detecting cyberattacks targeting IoT devices in CI due to its ability to split datasets, reduce dataset size through attribute selection, reduce the computational requirements of trained models, and provide superior detection capability.
Figure 12. Comparison of AUC of proposed RNN.
Table 5. Comparative analysis of percentage of memory reduction.
Table 6. Performance comparative analysis of ML techniques: BoT-IoT data.
Table 7. Performance comparative analysis of ML techniques: NSL-KDD data.
Table 9. Comparative analysis of number of floating point operations.
Perception of sponge city for achieving circularity goal and hedge against climate change: a study on Weibo
Purpose – Global climate change speeds up ice melting and increases flooding incidents. China launched a sponge city policy as a holistic nature-based solution combined with urban planning and development to address flooding due to climate change. Using Weibo analytics, this paper aims to study public perceptions of sponge city. Design/methodology/approach – This study collected 53,586 sponge city contents from Sina Weibo via Python. Various artificial intelligence tools, such as CX Data Science of Simply Sentiment, KH Coder and Tableau, were applied in the study. Findings – 76.8% of public opinion on sponge city was positive, confirming its positive contribution to flooding management and city branding. 17 out of 31 pilot sponge cities recorded the largest number of

© Liyun Zeng, Rita Yi Man Li, Huiling Zeng and Lingxi Song. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode. The authors thank Yunyi Mao (Panzhihua University) for his useful contributions and suggestions regarding sponge city.
Introduction
Flooding has become a global natural hazard as climate change speeds up the melting of the Antarctic ice sheet (Stokes et al., 2022). It has become a global concern in recent years (Tian et al., 2023) and adversely impacts humans and societies as urbanisation intensifies (Cheng et al., 2022). According to the World Meteorological Organization (WMO) (2022), more than 11,000 water-, weather- and climate-related disasters have been reported, resulting in more than two million deaths, an average of 115 deaths per day, and $3.64tn in economic losses over the past 50 years.
According to Zha et al. (2021), Zhao et al. (2019) and Nguyen et al. (2019), countries worldwide have implemented various water control measures since the 19th century. France incorporated the urban drainage system into its construction plan in 1852. The UK began to build an underground drainage system in 1859. Germany emphasised "zero increase in drainage" in the 1990s. Japan promoted the "rainwater retention and infiltration plan" in 1920, and Tokyo has built the world's most advanced sewer drainage system since 1992. The USA has constructed large-scale drainage systems since 1972, proposed the low-impact development model in urban construction and enforced "in-situ flood storage"; it was the first country to research stormwater regulation and storage. Besides, Australia has prevented and controlled urban waterlogging since 1975. In Bangladesh, floating gardens generate economic, social and ecological benefits in low-lying areas (Abdullah Al Pavel et al., 2014).
In recent years, China has launched a new national initiative called "sponge city", which enables cities to absorb and save rainwater like sponges (Zhang et al., 2019; Guan et al., 2021) to improve urban resilience. It is a low-impact development rainwater system (Guan et al., 2021) that aims to solve the problems of water storage and waterlogging, reduce water pollution, improve water quality and enhance water ecology (Gu and Cui, 2017; Ji and Bai, 2021).
Although sponge city has become increasingly prominent (Lin et al., 2019), it requires highly collaborative and innovative work (Gu and Cui, 2017). Sponge city construction faces many problems and controversies due to high construction costs, a lack of technical guidelines, high management costs and low management efficiency (Zhang et al., 2019). Fu et al. (2022) doubted whether sponge cities provide appropriate solutions to China's growing urban flooding problems: success needs all stakeholders' joint efforts (Gu and Cui, 2017). On the other hand, sponge city provides solutions to water hazards and disasters and improves the urban environment and human well-being. Thus, thriving sponge cities enhance urban resilience and city branding (Thadani et al., 2020). Likewise, online social media content reflects the public's perception, and content about sponge city affects a city's brand image (Thadani et al., 2020). Thadani et al. (2020) analysed the current application of China's online social media in city marketing and branding based on the sponge city plan. Besides city branding, sponge city construction requires active participation and public support. As Sina Weibo is China's leading social media platform, public opinion mining could be useful to learn more about stakeholders' perception of sponge city. Yet, research on the public perception of sponge city on Sina Weibo is scarce. To fill the research gap, this study analysed data with Python and artificial intelligence (AI) tools to study the public's foci, locations, content and sentiment on Sina Weibo. It provides policymakers with insights into the public's main concerns about sponge city.
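At its core, a sentiment-share figure like the "76.8% positive" reported in the abstract is a tally of per-post sentiment labels; a toy sketch (all labels and counts here are invented, and real pipelines would first classify each Weibo post with a sentiment tool):

```python
from collections import Counter

# Hypothetical per-post sentiment labels produced by an upstream classifier.
posts = ["pos"] * 768 + ["neg"] * 132 + ["neu"] * 100
share = {k: v / len(posts) for k, v in Counter(posts).items()}
print(share["pos"])   # 0.768
```

The same aggregation, grouped by city field, would yield the per-pilot-city breakdown discussed in the Findings.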
This paper puts forward the following research questions:
RQ1. What are the public's concerns about sponge cities?
RQ2. Are the sponge city posts optimistic?
RQ3. Has the sponge city concept been efficiently communicated among the public?
The study's objectives are as follows: to visualise and analyse the popular topics and content about sponge city on Sina Weibo; to study the sentiment of Weibo's sponge city posts and the potential of sponge city to enhance city branding; and to explore beneficial strategies for government and enterprises regarding sponge cities based on the public's perception.
This paper offers valuable information for monitoring sponge city development policies to address water scarcity and climate change challenges. Besides, it proposes that government departments address public opinion on social media, which helps them formulate policies, spread the sponge city concept and encourage residents to participate. This article consists of six sections. The literature review is described in Section 2. The research method is in Section 3; Section 4 describes the data analysis; Section 5 describes the results and discussion; and Section 6 gives the conclusion. With big data and social media analytics, this research offers a new perspective on investigating the public perception of sponge cities, providing insight to the government to improve sponge city communication and brand management.
Literature review
2.1 Global water strategies
Water problems brought by climate change are among the global challenges in pursuing sustainable urban development (Ma and Jiang, 2022). Countries use different measures to control flooding: sustainable urban drainage systems in the UK (Ma and Jiang, 2022) minimise surface water runoff and flooding risk by mimicking natural water systems such as ponds, wetlands, depressions and basins; water-sensitive urban design in Australia (Ma and Jiang, 2022) installs green stormwater infrastructure that uses facilities to reduce the impact of urbanisation and enhance cities' comfort, ecosystem and livability; and the Active, Beautiful and Clean Plan in Singapore aims to create beautiful and clean streams, rivers and lakes and provide postcard-like community spaces by integrating drains, canals and reservoirs holistically with their surroundings (Tan et al., 2019; Nguyen et al., 2019). Table 1 shows China's sponge city policies since 2014. In 2015, the Ministry of Finance, the Ministry of Housing and Urban-Rural Development and the Ministry of Water Resources of the PRC jointly issued a document to implement the pilot sponge city and explore the best approach that fits China's conditions. Sponge city has received significant attention and a positive response from local governments. In June 2015, the national government conducted sponge city construction performance evaluation and assessment from six aspects.
In 2015 and 2016, pilot sponge city construction took place in 30 cities in China. According to the Ministry of Housing and Urban-Rural Development of the PRC (2017), China's Urban Municipal Infrastructure Construction "13th Five-Year Plan" proposed that over 20% of urban built-up areas must meet the sponge cities' requirements by 2020 and reach 80% by 2030. In February 2019, sponge city was required to protect the natural ecological pattern. The goals were achieved through retention, infiltration, storage, purification, emission and utilisation (Ministry of Housing and Urban-Rural Development of the PRC, 2019). In 2022, sponge city construction was included in the "14th Five-Year Plan", which included 25 sample cities, and the central finance provided subsidies to the pilot cities (Ministry of Finance of the PRC, 2022).
The PRC government and municipal departments attach great importance to sponge city and consider the public's safety and health first. Sponge city policy purifies and beautifies cities, improves city quality and enhances the public's happiness (Wu and Qin, 2019). Table 1 shows the strategic plans in China from 2013 to 2022.
Sponge city for climate change in China
The Chinese government has attached great importance to adaptation to climate change and has introduced relevant strategies and policies. Owing to climate change, rapid urbanisation, changes in land use and rapid socioeconomic development, surface water inundation was the most severe water-related problem in many large cities in China (Chan et al., 2018), including extraordinary precipitation in Beijing, Jinan, Chongqing, Zhengzhou, etc. In sponge city research, scholars have studied the future adaptive countermeasures of different cities regarding climate change, especially urban flooding risk resilience. The analysis included Shanghai (Tong et al., 2022), Beijing, Tianjin, Shenzhen (Shao et al., 2021), Guangzhou, Chongqing, Zhengzhou (Tong et al., 2022), Jinan (Cheng et al., 2022), Xi'an (Luo et al., 2022), Nanjing (Liu et al., 2021), Wuhan (Dai et al., 2018) and Xiamen (Shao et al., 2018). Ma and Jiang (2022) studied China's sponge city's ecosystem-based adaptation to address urbanisation and climate issues. Applying the Geodesign framework as an integrated planning approach, Li and Kim (2022) analysed sponge city projects' impact on Harbin, Quzhou and Sanya, China. The results showed that current sponge city projects could improve the urban climate's warmth (Li and Kim, 2022).
Sponge city for enhancing water and resources circularity in China
Sponge city development addresses climate change and the water-related challenges of urbanisation (Ma and Jiang, 2022). It solves urban surface water flooding problems and improves urban water resource management (Wang et al., 2021b). Nature-based solutions for sponge city have been promoted as sustainable solutions for urban stormwater management and addressing the urban flooding problem (Fu et al., 2022). To improve the assessment of the hydrological cycle and sponge city's options, Jiang and McBean (2021) used the concept of "One Water" to demonstrate structured thinking about how each dimension of the hydrological cycle could be used to study the degree of interrelationships. Some scholars discussed the relationship among water elasticity, resources, treatment, ecology, waterscape and management modules based on sponge cities to solve urban water problems and improve human livability (Wang et al., 2021a). Based on the sponge city plan, six critical processes of water circularity (retention, infiltration, storage, purification, emission and utilisation) (Liu et al., 2022) are applied in more than 30 cities in China. The increase in urban greening, the expansion of urban river and lake wetlands, and rainwater resource utilisation have reduced cities' carbon emissions (Shao et al., 2018). Sponge city can benefit from circularity in water and carbon dioxide.
Social media: a missing piece in sponge city study
Social media has a considerable global user base with a diverse geographic distribution; users can quickly and easily post, comment on and repost any message; and information can be found and swiftly shared (Cheng et al., 2019). Thus, natural disaster management and prevention rely heavily on social media analytics. Moreover, four metadata fields in social media data (space, time, content and network) provide helpful information to understand a situation better and respond to disasters (Wang and Ye, 2018). This can overcome the problems of traditional approaches like questionnaires and interviews, which suffer from low response rates (Li et al., 2022).
IJCCSM
Research shows that while most people are aware of flooding hazards, they lack awareness and understanding of sponge city initiatives (Zheng et al., 2022). Because the effectiveness of implementing nature-based solutions depends on the participation of a well-informed public, researchers conducted a survey in Wuhan to identify factors that influence public perception of sponge city plans (Zheng et al., 2022). Previous surveys showed that the public's attention to, satisfaction with and acceptance of sponge cities differed depending on living environments (Luo et al., 2022). Sponge city residents are satisfied with the travel and living conditions and strongly support the local government (Luo et al., 2022). Fu et al. (2022) believed that bottom-up community-based approaches are essential to transform sponge cities into flood-resistant ones.
The role of social media in affecting public perception of sponge city has been noted to varying degrees. Taking Xi'an, China, as an example to compare awareness differences, Zheng et al. (2022) stated that social media was one of the few ways for locals to come across the sponge city concept, because it was occasionally mentioned there and is not often visible in the streets. Cheng et al. (2019) pointed out that during flooding, the public questioned the efficacy of sponge city investments on social media (Sina Weibo), negatively impacting the mitigation and recovery stages of the flooding disaster.
Sponge city has received public attention for over 10 years (Yin et al., 2021). Although the above studies highlighted people's perceptions, they adopted offline public and community data. Most focus on sponge city's construction, technical aspects and management methods. Research on public perceptions of the sentiment and foci of sponge city is scarce. Thus, there is a research gap in public perceptions of sponge city's foci and sentiment on social media.
Ecological modernisation, city branding and social media
City branding refers to the research and management of brands that represent cities, which includes the study of several branding-related concepts (Molina et al., 2017). A comprehensive understanding of the current city image is the main concern in the first step of city branding (Shirvani Dastgerdi and De Luca, 2019). According to de Jong et al. (2018), ecological modernisation city labels like resilient city and green city appear widely in academic research and policy and are frequently adopted for promoting ecological modernisation. Sustainability as an additional dominant city-branding narrative has increased in prominence as a result of the discourse on sustainable development (Rinaldi et al., 2021). Cities reflect the imperative of ecological modernisation in their branding practices by responding to ecological modernisation requirements (de Jong et al., 2018).
The way cities can and should communicate and build their local brands has changed, thanks to the internet and its tools, as social media networks allow users to generate brand content. Social media platforms may be one of the most visible aspects of online branding strategies (Molina et al., 2017).
Research method
Recent research has shown that social media is increasingly being used to respond to crises. Using big data and social media to improve flooding preparation and prevention significantly reduced flooding impacts (Chan et al., 2022). Social media is used for disaster preparation, response, mitigation and recovery (Tang et al., 2015; Carley et al., 2016; Cheng et al., 2019). Although research on using social media in times of different disasters is increasing, relevant research in China remains rare (Cheng et al., 2019).
From 1st January 2011 to 17th September 2022, the study collected Sina Weibo users' posts about sponge city (in Chinese "海绵城市") via Python 3.10. Then, KH Coder and the CX Data Science of Simply Sentiment tool were applied for the semantic analysis of Sina Weibo content. Cluster analysis uses statistical techniques, and the frequency of co-word occurrence is used to study the co-occurrence network and the relationships between various groups. A research field can be inferred intuitively from multidimensional scaling analysis, which determines the topic structure by calculating the distance between topic words (Li et al., 2023). Clustering is accomplished using KH Coder. This study also classified sentiments expressed in the posts via AI into categories such as opinions, facts, and positive and negative (Song et al., 2022). The method flowchart is shown in Figure 1.
There were 64,693 posts from Sina Weibo's users. After selecting, cleaning and deleting the data not related to "sponge city", 53,586 posts were collected. From 2011 to 2016, the government put forward the concept, methods and standards of "sponge city". Thus, sponge city microblogs increased and peaked at 8,589 blogs in 2016, with comments also recording a high in the same year (data for 2022 is incomplete). The number of likes peaked at 151,718 in 2021, which was several times higher than in other years (Figure 2).
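The selection and cleaning step described above can be sketched as follows. This is a minimal illustration only; the field names ("text") and the keyword-matching rule are assumptions for the sketch, not the study's actual pipeline:

```python
# Sketch of the post-cleaning step: keep only posts that mention
# "sponge city" (海绵城市) and drop exact duplicate texts.
# Field names are illustrative assumptions.

def clean_posts(posts, keyword="海绵城市"):
    """Return relevant, de-duplicated posts in original order."""
    seen = set()
    cleaned = []
    for post in posts:
        text = post.get("text", "")
        if keyword not in text:
            continue          # unrelated to sponge city
        if text in seen:
            continue          # exact duplicate
        seen.add(text)
        cleaned.append(post)
    return cleaned
```

Applied to the raw corpus, a filter of this kind reduces 64,693 collected posts to the 53,586 relevant ones reported above.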
Data analysis
4.1 Keywords analysis
After running pre-processing in KH Coder, there are 53,586 paragraphs and 351,375 sentences. When selecting words to analyse, keywords such as sponge city, city and sponge cities were deleted to enhance popular topic visualisation. This study also applied stop words to remove terms such as pronouns and prepositions.
4.1.1 Word frequency. This study visualised the top 300 keywords via Tableau 2021.3. The top 50 keywords with higher frequencies are shown in Table 2. In Figure 3, the font size indicates the total term frequency. Among them are water (67,087), construction (66,488), urban (55,944), road (37,952), new (37,925), development (36,049), ecological (28,855), park (28,282), project (27,752), green (27,482), area (27,387), projects (21,906) and district. Water is the most influential keyword, and the topic of water in sponge city can be classified into five aspects: water safety, resource, environment, ecology and culture. These five topics are the main factors affecting the urban water system in the urban water cycle (Table 3).
4.1.2 Co-occurrence network. Constructing a word co-occurrence network helps analyse the relationships between words that appear together in a sentence. The minimum word frequency was set as 5,000, and the minimum document frequency was set as 1. The co-occurrence network is shown in Figure 4. Thirteen clusters focused on three topics: (1) sponge city's construction; (2) ecological environment and urban-rural development; and (3) management by municipal departments and enterprises.
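The co-occurrence counting behind a network like Figure 4 can be sketched as follows. This is an illustrative reimplementation, not KH Coder's algorithm, and the threshold below is a toy value rather than the study's 5,000/1 settings:

```python
# Count how often each pair of keywords appears in the same sentence,
# then keep only pairs above a frequency threshold - the edge weights
# of a word co-occurrence network.
from collections import Counter
from itertools import combinations

def cooccurrence(sentences, min_pair_count=2):
    """sentences: list of token lists. Returns {(word_a, word_b): count}."""
    pairs = Counter()
    for tokens in sentences:
        # sorted set: count each unordered pair once per sentence
        for a, b in combinations(sorted(set(tokens)), 2):
            pairs[(a, b)] += 1
    return {p: n for p, n in pairs.items() if n >= min_pair_count}
```

The surviving pairs form the edges of the network; clustering the resulting graph then yields groups like the thirteen clusters described above.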
The most significant cluster comprises water, construction, urban, new, green, area, road, project, district, build, square, meters, kilometres and space. It is represented in dark blue. The largest cluster also links to the second cluster, in orange, which includes development, management, promotion, housing, urban-rural and system. The biggest cluster also relates to the third cluster, in green, which includes ecological, park, environment, improve, quality and improvement.
4.1.3 Parts of speech. Figures 5 and 6 show the most frequent adjectives and verbs. "Municipal" indicates that many blogs are related to governments' systematic planning, reports, commitments, implementation, projects, etc. The word "natural" was usually mentioned in phrases such as "natural resource", "natural sponge", "natural surroundings", "natural accumulation", "natural purification", "natural process", "natural ecological security", "natural storage", "natural ecosystems", "natural disasters", "natural aquatic organisms" and "natural cycle of water". Natural methods construct a sponge city, based on "natural ecological security" and "natural surroundings".
Content analysis
This study examined the annual most-liked blogs related to sponge city from 2013 to 2022. As mentioned in Section 3, the most-liked posts occurred in 2021.

Table 3. Detailed information about "water":
- Water safety ("purify water", "water control", "water treatment", "water replenishment", "water storage", "water collection", etc.): The shortage of drinking water resources and the severe decline of agricultural water and groundwater have issued serious warnings. For example, User A posted a blog about developing sustainable circular agriculture and sponge city design, as well as vigorously improving the water storage function of reservoirs and canals, which are especially important in Jinjiang. User B suggested that sponge city refers to a city that is like a sponge and has good "elasticity" regarding environmental changes, adaptation and reaction to natural disasters; it absorbs, stores, infiltrates and purifies water when it rains.
- Water resource ("circular water resources", "water resources", "renovation of water supply", "water system", etc.): User C posted, "Chinese water resources are in short supply, and there is a huge demand for water resources. Rainwater collection and utilisation technology can greatly alleviate the consumption of water resources such as municipal and irrigation water, and can also reduce environmental problems such as urban waterlogging and ecological balance damage". Likewise, another user said that the construction of sustainable and circular water resources is indeed significant.
- Water environment ("water accumulation point", "regional water elasticity", "water catchment areas", "water body", etc.): User D says we should further promote the classification of domestic waste and urban landscaping and eliminate black and smelly water in built-up areas of cities above the prefecture level, making the urban environment more liveable.
- Water ecology ("ecological water restoration", "water conservancy", "sustainable water", "water conservation", "water balance", "water permeability", etc.): User E said there is no water accumulation in summer, and the road surface is bonded with natural stones and colloids, with a water permeability of 70%; there are many green plants in Guixi ecological park, the design is excellent, and it effectively reduces the heat island effect in the area. User F said the project adopts the concepts of hydrophilic design, living water and good water from Singapore's ABC water plan, combines the advanced experience of Fengxi new town, builds according to local conditions with harmonious human and ecological water restoration and comprehensive management, and is implemented by local government divisions. (continued)
Likes for sponge city microblogs peaked at 100,390 in 2021 (Table 4). Table 4 describes good news about China Construction Second Bureau Qilu Branch winning a provincial engineering construction method award in Shandong Province. The user shared sponge city knowledge and the "PDS Anti-siphon Drainage Collection System Construction Method". This kind of knowledge sharing is popular with the public on Weibo and benefits the public, sponge city managers and governments.
On the contrary, some negative blogs attracted many likes (Table 4). Sheng**'s blog, with 2,623 likes, reported that the central investigation team revealed that Zhengzhou spent ¥50bn on the "sponge city" project, but only 32% of it was used in related projects. He added that it did not seem to have any effect during heavy rain. But many experts pointed out that such views were biased. The severe rain disaster in Henan was very rare: one-third of the previous year's rainfall fell within one day. The rainfall exceeded the sponge city's capacity to deal with it and had nothing to do with the sponge city itself. The recurrence period of the waterlogging prevention and control design in Zhengzhou matches the city scale; designing for such an extreme situation would cause serious waste of resources. Gu**'s blog, with 4,702 likes, mentioned Beijing's rainwater in 2020, a case similar to Zhengzhou's. A blog with 921 likes was posted in 2016: since the second half of 2015, due to the sponge city project, rail transit construction and elevated road construction in Jinan City, roads were frequently repaired, resulting in extreme traffic congestion for a long time, and many Jinan citizens were angry. People suggested that "Jinan should be blown up and rebuilt". Despite the controversy, the Jinan Municipal Government did not remove the posts but actively responded to public comments.
Publishing location analysis
Among 286 locations, this study selected the top 30 sites with higher frequencies. The 30 popular sponge cities on Weibo are shown in Figure 7. Beijing (203), Xi'an (157) and Zhengzhou (102) ranked in the top 3, and 17 of the 30 cities with the largest numbers of Weibo posts are pilot sponge cities. Special funds from the central government subsidise the 30 pilot cities; Beijing became a pilot sponge city in 2016. Besides Beijing, the top 30 sponge cities with the largest numbers of Weibo posts include Wuhan (71), Shanghai (62), Shenzhen (58), Pingxiang (56), Chongqing (53), Suining (49), Guyuan (48), Tianjin (45), Changde (45), Nanning (40), Qingyang (39), Jinan (38), Hebi (36), Chizhou (35), Baicheng (35) and Zhuhai (24), all pilot sponge cities in China.
Table 3 (continued). Contents about water, related keywords and examples from users:
- Water culture ("water landscape", "water recreation"): User G suggested we "make water conservation a habit in the capital city". User H proposed including water culture in education: "Fine arts and environmental design graduation exhibition: relying on the original pattern for landscaping and functional arrangements, and then extracting the site's history and culture memory, integrated into the design details, it forms a coherent campus ecological network system based on water, and creates a place for teachers, students and residents to watch, rest and play".
Source: Created by author; https://m.weibo.cn/
The other cities with higher frequencies are not pilot sponge cities, such as Xi'an (157) and Zhengzhou (102). These cities have suffered severe rain and flood disasters in recent years, including the 7.20 Zhengzhou heavy rainstorm in 2021 and the 7.24 Xi'an heavy rainstorm in 2016. Thus, these cities' citizens posted more microblogs about sponge cities. Cities with sponge city practices and construction, or cities that have experienced rainstorms and urban flooding, may become popular and influential on Weibo in the context of sponge city research. People living in sponge cities publish more blogs about sponge city than people living in traditional cities.
Sentiment analysis
To explore the public's attitude towards sponge city, the CX Data Science of Simply Sentiment tool analysed the 53,586 posts from Weibo. A total of 58% were positive (+ve) (Figure 8), and only 14% were negative (-ve). Positive sentiment accounted for 76.8%, about four times higher than negative sentiment (18.5%, Figure 9), confirming that sponge city might help city branding online. Fu et al. (2022) held that there is a limit to how much rainfall sponge cities can absorb and that they are unlikely to be a panacea for urban flooding problems. Uncertainty in sponge city design and planning and insufficient funding are the most severe issues that can lead to the failure of the sponge city concept (Nguyen et al., 2019). Besides, some people post on Weibo about sponge city with negative attitudes (Section 5.2). Most public users were optimistic about the sponge city initiative. Positive sentiments included "improve", "comprehensive", "good", "high quality", "beautiful", "support", "improvement", etc. (Figures 10 and 11), while negative sentiments included "waste", "problems", "pollution", "problem", "epidemic", "disaster", etc. (Figures 10 and 12). These negative sentiments might provide useful insights for future sponge city plan-making or strategies.
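A toy lexicon-based tally illustrates the idea behind such sentiment classification. The study itself used the CX Data Science of Simply Sentiment AI tool, not word lists; the lexicons below are small assumed samples drawn from the terms reported in Figures 10-12:

```python
# Minimal lexicon-based sentiment sketch: score a tokenised post by
# counting positive and negative lexicon hits. Illustrative only.
POSITIVE = {"improve", "comprehensive", "good", "beautiful", "support"}
NEGATIVE = {"waste", "problems", "pollution", "epidemic", "disaster"}

def label(tokens):
    """Return 'positive', 'negative' or 'neutral' for a list of tokens."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Tallying such labels over the corpus gives aggregate shares comparable in form (though not in quality) to the 76.8% positive and 18.5% negative figures reported above.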
Results and discussion
The sponge city is closely related to climate issues, especially urban flooding and rainwater hazards. Climate change, increased urbanisation and ineffective urban planning regulations have resulted in water-related issues in numerous countries, including flooding hazards, water pollution and water scarcity (Nguyen et al., 2019). Climate change and related issues were frequently mentioned in sponge city posts. This is related to people's expectations that more hazards will emerge: cities will face more severe and complex climate change risks, impacting human health, economic development and ecosystem services (Zhai et al., 2019). Likewise, in China, climate change and sustainable urban development are extremely prominent issues (Zhai et al., 2019). Modern urban flood management includes engineering methods like infrastructure construction as well as public opinion mining to better understand perceptions, ensure better public participation via social media and enhance flooding preparation and response (Lu et al., 2022).
To answer the first research question, the word frequency results indicated that Weibo's users focused on "water". The co-occurrence networks reflected that "water" remained the most critical issue in sponge city construction. A specific urban water management strategy for a sponge city is a complex approach that faces many challenges (Nguyen et al., 2019). This indicated that water is an essential topic related to sponge cities in public perception. The public focuses on five aspects: water safety, resources, environment, ecology and culture. The former four factors are related to urban water circularity. The essence of sponge city construction should be classified into the category of comprehensive improvement of the five elements of urban water. We need to consider the natural water cycle and social cycle as a whole and construct a sponge city based on national conditions.
To answer the second research question, this study analysed the annually published blogs with the most likes from 2013 to 2022; the most popular blog, in 2021, shared knowledge about sponge cities. The sentiment results indicated that 76.8% were positive, meaning that sponge city might be helpful in building a positive city image and enhancing city branding. Nevertheless, some blogs criticised certain sponge cities (such as Beijing, Jinan and Zhengzhou) per Table 4 and attracted many likes. For example, one user criticised the sponge city in Zhengzhou but ignored that the rainstorm disaster in 2021 was very rare and that the heavy rainfall exceeded the sponge city's capacity.
According to Thadani et al. (2020), given the significant impact of online social media on brand image, the PRC has used online social media for urban project promotion like sponge city. That can also encourage public participation in sponge city design and construction and advocate for all stakeholders to work together on water resource cycle and climate change projects. Public involvement in the sponge city design and construction process can be incorporated into the government's strategic planning and city brand building. Besides, while the government might collect public opinion through social media, it is also important to note that some social media content might be biased (Finch et al., 2016).
To answer the third research question, the parts of speech showed that many blogs are related to governments' systematic planning, reports, commitments, implementation, projects, etc. The content analysis results reflected that the public, sponge city managers and governments could benefit from sponge city information on Weibo. It proved that the sponge city concept has been communicated to the public. In addition, people who live in sponge cities are more likely to post related blogs. According to questionnaires conducted in offline communities by Luo et al. (2022), residents of sponge cities are generally satisfied with the travel and living conditions and strongly support the local government. The online public's perceptions are affected by differences in living environments. Many who wrote about sponge cities live in places with frequent and severe rainstorms and flooding; they are more likely to learn about or even support the sponge city initiative than people elsewhere. As more young people use social media like Weibo, the results may suggest that young people support the sponge city movement. Moreover, sponge city-related enterprises may pay attention to Weibo for better branding.
Conclusion
6.1 Theoretical implications
To assess sponge city's limitations and opportunities, this article critically assesses social media (Weibo) users' awareness and perceptions of sponge cities. It examines sponge city's popular topics, publishing locations and the public's sentiment about sponge cities on Weibo. The study revealed the popularity of sponge city on Weibo, contrary to experts who pointed out that sponge city lacks public participation (Wang et al., 2022). Compared with traditional research methods such as offline surveys of the public and communities, this approach captures more critical reviews, which can improve policies and regulations to incorporate social goals and include the public in the sponge city-building process.
Practical implications
The study identifies the public's foci and perceptions, especially the five "water" strategies and circularity (safety, resources, environment, ecology and culture), for governments and city managers. These results will be beneficial in sponge city decision-making. Moreover, the shift of scholarly attention from city marketing to city branding opens a new era in the representation and meaning of city branding (Thadani et al., 2020). Government can increase online social media publicity and education to cultivate people's awareness of water conservation and make water conservation everyone's action. Related departments can let the public learn about and participate in sponge city construction. They can also encourage the public on Weibo to actively respond to urban climate change issues, strengthen adaptation and mitigation measures and choose development paths to enhance the city's climate adaptation via the sponge city plan.
Limitations and future research directions
Although public opinion collection via social media can be faster than questionnaires, the sample data may mainly cover young people and miss older people who do not use such platforms.
Further research might focus on older people's opinions on sponge city by using traditional research methods like Delphi interviews. Besides, it may be possible to apply the same approach to study other topics like smart and low-carbon cities and ecosystem projects that lay a solid foundation for improving carbon sink capacity.
[Figure captions: Figure 1. Method flowchart; Figure 3. The top 300 keywords with high frequencies; Figure 4. Weibo co-occurrence network in sponge city; Figure 6. Most mentioned verbs in sponge city; Figure 8. Net sentiment; Figure 11. Positive sentiment on sponge city.]

Shanghai (62), Shenzhen (58), Pingxiang (56), Chongqing (53), Suining (49), Guyuan (48), Tianjin (45), Changde (45), Nanning (40), Qingyang (39), Jinan (38), Hebi (36), Chizhou (35), Baicheng (35) and Zhuhai (24) are all pilot sponge cities in China.

According to the Ministry of Water Resources and the Ministry of Emergency Management of the People's Republic of China (PRC), flooding in China caused about 1,974 deaths or missing persons annually on average, and more than 60,000 deaths from 1991 to 2021, with an average annual direct economic loss of about 163.2 billion CNY and approximately ¥5.05tn in total (Ministry of Emergency Management of the PRC, 2022). China wishes to transform 80% of built-up areas to sponge cities by 2030. China's strategic plan at the national level regarding sponge city includes notices, commitments, systematic planning and implementation.
Table 2. Top 50 keywords with higher frequencies. Water is the most influential and popular keyword on Weibo; the topic of water in sponge city on Weibo can be classified into five aspects (Table 3).
Table 4. Examples of annual most-liked sponge city blogs:
- Citizens suggested that "Jinan should be blown up and rebuilt", and the Jinan Municipal Government replied. Since the second half of 2015, due to the sponge city project, rail transit construction and elevated road construction in Jinan City, roads have been frequently repaired, resulting in extreme traffic congestion for a long time that made many Jinan citizens angry.
- On 6th Jan., the Municipal Government Public Information Network released the "Implementation Plan for the Improvement of 'Clear Water and Green Banks' in the Main Urban Area of Chongqing" ... meet the requirements of sponge city planning indicators, and the green coverage rate of the green buffer zone will reach more than 80%.
- Shandong Provincial Engineering Construction Method: the PDS anti-siphon drainage collection system collects infiltrated water and uses it as water for garden irrigation to save water, improve the urban ecological environment and improve urban flood control, drainage and disaster reduction ... Sponge city follows the six-character policy of "infiltration, stagnation, storage, purification, use and drainage" ...
- 2022, Sheng**: some time ago, the central investigation team revealed that Zhengzhou spent 50 billion yuan on the "sponge city" project, but only 32% was used on related projects. During heavy rain, it did not seem to have any effect.
"Environmental Science",
"Computer Science"
] |
Thermal conductance between water and nm-thick WS2: extremely localized probing using nanosecond energy transport state-resolved Raman
Liquid–solid interface energy transport has been a long-term research topic. Past research mostly focused on theoretical studies while there are only a handful of experimental reports because of the extreme challenges faced in measuring such interfaces. Here, by constructing nanosecond energy transport state-resolved Raman spectroscopy (nET-Raman), we characterize thermal conductance across a liquid–solid interface: water–WS2 nm film. In the studied system, one side of a nm-thick WS2 film is in contact with water and the other side is isolated. WS2 samples are irradiated with 532 nm wavelength lasers and their temperature evolution is monitored by tracking the Raman shift variation in the E2g mode at several laser powers. Steady and transient heating states are created using continuous wave and nanosecond pulsed lasers, respectively. We find that the thermal conductance between water and WS2 is in the range of 2.5–11.8 MW m⁻² K⁻¹ for three measured samples (22, 33, and 88 nm thick). This is in agreement with molecular dynamics simulation results and previous experimental work. The slight differences are attributed mostly to the solid–liquid interaction at the boundary and the surface energies of different solid materials. Our detailed analysis confirms that nET-Raman is very robust in characterizing such interface thermal conductance. It completely eliminates the need for laser power absorption and Raman temperature coefficients, and is insensitive to the large uncertainties in 2D material properties input.
Introduction
Thermal transport across a solid–liquid interface is a topic of ongoing research due to its various applications in micro/nanoscale thermal transport, such as evaporation cooling and energy conversion, 1-5 thermal management, 6-8 ultrafast flow delivery, 9 cancer treatment, 10 solar thermal heating, 11 and nanofluids. 12,13 Continuum-based interface thermal resistance (ITR) models describe this resistance as an interruption of phonon propagation in a crystalline lattice. This is due to the difference in the speed of sound between two materials, which leads to a mismatch in acoustic impedance. 14 The Acoustic Mismatch Model (AMM) and the Diffuse Mismatch Model (DMM) are the main models used to explain this mismatch across a solid–liquid interface and have been used widely for the theoretical calculation of interface thermal transport. 15 The AMM neglects phonon scattering at the interface, while the DMM considers diffuse phonon scattering across the interface. 16,17 AMM and DMM predict high and low interface thermal resistance, respectively, which provide upper and lower limits for the interface thermal resistance. However, these two models do not consider surface complexities and solid–liquid interaction strength. Molecular dynamics (MD) simulation is an alternative method for studying ITR theoretically without continuum-based governing equations, and it is capable of studying several factors that can affect the ITR, such as surface wettability. Note that in some calculations the Kapitza length l_K is used to represent the ITR quantitatively. l_K is defined as l_K = R_K·k, where R_K is the ITR or Kapitza resistance and k is the thermal conductivity of one of the phases, usually the liquid. Barrat et al. studied the dependence of R_K on wetting properties using non-equilibrium MD simulation as a function of the interaction coefficient (c_12) of the Lennard-Jones equation and under normal pressures.
Their results showed relatively large values of R_K when the liquid does not wet the solid (small c_12 values). 18 They reported that l_K decreased from 50 nm to less than 10 nm as the c_12 coefficient increased from 0.5 to 1. Kim et al. investigated the interface thermal transport between parallel plates separated by a thin layer of liquid argon using a 3D MD simulation employing 6-12 Lennard-Jones potential interactions, and studied l_K as a function of surface wettability, thermal oscillation frequency, wall temperature (from 80 to 160 K), and channel height. They assumed that the solid molecules had the same mass as the argon molecules. Their results indicated that l_K varies from 1 to 10 nm under several scenarios. 19 Similar results were reported by Giri et al. and Vo et al. regarding the effect of interaction strength on thermal boundary conductance. 20,21 In another work, R_K was reported in the range of 5 × 10⁻⁸ to 4 × 10⁻⁷ m² K W⁻¹ using non-equilibrium MD simulations of liquid–vapor Ar mixtures adjacent to warmer Fe walls. 22 Murad et al. studied the ITR between Si and water using MD simulation, and found that R_K decreases from 5 × 10⁻⁶ m² K W⁻¹ to 3 × 10⁻⁹ m² K W⁻¹ when the temperature increases from ~350 K to ~550 K. 23 In the work by Shenogina et al., it is reported that the Kapitza conductance is proportional to the work of adhesion, and for a highly hydrophilic surface it can be up to ~160 MW m⁻² K⁻¹. 24 Barisik et al. performed MD simulations of heat conduction in liquid Ar confined in Ag nano-channels and reported that R_K can vary from 0.8 × 10⁻⁹ to 5 × 10⁻⁹ m² K W⁻¹ from cold to hot surface temperature, respectively.
25 In another work they utilized MD simulations to study the ITR at Ar–Ag and Ar–graphite interfaces, and concluded that l_K increases with increased wall temperature and is three times larger at an Ar–graphite interface than at an Ar–Ag interface, due to the difference between the interaction potentials of the molecular pairs in the two cases. 16 While the last two works were conducted at generally low temperatures (~130 K), Barisik et al. conducted other MD simulations and reported that l_K at a Si–water interface in a higher temperature range (above room temperature) decreases slightly with increased wall temperature, and is on average around 9 nm. 26 The pressure dependence of ITR at Au–water and Si–water interfaces was studied using MD simulations by Pham et al. 27 Their results revealed that the pressure dependence of l_K depends on surface wettability. The l_K of the Au–water (hydrophobic) interface was stable despite increasing water pressure, while it changed significantly across a Si–water interface (hydrophilic). Han et al. drew the same conclusion that ITR increases with liquid pressure enhancement through an MD simulation of n-perfluorohexane in contact with gold. 28 The ITRs of several linear alkane liquids in contact with gold were obtained using non-equilibrium MD by Bin Saleman et al. They found that ITR is directly proportional to the number of carbon atoms in an alkane molecule and on average is ~1.5 × 10⁻⁷ m² K W⁻¹. 29 The discussion above has mostly focused on theoretical works, especially MD simulations. Unfortunately, there are only a few experimental works in the field of solid–liquid ITR measurement to compare with those calculated values. In 2002, M.
Wilson et al. investigated the thermal interface conductance between Au, Pt, and AuPd nanoparticles suspended in water or toluene. They found a thermal conductance (G) of 130 MW m⁻² K⁻¹ for the interface between citrate-stabilized Pt nanoparticles and water by heating the particles with a 770 nm optical laser and interrogating the decay of their temperature through time-resolved changes in optical absorption. 30 In their next work, the effect of the organic stabilizing group on the G of AuPd particle–water and AuPd particle–toluene interfaces was studied with a similar technique. 31 Two conclusions were reached in that work: (1) the values of G of the particle–water interface under different stabilizing groups were on the order of 100-300 MW m⁻² K⁻¹, which means that G is large regardless of the self-assembled stabilizing group, and (2) the G of an AuPd particle–water interface was larger than that of an AuPd particle–toluene interface, which indicates the effect of the liquid phase on ITR. In another work, Ge et al. performed a similar time-domain thermoreflectance technique and studied the effects of surface wettability on l_K using Au- and Al-based surfaces. The results indicated that l_K at hydrophobic (Al) interfaces (10-12 nm) is a factor of 2-3 larger than l_K at hydrophilic (Au) interfaces (3-6 nm), which is in agreement with MD simulations. 32 Park et al. reported ITR studies for a system of Au nanorods immobilized on a crystalline quartz support and immersed in various organic fluids, by heating the nanorods with a sub-picosecond optical pulse and monitoring their cooling process by transient absorption. 33 They found thermal conductances of the nanorod–fluid interface of 36 ± 4 MW m⁻² K⁻¹, 32 ± 6 MW m⁻² K⁻¹, 30 ± 5 MW m⁻² K⁻¹, and 35 ± 4 MW m⁻² K⁻¹ for methanol, ethanol, toluene, and hexane, respectively. This indicates that G drops significantly as water is replaced by an organic fluid.
Using a similar technique, it was reported that the G of Au nanodisks coated with a hydrophilic self-assembled monolayer varies over 90-190 MW m⁻² K⁻¹, depending on the amount of water in the liquid mixture. For hydrophobic surfaces, G is in the range of 70 ± 10 MW m⁻² K⁻¹. This was attributed to the effects of the work of adhesion on interface thermal conductance. 34 Raman spectroscopy has proved to be a powerful tool for studying thermal transport at micro/nanoscales. Several works have been reported that show the potential of this tool to investigate the thermal conductivity and hot carrier diffusion coefficient of 2D materials, such as graphene 35,36 and transition metal dichalcogenides (TMDs). [37][38][39][40] Raman spectroscopy is able to measure the ITR of solid–solid interfaces, as well as the aforementioned properties. Yuan et al. reported the interface thermal conductance between few-layered to multi-layered MoS2 films and Si, and showed that G increases from 1 to 69 MW m⁻² K⁻¹ with an increased number of layers of the MoS2 thin film. 41 They reported other works that successfully measured the ITR between thin layers of TMD materials and a glass or Si substrate. [42][43][44] Raman spectroscopy based techniques have the advantage of being non-contact, non-invasive, and material-specific, leading to higher accuracy of measured parameters.
In this work, for the first time, the interfacial thermal conductance (G_int) between de-ionized (DI) water and a WS2 nm-thick film is measured using a novel nanosecond energy transport state-resolved Raman (nET-Raman) technique (Nanoscale Adv., 2020, 2, 5821-5832; © The Royal Society of Chemistry 2020). Each WS2 sample is suspended over a hole and immersed in a water bath. Using this experimental structure, the WS2 film is in contact with water from the top, while its other side is isolated thermally by air inside the hole. Interfacial thermal transport between solid and liquid is characterized here for three samples of different thicknesses. The measured G_int is compared and verified against other literature values based on both experimental and MD methods. It is shown in detail that the accuracy of the measurement can be improved by using shorter laser pulses as the transient part of the Raman thermometry. Also, it is proved that uncertainties in the laser absorption coefficient, the Raman temperature coefficient, and the values of thermal properties of the WS2 film in theoretical calculations do not downgrade the precision of characterization. In the following, the feasibility and capability of this method are explored in detail.
Sample preparation
Two different sizes of holes are made in an Si substrate using FIB to prepare the suspended samples. One of the holes is circular with a diameter of 10 µm and the other is square with a 22 µm side length. Fig. 1 shows the cross-sectional view of the hole that is used to suspend the sample on top of it. Then, three nm-thick WS2 flakes are prepared using the mechanical exfoliation method from bulk WS2, which guarantees the quality and crystallinity of the layers. Mechanical exfoliation makes it possible to prepare several samples of different thicknesses depending on the force applied to the bulk sample. Finally, these samples are transferred to the holes using gel-films and a 3D micro-stage. More details of this process can be found in our previous work. 45,46 The Si substrate with the WS2 film on top of it is mounted on a stage inside a glass container. This container is filled with DI water. Using this setup, the WS2 film is in contact with air from the bottom, while touching the water on top (Fig. 1). Comparing the heat transfer on both sides of the WS2 layer, this design guarantees that heat transfers to the water as much as possible and maximizes the effect of the water–WS2 interface on the temperature evolution of the film. A glass slide is placed on top of the container to prevent water evaporation and to stabilize the water inside the container. It should be noted that water does not penetrate underneath the WS2 layer in the first few hours during which the Raman experiment is performed. We observe that after 24 hours or more, a few micro-bubbles form beneath the WS2 layer, which indicates water penetration. As will be mentioned in the next section, the nET-Raman technique is based on the ratio of the temperature rise of the sample under two different heating states; therefore, any constant parameter that contributes equally under both states will have a negligible effect on the measured interface thermal resistance.
Placing the glass slide on top of the container obviously affects the laser power irradiating the sample, but since the transmission of the glass slide is the same under both heating states, it will not affect our measurement and is not considered in the characterization process.
Fig. 1 Cross-sectional view of the experimental sample design to measure the interfacial thermal conductance (G_int) at a water–WS2 nm-thick film interface. The nm-thick WS2 film is suspended over a hole in an Si layer. The hole depth is 3 µm. A graphical illustration of the effects of the relative contributions of the total interface resistance (R_int) and the water thermal resistance (R_w) under (a) CW and (b) ns heating states. Under each state, the WS2 film is irradiated using a specific laser and the Raman signal is collected. A sample Raman spectrum of WS2 is shown in the inset of figure (a). Under CW laser heating, R_int is ~4% of R_w, showing it has a weak effect on the total thermal resistance between the WS2 sample and DI water, while under the ns laser this ratio is ~20%. As a result, we expect to observe the effects of R_int on the temperature evolution of the WS2 film under the ns heating state. Also, these two figures represent the thermal diffusion into the water and the fact that L_w,ns is much shorter than L_w,CW. The red thermal contour in each figure shows this effect. Also, the time-dependent temperature evolution under laser irradiation is represented schematically in the inset of each figure. For the CW case, the temperature rise (ΔT_CW) is constant due to the steady-state heating of this laser. The transient temperature rise and Raman-weighted average temperature rise of the ns case are shown using red and orange curves in the inset of part (b). Also, the blue curve indicates a single ns laser pulse.

This method can also be applied to other materials, such as bulk ones, by constructing an appropriate geometry. For instance, for bulk silicon with a thickness on the order of hundreds of micrometers, it is possible to drill/cut a hole of micrometer dimensions from the bottom of the Si, in such a way that only a thin layer of Si remains on the top and its bottom is totally in contact with air.
Again, by putting this sample inside a DI water chamber, its top surface will touch the water, and the interfacial thermal conductance between the Si layer and water could be measured.
Physical principles of nET-Raman
The temperature rise of the suspended sample under laser irradiation is directly related to the thermal conductivity of the WS2 film (k), the thermal conductivity of water, and the interfacial thermal resistance at the water–WS2 interface (R″_int). Temperature changes of the sample can be investigated by studying the frequency variation of Raman-active optical phonons under laser heating. In the nET-Raman technique, two different energy transport states are constructed to analyze the thermal response of the material. Under the first state, the thin sample is irradiated using a continuous-wave (CW) laser to construct steady-state heating. Under this state, the temperature rise of the sample is mainly controlled by the in-plane thermal conductivity of the sample (k) and the thermal conductivity of water. The second, transient state is a nanosecond (ns) state, constructed using a 300 kHz ns pulsed laser. Under this state, the temperature rise of the film is affected more strongly by R″_int. The contribution of R″_int to the total thermal resistance between WS2 and water is more significant in the ns case than in the CW case. For the CW heating state, and under the area of laser heating, the thermal resistance of water R_w can be estimated as R_w = 1/(2·D_CW·k_w), where D_CW and k_w are the laser spot diameter of the CW laser under a 20× objective lens and the thermal conductivity of water, respectively. Taking k_w ≈ 0.6 W m⁻¹ K⁻¹ for water and D_CW = 3.6 µm (Table 2, see below), R_w will be around 2.3 × 10⁵ K W⁻¹. The total interface resistance (R_int) can be estimated as R_int = 4R″_int/(π·D_CW²). Taking R″_int ≈ 1 × 10⁻⁷ m² K W⁻¹, the total interface resistance will be around 9.8 × 10³ K W⁻¹, which is 4% of the total water resistance covering the WS2 film.
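As a quick numeric sanity check, the two CW-state estimates above can be reproduced from the quoted values alone (a sketch; the function names are ours, not the paper's):

```python
import math

# Values quoted in the text: k_w ~ 0.6 W/(m K), D_CW = 3.6 um,
# and an assumed R''_int ~ 1e-7 m^2 K / W.

def water_resistance_cw(d_cw: float, k_w: float) -> float:
    """CW-state water resistance estimate R_w = 1 / (2 * D_CW * k_w), in K/W."""
    return 1.0 / (2.0 * d_cw * k_w)

def interface_resistance(r_dprime: float, d: float) -> float:
    """Total interface resistance R_int = 4 * R''_int / (pi * D^2), in K/W."""
    return 4.0 * r_dprime / (math.pi * d ** 2)

R_w = water_resistance_cw(3.6e-6, 0.6)       # ~2.3e5 K/W, as quoted
R_int = interface_resistance(1e-7, 3.6e-6)   # ~9.8e3 K/W, as quoted
print(R_int / R_w)                           # ~0.04, i.e. R_int is ~4% of R_w
```

This ratio is why the CW state alone cannot resolve R″_int: the interface contributes only a few percent of the total resistance.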
Therefore, the interfacial thermal resistance plays a negligible role compared with R_w in controlling the temperature of the WS2 film under the CW state, and it is hard to detect its effects under this heating state [Fig. 1(a)]. It should be noted that performing the Raman experiment using a CW laser is necessary in this method, since it leads to the cancelling of the effects of several known and unknown parameters, such as the laser absorption and temperature-dependent Raman coefficients, on the final results. This idea is presented in detail in the following paragraphs.
The laser pulse width (t_0) of the ns laser used in this work is 212 ns. During ns laser pulse heating, the thermal diffusion length into the water layer can be estimated as L_w,ns = √(π·a_w·t_0), where a_w is the thermal diffusivity of water. L_w,ns is around 300 nm. The total thermal resistance caused by water under the ns state is estimated as R_w = 4·L_w,ns/(π·k_w·D_ns²), where D_ns is the laser spot diameter of the ns laser under a 20× objective lens, which is around 2.5 µm. R_w under this state is ~100 × 10³ K W⁻¹. This time R_int, using the same estimation as in the CW case and taking D_ns as 2.5 µm (Table 2), is ~20.3 × 10³ K W⁻¹, which is ~20% of R_w [Fig. 1(b)]. Hence, we expect that R″_int will play an important role in the thermal response of the sample under transient heating. Fig. 1(a) and (b) show a graphical representation of the relative effects of R_w and R_int on the total thermal resistance under both states. Also, note that the thermal diffusion length into water under the CW state can be estimated as L_w,CW ≈ 10·D_CW, which is ~36 µm. This significant difference between L_w,CW and L_w,ns is also shown schematically in these two figures by the red thermal contours.
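The ns-state numbers can be checked the same way. A sketch, assuming a_w ≈ 1.43 × 10⁻⁷ m² s⁻¹ for water (a standard room-temperature value, not stated explicitly in the text) and modeling the water resistance as a slab of thickness L_w,ns over the heated spot area:

```python
import math

t0 = 212e-9        # ns laser pulse width, s
a_w = 1.43e-7      # thermal diffusivity of water, m^2/s (assumed value)
k_w = 0.6          # thermal conductivity of water, W/(m K)
D_ns = 2.5e-6      # ns laser spot diameter, m
R_dp = 1e-7        # R''_int, m^2 K / W (same estimate as the CW case)

L_w_ns = math.sqrt(math.pi * a_w * t0)               # ~3.1e-7 m, i.e. ~300 nm
R_w_ns = 4 * L_w_ns / (math.pi * k_w * D_ns ** 2)    # ~1.0e5 K/W
R_int_ns = 4 * R_dp / (math.pi * D_ns ** 2)          # ~2.0e4 K/W
print(R_int_ns / R_w_ns)                             # ~0.2, i.e. ~20%
```

With the interface now ~20% of the total resistance, the ns state is sensitive enough to extract R″_int.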
In both states, laser heating and Raman signal excitation take place simultaneously. Collecting this Raman signal under various laser powers can be used to track the temperature evolution of the sample. In fact, we can obtain the Raman shift power coefficient (RSC) under each state by irradiating the sample at several laser powers (P). Under the CW state, the RSC is defined as ψ_CW = ∂ω/∂P = α(∂ω/∂T)·f(k, R″_int), where α and ∂ω/∂T are the laser absorption coefficient and the Raman shift temperature coefficient, respectively. Under the ns energy transport state, which is designed to probe localized heating, the RSC can be obtained as ψ_ns = ∂ω/∂P = α(∂ω/∂T)·g(k, R″_int, ρc_p), where ρc_p is the volumetric heat capacity of each WS2 thin film. The thermal conductance at the water–WS2 interface is defined as G_int = 1/R″_int. These definitions of R″_int and G_int are used consistently in the rest of this article. As mentioned earlier, due to the localized heating of the ns state, the contribution of R″_int to ψ_CW is almost negligible in comparison to ψ_ns; therefore the Raman shift power coefficients are different under these two states. Note that the f and g functions depend on the thermal properties of the materials under each heating state and are too complicated to solve analytically; they have to be solved numerically.
Using the last two Raman shift power coefficients ψ_CW and ψ_ns, a new experimental parameter is defined as Θ_exp = ψ_ns/ψ_CW, which is called the normalized Raman shift power coefficient. It can easily be shown that Θ_exp is only a function of k, R″_int, and ρc_p, and is no longer a function of the temperature-dependent Raman shift coefficient or the laser absorption coefficient. This is the beauty of the nET-Raman technique, which makes it independent of these two coefficients; α and ∂ω/∂T are generally the main sources of error in steady-state Raman thermometry. Using a 3D numerical model that calculates the temperature rise of the sample under the CW (ΔT_CW) and ns (ΔT_ns) heating states, we can find the theoretical value of the temperature rise ratio (Θ_th) as Θ_th = ΔT_ns/ΔT_CW. Using known values of k and ρc_p for water and WS2, a relationship between Θ_th and R″_int is found. Finally, this relationship is used to find the R″_int value that meets the condition Θ_exp = Θ_th. The known values of k and ρc_p used here are taken from the literature. 39,47,48 In the discussion part, it will be shown that both of these values have a negligible effect on the uncertainty and value of the measured R″_int. The first part of the 3D heat conduction model deals with steady-state heating under a CW laser, which is governed by the following differential equation:

k∇²T_CW + q̇ = 0, (1)

where T_CW (K) is the temperature under CW heating and q̇ is the volumetric Gaussian beam heating, given as:

q̇ = (I₀/s_L)·exp(−r²/r₀²)·exp(−z/s_L). (2)

Here r is the radial coordinate that starts at the center of the hole and extends to the boundaries of the suspended area, and z is the position in the thickness direction. I₀ (= P/πr₀²) and s_L are the laser power per unit area at the center of the laser spot and the laser absorption depth, respectively.
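The final inversion step described above, finding the R″_int where Θ_exp = Θ_th, can be sketched as a one-dimensional interpolation. The Θ_th values below are hypothetical placeholders; in the real workflow they come from the 3D heat conduction model, and the ψ values are Sample 3's measured coefficients:

```python
import numpy as np

# Hypothetical model output: Theta_th evaluated on a grid of R''_int values.
R_dprime = np.array([1e-8, 5e-8, 1e-7, 2e-7, 4e-7])   # m^2 K / W (grid)
Theta_th = np.array([2.1, 2.4, 2.7, 3.1, 3.6])        # placeholder curve

psi_cw, psi_ns = -0.49, -1.30     # cm^-1 / mW (Sample 3's measured RSCs)
Theta_exp = psi_ns / psi_cw       # ~2.65, the measured normalized RSC

# Invert the monotonic mapping Theta_th(R''_int) by linear interpolation:
R_measured = np.interp(Theta_exp, Theta_th, R_dprime)
```

`np.interp` requires the Θ_th grid to be increasing, which holds here since a larger interface resistance yields a larger transient-to-steady temperature ratio.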
s_L is calculated as s_L = λ/(4πk_L), where λ and k_L are the laser wavelength and the extinction coefficient of WS2 at that λ, respectively. In this work, λ is 532 nm, and at this wavelength k_L takes the value 0.903. 49,50 Therefore, s_L will be ~46.9 nm. Although this value of s_L is used in our calculation, it should be noted that this parameter has a negligible effect on the measured R″_int value using the nET-Raman technique, since it is canceled out by dividing the temperature rises under the two heating states. 48 Transient-state heating is generated using a 532 nm nanosecond laser with a 212 ns pulse width (t_0). It should be noted that t_0 should be smaller than the time needed for the sample to reach thermal equilibrium (t_eq). This time can be estimated as t_eq ~ (10r_0,ns)²/a_water, where a_water is the thermal diffusivity of water. In this work, t_eq is around 25 ms, which is much larger than t_0. Another point worth mentioning is the effect of hot carrier diffusion on thermal transport in this ns state. In short, as soon as the laser irradiates the WS2 sample, electrons in the valence band gain enough energy (more than the Fermi energy) to leave this band, leaving holes behind. These hot carriers recombine within a very short period of time (t_l), which is on the order of 1 ns for WS2. 51 Since t_l is very much shorter than t_0, we can ignore the effects of hot carrier diffusion on thermal transport. Hot carrier transfer inside TMD materials, such as WS2, was well studied in our previous work. 42,48,52 Regarding the thermal transport in the cross-plane direction of the WS2 sample, it is assumed that the temperature distribution in this direction is uniform. In the thickness direction, the heat diffusion length (L_t) under ns pulsed laser heating can be estimated as L_t ≈ √(π·k_t·t_0/ρc_p), which is around 1 µm.
Here, k_t is the thermal conductivity of WS2 in the cross-plane direction and is about 2 W m⁻¹ K⁻¹. 53 This length is much larger than the thickness of all samples (Table 1, see below), which confirms the validity of this assumption. The governing equation of the ns laser heating state is: 54

ρc_p·(∂T_ns/∂t) = k∇²T_ns + q̇, (3)

where T_ns is the temperature under the ns heating state. The heat source term in this state takes the same Gaussian form as in eqn (2), where I₀ (W m⁻²) is now the peak laser intensity. Additionally, the temperature jump at the water–WS2 interface is described by R″_int = (T_WS2 − T_water)/q″, where q″ is the interface heat flux. Note that T_water and T_WS2 are the temperatures of the water and the WS2 film just adjacent to the interface. Using the abovementioned equations, the temperature rise of the sample under the two heating states can be calculated for different R″_int values. As mentioned earlier, the ratio of these calculated temperature rises of the two states is set equal to the experimental normalized RSC to find the objective R″_int. It is worth noting that the experimental RSC is based on Raman intensity-weighted temperature rises in both the space and time domains, and this point is considered in the theoretical calculation of the temperature rise under each state. Note that the temperature at the edge of the suspended area can be considered to be room temperature under both the CW and ns cases for two main reasons. First, the interfacial thermal resistance at the WS2-Si interface at the edge is much smaller than the in-plane thermal resistance of the WS2 film. Second, the thermal resistance of Si is very low due to its high thermal conductivity. Therefore, it is reasonable to apply the room temperature boundary condition at the WS2-Si interface.
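Two of the model's scalar estimates can be reproduced directly. A sketch, where ρc_p for WS2 is an assumed literature-typical value (~1.9 × 10⁶ J m⁻³ K⁻¹, not given explicitly in the text):

```python
import math

# Laser absorption depth s_L = lambda / (4 * pi * k_L) at 532 nm.
lam = 532e-9       # laser wavelength, m
k_L = 0.903        # extinction coefficient of WS2 at 532 nm
s_L = lam / (4 * math.pi * k_L)            # ~46.9 nm, as quoted

# Cross-plane heat diffusion length L_t = sqrt(pi * k_t * t0 / (rho * c_p)).
k_t = 2.0          # cross-plane thermal conductivity of WS2, W/(m K)
t0 = 212e-9        # ns pulse width, s
rho_cp = 1.9e6     # volumetric heat capacity, J/(m^3 K) -- assumed value
L_t = math.sqrt(math.pi * k_t * t0 / rho_cp)   # on the order of 1 um
```

Since L_t is far larger than the 22-88 nm film thicknesses, the uniform cross-plane temperature assumption holds.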
Sample characterization
Three suspended samples are prepared using the mechanical exfoliation method. Both AFM and SEM characterizations are performed to study the thickness and roughness profiles and the structure of these films. Fig. 2(a) shows the 2D AFM image of Sample 3 at the boundary of the WS2 and the Si substrate. AFM measurements are conducted over the supported area to prevent sample damage. The thickness profile of this sample is shown in the figure using a gray 3D thickness profile and corresponds to the average thickness over the dotted rectangle in the direction of the arrow. The thickness of this sample is 22 nm. Fig. 2(b) shows the 3D AFM image of this sample over a 10 µm × 10 µm area close to the suspended area. The root mean square (RMS) roughness of this sample is measured using this image and is 2.44 nm. Table 1 includes the thickness and roughness values of all samples, as well as the ratio of roughness to thickness. This ratio is less than 15% for all samples, which indicates good contact between the WS2 film and the Si substrate.
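The roughness-to-thickness criterion is a simple ratio. Using Sample 3's numbers quoted above:

```python
# Sample 3: RMS roughness 2.44 nm over a 22 nm-thick film.
rms_nm, thickness_nm = 2.44, 22.0
ratio = rms_nm / thickness_nm   # ~0.11, i.e. ~11%, below the 15% threshold
```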
As will be discussed in the next section, sample roughness is one of the main parameters that can affect R″_int. In order to further study the samples' structure, we performed SEM measurements over the suspended area. Fig. 2(c) shows the SEM image of Sample 3. It shows that the suspended area is almost uniform in all directions. It also indicates that the sample is not totally flat over the hole, and is concave toward the bottom of the hole. This will affect the laser spot radius measurement and alter the actual size of the suspended area, and therefore the theoretical temperature rise calculation under both states will vary to some degree. This effect is discussed in detail in the next section.
Water–WS2 interface thermal conductance
A room temperature (RT) Raman experiment is conducted using both the CW and ns lasers for all three samples to obtain the Raman shift power coefficient. For each sample, based on the WS2 film's structure and thickness, an optimum laser power is used to find the Raman shift power coefficient with the highest accuracy. For both lasers, a 20× objective lens is used to focus the laser spot onto the surface of the WS2 film. This objective is chosen to minimize the effects of hot carrier diffusion on thermal transport. The hot carrier diffusion length (Δr_HC) is estimated as Δr_HC = √(D_WS2·τ_WS2), where D_WS2 and τ_WS2 are the hot carrier diffusion coefficient and the electron-hole recombination time, respectively. Using reference values of τ_WS2 and D_WS2, Δr_HC is ~0.1 µm, which is much smaller than the laser spot radius under a 20× objective lens. 48 Therefore, the hot carriers' effects on thermal transport are negligible in our experiment. The radius of the laser spot (r_0) for each Raman experiment is measured by analyzing the optical images of the laser spots based on a Gaussian fitting method. The insets of Fig. 3(c) and (d) show the laser spots of both states for the third sample captured by a CCD camera. As mentioned in the previous section, knowing r_0 for each heating state is necessary to simulate the heating process since it determines q̇ in eqn (1) and (3). The laser spot size determines the laser intensity distribution while heating the sample and, subsequently, the temperature rise and Raman shift. In this work, laser irradiation and laser spot measurement are conducted simultaneously, and the measured r_0 is used directly in our numerical method. Therefore, any effects of laser spot size on our final result are considered precisely. The measured values of r_0 at e⁻¹ of the center intensity for all samples are shown in Table 2. Both lasers operate at a 532 nm wavelength. The ns pulsed laser's repetition rate is 300 kHz.
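The hot-carrier diffusion length follows the standard estimate Δr_HC = √(D·τ). The text only quotes the result (~0.1 µm); the D and τ values below are assumed illustrative inputs chosen to be consistent with it, not the paper's reference values:

```python
import math

D_ws2 = 1e-5      # hot carrier diffusion coefficient, m^2/s (assumed)
tau_ws2 = 1e-9    # electron-hole recombination time, s (order of 1 ns)
dr_hc = math.sqrt(D_ws2 * tau_ws2)   # 1e-7 m = 0.1 um with these inputs
```

Any inputs of this order keep Δr_HC well below the ~1 µm-scale laser spot radius, justifying neglecting hot-carrier diffusion.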
For the ns laser, this repetition rate yields ~4.7 W power at the peak of the laser pulse, and decreasing the repetition rate would increase this peak power and could cause sample damage. As will be shown in the next section, decreasing t_0 without burning the film can reduce the uncertainty level of this technique. More information about the lasers and Raman system can be found in our previous work. 46,55,56 Similar considerations apply when choosing the optimum CW laser power to prevent sample damage. Table 2 includes the laser power range for each sample under both heating states.
As shown in Table 2, the laser spot under the ns laser is smaller than that under the CW laser. This is caused by the collimation difference between the two laser beams. Also, the slight difference between r_0 under each heating state is induced by a variation in focusing level. Note that all of these r_0 values are larger than the phonon mean free path (MFP) of the WS2 samples (~15 nm); 51,57,58 therefore, it is reasonable to assume that thermal transport is diffusive and under local equilibrium. Additionally, these laser power ranges ensure a linear decrease of the Raman shift with increased laser power, with minimal local heating effects. Local heating effects induced at higher laser powers can alter the thermal properties of the WS2 film and reduce the quality of the experimental data. Note that these laser power ranges are for the laser beam before it reaches the glass cap on top of the substrate. The amount of laser power absorbed by each sample is even less than this and is ~60%. All of these details are considered in the numerical calculation.
Sample 3 is used here to detail the data processing and the results. Fig. 3(a) and (b) show the Raman spectra of this sample under both heating states at varying laser power. During the Raman experiment, we did not observe any significant auto-fluorescence in the background while collecting the Raman signal under both lasers. Each spectrum has two main Raman modes: E2g and A1g. E2g relates to in-plane vibrations and A1g represents out-of-plane vibrations. Two dashed lines in this figure indicate the decrease in Raman shift of the E2g mode with increased laser power. The E2g mode is used in this work to find the Raman shift power coefficient because it is stronger and more suitable for Raman peak fitting. Note that using the A1g peak instead when performing the Raman experiment to find the RSC values does not affect the final results. This is shown by conducting nET-Raman on another sample (Sample 4), and the results are reported in the ESI.† The insets of these two plots show the 3D Raman intensity contour of this sample under the CW and ns states. They also indicate that the Raman intensity of both the E2g and A1g peaks increases linearly with increased laser power. It can be seen from both contours that both Raman peaks are red-shifted with increased laser power. 2D representations of these two contours are shown in Fig. S2 of the ESI.† Note that each point's value in the 3D contour of the ns state follows the contour bar of the inset of Fig. 3(a). All representative Raman spectra of WS2, as shown in Fig. 3, are fitted using the Lorentzian function to find the exact Raman shift of the E2g peak at each laser power. The results of this peak fitting are shown in Fig. 3(c) and (d) for the CW and ns heating states, respectively. The fitting quality depends on the quality of the experimental data and the Raman peak intensity. Generally, for intensities larger than a certain amount, the fitting quality remains almost unaffected.
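The Lorentzian peak fitting described above can be sketched with SciPy. The spectrum below is synthetic (peak position, width, and noise level are illustrative, not the measured data), but the procedure — fit a Lorentzian plus constant background and read off the center — is the same:

```python
import numpy as np
from scipy.optimize import curve_fit

# Lorentzian line shape with a constant background, used to locate a Raman peak.
def lorentzian(w, A, w0, gamma, c):
    return A * gamma**2 / ((w - w0)**2 + gamma**2) + c

# Synthetic spectrum around the WS2 E2g mode; 356.2 cm^-1 is illustrative.
rng = np.random.default_rng(0)
w = np.linspace(340, 370, 300)                       # Raman shift axis, cm^-1
spectrum = lorentzian(w, 1000.0, 356.2, 2.5, 50.0)   # ideal line shape
spectrum += rng.normal(0, 5, w.size)                 # detector noise

# Initial guesses taken directly from the data.
p0 = [spectrum.max(), w[np.argmax(spectrum)], 3.0, spectrum.min()]
popt, _ = curve_fit(lorentzian, w, spectrum, p0=p0)
print(f"fitted E2g peak position: {popt[1]:.2f} cm^-1")
```

Repeating this fit at each laser power gives the peak-position-versus-power data used to extract the Raman shift power coefficient.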
In this work, the integration time and laser power are chosen in such a way as to guarantee that the peak fitting uncertainty for low and high power cases is similar and less than 0.02 cm⁻¹. Since this value is negligible, it is not included in the uncertainty of the measured interface thermal conductance. As mentioned in the previous section, the slope of this line in the low power range gives the RSC (χ) value as Δω = χΔP, where ω is the Raman shift and P is the laser power. The χ of the E2g mode under the CW laser is −(0.49 ± 0.01) cm⁻¹ mW⁻¹, and under the ns laser it is −(1.30 ± 0.04) cm⁻¹ mW⁻¹. Similar results for all samples are included in Table 3. Note that |χ| under the ns state is generally higher than the steady-state value. This is because, for the same average power, the peak power of the ns laser is very high and induces a greater temperature rise. Also, the thermal diffusion length under this state is much smaller than the CW value. These two phenomena lead to the higher temperature rise under pulsed laser heating. It can be seen from this table that |χ| generally increases with decreasing film thickness under each heating state. This is due to the fact that the temperature rise of each sample depends on the amount of absorbed laser energy, k, and thickness. The thickness affects the heat conduction in the sample and the laser absorption (multiple reflections in 2D samples and the optical interference effect). Note that for TMD materials, k increases gradually with increased thickness for samples thicker than ~5 nm, and it reaches the bulk k value at larger thicknesses. 45,59,60 A 3D numerical calculation based on the finite volume method is conducted to find R″_int and consequently the interfacial thermal conductance (G_int). The thermal properties of WS2 are held constant at k = 32 W m⁻¹ K⁻¹ and ρc_p = 1.92 × 10⁶ J m⁻³ K⁻¹. 47,48 Also, the thermal properties of DI water and air are taken from reference values.
Fig. 3 Raman spectra of the WS2 nm-film (Sample 3) under (a) CW and (b) ns heating states. Both plots show that the Raman intensity of the E2g and A1g modes increases with increased laser power, and the peak position redshifts with increased laser power. Here the E2g peak is used to perform the analysis and measure the interfacial thermal resistance R″_int. Two dashed lines in both figures indicate the redshift of the E2g peak. The insets of these two figures represent the 3D contour of Raman intensity as a function of peak position (ω) and laser power (P). These two contours confirm the aforementioned trends, as well as the linear increase in Raman intensity (I) with increased P. Note that the I value of the 3D contour of the ns state corresponds to the contour bar shown in the inset of part (a). The Raman shift power coefficient (χ) corresponding to the E2g peak of WS2 under (c) CW and (d) ns laser of Sample 3. Black dots indicate the experimental position of the E2g peak at different laser powers, and the red line on each plot shows the fitted line used to find the χ value under each state. Note that the x-axis of both plots is the laser power just after the objective lens and before the laser beam enters the container; hence, the absorbed laser power under each case is even lower. Since in the nET-Raman technique the ratio of these two RSCs is used to measure R″_int, and the laser absorption of the glass layer, DI water, and WS2 sample is identical for both heating states, this does not affect the determined R″_int. The inset of each plot shows the laser spots that irradiate Sample 3 under a 20× objective lens for both CW and ns cases.
This journal is © The Royal Society of Chemistry 2020. Nanoscale Adv., 2020, 2, 5821-5832.
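The RSC extraction is a straight-line fit of peak position against laser power, Δω = χΔP. The data points below are synthetic, generated to be consistent with the reported CW slope of about −0.49 cm⁻¹ mW⁻¹ (they are not the measured values):

```python
import numpy as np

# Raman shift power coefficient chi from a linear fit: omega = chi * P + omega0.
# Synthetic E2g positions consistent with the reported CW slope (illustrative).
P = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # laser power, mW
omega = 356.8 - 0.49 * P                  # ideal linear redshift, cm^-1
omega += np.array([0.003, -0.002, 0.001, -0.003, 0.002])  # small scatter

chi, omega0 = np.polyfit(P, omega, 1)     # slope = chi, intercept = omega0
print(f"chi = {chi:.3f} cm^-1 mW^-1")
```

The negative slope encodes the redshift with heating; the fit uncertainty of the slope is what propagates into the ±0.01 cm⁻¹ mW⁻¹ quoted for χ.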
It will be shown in the following part of this work that uncertainties in these parameters have negligible effects on the determined R″_int or G_int and their uncertainties. Using this simulation, the Raman intensity weighted average temperature rise over both space and time domains for the ns state (ΔT̄_ns), and only over space for the CW state (ΔT̄_CW), are calculated as ΔT̄ = (∫_V I e^(−z/s_L) ΔT dv)/(∫_V I e^(−z/s_L) dv), with the additional average over one pulse period taken for the ns state. These two temperature rises are shown schematically in the insets of Fig. 1. The exponential term (e^(−z/s_L)) in this equation is related to the attenuation of the Raman signal as it leaves each scattering location. Here, I, V and ΔT represent the laser intensity under each state, the sample volume, and the temperature rise of each point, respectively. To match the laser intensity with the experimental laser heating, the real laser spot radius, as shown in Table 2, is used to perform the simulation. This calculation is conducted for a range of R″_int values, and Q_th is calculated for each R″_int. Finally, the resultant R″_int is deduced by equating Q_th to Q_exp. This process is shown in Fig. 4(a) for Sample 3. Also, the green area represents the uncertainty of the measured R″_int based on the uncertainty of Q_exp, as indicated in Table 3. The measured R″_int values, as well as G_int, for all samples are summarized in Table 4.
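The Raman-intensity-weighted average described above can be discretized on a simple cylindrical grid. The Gaussian beam profile, attenuation depth, and mock temperature field below are illustrative stand-ins for the finite-volume solution, not the paper's actual simulation output:

```python
import numpy as np

# Raman-weighted average temperature rise:
#   T_bar = integral(I * exp(-z/s_L) * dT dv) / integral(I * exp(-z/s_L) dv)
r0 = 1.0e-6                      # laser spot radius, m (illustrative)
s_L = 30e-9                      # Raman signal attenuation depth, m (assumed)
r = np.linspace(0, 3e-6, 200)    # radial grid
z = np.linspace(0, 22e-9, 50)    # through-thickness grid (22 nm film)
R, Z = np.meshgrid(r, z, indexing="ij")

I = np.exp(-R**2 / r0**2)                  # Gaussian laser intensity
dT = 20.0 * np.exp(-R**2 / (2 * r0**2))    # mock temperature-rise field, K

# R appears as the cylindrical volume element (dv ~ 2*pi*r dr dz).
weight = I * np.exp(-Z / s_L) * R
T_bar = np.sum(weight * dT) / np.sum(weight)
print(f"Raman-weighted average temperature rise ~ {T_bar:.1f} K")
```

Because the weight is concentrated under the laser spot, T_bar sits below the on-axis maximum; the same weighting is why the suspended-region edges barely influence the measured signal.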
It can be seen from this table that the G_int values of the three samples are of the same order, especially for samples 1 and 3. The larger resistance at the WS2-water interface of the second sample compared with the other samples could be caused by several factors. First, although the roughness of this sample is of the same order as that of the other two (Table 1), R_q is measured over the supported region close to the suspended area, and the surface roughness of the samples over the suspended region could differ from R_q, especially for the second sample.
Discussion
The measured G_int values in this work are in good agreement with reported values for solid-water interface thermal transport. Results from other works, as well as the current work, are summarized in Table 5.
Comparing our result with other experimental work, it is obvious that the G_int of the WS2-water interface is an order of magnitude smaller than the G_int at AuPd-water or Pt-water interfaces, as shown in Table 5. The main factor that could contribute to this is the difference between the surface wettability of these three solids. Generally, Au and Pt possess smaller water contact angles than WS2, which means that these surfaces are more hydrophilic than a WS2 surface. For clean Au and Pt surfaces, the room temperature contact angle (θ_CA) at atmospheric pressure is in the range of 5-40°, [61][62][63][64][65] while the θ_CA of multilayer WS2 at RT is around 50-80°. 66,67 Also, in these works, the solid surfaces are more uniform, are in the form of nanoparticles, and are smoother compared with the WS2 samples used in our experiment. θ_CA depends significantly on surface microscale roughness. As discussed in the introduction, hydrophobicity is one of the main parameters that affects thermal transport at a solid-water interface, and a lower θ_CA leads to stronger solid-water contact. A similar argument is valid regarding the G_int of ref. 27. Regarding the MD simulation results, it should be noted that ref. 26 reports G_int at several temperatures from 350 to 550 K, and at temperatures closer to RT, G_int is of the same order as our results.
As mentioned earlier, one parameter that affects the accuracy of our measurement is the ns laser pulse width t_0. As t_0 takes smaller values, the thermal diffusion length in water becomes shorter, and R″_int contributes more to the thermal transport under the ns state compared with longer t_0 cases. To show this, the temperature rise of the 22 nm sample under ns heating is calculated versus R″_int for several t_0 cases ranging from 10 to 212 ns, and subsequently Q_th is calculated for each t_0 case. Fig. 4(b) shows the result of this calculation. It is reasonable that Q_th increases with decreased laser pulse width, since shorter pulses mean higher pulse peak power, which leads to a higher temperature rise. Also, as shown in Fig. 4(b), Q_th is plotted for each case. This figure shows that the slope of each Q_th − R″_int curve increases with decreased t_0. Now, considering R″_int = 1.02 × 10⁻⁷ m² K W⁻¹, as indicated in Table 4, and assuming a constant 5% uncertainty for each hypothetical Q_exp value, we can find the uncertainty in R″_int for each t_0 case. This is shown by the shaded areas in Fig. 4(b) for the two extreme cases where t_0 is 212 ns and 10 ns. This area is clearly narrower for smaller t_0 values than for larger ones, which means higher accuracy in the measurement of R″_int. As mentioned earlier, the R″_int/R_w ratio is ~20% when t_0 = 212 ns. A similar calculation shows that when t_0 is 10 ns, this ratio is ~60%, which indicates a higher contribution of interfacial thermal resistance to the total thermal resistance between the WS2 film and water under ns laser heating. Note that for the ns pulsed laser used in this work, when t_0 is 10 ns, the peak power of each laser pulse is ~12 kW and could damage the suspended film. Another point worth mentioning is that, depending on the increase in laser intensity, the light absorption could be linear or non-linear.
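The trend behind this sensitivity argument can be sketched from the scaling of the water-side resistance with the thermal diffusion length, R_w ∝ √(α_w·t_0)/k_w. The prefactors (and hence the absolute percentages quoted in the text) are omitted here, so only the relative growth of the ratio as t_0 shrinks is meaningful:

```python
import math

# Interfacial-to-water resistance ratio scaling with pulse width:
#   R_int'' / R_w ~ R_int'' * k_w / sqrt(alpha_w * t0)   (prefactors omitted)
k_w = 0.6          # W m^-1 K^-1, thermal conductivity of water
alpha_w = 1.43e-7  # m^2 s^-1, thermal diffusivity of water
R_int = 1.02e-7    # m^2 K W^-1, measured interfacial resistance (Table 4)

def ratio(t0):
    # thermal diffusion length in water over one pulse: sqrt(alpha_w * t0)
    return R_int * k_w / math.sqrt(alpha_w * t0)

for t0 in (212e-9, 10e-9):
    print(f"t0 = {t0 * 1e9:.0f} ns -> R_int/R_w ~ {ratio(t0):.2f}")
```

The ratio grows as 1/√t_0: shrinking the pulse from 212 ns to 10 ns raises the interfacial contribution by a factor of √21 ≈ 4.6, which is the physical reason shorter pulses sharpen the R″_int measurement.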
As long as the laser intensity is not so high as to make the light absorption nonlinear, the laser pulse width could be decreased to increase the sensitivity of the G_int measurement. An alternative way to implement this experiment with smaller pulse widths is to use an amplitude-modulated laser with an appropriate modulation frequency and a narrow pulse. Under such conditions, the pulse width can be short enough to measure R″_int more accurately, while the laser power is kept below the damage threshold.
Another study is conducted to show that the nET-Raman technique does not depend on the known values of k and ρc_p of the WS2 film. To do so, the temperature rise of Sample 3 under both heating states, and consequently Q_th, are calculated for a range of k and ρc_p. The results of this calculation are shown in Fig. 5. Using the Q_exp of this sample (Table 3), R″_int and its uncertainty are found for each case and represented by the black solid line in each plot of Fig. 5. Two dashed lines show the uncertainty of the measured R″_int corresponding to the uncertainty of Q_exp. These two contours indicate that if k and ρc_p of WS2 change by 10% independently, the resulting values of R″_int change by less than 2% and 4%, respectively. Also, the uncertainty of the measured R″_int remains almost intact, since the dashed lines and the black solid line in each contour are almost parallel regardless of the k and ρc_p values. This figure indicates the critical fact that the effects of k and ρc_p are almost canceled out by introducing Q in this technique, and the three lines in each contour stay almost horizontal while k or ρc_p is varied.
As shown in Fig. 1 and 2(c), the suspended film is slightly concave toward the hole. In all the aforementioned theoretical calculations used to determine R″_int, it is assumed that the suspended sample over the hole is completely flat. To check the uncertainty caused by this assumption, a more realistic case is considered. Here, we assume that the center of the sample is concave 1.5 μm inward, which is an exaggerated case. The new length of the sample (l_arc), which is the length of the WS2 arc over the hole, is ~10.6 μm for a 10 μm hole. This value is used in our theoretical calculation to find the corresponding interface resistance. Fig. 6(b) shows the results of this study. The measured R″_int with similar Q_exp for Sample 3 (Table 3) is ~1.1 × 10⁻⁷ m² K W⁻¹. The uncertainty caused by this elongation in R″_int is ~7% (Table 4). Therefore, this film elongation has a negligible effect on the determined R″_int. The temperature rise of water at each point in the close vicinity of the WS2 film is calculated and plotted in Fig. 6(a) for both the CW and ns cases. Here the normalized local temperature rise (ΔT*) is reported. To find ΔT* at each point, the local temperature rise at that point (ΔT) is divided by the maximum local temperature rise under ns laser heating (ΔT_ns). This plot shows that ΔT* is mostly increased under the laser spot area, is a minimum at larger radii close to the boundaries of the suspended region, and in the case of the ns heating state is almost zero there. This further proves that a minimal increase in the sample length will not affect the Raman weighted average temperature rise of the sample, since the thermal transport mostly occurs under the heating region and not in areas further away.
As mentioned in the main text of the paper, the sensitivity of our technique is mostly controlled by the ns state. The contributions of the thermal resistance of water (R_w) and the interfacial resistance (R_int) under this state were elaborated in Section 2.2. The ratio of these two values scales as R_int/R_w ~ R″_int·k_w/(α_w·t_0)^1/2, where k_w, α_w, and t_0 are the thermal conductivity of water, the thermal diffusivity of water, and the ns laser pulse width, respectively.
Fig. 5 Effects of (a) in-plane thermal conductivity (k) and (b) volumetric heat capacity (ρc_p) of the WS2 thin film on the measured R″_int in this work. Each contour shows the calculated Q_th for a range of k and ρc_p of Sample 3, and the solid black line indicates the Q_exp of this sample corresponding to Table 3. The two dashed lines on each figure relate to the uncertainty in the measured R″_int caused by the uncertainty in Q_exp. Both plots validate the idea that each of these parameters has a negligible effect on the measured R″_int and ΔR″_int in the nET-Raman method.
Fig. 6 (a) Normalized local temperature rise under CW (left contour) and ns (right contour) cases. These contours show that the local temperature rise at the edge of the suspended area, especially in the ns case, is almost zero, and that the area under the laser spot contributes most to the Raman weighted average temperature rise that is used in nET-Raman to find the interfacial thermal conductance. (b) Determined R″_int using the assumption that the suspended sample is not totally flat and is concave 1.5 μm toward the bottom of the hole. Under this situation, the heating area domain and r_0 under both states are altered, and the updated values are used in the 3D numerical calculation to find R″_int for Sample 3. The green dashed arrow in this plot shows the measured R″_int for the flat WS2 film, as reported in Table 4. The error caused by this change in the sample diameter on the measured R″_int is less than 8%.
The Influence of the Different Repair Methods on the Electrical Properties of the Normally off p-GaN HEMT
The influence of the repair process on the electrical properties of the normally off p-GaN high-electron-mobility transistor (HEMT) is studied in detail in this paper. We find that the etching process causes the two-dimensional electron gas (2DEG) concentration and the mobility of the p-GaN HEMT to decrease. However, the repair process gradually recovers the electrical properties. We study different repair methods and different repair conditions, propose the best repair conditions, and further fabricate p-GaN HEMT devices. The threshold voltage of the fabricated device is 1.6 V, the maximum gate voltage is 7 V, and the on-resistance is 23 Ω·mm. The device has a good performance, which proves that the repair conditions can be successfully applied to the fabrication of p-GaN HEMT devices.
Introduction
GaN high-electron-mobility transistors (HEMTs) are very suitable for power switching devices due to their high two-dimensional electron gas (2DEG) concentration, high breakdown voltage, and high electron mobility [1][2][3][4][5][6]. However, due to the polarization effect, traditional AlGaN/GaN HEMTs are generally normally on (depletion-mode) [7]. In order to simplify the circuit and improve safety, methods are needed to realize normally off (enhancement-mode) GaN HEMTs in practical applications [8]. At present, the main methods for realizing the normally off GaN HEMT include the recessed gate [9][10][11], p-GaN cap layer [12][13][14], fluorine-plasma ion implantation [15,16], InGaN cap layer [17,18], and so on [19][20][21]. Among these methods, the most commonly used is the p-GaN cap layer structure because of its high reliability [22]. In the fabrication of the p-GaN HEMT device, the important processes include the selective etching of the over-grown p-GaN layer and the subsequent repair process. The p-GaN HEMT etching process requires a high etching selectivity ratio; both over-etching and under-etching will affect the performance of the device [23]. In order to improve the etching selectivity ratio, several measures have been proposed. Some researchers controlled the etching rate by changing the radio frequency (RF) bias power, the inductively coupled plasma (ICP) power, or the chamber pressure [24]. Others achieved self-terminating etching by changing the etching gas [23,25]. Among these methods, the most commonly used is adding O2 into the etching gas (Cl2/O2/N2). When the etching front reaches the AlGaN layer, the gas forms an (Al,Ga)O_x cluster with the AlGaN layer [25]. The bond energy of the (Al,Ga)O_x cluster is relatively high, making it difficult to etch away, so the etching selectivity ratio can be improved [25].
However, studies have found that the etching process causes Cl ions to enter the epitaxial wafer (residing in the (Al,Ga)O_x or AlGaN layer), which affects the performance of the epitaxial wafer [23,25]. At the same time, the etching produces damage, further affecting the performance of the epitaxial wafer [24,26]. Therefore, a repair process is required after the etching process. However, as far as we know, there are relatively few studies on the repair process, and its mechanism has not been clearly established. Based on previous studies, this paper first studies in detail the influence of different repair methods on the 2DEG concentration (N s) and the mobility (µ) of the AlGaN/GaN HEMT. Afterwards, based on the optimized repair conditions, repair experiments are carried out on the p-GaN HEMT to verify the effectiveness of the conditions. Finally, we fabricate the repaired p-GaN HEMT devices and test their performance.
Figure 1a shows the epi-structure of the AlGaN/GaN HEMT. The epitaxial structure is grown by metal-organic chemical vapor deposition (MOCVD) on a 2-inch sapphire substrate. The structure consists of a 2 µm GaN buffer layer, a 30 nm GaN channel layer, and a 15 nm AlGaN barrier layer (Al composition 0.23). Figure 1b shows the epi-structure of the p-GaN HEMT, which has an additional 60 nm p-GaN layer. The doping concentration of the p-GaN layer is about 4 × 10^19 cm−3 and the hole concentration is about 4 × 10^17 cm−3 (after annealing at 850 °C in N2 ambient for 10 min). The AlGaN/GaN HEMTs are used to test the different repair methods (in order to eliminate the influence of the grown p-GaN layer). Figure 1c shows the experimental steps for the AlGaN/GaN HEMTs. First, the epitaxial wafer is diced into multiple 10 mm square samples, and then magnetron sputtering equipment is used to form ohmic contacts at the four corners of the square samples to perform the Hall-effect measurements.
The ohmic metal layers are Ti/Al/Ni/Au (20/160/55/50 nm), annealed at 870 °C in N2 ambient for 30 s. The Hall-effect measurements are then performed separately on each sample. Afterwards, multiple samples are etched together with the ICP equipment; the etching gas is Cl2/N2/O2. The etching rates of the p-GaN and AlGaN layers are about 10 nm/min and 1.5 nm/min, respectively. The etching time of the AlGaN/GaN HEMT is 1.5 min, which simulates the case of over-etching the p-GaN HEMT for 1.5 min. After etching, the Hall-effect measurements are again performed separately on each sample. After these measurements, the different repair methods are applied in repair experiments, and the Hall-effect measurements are repeated after each repair experiment. The p-GaN HEMTs are used to verify the effectiveness of the optimized repair conditions; the steps of the experiment are basically unchanged, except that the etching time is 7.5 min (with an over-etching time of 1.5 min).
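The etch budget implied by these numbers can be checked with simple arithmetic: clearing the 60 nm p-GaN cap at 10 nm/min takes 6 min, so a 7.5 min total etch leaves a 1.5 min over-etch into the AlGaN barrier at 1.5 nm/min. All rates and thicknesses are from the text:

```python
# Etch-budget arithmetic for the p-GaN HEMT (values from the text).
p_gan_thickness = 60.0   # nm, p-GaN cap layer
p_gan_rate = 10.0        # nm/min, p-GaN etch rate
algan_rate = 1.5         # nm/min, AlGaN etch rate after the selective transition
t_total = 7.5            # min, total etch time used for the p-GaN HEMT

t_clear_pgan = p_gan_thickness / p_gan_rate   # time to clear the p-GaN cap
t_over = t_total - t_clear_pgan               # over-etch into the barrier
algan_loss = t_over * algan_rate              # nm of AlGaN removed

print(f"p-GaN cleared in {t_clear_pgan:.1f} min; "
      f"over-etch {t_over:.1f} min removes ~{algan_loss:.2f} nm of AlGaN")
```

The ~2.25 nm of barrier lost during the 1.5 min over-etch is why the AlGaN/GaN control samples are etched for exactly 1.5 min: it reproduces the same barrier thinning without growing a p-GaN layer.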
Results and Discussion
Firstly, the influence of the buffered-oxide etchant (BOE) treatment method on the N s and the µ of the AlGaN/GaN HEMT (sample A) is studied. As shown in Figure 1c, a 1 min BOE treatment is first carried out, and the treatment time is then increased in 1 min steps. The Hall-effect measurements are required after each treatment. The red parts in Figure 2a,b show the influence of the etching process and the BOE treatment process on the N s and the µ of sample A. It can be seen that the N s and the µ of sample A after etching (as-etched) are significantly reduced. As the BOE treatment time increases, the N s and the µ of sample A increase (repair-1, 1 min BOE treatment). When the BOE treatment time is 2 min, the N s and the µ of sample A recover to their maximum values (repair-2, 2 min BOE treatment). As the BOE treatment time continues to increase, the N s and the µ begin to decrease again (repair-3, 3 min BOE treatment). The reason may be that the etching causes Cl ions to enter the epitaxial wafer, where they may reside in the AlGaN layer [23] or the (Al,Ga)O_x layer [25]. Due to the repulsion of electrons, the negatively charged Cl ions will affect the N s of the epitaxial wafer. At the same time, due to Coulomb scattering, the Cl ions will affect the µ of the epitaxial wafer. Therefore, the N s and the µ decrease after etching. After BOE treatment, the (Al,Ga)O_x layer of the epitaxial wafer is removed, and a large amount of Cl ions is removed with it, resulting in an increase in the N s and the µ. However, if the BOE treatment time is too long, the hydrofluoric acid (HF) in the BOE will deteriorate the epitaxial wafer, further affecting its performance. The red parts in Figure 3 show the influence of the etching process and the BOE treatment process on the product of the N s and the µ (this product is related to the device current).
It can be seen that the BOE treatment can recover the product value to 84% (repair-2) of the product value before etching (as-grown).
Secondly, the influence of the annealing method on the N s and the µ of the AlGaN/GaN HEMT (sample B) is studied. The experimental steps are shown in Figure 1c. The experimental sample (sample B) and sample A are 10 mm squares taken from different positions on the same epitaxial wafer. The annealing temperature is 500 °C; the annealing time starts from 1 min and is increased in 2 min steps. The green parts in Figure 2a,b show the influence of the etching process and the annealing treatment process on the N s and the µ of sample B. It can be seen that the N s and the µ of sample B also decrease to a great extent after etching (as-etched). As the annealing time increases, the N s and the µ increase (repair-1, 1 min anneal treatment). When the annealing time is 3 min, the N s and the µ of sample B recover to their maximum values (repair-2, 3 min anneal treatment). However, as the annealing time continues to increase, the N s and the µ of the epitaxial wafer begin to decrease (repair-3, 5 min anneal treatment). This may be attributed to the following two aspects. On the one hand, annealing can repair the lattice damage (reconstruction of surface stoichiometry) [26] and increase the N s and the µ. On the other hand, during the annealing process, part of the Cl ions on the surface will diffuse into the AlGaN layer (annealing may drive impurity diffusion [26]), further reducing the N s and the µ. The two mechanisms result in a trade-off, and thus an optimal annealing time.
The green parts in Figure 3 show the influence of the etching process and the annealing treatment process on the product of the N s and the µ. It can be seen that the annealing treatment can recover the product value to 89% (repair-2) of the product value before etching (as-grown).
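As a quick sanity check on recovery figures like the 89% quoted above, the product recovery is simply the ratio of the Ns·μ products before and after treatment. The sketch below uses made-up Hall-measurement values (placeholders, not the measured data of Figures 2 and 3), chosen so the ratio lands near 89%.

```python
def product_recovery(ns_ref, mu_ref, ns_now, mu_now):
    """Return the Ns*mu product in the current state as a fraction
    of the reference (as-grown) product."""
    return (ns_now * mu_now) / (ns_ref * mu_ref)

# Illustrative values only (cm^-2 and cm^2/(V*s)); not the paper's data.
ns_grown, mu_grown = 1.0e13, 1800.0
ns_rep2, mu_rep2 = 0.95e13, 1686.3   # chosen so the product is ~89% of as-grown

print(f"recovery: {product_recovery(ns_grown, mu_grown, ns_rep2, mu_rep2):.0%}")  # -> recovery: 89%
```

The same one-line ratio applies to the 93% (sample C) and 61% (sample D) figures later in the text.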
Then, the two methods mentioned above are combined to repair the damage. We first conduct the BOE treatment for 2 min, and then anneal at 500 °C for 3 min. The red parts in Figure 4a,b show the Ns and the μ of the AlGaN/GaN HEMT (sample C) in different states (as-grown, as-etched, and as-repaired). The same trend as in the experiments above is evident, and after the two treatments the electronic properties of the epitaxial wafer increase to a large extent. The red parts of Figure 5 show the product of the Ns and the μ of sample C in different states. The combined method can recover the product value to 93% of its value before etching (as-grown).
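The Ns·μ product is a natural figure of merit here because the 2DEG channel sheet resistance is inversely proportional to it, Rsh = 1/(q·Ns·μ). The sketch below uses illustrative values (not the paper's measurements) to show that a 93% product recovery corresponds to a sheet resistance about 1/0.93, i.e., roughly 7.5% above the as-grown value.

```python
Q_E = 1.602e-19  # elementary charge, C

def sheet_resistance(ns_cm2, mu_cm2_vs):
    """Sheet resistance in ohm/sq of a 2DEG with density Ns (cm^-2) and
    mobility mu (cm^2/(V*s)): R_sh = 1 / (q * Ns * mu)."""
    return 1.0 / (Q_E * ns_cm2 * mu_cm2_vs)

# Illustrative as-grown values; a 93% product recovery scales R_sh by 1/0.93.
r_grown = sheet_resistance(1.0e13, 1800.0)
r_repaired = r_grown / 0.93
print(f"as-grown ~{r_grown:.0f} ohm/sq, repaired ~{r_repaired:.0f} ohm/sq")
```

With the illustrative numbers this gives a few hundred ohm/sq, a typical order of magnitude for AlGaN/GaN channels.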
Micromachines 2021, 12, x 5 of 9

Furthermore, following the experimental results on the AlGaN/GaN HEMTs (samples A, B, and C), we conduct a repair study on the p-GaN HEMT (sample D; the structure is shown in Figure 1b). The etching time is 7.5 min (the over-etching time is 1.5 min, the same as for the AlGaN/GaN HEMTs). Under the same experimental steps and repair conditions (BOE for 2 min, annealing at 500 °C for 3 min), we study the influence of the etching and repair processes on the Ns and the μ of sample D. It can be seen from the green parts of Figure 4a,b that, compared with sample C (as-grown), the Ns of the completely etched sample D (as-etched) is reduced by approximately 70%, and the μ is reduced by approximately 47%. After repair (as-repaired), the Ns and the μ increased by 64% and 75%, respectively (compared with sample C (as-grown)). The green parts of Figure 5 show the product of the Ns and the μ of sample D in different states. The product value decreases after etching and recovers to 61% after repairing (compared with sample C (as-grown)). It can be seen that, under the same repair conditions, the recovery degree of sample D is lower than that of sample C (which reaches 93%). The difference in recovery degree may be caused by the different thickness of the AlGaN layer remaining after etching; however, the trends shown are consistent.
Finally, we fabricate the p-GaN HEMT device; the schematic cross-sectional structure is shown in Figure 6a. The device fabrication starts with the mesa isolation. Then, the ICP equipment is used to etch the p-GaN layer in the non-gate area (a p-GaN length of 3 μm remains), with Cl2/N2/O2 as the etching gases. The etching time is 7.5 min (over-etching 1.5 min). Figure 6b shows the focused ion beam (FIB) cross section near the gate region of the p-GaN HEMT after etching; a step of about 60 nm is clearly visible. Figure 6c,d show the atomic-force microscope (AFM) surface morphology of the non-etched and etched areas of the p-GaN HEMT, respectively. The root mean square (RMS) roughness of the etched area is significantly increased. Then, the BOE treatment is carried out for 2 min, and 500 °C annealing is carried out for 3 min. After that, the metal layers Ti/Al/Ni/Au (20/160/55/50 nm) are deposited by magnetron sputtering, followed by annealing at 870 °C in N2 ambient for 30 s to form ohmic contacts. Then plasma-enhanced chemical vapor deposition (PECVD) is used to deposit the SiNx dielectric layer, and reactive ion etching (RIE) is used to define the source contact, the gate contact (etching length 2 μm), and the drain contact. The gate metal is Ni/Au. The device has a gate length (Lg) of 3 μm, a gate width (Wg) of 100 μm, a gate-source spacing (Lgs) of 5 μm, and a gate-drain spacing (Lgd) of 10 μm.
Figure 7a shows that the threshold voltage (Vth) of the repaired device is 1.6 V (defined at Ids = 1 mA/mm [27]), and the maximum transconductance (gmax) is 68 mS/mm (at Vgs = 4.4 V). The Ion/Ioff ratio is about 10^7. It can be seen from Figure 7b that when the gate leakage current reaches 0.01 mA/mm, the maximum gate voltage (Vgs,max) is 7 V. The maximum current (Id,max) is 153 mA/mm (Figure 7c), and the on-resistance (Ron), obtained from the slope of the output characteristic curves, is 23 Ω·mm at Vgs = 7 V. At the same time, it can be seen that at Vgs = 7 V the output current decreases, which may be due to self-heating effects [28]. Table 1 summarizes and compares the performance of the traditional p-GaN HEMT fabricated in this paper with those of other research institutions. The fabricated device has a large Vth and a large Vgs,max, while its Id,max is smaller than those of other institutions because its Lg, Lgs, and Lgd are relatively large.
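The on-resistance quoted above is extracted from the slope of the linear region of the output curves (Ron = ΔVds/ΔIds, width-normalized). A minimal sketch of that extraction, using synthetic points constructed to be consistent with Ron = 23 Ω·mm (not digitized from Figure 7c):

```python
def on_resistance_ohm_mm(vds, ids_ma_per_mm):
    """Estimate R_on (ohm*mm) from two points in the linear region of an
    output curve: R_on = dVds / dIds, with Ids converted from mA/mm to A/mm."""
    dv = vds[-1] - vds[0]
    di = (ids_ma_per_mm[-1] - ids_ma_per_mm[0]) / 1000.0  # mA/mm -> A/mm
    return dv / di

# Synthetic linear-region points built to give R_on = 23 ohm*mm:
vds = [0.0, 0.5, 1.0]
ids = [v / 23.0 * 1000.0 for v in vds]  # mA/mm
print(f"R_on ~ {on_resistance_ohm_mm(vds, ids):.1f} ohm*mm")  # -> R_on ~ 23.0 ohm*mm
```

In practice the slope would come from a least-squares fit over the low-Vds region rather than two endpoints.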
Conclusions
In summary, this paper has thoroughly studied the influence of the repair process on the electrical properties of the AlGaN/GaN HEMTs and the p-GaN HEMT. We analyzed the possible mechanisms of the different repair methods and optimized the repair conditions. Using the optimized conditions (BOE for 2 min and annealing at 500 °C for 3 min), the product of the Ns and the μ of the AlGaN/GaN HEMT can be recovered to 93%, and that of the p-GaN HEMT to 61% (compared with sample C (as-grown)). Furthermore, we fabricated the p-GaN HEMTs, and the repaired device shows good performance. The repair research in this paper is of great significance for p-GaN HEMT device fabrication. | 6,865.6 | 2021-01-26T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Asymmetric Extraction Treatment in a Middle-Aged Patient with Dental Crowding and Protrusion using Clear Aligners
Frequently, orthodontic treatment involves symmetrically extracting premolars to correct severe crowding or protrusion. Nevertheless, in some cases a more reasonable alternative may be to remove teeth with poor prognoses to improve protrusion and relieve crowding. A middle-aged woman sought treatment for dental protrusion and crowding. Her mandibular right first molar had been treated with root canal therapy due to pulpitis, but she still felt discomfort. In addition, her maxillary left second premolar had become carious. Extraction of the maxillary right first premolar and left second premolar, as well as the mandibular right first molar and left first premolar, was chosen to resolve the occlusion problems. The patient opted for clear aligners for reasons of esthetics as well as comfort. Following orthodontic treatment, the patient attained properly aligned teeth, a pleasing smile, and a more harmonious facial profile. This case report demonstrates that, with proper planning, clear aligners are capable of handling challenging cases, including those involving middle-aged individuals and molar extractions.
Introduction
Dental crowding and protrusion are the most common malocclusions in the Asian population, and symmetric extraction of premolars and space closure are appropriate solutions. However, when the patient seeking treatment for severe crowding or protrusion has hopeless teeth other than premolars, an unusual treatment pattern of extracting these teeth can be preferentially considered.
The ongoing search for innovation in orthodontic treatment has boosted the evolution of clear aligners to offer patients more comfort, shorter treatment time, improved posttreatment stability, and fewer side effects [1]. Although initially intended for relatively simple cases, the scope of clear aligner technology has expanded to more complex malocclusions in recent years, such as severe protrusion. Nevertheless, because the materials of clear aligners are not rigid enough to retain their original shape, their application in tooth extraction cases is challenging. Extraction spaces can cause mesial tipping of adjacent teeth toward the space, leading to non-parallelism of roots, reduction of the extraction space, and anchorage loss, which eventually increases the difficulty of the treatment [2][3][4]. In other words, the virtual simulation of clear aligner treatment is more a process of changes in the shapes of clear aligners than a prediction of the final therapeutic effects. Thus, an adequate understanding of the properties of clear aligners and a reasonable design of tooth movements are essential.
This case report portrays an asymmetric extraction treatment in a middle-aged patient using clear aligners. To correct the existing malocclusion characterized by dental protrusion and crowding, the patient was managed with extractions of three premolars and one hopeless first molar.
Case Presentation
The patient was a 46-year-old woman, with the chief complaint of protrusive incisors and lips. Under extraoral examination (Figures 1(a), 1(b), and 1(c)), the patient exhibited a convex facial profile along with a reduced nasolabial angle. Additionally, there was noticeable strain on her circumoral muscles when closing her mouth. There was a slight right deviation of the chin, and the right half of the face appeared larger than the left. Intraoral examination (Figures 1(d), 1(e), 1(f), 1(g), 1(h), and 1(i)) demonstrated a Class I molar relationship and a Class II canine relationship on both sides, with an overjet of 5.9 mm and an overbite of 4.0 mm. Moderate crowding as well as a deep curve of Spee could be noted. The Bolton ratio of the anterior teeth was 79.0%. A large area of filling material in the mandibular right first molar and caries in the maxillary left second premolar could be observed. The periodontal status was unsatisfactory, with multifocal gingival inflammation characterized by erythema, edema, and bleeding on probing. Probing depths ranging from 2 to 6 mm were detected, with attachment loss measuring 2-5 mm, indicating the presence of periodontal pockets. Extensive gingival recession was noted, and grade III furcation involvement was detected in the mandibular right first molar.
In the initial cone-beam computed tomography (CBCT; Figures 2(a), 2(b), 2(c), 2(d), 2(e), and 2(f)), horizontal and vertical resorption of the alveolar bone was visible. Interdental bone loss was evident, with vertical and horizontal alveolar defects observed between teeth. Radiographic evidence of furcation involvement in the mandibular right first molar was also identified. The maxillary third molars and mandibular left third molar were missing. All other teeth were present, but the mandibular right third molar was horizontally impacted. The mandibular right first molar had been treated with root canal therapy several years previously due to severe caries but still exhibited obvious radiolucency in the furcation and apical regions, and root resorption was observed. The patient also complained that discomfort and pain had often occurred in this area during mastication. The maxillary left second premolar exhibited a radiolucency in the distal region of the crown.
A skeletal Class I relationship was confirmed by the lateral cephalometric analysis (Figures 2(g), 2(h); Table 1). The maxilla and mandible were normally placed with an average mandibular plane angle. Both the maxillary and mandibular incisors were proclined, resulting in the lips extending beyond the E-line. The treatment objectives included alleviating crowding and retracting anterior teeth to achieve optimal occlusion and improve the facial profile. Extractions were required to meet these therapeutic goals.
Before initiating orthodontic treatment, a comprehensive evaluation of the patient's condition and a thorough discussion were conducted. Multiple approaches were explored to improve dental and facial esthetics, taking into account the patient's chief complaint about the malocclusion.
Removal of four first premolars might have been the best course of treatment if all teeth were healthy. However, the mandibular right first molar, with its large area of filling material, exhibited signs of root resorption, and radiolucency in the furcation and apical region could be observed. After consultation and discussion, it was concluded that even if endodontic retreatment had been performed successfully on this tooth, an unfavorable prognosis might have been its eventual downfall. Therefore, the mandibular right first molar was at risk of future extraction and implant restoration. The patient refused this option of symmetric extraction of four first premolars.
The second option was to extract the mandibular right first molar and the first premolars in the other three quadrants. Nevertheless, if this option was selected, the maxillary left second premolar would have had to be treated with endodontic therapy due to severe caries before orthodontic treatment.
The third option was to extract the maxillary right first premolar and left second premolar and the mandibular right first molar and left first premolar. Due to the unusual extraction pattern and the large amount of space after the first molar extraction, the patient was informed that the treatment time might be extended accordingly. After being presented with all the benefits and drawbacks, the patient chose this treatment plan. Before orthodontic treatment was initiated, complete periodontal treatment, including scaling and root planing of all but the teeth that needed to be extracted, was performed. Meanwhile, oral hygiene instructions were provided to the patient. The indications for initiating orthodontic treatment included proper infection control, a full-mouth plaque index within 25%, a percentage of positive bleeding-on-probing sites less than 30%, and no residual pockets deeper than 5 mm. In the first phase of orthodontic treatment, 62 stages of aligners were designed, and the patient was instructed to change aligners every 10 days and to wear each for 22 hours per day. At stage 62 (Figures 3(a), 3(b), 3(c), 3(d), and 3(e)), all teeth had been aligned, with all the spaces completely closed. With the anterior teeth retracted, the facial profile and smiling esthetics had been substantially improved. The overbite and overjet also gradually improved throughout this time. However, the large range of tooth movements had caused a disordered curve of Spee with loss of incisor torque, distal tipping of the canines, and mesial tipping of the molars. The first phase ended with a bilateral posterior open bite.
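The periodontal indications above are simple threshold checks, so they can be expressed as a small helper. The function below is a hypothetical sketch (not from the report), with the thresholds taken from the text: plaque index within 25%, bleeding on probing below 30%, and no residual pockets deeper than 5 mm.

```python
def ready_for_orthodontics(plaque_index_pct, bop_pct, max_pocket_mm):
    """Check the periodontal criteria stated in the case report before
    starting aligner therapy (hypothetical helper; thresholds from the text)."""
    return (plaque_index_pct <= 25.0
            and bop_pct < 30.0
            and max_pocket_mm <= 5.0)

print(ready_for_orthodontics(20.0, 25.0, 4.0))  # meets all thresholds -> True
print(ready_for_orthodontics(20.0, 25.0, 6.0))  # residual 6 mm pocket -> False
```

The "proper infection control" criterion is clinical judgment and is not captured by a numeric check.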
Consequently, the first refinement was initiated and continued for about 8 months using 24 stages of aligners. This phase aimed to correct the torque of the incisors, upright the canines and molars, and close the open bite, with Class II elastics from the maxillary canines to the mandibular molars to settle the occlusion. After that, the posterior open bite showed gradual improvement but was still present, without any improper aligner fitting.
The second refinement was started with 15 stages of aligners aiming at leveling the curve of Spee by intruding anterior teeth and extruding posterior teeth, and this phase lasted for 5 months. The third refinement was done with 10 stages of aligners, with the objective of final detailing of the occlusion (Figures 3(f), 3(g), 3(h), 3(i), and 3(j)).
After three refinements, proper occlusion was finally achieved, orthodontic treatment was terminated, and all attachments were removed. Vacuum-formed retainers were provided for retention. The patient was instructed on full-day retention for 1 year followed by nighttime retention for at least 1 year. The total treatment duration was 39 months, with excellent patient compliance.
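A rough lower bound on the active treatment time follows from the stage counts reported above (62 + 24 + 15 + 10) and the stated 10-day change interval. The sketch below is illustrative arithmetic only; the reported 39-month total also includes gaps between phases and appointment scheduling.

```python
STAGE_INTERVAL_DAYS = 10  # aligners changed every 10 days (per the case report)

def active_wear_days(stage_counts, interval=STAGE_INTERVAL_DAYS):
    """Lower-bound estimate of treatment time: total stages x change interval.
    Does not include gaps between treatment phases."""
    return sum(stage_counts) * interval

days = active_wear_days([62, 24, 15, 10])
print(days, "days ~", round(days / 30.4, 1), "months")  # -> 1110 days ~ 36.5 months
```

The ~36.5-month estimate is consistent with the reported 39-month total once inter-phase gaps are added.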
The posttreatment records (Figures 4 and 5; Table 1) demonstrated all the treatment objectives were accomplished.The intraoral photographs and dental casts (Figures 4(d), 4(e), 4(f), 4(g), 4(h), and 4(i))indicated satisfactory dental alignment, the harmonious relationship of dental arch widths, and symmetric arches, and the excessive overjet and overbite had been relieved.Coordinated intercuspal occlusal contact was achieved.Although the overjet and overbite were still slightly deeper than normal, considering the patient's periodontal status was not very fine, further vertical and torque control of the anterior teeth had not been performed.However, because of gingival recession caused by aging and periodontitis, and the triangular-shaped crown form, black triangular spaces between anterior teeth formed inevitably.To mitigate the esthetic damage caused by the black triangles, following the completion of the orthodontic treatment, several treatment approaches were recommended to the patient.The first option was periodontal plastic surgery, which involved gingival papilla reconstruction through soft tissue grafting to overcome unsightly black triangles.The second option was the tooth recontouring procedure, which included ceramic veneer or composite resin restorations to reshape the teeth.The third option was tissue volumising, which involved injecting tissue volumisers, such as hyaluronic acid, to augment the gingival papilla and reduce the black triangles in the esthetic zone.However, the patient reported no significant concerns regarding the gingival recession, and therefore, suggestions for further esthetic enhancement were declined.The facial photographs (Figures 4(a 1) and superimposition (Figure 6) revealed that the protrusive incisors had been considerably retracted.The retraction and proper incisor position (Figures 5(b), 5(c), and 5(d)) contributed to an improved lip posture accordingly.The mandibular plane angle was maintained after the orthodontic 
treatment.
Discussion
In recent years, the proportion of adults among people seeking orthodontic treatment is increasing.It was reported that the ratio of adults in orthodontic patients grew from 15.4% to 21.0% between 1981 and 2017in the United States [5].A survey made by the British Orthodontic Society reported that the number of adult orthodontic patients in private practice in 2018 was 5% more than that in 2016 [6].The number of those who are middle-aged and above seeking orthodontic treatment is continuously increasing as well.An investigation pointed out that the ratio of middle-aged orthodontic patients in Asian countries with aging populations doubled between 2008 and 2012 [7].
It is widely approved that orthodontic treatment for middle-aged or elderly people is generally more challenging than that for adolescents or young adults due to various limitations [8].This is partly because of the reality that the prevalence rate of chronic periodontal diseases in middle-aged patients is higher [9,10].On the other hand, middle-aged patients' surrounding period ontium is generally more hypoactive in responding to orthodontic force owing to the aging changes.However, mounting evidence has been documented proving that when the preexisting period ontitis is under control and good habits of oral hygiene are developed, a history of periodontal disease with alveolar bone loss is not a contraindication to orthodontic treatment [11,12].Furthermore, positive effects of periodontal treatment followed by orthodontic treatment on increasing levels of periodontal clinical attachment and improving surrounding marginal bone height have also been reported [13].Although the initial periodontal status of middle-aged adults is generally unfavorable owing to greater loss of marginal bone, it has been reported that after orthodontic treatment, middleaged adults presented periodontal changes and outcomes similar to those of young adults [8].In this patient, due to the thin gingival biotype as well as the preexisting periodontal disease, the gradually exposed black triangle spaces The non-professionals have relatively low aesthetic sensitivity to the anterior teeth, and the patient was very satisfied with the treatment results and refused surgical treatment options for gingival recession.
In orthodontic treatments, premolars are the most frequently extracted teeth for correcting malocclusions [14].However, when a patient has teeth with poor prognosis other than premolars and extractions are ineluctable in correcting the malocclusion, the aforementioned teeth can be considered to be extracted other than healthy teeth [15].This patient had frequently felt pain and discomfort in the region of the mandibular right first molar during mastication before the initial visit, and diagnoses of root resorption and furcation lesion of the mandibular right first molar were made.In addition, a diagnosis of caries in her maxillary left second premolar was made.After a comprehensive assessment of the patient's condition, extractions of the maxillary right first premolar and left second premolar, and mandibular right first molar and left first premolar were determined to correct the malocclusion.
Adults often reject the use of fixed labial orthodontic appliances because of esthetic impairment.Clear aligners emerge as the times require.Clear aligners have currently been used for more complex orthodontic tooth movements, including tooth rotation, molar distalization, and dental expansion with the advancement of attachments and materials, and their application scope has been extended from non-extraction to extraction cases [4,[16][17][18].Therefore, according to the requirement of this patient, we chose clear aligners to overcome the aesthetic defects in the orthodontic treatment process.Due to the poor prognosis of the mandibular right first molar and maxillary left second premolar, a treatment plan of asymmetric extraction was indicated for this patient.Because asymmetric extraction produced different amounts of space in each quadrant, more accurate control of anchorage was required in designing the plan of (f) Case Reports in Dentistry aligner treatment.In the maxillary dentition, because the extraction sites were the right premolar and left second premolar, our design of tooth movements was to first distalize the left first premolar to the final position, and then start the aligning and retraction processes of anterior teeth.In the mandibular dentition, the right premolars were firstly distalized to the target position, then anterior teeth began to be aligned and retracted, and at the same time, mesial movement of the right second molar got started.From the perspective of the treatment results, our designs of anchorage control and sequence of tooth movements achieved ideal effects.Without skeletal anchorage devices, the processes of dentition aligning and space closure were completed, and no obvious loss of anchorage was observed.However, it is well known that the virtual simulation of clear aligner treatment is more a design of force application than modeling the final tooth position.Therefore, the actual tooth movements achieved after clear aligner 
treatment differs from those planned by the virtual setup.Even by using traditional fixed appliances, the treatment of a case with first molar extraction confronts great difficulties, and it is more challenging by using clear aligners.In this patient, roller-coaster effects including anterior interference, and posterior open bite occurred in the process of space closure, even if we had designed the control of anterior teeth torque and antitipping of posterior teeth.Research studies have demonstrated that the tooth movements designed in a virtual simulation of clear aligner treatment cannot be fully accomplished, ranging from 28% to 88% of the planned depending on the modes of tooth movements and the tooth types [18][19][20][21].Thus, additional clear aligners (refinements) aiming at torqueing and intruding anterior teeth, and uprighting posterior teeth were prescribed to this patient.From the perspective of biomechanics, the retraction force (on anterior teeth) and protraction force (on posterior teeth) exerted by clear aligners are applied on crowns and pass through the occlusal side of the center of resistances, leading to distal tipping of canines, mesial tipping of posterior teeth, and lingual tipping and extrusion of incisors.Therefore, for extraction cases, additional control of anterior teeth torque and antitipping of teeth adjacent to the extraction space should be designed to increase the expression rate of expected tooth movements [22].More specifically, to prevent unwanted tooth movements in extraction patients, designs of distal crown tipping of posterior teeth and mesial crown tipping of canines are necessary during space closure in clear aligner treatment [4].
Conclusion
Clear aligner treatment is a novel strategy to treat cases with asymmetric tooth extraction, and even if applied in middleaged patients or molar extraction cases, its treatment effects are still reliable.
Figure 2 :
Figure 2: Pretreatment radiographs.(a) Panoramic radiograph sectioned from CBCT.(b-d) Three-dimensional reconstruction of CBCT.(e) The mandibular right first molar exhibited obvious radiolucency in the furcation and apical region, with root resorption.(f) The maxillary left second premolar exhibited a radiolucency in the distal regions of the crown.(g) Lateral cephalogram.(h) Lateral cephalometric tracing.
), 4(b), and 4(c)) revealed a more harmonious and balanced soft-tissue profile, with favorable incisor exposure during a smile, and the dentition midlines were aligned to the facial midline.No change in breathing or discomfort of the temporomandibular joint was
Figure 6 :
Figure 6: Superimpositions of the pretreatment (red) and posttreatment (blue) cephalometric tracings (on the S-N plane at the S point).
The Effect of High Salt Diet on Renal Fibrosis Through CHOP Protein-Stimulated Apoptosis in a Rat Model
I. Background: Prolonged excessive salt intake is an important risk factor for the development of renal fibrosis. At the onset of renal tubular destruction, KIM-1 appears in urine. CHOP is an important apoptosis-stimulating protein. The aim of the present study was to investigate the effect of a high salt diet on the development of renal fibrosis through apoptosis. II. Methods and results: Twenty-five male Wistar rats were divided randomly into five groups and treated with 0%, 0.5%, 1%, 1.2%, or 1.5% NaCl dissolved in distilled water for 8 weeks. For confirmation of renal tubular destruction, urinary KIM-1 was measured. Slides of renal tissue were prepared and stained with Hematoxylin and Eosin and with Masson's Trichrome for fibrosis detection. To investigate the role of CHOP protein in the development of renal tubulointerstitial destruction, the relative gene expression of CHOP in renal tissue was analyzed by qRT-PCR. There were no significant differences in urea, creatinine, or total protein concentration between rats receiving different concentrations of NaCl and the control group. Urinary KIM-1 and the mRNA level of CHOP were significantly increased in rats treated with 1.5% NaCl compared to the control group. Mild renal fibrosis was also observed in the same group. III. Conclusion: Excessive salt intake can lead to fibrosis by increasing the expression of the apoptotic CHOP gene in renal tissue. KIM-1 is detectable in urine long before the development of renal fibrosis.
Introduction
Chronic kidney disease (CKD) is a global threat to public health and, if not treated in time, leads to renal failure (RF). In the early stages of CKD there are usually no clinical symptoms, and the disease does not become apparent unless there is a significant reduction in renal function. The total number of adults affected by CKD is 220 million men and 270 million women worldwide [1,2]. CKD is defined as tubulointerstitial destruction with a glomerular filtration rate (GFR) of less than 60 ml/min/1.73 m2 for at least 3 months [3,4]. Prolonged excessive salt intake has been identified as a risk factor for the development of renal fibrosis and CKD [5]. Excessive salt intake increases the osmotic pressure inside renal tubulointerstitial cells. High osmotic pressure inside the nucleus destroys the chromatin structure, which alters gene expression, for example reducing the expression of genes involved in DNA repair. Additionally, a high level of unfolded proteins in the cytoplasm of renal tubulointerstitial cells results in osmotic stress in the endoplasmic reticulum (ER) [6]. In other words, high osmotic pressure causes DNA damage, disruption of DNA repair systems, and ER osmotic stress. Due to the destruction of the DNA structure, the cell remains in the G2 phase of the cell cycle and cannot enter mitosis [7][8][9].
It has been demonstrated that inhibition of ER osmotic stress in salt-sensitive rats prevented the development of renal tissue fibrosis. This means that prolonged excessive salt intake may cause inflammation and fibrosis in the kidneys through ER osmotic stress [10]. Some in vitro studies indicated that ER osmotic stress increased the expression of the pro-apoptotic molecule CHOP (C/EBP Homologous Protein, GADD153), which promoted apoptosis by inhibiting the anti-apoptotic molecule BCL-2. It seems that CHOP is one of the key proteins in stimulating apoptosis [11][12][13]. Recent in vivo studies confirmed that rats with a defect in the CHOP gene did not develop inflammation and fibrosis in the kidneys [7,11].
CHOP is a 29 kDa protein consisting of 169 amino acids in humans and 168 amino acids in rodents.
BCL-2 is an important inhibitor protein in the apoptosis pathway. CHOP binds to BCL-2 and in this way stimulates apoptosis [14,15]. In apoptotic tissue, the inflammatory process will lead to fibrosis [7,12,16]. Kidney Injury Molecule 1 (KIM-1) is a 90 kDa transmembrane protein found in the membrane of renal tubular cells. The outer domain of KIM-1 separates from the membrane and enters the lumen of the renal tubules during apoptosis [17]. Urinary KIM-1 is recognized as a diagnostic marker for renal tissue destruction [1].
In an attempt to gain further insight into the effects of prolonged excessive salt intake on renal function, this study was carried out in a rat model. We hypothesized that a high salt diet could (a) increase the relative gene expression of CHOP in renal tissue, (b) increase urinary KIM-1, and (c) induce renal tissue fibrosis. Our study focused on the association between renal tissue fibrosis and urinary diagnostic markers and could be applied as a strategy to prevent the development of progressive CKD and RF.
Material And Methods
Research design: 8-week-old male Wistar rats, body weight 200-250 g, were purchased from the Pasteur Institute of Iran (IPI). Animals were housed under a 12-hour light/dark cycle at a stable temperature (21-23°C) and 55%±10% relative humidity. Rats were fed standard chow and water ad libitum. 25 animals were randomly divided into 5 groups as given below. NaCl (ACS reagent) was dissolved in distilled water and provided to the animals as drinking water: Group 1: distilled water as drinking water; Group 2: 0.5% w/v NaCl in distilled water; Group 3: 1% w/v NaCl in distilled water; Group 4: 1.2% w/v NaCl in distilled water; Group 5: 1.5% w/v NaCl in distilled water. Treatment was continued for 8 weeks. At the end of the treatment, 24-hour urine was collected with the aid of metabolic cages. The collected urine was stored at -80°C for measuring urinary KIM-1, creatinine, urea, and total protein concentration. Animals were anesthetized with diethyl ether, and kidney samples were then removed. Part of each kidney sample was immediately frozen in liquid nitrogen and transferred to a -80°C freezer for the qRT-PCR assay; the other part was fixed in 10% formalin for histopathology examination. Blood samples were taken by cardiac puncture; serum was separated and stored at -20°C for measuring creatinine and urea concentration.
Urea, creatinine, total protein and KIM-1 assay in serum and urine: Creatinine concentration was measured in urine and serum using a Pars Azmon kit, based on the Jaffe colorimetric method. Urea concentration was tested in urine and serum with a Pars Azmon kit, based on the kinetic urease method. Total protein concentration in urine was determined using a Grainer kit, based on the Biuret colorimetric method. KIM-1 concentration in urine was measured with a Crystalday ELISA kit.
RNA extraction and qRT-PCR analysis
Total RNA from the excised kidney tissues were isolated using the TRIzol extraction reagent (Invitrogen, 15596026), according to the manufacturer's recommendations. The integrity of mRNA was con rmed by electrophoresis in a denaturing 1% agarose gel. First strand cDNA was synthesized from total RNA with random hexamer primers using the RevertAid H Minus cDNA synthesis kit (Biofact, W2569-100).
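The relative expression values downstream of this workflow are conventionally computed with the 2^(-ΔΔCt) (Livak) method against a reference gene such as GAPDH. A minimal sketch, using hypothetical Ct values (the study's actual Ct data are not given here):

```python
# Hedged illustration of the 2^(-ΔΔCt) (Livak) relative-expression calculation
# commonly applied to qRT-PCR data. All Ct values below are hypothetical.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref               # ΔCt of the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl    # ΔCt of the control sample
    dd_ct = d_ct_sample - d_ct_control             # ΔΔCt
    return 2.0 ** (-dd_ct)                         # fold change vs. control

# hypothetical Ct values: target gene vs reference in treated and control tissue
fold_change = relative_expression(24.0, 18.0, 26.0, 18.0)
```

With these illustrative Ct values, ΔΔCt = -2 and the target gene is 4-fold upregulated relative to the control.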
Quantitative real-time PCR of GAPDH (reference gene) and CHOP was carried out using the specific primers listed in Table 1. The prepared slides were stained with Hematoxylin and Eosin (H and E) and with Masson's Trichrome, the specific staining method for tissue fibrosis detection. In Masson's Trichrome staining, fibrotic tissue areas appear blue and normal tissue areas red. From each microscopic slide, 5 fields were randomly selected, the ratio of blue to red tissue area was calculated with ImageJ software, and the average of these 5 was taken as the fibrosis severity. Samples were divided into 4 grades according to fibrosis severity: grade 0 was assigned if no fibrosis was observed, and grades 1, 2, and 3 were assigned according to increasing severity of fibrosis.
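The blue/red area ratio described above can be sketched in a few lines. This is not the authors' ImageJ workflow, just a crude illustration that classifies each RGB pixel by its dominant channel:

```python
# A minimal sketch (not the authors' ImageJ macro) of a blue/red area ratio
# for a Masson's trichrome image: count pixels whose dominant channel is blue
# (fibrotic) versus red (normal) and take their ratio.
def fibrosis_ratio(pixels):
    """pixels: iterable of (r, g, b) tuples."""
    blue = sum(1 for r, g, b in pixels if b > r and b > g)   # fibrotic area
    red = sum(1 for r, g, b in pixels if r > b and r > g)    # normal area
    return blue / red if red else float("inf")

# toy "image": one blue (fibrotic) pixel for every four red (normal) pixels
toy_pixels = [(200, 40, 60)] * 4 + [(40, 60, 200)]
score = fibrosis_ratio(toy_pixels)
```

A real pipeline would use color deconvolution rather than a per-pixel dominant-channel rule, but the ratio-of-areas idea is the same.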
Statistical Analysis
Data are expressed as means ± SD. Using SPSS version 16 software, the data were analyzed by one-way ANOVA followed by Duncan's test. A P value < 0.05 was considered statistically significant.
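The omnibus test used here can be reproduced without SPSS. A pure-Python sketch of the one-way ANOVA F statistic (Duncan's post-hoc test is not reproduced), applied to hypothetical three-group data:

```python
# Pure-Python one-way ANOVA F statistic: F = (SS_between/(k-1)) / (SS_within/(n-k)).
# The three groups of measurements below are hypothetical, for illustration only.
def one_way_anova_f(groups):
    k = len(groups)                                 # number of groups
    n = sum(len(g) for g in groups)                 # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F_stat = one_way_anova_f([[1.0, 1.2, 1.1], [1.1, 1.3, 1.2], [2.0, 2.2, 2.1]])
```

A large F (here the third group is clearly shifted) is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the P value.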
Blood and urinary biochemical variables
According to the data in Table 2 (page 12), there was no statistically significant difference in serum and urine urea, creatinine, and total protein concentration in groups 2-5 in comparison with the normal control group.
necrosis, and mild focal fibrosis. In animals of group 4, hyperemia, hemorrhagic foci, and degenerative changes were observed in the tubules. In some cases, evidence of Bowman's capsule dilatation, fibrosis, and mild inflammatory cell accumulation was also observed. In animals of group 5, evidence of epithelial cell membrane destruction, hydropic degeneration, glomerular fibrosis, accumulation of inflammatory cells, decreased glomerular space, coagulation necrosis, fibrosis foci, degenerative changes in some glomeruli, hyperemia, and mild to moderate bleeding was observed. The outstanding histopathological features mostly belonged to this group. As shown in Fig. 4 and Fig. 5 (pages 17-18), the fibrotic score of kidney tissue in group 5 showed a significant increase compared to the normal control group (p-value < 0.05).
It is noteworthy that the degree of fibrosis is mild to moderate, which is expected to have no significant effect on kidney function.
Discussion
The results of the present study indicated that consumption of NaCl in group 5 caused a statistically significant increase in the expression of the CHOP gene, as well as mild fibrosis, in kidney tissue. Biochemical indicators of renal function such as urea and creatinine in blood and urine, and total protein in urine, were not significantly different from the normal control group; however, the concentration of KIM-1 in urine increased significantly.
The results of the present study indicated that osmotic stress due to prolonged excessive salt consumption increased the expression of the apoptotic CHOP gene in kidney tissue. In 1998, a study was conducted on the mIMCD cell line. The cells were cultured in an isosmotic medium (300 mosmol/kg) and a hyperosmotic medium (300 mosmol/kg + 150 mM NaCl) for 48 hours. Consistent with the results of our study, in cells exposed to the hyperosmotic medium the expression of the GADD45 and GADD153 (CHOP) genes increased significantly [21].
Our findings demonstrated that urea and creatinine in blood and urine, and total protein in urine, of rats receiving different concentrations of NaCl were not significantly different from the normal control group. In a study carried out in 2007, spontaneously hypertensive rats (SHR) were divided into four groups receiving 0.6% (normal control), 4%, 6%, and 8% NaCl in the diet for 8 weeks. In contrast to the results of our research, in the groups receiving 6% and 8% NaCl, serum creatinine concentration, proteinuria, and albuminuria increased significantly compared to the normal control group. Additionally, a statistically significant decrease in GFR was observed due to mild renal tubular degeneration in the same groups [22].
Based on the results of the present study, mild renal fibrosis was observed in rats treated with 1.5% NaCl compared to the control group. In a study designed in 2011 on SHR rats, in groups that received 8% NaCl in the diet for 4 weeks, consistent with our results, significant glomerular damage and interstitial fibrosis were observed in comparison with the group that received 8% NaCl in the diet together with Losartan. This means that modulating osmotic stress with Losartan could prevent the development of renal tissue fibrosis [23].
The results of the present study demonstrated that 1.5% NaCl in drinking water for 8 weeks increased the expression of the CHOP gene in the kidney tissue of male Wistar rats. In 2015, researchers carried out a study on Sprague Dawley rats weighing 250 to 300 g. To induce osmotic stress, the animals were deprived of drinking water for 3 days. In the next step, the animals consumed drinking water containing 2% NaCl for 7 days. Under both conditions, water deprivation and consumption of water containing 2% NaCl, and consistent with the findings of our research, the expression of the CHOP gene in hypothalamic tissue was significantly increased in comparison with the normal control group [10].
In the present research we observed mild renal fibrosis in rats that received 1.5% NaCl in drinking water for 8 weeks. In 2015, a study was conducted on mice that had 5/6 of their kidneys removed. The animals were divided into three groups and exposed to a high-salt (4% NaCl, with Hydralazine), low-salt (0.02% NaCl), or normal (0.4% NaCl) diet for two weeks. At the end of the study, consistent with our results, it was found that salt induces stable renal fibrosis even while blood pressure is normalized with Hydralazine. Thus, a prolonged high salt diet causes renal tissue fibrosis and chronic progressive renal disease independently of blood pressure [24]. According to a study carried out in 2017, Dahl salt-sensitive (DSS) rats were divided into two groups receiving 2% NaCl and 8% NaCl in the diet for 5 weeks. In contrast to our results, no statistically considerable kidney tissue fibrosis was seen, and there was no statistically significant difference in serum creatinine and urea levels between the two groups. After 15 weeks, a statistically significant difference in serum creatinine and urea levels and notable kidney tissue fibrosis were observed between the two groups. Also, at the end of the 15th week, consistent with the results of our study, the expression of the KIM-1 gene in the kidney tissue of the group receiving 8% NaCl was significantly higher than in the group receiving 2% NaCl [25]. In studies conducted in 2017 and 2018 on male Wistar albino rats, consistent with the results of our research, significant tubular degeneration and renal tissue fibrosis were observed in the group receiving an 8% NaCl diet for 8 weeks, compared to the normal control group [26,27].
Conclusion
Osmotic stress due to prolonged excessive salt consumption can result in fibrosis through increased expression of the apoptotic CHOP gene in kidney tissue. At the onset of apoptosis, KIM-1 protein appears in urine. In other words, KIM-1 is detectable in urine before a significant part of the kidney tissue becomes fibrotic and renal function is impaired. Prior publication: Neither this manuscript nor one with substantially similar content under our authorship has been published or is being considered for publication elsewhere. We certify that all the data collected during the study are presented in this manuscript and no data from the study have been or will be published separately.
Conflict of Interest: All contributing authors declare no conflicts of interest.
Many-body chemical reactions in a quantum degenerate gas
Chemical reactions in the quantum degenerate regime are described by the mixing of matter-wave fields. In many-body reactions involving bosonic reactants and products, such as coupled atomic and molecular Bose–Einstein condensates, quantum coherence and bosonic enhancement are key features of the reaction dynamics. However, the observation of these many-body phenomena, also known as ‘superchemistry’, has been elusive so far. Here we report the observation of coherent and collective reactive coupling between Bose-condensed atoms and molecules near a Feshbach resonance. Starting from an atomic condensate, the reaction begins with the rapid formation of molecules, followed by oscillations of their populations during the equilibration process. We observe faster oscillations in samples with higher densities, indicating bosonic enhancement. We present a quantum field model that captures the dynamics well and allows us to identify three-body recombination as the dominant reaction process. Our findings deepen our understanding of quantum many-body chemistry and offer insights into the control of chemical reactions at quantum degeneracy. The study and control of chemical reactions between atoms and molecules at quantum degeneracy is an outstanding problem in quantum chemistry. An experiment now reports the coherent and collective reactions of atomic and molecular Bose–Einstein condensates.
Ultracold atoms and molecules form an ideal platform for controlling chemical reactions at the level of single internal and external quantum states. Ultracold molecules can be prepared in an individual internal state by, e.g., magneto- [1] and photoassociation [2] of ultracold atoms and direct laser cooling [3]. The external motion of molecules can be constrained by loading them into optical lattices [4] or tweezers [5]. These experiments have led to the realization of state-to-state ultracold chemistry [6][7][8].
A number of experiments on cold molecules have reached the regime of quantum degeneracy, which promises new forms of molecular quantum matter and reaction dynamics.
For instance, molecular Bose-Einstein condensates (BECs) formed in atomic Fermi gases have stimulated tremendous interest in the BEC-BCS (Bardeen-Cooper-Schrieffer) crossover [9,10]. Degenerate fermionic molecules are created by magnetoassociation of bosonic and fermionic atoms and optical transitions to the lowest rovibrational state [11]. Here quantum degeneracy suppresses chemical reactions due to the fermion anti-bunching effect [11].
Recently, molecular BECs have been realized based on atomic BECs near a Feshbach resonance [12]. The reactive coupling between condensed atoms and molecules promises a new regime of quantum chemistry, dubbed 'quantum superchemistry', which highlights the coherent coupling of macroscopic matterwaves and Bose stimulation of the reaction process [13,14]. A key feature of the coherence is the collective oscillation between the reactant and product populations. Because of Bose statistics, enhancement of the reaction dynamics is anticipated to significantly modify the branching ratio [15]. At quantum degeneracy, reaction dynamics come fundamentally from the mixing of the matterwave fields of reactants and products. For instance, consider Feshbach coupling, which converts two atoms into one molecule and vice versa, described by the chemical equation A + A ←→ A2. In a quantum gas, the reaction is described by a many-body Hamiltonian with reaction order α = 3:

H2 = g2(ψm† ψa ψa + h.c.) + εm ψm† ψm,   (1)

where ψa (ψm) is the atomic (molecular) field operator, g2 is the Feshbach coupling strength, and εm is the energy of one bare molecule relative to two bare atoms. Here we define the reaction order α as the maximum number of field operators in the reaction terms.

[Fig. 1 caption: Reactive coupling between atomic and molecular quantum fields. a, Bose-condensed atoms described by a single wavefunction ψa are coupled to molecules condensed in the state ψm. The coupling synthesizes and decomposes molecules. Wavy lines represent dissipation. b, We introduce a reaction potential V to describe the many-body dynamics of the atomic and molecular fields. A pure sample of atoms or molecules first relaxes towards lower potential and then equilibrates near the potential minimum. Due to bosonic stimulation, the potential scales as V ∝ N^α, where N is the total particle number and α is the reaction order, see text.]

arXiv:2207.08295v2 [physics.atom-ph] 10 Aug 2022
Another prominent example that couples ultracold atoms and molecules is three-body recombination, where three colliding atoms are converted into a diatomic molecule and another atom, and vice versa. This process is described by the chemical equation A + A + A ←→ A2 + A. At quantum degeneracy, the recombination process can resonantly couple the atomic and molecular fields as

H3 = g3(ψm† ψa† ψa ψa ψa + h.c.),   (2)

where g3 is the recombination coupling strength. Here the reaction order is α = 5.
To understand the dynamics of the coupled quantum fields, we present the following picture. We show that the molecular population Nm = ⟨ψm† ψm⟩ follows an "energy conservation" law of the form

ℏ²Ṅm²/2 + V = const.,

where ℏ²Ṅm²/2 resembles the kinetic energy and we introduce the many-body reaction potential V = ⟨[Nm, H]⟩²/2 + const. (see Supplement). In this picture, the system tends towards lower potential. Quantum fluctuations of the nonlinear field coupling, however, can effectively damp the dynamics of the populations [16,17]. In experiments, damping can also come from inelastic scattering and coupling to a thermal field. Thus one expects that the system first relaxes towards the potential minimum and then equilibrates near the minimum with small-amplitude coherent oscillations (see Fig. 1). In the thermodynamic limit with total particle number N ≫ 1, the reaction potential and the oscillation frequency near the minimum scale with the particle number as V ∝ N^α and ω0 ∝ N^(α/2-1). The dependence on the particle number signals bosonic enhancement of the reaction dynamics [13,16].
In this paper, we report the observation of coherent and Bose-stimulated reactions between Bose-condensed Cs atoms and Cs2 molecules. The reaction is initiated by tuning the magnetic field near a narrow g-wave Feshbach resonance, which couples scattering atoms and diatomic molecules in a single high-lying rovibrational state (see Supplement). Near the resonance, atomic and molecular populations quickly relax toward a dynamical equilibrium, followed by coherent oscillations between atoms and molecules in the equilibration process. We show that the oscillation frequency strongly depends on the particle number. From this dependence, we conclude that three-body recombination is the dominant reaction process coupling the atomic and molecular fields near the Feshbach resonance.
Our experiment starts with an ultracold Bose gas of 6 × 10³ to 5 × 10⁵ cesium atoms in an optical trap. The atoms can form a pure BEC either in a three-dimensional (3D) harmonic potential or in a 2D square-well potential [18]. We induce the reaction by switching the magnetic field near the g-wave Feshbach resonance, which can convert an atomic BEC into a molecular BEC [12]. We determine the resonance position B0 = 19.849(2) G, the resonance width ∆B = 8.3(5) mG, and the relative magnetic moment δµ = h × 0.76(3) MHz/G, where h is the Planck constant, from measurements of the molecular binding energy εm ≈ δµ(B - B0) and the scattering length (see Supplement). After the reaction, we decouple the atoms and molecules by quickly tuning the magnetic field far off resonance and image each species independently [12]. To show that chemical reactions follow different rules in a degenerate quantum gas versus a normal gas, we compare the molecule production rate for samples prepared above and below the BEC critical temperature Tc. We extract the molecule production rate coefficient β = Ṅm/(N0 n0) right after the magnetic field switch, where N0 and n0 are the initial total atom number and mean atomic density, respectively, see Fig. 2.
The measured molecule formation rate shows distinct behaviour in the two regimes. In a thermal gas with temperature T > Tc, the molecule formation rate is β = bCl ΓCl, where ΓCl and bCl are the classical atomic collision rate coefficient and the branching ratio into the molecular state, respectively. Near the resonance, the collision rate coefficient is unitarity limited, ΓCl ∝ ℏ²/(m √(m kB T)), where m is the atomic mass and kB is the Boltzmann constant. Our measurement in the thermal regime is consistent with this T^(-1/2) scaling. From the fit we extract the branching ratio bCl = 7(1)% (see Fig. 2b).
Entering the quantum degenerate regime T < Tc, we observe a steep drop in the rate coefficient, see Fig. 2b. At low temperatures, we model the rate coefficient as β = bQ ΓQ, where bQ and ΓQ are the branching ratio and the rate coefficient predicted by the universal theory in the quantum regime [21,22]. The model fits the measurement well, and we extract the branching ratio to be bQ = 3.9(3)%.

[Figure caption fragment: In the inset, we show the frequency dependence on the mean atomic density n0 and the associated fits, which yield the scaling ω0 ∝ n0^1.7(4). c, The reaction potential V ≡ V3 ≈ -4g3²N⁵fm(1-fm)⁴ of the three-body process described in Eq. (2) for different total particle numbers N (green solid lines).]

Close examination of molecule formation dynamics in atomic BECs reveals additional interesting features of quantum many-body reactions. To understand the underlying reaction processes, we study the atom loss rate γa = -Ṅa/N0 right after switching the magnetic field, where Na is the atom number (see Fig. 3c). Far from the resonance, |B - B0| ≫ ∆B, atoms decay slowly and the loss rate follows a symmetric Lorentzian profile centered at the resonance, γa ∝ (B - B0)⁻². We attribute this lineshape to the Feshbach coupling [19].
Near the resonance, the loss rate greatly exceeds the expectation from the Lorentzian profile. This rapid atom loss lasts only a few hundred µs and is accompanied by fast molecule production and heating of both atoms and molecules. We identify this fast process as the relaxation dynamics described in Fig. 1b. In order to characterize the enhanced reaction rate, we fit the loss rate near the resonance as γa ∝ [1 + |(B - B0)/δB|^ε±]⁻¹, from which we extract the exponents ε+ = 2.9(4) above the resonance and ε- = 6(2) below the resonance. Exponents ε± larger than 2 are consistent with enhanced atom loss near the resonance beyond the Lorentzian profile.
The relaxation dynamics stem from three-body recombination, evidenced by the fast heating of both species in the relaxation phase, see Fig. 3b [20]. In addition, the measured exponent ε+ = 2.9(4) from the enhanced atom loss is consistent with the predicted value of 3.5 for three-body recombination near a narrow Feshbach resonance [23]. We attribute the even larger exponent ε- = 6(2) below the resonance to bosonic enhancement of the three-body process.
Following the relaxation, both atomic and molecular populations oscillate for several ms before they slowly decay over a much longer time scale (see Fig. 3a,b and Supplement). The oscillation is consistent with the equilibration dynamics near the reaction potential minimum described in Fig. 1b. The frequency ω of the oscillation depends on the magnetic field and is well fit by ω = √(εm²/ℏ² + ω0²) (see Fig. 3d). Far from the resonance, the frequency approaches the molecular binding energy |εm|/ℏ. On resonance, with εm = 0, the frequency ω = ω0 is given by the collective reactive coupling between the atomic and molecular fields. Large-amplitude oscillations are also observed in samples with magnetic field modulation, see Supplement.
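Reading the fit as ω = √((εm/ℏ)² + ω0²), its two limits can be checked numerically (with ℏ = 1 and illustrative numbers, not the measured values):

```python
# Numerical illustration of the dispersion ω = sqrt((ε_m/ħ)² + ω0²):
# on resonance (ε_m = 0) the frequency reduces to ω0, while far from
# resonance it approaches |ε_m|/ħ. Units are arbitrary (ħ = 1).
import math

def oscillation_frequency(eps_m, omega0, hbar=1.0):
    return math.sqrt((eps_m / hbar) ** 2 + omega0 ** 2)

on_resonance = oscillation_frequency(0.0, 2.0)    # equals ω0
far_detuned = oscillation_frequency(100.0, 2.0)   # ≈ |ε_m|/ħ
```

This quadrature form is the same avoided-crossing-like dispersion one expects when a detuning εm/ℏ competes with a coupling-induced frequency ω0.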
To demonstrate the many-body nature of the reactive coupling, we probe the atom-molecule oscillations right on the Feshbach resonance with different initial atom numbers N0 and mean densities n0. After quenching the magnetic field, we observe that samples with higher populations and densities display faster oscillations, see Fig. 4, for BECs in a harmonic trap [25]. The particle-number dependence of the reactive coupling supports bosonic enhancement of the reaction process.
The scaling with respect to the particle number also reveals the underlying reaction mechanism. For the two-body process described in Eq. (1), we derive the effective potential V2 ∝ -g2²N³fm(1-fm)², where fm = 2Nm/N is the molecule fraction, from which the resonant oscillation frequency is calculated to be ω0 ∝ N^(1/5) in a harmonic trap (see Supplement). For the three-body recombination process described in Eq. (2), the effective potential is V3 ≈ -4g3²N⁵fm(1-fm)⁴, which yields the scaling ω0 ∝ N^(3/5). Our measurement agrees well with the three-body model, see Fig. 4b.
Moreover, we find that the molecule fraction oscillates around 20(1)% in the equilibration phase, which is consistent with the position of the minimum of the reaction potential V3 at fm = 1/5 (see Fig. 4c,d). The two-body Feshbach process, on the other hand, predicts a different minimum of V2 at fm = 1/3.
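Assuming the reaction potentials scale as V2 ∝ -fm(1-fm)² and V3 ∝ -fm(1-fm)⁴ (positive prefactors dropped), the stated equilibrium molecule fractions follow from simple calculus, and a quick grid search confirms them:

```python
# Numeric check of the equilibrium molecule fractions: V2 ∝ -fm(1-fm)²
# is minimized at fm = 1/3 and V3 ∝ -fm(1-fm)⁴ at fm = 1/5
# (d/dfm [fm(1-fm)^k] = (1-fm)^(k-1) (1-(k+1)fm) vanishes at fm = 1/(k+1)).
def argmin_on_grid(potential, steps=100_000):
    points = [i / steps for i in range(steps + 1)]
    return min(points, key=potential)

fm_two_body = argmin_on_grid(lambda f: -f * (1 - f) ** 2)     # ≈ 1/3
fm_three_body = argmin_on_grid(lambda f: -f * (1 - f) ** 4)   # ≈ 1/5
```

The observed ~20% equilibrium fraction thus discriminates cleanly between the two candidate reaction orders.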
To conclude, we have observed collective many-body chemical reactions in an atomic BEC near a Feshbach resonance. The dynamics are well described by a quantum field model derived from three-body recombination. In particular, the coherent oscillations of the atomic and molecular fields in the equilibration phase support quantum coherence and Bose enhancement of the reaction process. The observation of coherent and collective chemical reactions in the quantum degenerate regime paves the way to exploring the interplay between many-body physics and ultracold chemistry.
Our experiment starts with an ultracold Bose gas of 6,000 to 470,000 133Cs atoms at a temperature of 2 to 232 nK in a 3D harmonic trap. We tune the temperature and atom number by changing the trap depth at the end of the evaporation process [26]. The harmonic trap frequencies are (ωx, ωy, ωz) = 2π × (24, 13, 74) Hz to 2π × (36, 15, 91) Hz. The atoms are polarized into the hyperfine ground state |F = 3, mF = 3⟩, where F and mF are quantum numbers for the total spin and its projection along the magnetic field direction, respectively. The narrow g-wave Feshbach resonance couples Cs atoms into Cs2 molecules in |f = 4, mf = 4; l = 4, ml = 2⟩, where f and l represent quantum numbers for the sum of the spins of the two individual atoms and the orbital angular momentum of the molecule, and mf and ml are the projections of f and l along the magnetic field direction [27].
To induce the molecule formation dynamics, we quench the magnetic field close to the resonance position B0 from 19.5 G, where the samples are prepared. After holding for variable times, we switch the field back to either 19.5 G or 17.17 G to decouple atoms and molecules. We can image the remaining atoms at this field by absorption imaging. We can also wait for the remaining atoms to fly away after a resonant light pulse and image the molecules by jumping the field up to 20.4 G to dissociate them into atoms and then imaging the atoms from the dissociation [12]. For the atom loss measurements shown in Fig. 3c, BECs with ∼40,000 atoms are transferred from the harmonic trap to a 2D flat-bottomed optical potential before we quench the field to different values near the resonance [18]. For the rest of the data shown in Figs. 2-4, we start from atomic samples in the 3D harmonic dipole trap.
To measure the temperature of the atoms or molecules (e.g., as shown in Fig. 3b), we release them into a horizontally isotropic harmonic trap for a quarter of the trap period, which maps the particle distribution from real space to momentum space [28]. We extract the temperature T by fitting the momentum distribution, with the condensate around zero momentum excluded, using the Gaussian function n(kr) ∝ exp(-ℏ²kr²/2mkBT), where kr is the radial wave number and kB is the Boltzmann constant.
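Assuming the standard thermal form n(kr) ∝ exp(-ℏ²kr²/2mkBT), the temperature can be read off from a log-linear least-squares fit of ln n against kr². A sketch with synthetic noiseless data and ℏ = m = kB = 1:

```python
# Sketch: extract T from a thermal momentum distribution
# n(k_r) ∝ exp(-ħ²k_r²/(2 m k_B T)) via a least-squares line fit of
# ln n versus k_r². In units with ħ = m = k_B = 1, the slope is -1/(2T).
import math

def fit_temperature(kr_values, n_values):
    x = [k * k for k in kr_values]             # k_r²
    y = [math.log(n) for n in n_values]        # ln n(k_r)
    npts = len(x)
    x_mean, y_mean = sum(x) / npts, sum(y) / npts
    slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
    slope /= sum((xi - x_mean) ** 2 for xi in x)
    return -1.0 / (2.0 * slope)                # invert slope = -1/(2T)

T_true = 0.5
kr = [0.1 * i for i in range(1, 30)]
n = [math.exp(-k * k / (2.0 * T_true)) for k in kr]
T_fit = fit_temperature(kr, n)
```

With real images one would fit a 2D Gaussian and mask the condensate peak, but the slope-to-temperature conversion is the same.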
II. DETERMINATION OF THE FESHBACH RESONANCE POSITION AND WIDTH
To determine the position of the narrow g-wave Feshbach resonance in our system, we measure the molecular binding energy at different offset magnetic fields using magnetic field modulation spectroscopy [29,30] and find the field value where the binding energy reaches zero. We hold the magnetic field at an offset value Bdc near the resonance and simultaneously modulate the field sinusoidally with an amplitude Bac = 5 mG for 5 ms. We scan the modulation frequency and measure the spectrum of the remaining atom number. From the atom-loss peak of the spectrum, due to the conversion of atoms into molecules, we extract the resonant frequency that corresponds to the molecular energy at the offset magnetic field Bdc near the g-wave Feshbach resonance [28,29], see Fig. S1. We have confirmed that the resulting atom-loss peak position is not sensitive to the modulation amplitude and modulation time.
A linear fit to the data in Fig. S1 gives the resonance position B0 = 19.849(1) G where the molecular energy goes to zero. The slope of the linear fit gives the magnetic moment difference between two bare atoms and one bare molecule as δµ = h × 0.76(3) MHz/G, which is consistent with Ref. [27]. We emphasize that for the narrow resonance we are using, the molecular energy approaches zero quadratically only within a small fraction of the resonance width. Our linear fit to the molecular energy data underestimates the resonance position by ∼0.3 mG, based on our calculation using the resonance width from the following scattering length measurements [19]. The systematic error of our calibration of the absolute magnetic field is less than 20 mG. Throughout this work, we perform the magnetic field calibration based on the same procedure to ensure a constant systematic error.
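The zero-crossing extraction can be sketched with a simple linear fit. The data points below are synthetic, chosen only to reproduce the quoted slope and resonance position:

```python
# Illustrative sketch (synthetic numbers): locate the resonance position B0 as
# the zero crossing of a linear fit to molecular-energy data, and read off the
# magnetic moment difference from the slope, as done for Fig. S1.
import numpy as np

B = np.array([19.70, 19.74, 19.78, 19.80, 19.82])   # offset field, G
nu = 0.76 * (19.849 - B)                            # molecular energy / h, MHz (synthetic)

slope, intercept = np.polyfit(B, nu, 1)
B0 = -intercept / slope      # zero crossing -> resonance position, G
delta_mu = -slope            # magnetic moment difference / h, MHz/G
```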
Next we measure the s-wave scattering length near the resonance to obtain the resonance width. Here the scattering length is inferred from the expansion of a quasi-2D BEC prepared with trap frequencies (ωx, ωy, ωz) = 2π × (11, 13, 895) Hz. During the expansion, the mean-field interaction energy is converted into kinetic energy. We first prepare the BEC at an initial magnetic field Bi = 20.481 G or 19.498 G where the scattering length is ai. The column density distribution of atoms in the Thomas-Fermi regime is [32]

n(x, y) = max[(µ − mωx²x²/2 − mωy²y²/2)/g2D, 0], (S1)

where g2D = (ħ²/m)√(8π) ai/lz is the coupling strength, lz = √(ħ/mωz) is the harmonic oscillator length in the tightly confined z direction and µ = √(g2D N m ωx ωy/π) is the chemical potential determined by g2D, the total atom number N and the initial trap frequencies ωx and ωy in the horizontal plane. Then we quench the magnetic field to a different value Bf where the scattering length is af and simultaneously switch off the harmonic trap in the horizontal plane. According to Ref. [33], the dynamics of a BEC after the release follow a simple dilation with scaling parameters λx(t) and λy(t), which determine the density distribution at time t as

n(x, y, t) = n(x/λx, y/λy, 0)/(λx λy), (S2)

where the scaling parameters evolve according to

λ̈j = (af/ai) ωj²/(λj λx λy), j = x, y. (S3)

We scan the magnetic field and measure the Thomas-Fermi radii Rj = √(2µλj²(t)/mωj²) with j = x, y after 20 ms expansion. Eventually we extract af based on its one-to-one correspondence to the Thomas-Fermi radii according to Eq. (S3). The results are summarized in Fig. S2 and we fit the scattering length data using the formula [19]

a(B) = a_bg [1 + η(B − B0)][1 − ∆B/(B − B0)], (S4)

where we obtain the resonance width ∆B = 8.3(5) mG, the resonance position B0 = 19.861(1) G, the background scattering length on resonance a_bg = 163(1) a0 and the slope of the background scattering length η = 0.31(2)/G. The background scattering length a_bg and the slope η are consistent with Ref. [34] and the resonance width ∆B is consistent with Ref.
[35], where a different method is used. The fitted resonance position deviates from that in the binding energy measurement by ∼10 mG, which we attribute to the heating of atoms near the resonance. The binding energy measurement, however, suffers less from the heating issue [29]. Throughout the whole paper, we adopt the resonance position B0 = 19.849(1) G from the binding energy measurement.
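For reference, the fitted resonance formula a(B) = a_bg[1 + η(B − B0)][1 − ∆B/(B − B0)] can be evaluated directly from the quoted fit values. This is a minimal numerical sketch, not a refit of the data; the evaluation point 1 G above the resonance is arbitrary:

```python
# Sketch: evaluate the fitted Feshbach-resonance parameterization of the
# s-wave scattering length using the fit values quoted in the text.
a_bg = 163.0      # background scattering length on resonance, units of a0
eta = 0.31        # slope of the background scattering length, 1/G
B0 = 19.861       # resonance position from this measurement, G
dB = 8.3e-3       # resonance width, G

def a_of_B(B):
    """a(B) = a_bg [1 + eta (B - B0)] [1 - dB / (B - B0)]."""
    return a_bg * (1.0 + eta * (B - B0)) * (1.0 - dB / (B - B0))

# Far above the resonance the scattering length approaches the (sloped)
# background value, since the resonant term dB/(B - B0) becomes small.
a_far = a_of_B(B0 + 1.0)
```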
III. MANY-BODY CHEMICAL REACTIONS IN QUANTUM DEGENERATE REGIME
Here we derive the effective potential V for three-body and two-body processes and calculate the minimum position of V and the oscillation frequency of the system near the minimum. We start from the many-body Hamiltonian for the three-body recombination process A + A + A ←→ A₂ + A near a Feshbach resonance, where we only consider the ground states for atoms and molecules, since their associated coupling terms dominate over others that involve excited states, due to the macroscopic population in the ground state of atoms that we begin with.
To characterize the evolution of the system, we derive the equations of motion for the atomic population ⟨Na⟩ = ⟨ψa†ψa⟩ and the molecular population ⟨Nm⟩ = ⟨ψm†ψm⟩, which depend only on the number operators and conserved quantities of the system [36,37], where Ĥ is the total energy. We introduce the many-body reaction potential V3 through ħ²⟨Ṅm⟩²/2 + V3(⟨Nm⟩) = const., as shown in Eq. (3), and obtain it under the boundary condition V3|Nm=0,Na=N = 0, where the three-body polynomial is P3(x, y) = x[(y + 1)(x − 1)(x − 2)² + yx(x + 1)(x + 2)] and N = ⟨Na⟩ + 2⟨Nm⟩ is the conserved total population that commutes with the Hamiltonian.
In our experiment, the atoms and molecules have macroscopic populations in the reaction process, Na, Nm ≫ 1. We may replace all the operators by their expectation values and express the reaction potential in terms of the total particle number N and the molecular fraction fm = 2Nm/N, where the dimensionless detuning and total energy are ε̄ = εm/(g3N^(3/2)) and H̄ = H/(g3N^(3/2)), respectively. In the thermodynamic limit, where the particle number N ≫ 1, the potential is reduced to the curves shown in Fig. S3a for different detunings. The potential minimum fm0 satisfies ∂fm V3|fm0 = 0. On resonance, ε̄ = 0, the molecule fraction at the minimum is fm0 = 1/5 and the oscillation frequency ω0 of the system around the minimum, determined by the curvature of the potential V3 at fm0, is ω0 = 16g3N^(3/2)/5. At finite detuning, the minimum position fm0 and the oscillation frequency ω can be solved numerically from the condition ∂fm V3|fm0 = 0. In the large detuning limit ε̄ ≫ 1, the frequency approaches the absolute value of the molecular energy |εm|. For the nonuniform Thomas-Fermi density distribution of a BEC in a harmonic trap, the coupling strength depends on the particle number and is given by g̃3 = g3/Ω^(3/2), where g3 is a coupling constant, the effective trap volume is Ω = (14π/15^(2/5))(Na ā⁴)^(3/5) and the oscillator length ā is determined by the trap frequencies as ā = √(ħ/[m(ωx ωy ωz)^(1/3)]) [25]. Thus on resonance the frequency scales as ω0 ∝ N^(3/5). Note that in our experiment we measure the dependence of the oscillation frequency ω0 on the initial total atom number N0 and we assume that the atom number in the equilibration phase is proportional to N0. By fitting the data in Fig.
4b, we extract the three-body coupling constant g3/h = 5.5(5) × 10⁻¹⁸ cm^(9/2)/s. For a uniform system, the weight reduces to wB = f²BEC, where fBEC = NB/N is the BEC fraction. Since we perform the rate coefficient measurements in a harmonic trap, the density distributions are nonuniform and the weight is enhanced, wB > f²BEC. Our two-component model captures the transition of the measured rate coefficients around Tc very well, see Fig. S4b.
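The free-exponent power-law fit used for the frequency scaling can be sketched as a linear fit in log-log space. The data below are synthetic with a known exponent, standing in for the measured frequencies:

```python
# Sketch (synthetic data): fit omega0 = A * N0**alpha with a free exponent by
# linear regression in log-log space, the kind of fit that yields the scaling
# omega0 ∝ N0^0.7(2) quoted for Fig. 4b.
import numpy as np

N0 = np.array([1e4, 2e4, 4e4, 8e4])     # initial atom numbers (illustrative)
omega0 = 2.0 * N0 ** 0.6                # synthetic frequencies, exponent 0.6

alpha, logA = np.polyfit(np.log(N0), np.log(omega0), 1)
```

On noiseless synthetic data the fitted exponent reproduces the input exactly; on real data the scatter of the points sets the quoted uncertainty.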
V. EXTRACTION OF MOLECULE OSCILLATION FREQUENCY AND ATOM LOSS RATE
We use the following function to fit the data in the equilibration phase at 1 ms < t < 3 ms shown in Fig. 3 for the extraction of molecule oscillation frequencies [39]:

Nm(t) = Nm(0) e^(−γ1 t) + ∆Nm e^(−γ2 t) sin(ωt + φ), (S27)

where the fitting parameters are the molecule number Nm(0) extrapolated to time t = 0, the decay rates γ1 and γ2, the oscillation amplitude ∆Nm, the oscillation frequency ω and the phase φ. Here the two decay rates γ1 and γ2 characterize the decay of the molecule number and the damping of the molecule oscillation amplitude, which are generally different.
For the data shown in Fig. 4, we fit the data at 0.3 ms < t < 3 ms using the function

Nm(t) = Nm(0) e^(−γt) + ∆Nm e^(−γt) sin(ωt + φ), (S28)

where we find that a single decay rate γ is enough to describe the data very well. For each fit, we subtract from the time t a delay time of 0.15 ms due to the finite speed of our magnetic field switch. To prevent the fits from getting stuck in a local optimum, we vary the initial guess of the frequency ω for the fits and use the result that has the minimum root mean square error.
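The multi-start strategy just described can be sketched as follows. The damped-oscillation fit function is an assumed form consistent with the parameters listed above, and all numbers are synthetic:

```python
# Sketch: fit a damped oscillation with several initial guesses for the
# frequency and keep the fit with the smallest RMS error, as described in
# the text. Fit function and data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def model(t, Nm0, gamma, dN, omega, phi):
    """Single-decay-rate damped oscillation of the molecule number."""
    return Nm0 * np.exp(-gamma * t) + dN * np.exp(-gamma * t) * np.sin(omega * t + phi)

rng = np.random.default_rng(0)
t = np.linspace(0.3e-3, 3e-3, 60)                       # time, s
true = (4000.0, 300.0, 400.0, 2 * np.pi * 1.4e3, 0.5)   # synthetic parameters
data = model(t, *true) + rng.normal(0.0, 20.0, t.size)  # add noise

best = None
for omega_guess in 2 * np.pi * np.array([0.5e3, 1e3, 2e3, 4e3]):
    try:
        popt, _ = curve_fit(model, t, data,
                            p0=(data[0], 100.0, 100.0, omega_guess, 0.0),
                            maxfev=20000)
    except RuntimeError:
        continue  # this start failed to converge; try the next guess
    rmse = np.sqrt(np.mean((model(t, *popt) - data) ** 2))
    if best is None or rmse < best[0]:
        best = (rmse, popt)

omega_fit = abs(best[1][3])   # fitted oscillation frequency, rad/s
```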
For the atom loss rate measurement shown in Fig. 3c, we present example time traces of the averaged atomic density in the 2D flat-bottomed trap in Fig. S5. Far from the resonance, see Fig. S5a and b, the atomic density decays slowly and we fit the data using

na(t) = na(0) e^(−γa t), (S29)

where na(0) is the initial atomic density and γa is the atom loss rate. The fit is applied to the data above half of the initial density.
Below and near the resonance, see Fig. S5c, we find that the density first decays rapidly and then settles around some equilibrium value before a slow decay kicks in on a time scale longer than 3 ms. In this case, we use the following fit function:

na(t) = na(0){θ(t0 − t) + [(1 − s)e^(−γa(t−t0)) + s]θ(t − t0)}, (S30)

where t0 is the time when the decay begins and s represents the fractional density the system settles to after the initial fast decay. On the other hand, above and near the resonance, see Fig. S5d, the data is fit well by a single exponential decay as in Eq. (S29).
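A minimal sketch of this piecewise function, with the Heaviside step function θ implemented via `np.where` (parameter values are illustrative):

```python
# Sketch of the piecewise fit function of Eq. (S30): constant density until
# t0, then a partial exponential decay toward a settled fraction s.
import numpy as np

def na_piecewise(t, na0, t0, gamma_a, s):
    """na(t) = na0 { theta(t0 - t) + [(1 - s) exp(-gamma_a (t - t0)) + s] theta(t - t0) }."""
    t = np.asarray(t, dtype=float)
    decayed = (1.0 - s) * np.exp(-gamma_a * (t - t0)) + s
    return na0 * np.where(t < t0, 1.0, decayed)

# Before t0 the density is flat; at long times it settles to the fraction s.
before = na_piecewise(0.0, 1.0, 0.2, 50.0, 0.6)
settled = na_piecewise(100.0, 1.0, 0.2, 50.0, 0.6)
```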
VI. AMPLIFICATION OF THE MOLECULE OSCILLATION THROUGH MAGNETIC FIELD MODULATION
We observe large-amplitude molecule number oscillations by applying additional external driving to atomic BECs near the Feshbach resonance. The coupled atomic and molecular BECs in our system effectively form a bosonic Josephson junction [40], in close analogy to Cooper-pair Josephson junctions in superconducting devices [41]. Inspired by the Shapiro resonance effect, where adding a small resonant ac component to an applied voltage enhances the dc tunneling current in a superconducting Josephson junction, we modulate the magnetic field at a frequency close to the free molecule oscillation frequency at a static field, with the hope of facilitating the reaction. At 2 mG below the resonance, the molecule number oscillates around 4,000 at 1.4 kHz, with a contrast of ∼9% (see Fig. S6b). After adding a magnetic field modulation with an amplitude of 4 mG around the same field, we find the oscillation contrast increases by a factor of 3 to 4, see Fig. S6c. We also see clear damping in the oscillations with the driving field, likely due to additional heating introduced by the driving.
FIG. 1. Reactive coupling between atomic and molecular quantum fields. a, Bose-condensed atoms described by a single wavefunction ψa are coupled to molecules condensed in the state ψm. The coupling synthesizes and decomposes molecules. Wavy lines represent dissipation. b, We introduce a reaction potential V to describe the many-body dynamics of the atomic and molecular fields. A pure sample of atoms or molecules first relaxes towards lower potential and then equilibrates near the potential minimum. Due to bosonic stimulation, the potential scales as V ∝ N^α, where N is the total particle number and α is the reaction order, see text.
FIG. 2. Comparison of molecule formation rate in classical and quantum degenerate regimes. a, Dynamics of molecule formation in an atomic gas after quenching the magnetic field 3(2) mG above the Feshbach resonance at B0 = 19.849(2) G. The solid lines are fits to the data in the initial growth stage for extraction of the molecule formation rate Ṅm. b, The extracted molecule formation rate coefficient β above and below the critical temperature Tc. The red line is a fit to the data based on the classical kinetic theory prediction β = bCl ΓCl, from which we obtain the classical branching ratio bCl = 7(1)% (see text). The blue line fits the data in the quantum regime with β = bQ ΓQ, which gives the quantum branching ratio bQ = 3.9(3)% (see text). The inset shows the rate coefficient normalized to the classical gas expectation ΓCl. In panel a error bars represent one standard deviation of the mean, estimated from 4-8 measurements. In panel b error bars represent 95% confidence intervals of the mean.
FIG. 3. Coherent reaction dynamics in quantum gases of atoms and molecules across a Feshbach resonance. a, Evolution of atomic and molecular populations in an atomic BEC after quenching the magnetic field 2(1) mG below the resonance. Solid lines are fits to capture the dynamics in the relaxation and equilibration processes. b, Effective temperatures determined from time-of-flight measurements of the atoms and molecules at the same field. Solid lines are guides to the eye. Insets in panels a and b are sample images of atoms and molecules after the time-of-flight (see Supplement). c, Loss rate of atoms immediately after the quench. Solid (empty) circles represent samples prepared below (above) the resonance. Green line is a Lorentzian fit with center at B0 and width ∆B. Magenta solid (dashed) line is a fit near and below (above) the resonance based on γa = γ0/[1 + |(B − B0)/δB|^(ν±)] (see text), from which we obtain the exponent ν− = 6(2) (ν+ = 2.9(4)) below (above) the resonance. Inset is a zoomed-in view near the resonance. d, Oscillation frequency of molecular populations from atomic samples at a mean BEC density of 2.9 × 10¹³ cm⁻³ and BEC fractions of 80% (red) and 60% (purple). Solid lines are empirical fits based on ω = √(δµ²(B − Bm)² + ω0²), where Bm and ω0 are fitting parameters. The values of Bm from the fits are consistent with the resonance position B0 within our measurement uncertainty. Dashed lines are the asymptotes ω = |δµ(B − Bm)|. Data in panels a and b are averages of 3-4 measurements, and error bars represent one standard deviation of the mean. Data in panels c and d are obtained from the fits, see Supplement, and error bars represent 95% confidence intervals.
FIG. 4. Bose-enhanced atom-molecule reaction dynamics on Feshbach resonance. a, Molecules formed in atomic BECs with different initial atom numbers N0 following the magnetic field quench to 1(1) mG below the Feshbach resonance. Solid lines are fits to the data. b, The extracted oscillation frequencies for different initial atom numbers N0. The red (magenta) solid line is a power law fit with the exponent given by the two- (three-) body model. The blue dashed line is a power law fit with a free exponent, which yields the scaling ω0 ∝ N0^(0.7(2)).
c, For N ≫ 1, the minimum occurs at molecule fraction fm ≡ 2Nm/N = 1/5. d, Evolution of the molecule fraction fm for different initial atom numbers N0. The mean molecule fractions in the equilibration phase, fm = 19(2)% and fm = 21(1)%, are consistent with the predicted minimum position of V3 at fm = 1/5. Here the uncertainties represent 95% confidence intervals. Data in panels a and d are averages of 3-5 measurements and error bars represent one standard deviation of the mean. Error bars in panel b represent 95% confidence intervals.
4a. Fitting the data, we obtain the scaling ω0 ∝ N, see Fig. 4b. Note that the two scalings are linked by n0 ∝ N0^(2/5).

FIG. S1. Bound state energy diagram for cesium atoms in the hyperfine ground state |F = 3, mF = 3⟩ and molecular energy measurement near the g-wave Feshbach resonance around 20 G using modulation spectroscopy. a, Energy diagram for Cs₂ molecular states close to the atomic scattering continuum, adapted from Fig. 22 in Ref. [19]. b, Molecular energy εm obtained from modulation spectroscopy at different offset magnetic fields. The solid line is a linear fit which reaches 0 at B0 = 19.849(1) G.
FIG. S2. Scattering length measurement near the narrow g-wave Feshbach resonance by time-of-flight. a, Atomic density distributions after 20 ms time-of-flight at different magnetic fields near the Feshbach resonance. The images with B < 19.865 G (B > 19.865 G) come from initial BECs prepared below (above) the Feshbach resonance. b, Scattering length extracted from the Thomas-Fermi radii in the time-of-flight images, see text. The circular (diamond) data points come from initial BECs prepared below (above) the resonance. The solid line is a fit to the data excluding the points at 19.858 G < B < 19.909 G based on Eq. (S4), from which we obtain the resonance width ∆B = 8.3(5) mG. The points at 19.855 G < B < 19.909 G are excluded because of the heating effect near the resonance. c, Total atom number extracted from the time-of-flight images.
FIG. S3. The reaction potential V3 for the three-body process and the oscillation frequency ω near the minimum of V3 at different detunings. a, The reaction potential calculated based on Eq. (S9) at ε̄ = 0 (blue), ε̄ = 0.5 (orange) and ε̄ = 1 (yellow). The region below the dashed line at V3 = 0 is where the system is allowed to reach based on the conservation law in Eq. (3). b, The oscillation frequency ω at different detunings εm (solid line). The dashed line represents the asymptote |εm| in the large detuning limit. On resonance, ε̄ = 0, the frequency is ω = 16g3N^(3/2)/5.
FIG. S5. Examples of atomic density evolution in a 2D flat-bottomed optical potential for the data presented in Fig. 3c. For data below the resonance, BECs are initially prepared at 19.5 G and the magnetic field is quenched to values between 0.05 and 1 G (panel a) and between 5 and 50 mG (panel c) below the resonance. Relaxation and equilibration phases are marked with different background colors in panel c. For data above the resonance, BECs are initially prepared at 20.4 G and the magnetic field is quenched to values between 0.1 and 1 G (panel b) and between 10 and 50 mG (panel d) above the resonance. Solid lines are fits for extracting the atom loss rates, see text.
FIG. S6. Amplification of coherent molecule oscillations through magnetic field modulation. a, Schematic diagram of molecule formation near the Feshbach resonance with additional sinusoidal magnetic field modulation, where the black wavy arrow represents the RF photon from the modulation. b, Evolution of the molecular population after the quench to a static magnetic field 2(1) mG below the resonance, where the contrast C of the molecule oscillation is 9(3)%. c, Time traces of the molecule number with magnetic field modulation at 0.85 (red), 1.71 (green) and 3.42 kHz (magenta) with modulation amplitude Bac ≈ 4 mG around the same field as in panel b. The solid lines are fits based on Eq. (S27). The contrast is defined as C = Nm(0)/∆Nm, see text. d, Frequency of the molecule oscillations extracted from the fits in panel c. Blue solid line is a linear fit without an offset, which gives a slope 0.97(3). The purple dashed line represents the molecule oscillation frequency in panel b.
Human Breast Cancer Stem Cells Have Significantly Higher Rate of Clathrin-Independent and Caveolin-Independent Endocytosis than the Differentiated Breast Cancer Cells
Abbreviations: ABC: ATP-Binding Cassette; ABCG2: ABC Transporter group G number 2; ABCG2/A12: the cloned ABCG2-specific aptamer #12; ALDH: Aldehyde Dehydrogenase; APC: Allophycocyanin; BCS: Breast Cancer Stem; BCS/A35: the cloned BCS cell-binding aptamer #35; BHK: Baby Hamster Kidney; BHK/ABCG2: human ABCG2 cDNA-transfected BHK cells; CDE: Clathrin-Dependent Endocytosis; CVDE: Caveolin-Dependent Endocytosis; DAPI: 4',6-diamidino-2-phenylindole; FACS: Fluorescence-Activated Cell Sorting; FBS: Fetal Bovine Serum; FITC: Fluorescein Isothiocyanate; hTF: Human Transferrin; IRB: Institutional Review Board; LacCer: Lactosylceramide; mAb: Monoclonal Antibody; MCD: Methyl-β-Cyclodextrin; MDC: Monodansylcadaverine; MDR: Multidrug Resistance; PBS: Phosphate-Buffered Saline; PCR: Polymerase Chain Reaction; PI: Propidium Iodide; ssDNA: Single-Stranded DNA; TR: Texas Red
Introduction
Breast cancer is the most common malignancy in women [1,2]. Although surgical removal of breast cancer plays a very important role in treating patients, treatments with chemotherapy often fail to eradicate the tumor. In fact, many cancers, including breast cancer, initially respond well to chemotherapy. Very often the tumors become resistant to anticancer drugs during or shortly after the chemotherapy [3], presumably because of the treatments' inability to eradicate the MDR cancer stem cells [4][5][6].
Cancer stem cells play very important roles in self-renewal, MDR and the generation of secondary tumors [7][8][9][10]. Their self-renewal, growth, differentiation and epithelial-to-mesenchymal transition are regulated by the Wnt, Notch, Hedgehog and transforming growth factor β signaling pathways [11][12][13][14][15][16][17]. For example, Wnt enhances C-myc and cyclin D1 expression [18,19] and induces oncogenic proliferation [20,21]. The Wnt and Notch signaling pathways also play a very important role in regulating the expression of ABC transporters [22]. Down-regulating Wnt signaling significantly decreased the expression of ABCB1 and ABCG2 [23][24][25]. In contrast, a Wnt agonist enhances the expression of β-catenin and increases the protein levels of ABCB1 and ABCG2 [23][24][25]. ABCG2 [26][27][28] is universally expressed in undifferentiated cancer stem cells [29,30] and its expression is shut down in many differentiated cells [29,31]. Thus, conventional chemotherapy may efficiently kill the bulk of differentiated drug-sensitive breast cancer cells, but not the MDR self-renewable BCS cells, leading to enrichment of the MDR BCS cells. The enriched MDR BCS cells have the ability to renew themselves and to differentiate into cancer cells. Thus, without eliminating the MDR BCS cells, it is impossible to eradicate breast cancers.
In order to target the ABCG2-expressing MDR BCS cells, we sought to isolate ABCG2-specific ligands that recognize the extracellular portions of this trans-membrane protein so that we can selectively deliver therapeutic agents into BCS cells by using ABCG2-specific ligand-coated liposomes harboring therapeutic agents. However, since normal stem cells and other tissues, such as hepatic stem cells [32], lung stem cells [33] and placenta cells [34], also express ABCG2, treatment with ABCG2-specific ligand-coated liposomes harboring therapeutic agents might cause side effects. Thus, we sought to isolate BCS cell-specific ligands. In this report, we have enriched and cloned ABCG2-specific aptamers and BCS cell-binding aptamers and found that BCS cells have a significantly higher rate of clathrin-independent and caveolin-independent endocytosis than the differentiated breast cancer cells.
Materials
The aptamer library, primers, Texas Red (TR)-labeled primers and Fluorescein Isothiocyanate (FITC)-labeled primers were derived from the Mayo Clinic Molecular Biology Core. Fresh human breast cancer specimens were derived from Mayo Clinic Arizona [collected by Dr. Pockaj upon written informed consent, based on her Institutional Review Board (IRB) # 2130-00 00 (titled "Cancer Tissue Study")]. Since isolation of BCS cells was performed in Dr. Chang's laboratory, Dr. Chang was also required to get approval from the Mayo Clinic IRB. Dr. Chang's IRB number for this study is # 10-005974 (titled "Roles of Wnt Signaling in Breast Cancer Stem Cells Self-renewal and Multidrug Resistance") and there was a special note to indicate that "Patients must consent under IRB # 2130-00 00 in order to participate in this study". In addition, "the IRB approves waiver of specific informed consent in accordance with 45CFR 46.116 (d) as justified by the investigator and waiver of HIPAA authorization in accordance with applicable HIPAA regulations." Furthermore, patients' information was de-identified. Fetal bovine serum (FBS) was from Gemini Bio-Products (Sacramento, California, USA). Collagenase IV, Red-taq DNA polymerase, propidium iodide (PI), the endocytosis inhibitors monodansylcadaverine (MDC), sucrose, genistein and methyl-β-cyclodextrin (MCD), and routine chemicals were from Sigma (St. Louis, Missouri, USA). Mounting media with 4',6-diamidino-2-phenylindole (DAPI) was from Vector Laboratory (Burlingame, California, USA). The Aldefluor kit, MammoCult media, supplementary bullet and 100 µm cell strainers were from StemCell Technologies (Vancouver, BC, Canada). HAM's nutrient mixture F12 was from JRH Biosciences (Lenexa, Kansas, USA). DMEM/F-12, RPMI-1640 and OptiMEM, the TOPO TA cloning vector, Alexa fluor 633-conjugated human transferrin (hTF) and BODIPY-labeled lactosylceramide (LacCer) were from Invitrogen (Grand Island, New York, USA).
Allophycocyanin (APC)-labeled ABCG2-specific monoclonal antibody (mAb) 5D3 was from Santa Cruz Biotechnology (Santa Cruz, California, USA). APC-labeled CD44 mAb was from BD Biosciences (San Jose, California, USA). Ultralow attachment plates were from Corning (Corning, New York, USA). Millicell chambered cell-culture glass slides were purchased from Millipore (Billerica, Massachusetts, USA).
Aldefluor assay and isolation of BCS cells
The isolation of BCS cells from breast cancer cell lines, including MCF-7 and MDA-MB-231, was performed by using the Aldefluor-assay kit [36]. The isolation of BCS cells from fresh human breast cancer specimens was performed according to the method described [36]. Briefly, human primary breast cancer specimens were chopped into small pieces, washed with HAM's nutrient mixture F12 supplemented with 1% penicillin/streptomycin and 1% fungizone, and then digested with collagenase type IV (100 U/ml in RPMI 1640 media supplemented with 5% FBS) and hyaluronidase (100 U/ml in RPMI 1640 media supplemented with 5% FBS) for approximately 2 h at 37°C. The dissociated cells were filtered through a sterile 100 µm cell strainer. The cells were suspended in Aldefluor-assay buffer containing 1.5 μM of the Aldehyde Dehydrogenase (ALDH) substrate BODIPY-aminoacetaldehyde in the presence or absence of 50 mM of the ALDH-specific inhibitor diethylaminobenzaldehyde, incubated for 30 min in the dark at 37°C, washed with Aldefluor-assay buffer three times, and 1 μg/ml PI was added to check the cell viability. The sorting gates were established based on the corresponding cells treated with the ALDH inhibitor. The ALDH-positive (ALDH+) and ALDH-negative (ALDH−) cells sorted out by a BD FACSAria flow cytometer were cultured in MammoCult media with supplementary bullet in ultralow attachment plates at low density (5,000 viable cells/ml).
Enrichment and cloning of ABCG2-specific aptamers or BCS cell-binding aptamers
DNA or RNA aptamers enriched by systematic evolution of ligands by exponential enrichment can specifically recognize their corresponding targets [37][38][39][40][41][42]. Based on the fact that BHK cells do not express human ABCG2 protein, whereas BHK/ABCG2 cells do [35], we enriched the ABCG2-specific aptamers by using the protocol described [43][44][45]. Briefly, BHK/ABCG2 cells were washed with wash buffer [25 mM glucose, 5 mM MgCl2 and 1 mg/ml bovine serum albumin in phosphate-buffered saline (PBS)] before incubation with the aptamer library 5'-ACGCTCGGATGCCACTACAG-60 randomized nucleotides-CTCATGGACGTGCTGGTGAC-3'. The aptamer library was heated to 95°C for 5 minutes and cooled down on dry ice to obtain single-stranded DNA (ssDNA). The ssDNA (10 nM) was mixed with aptamer binding buffer (0.1 mg/ml of yeast tRNA in wash buffer) and used to bind to the target BHK/ABCG2 cells on ice for 30 min. Upon washing away the unbound aptamers with wash buffer (3 times), the bound aptamers were eluted with aptamer binding buffer by incubating the samples at 95°C for 5 minutes. The eluates, after centrifugation at 13,000×g for 5 minutes, were used to bind to the parental BHK cells (counter-selection). The unbound aptamers were amplified by polymerase chain reaction (PCR) with forward primer 5'-ACGCTCGGATGCCACTACAG-3', reverse primer 5'-GTCACCAGCACGTCCATGAG-3' and Red-taq DNA polymerase. The PCR products were used to do another cycle of binding and counter-selection. To enrich BCS cell-binding aptamers, the mammosphere cells derived from MCF-7 were used for the binding (with the aptamer library) whereas the differentiated breast cancer MCF-7 cells were used for the counter-selection. Once the PCR-amplified aptamers reached the stage where they could not bind to the negative control cells, these aptamers were cloned into the TOPO TA cloning vector and sequenced.
Aptamer binding and antibody staining
ABCG2-specific aptamers or BCS cell-binding aptamers were amplified by PCR with either FITC- or TR-labeled primers, denatured at 95°C for 5 minutes and cooled down on dry ice to make ssDNA. The target cells were washed three times with PBS, incubated with aptamer binding buffer containing human Fc blocker on ice for 5 minutes and then mixed with FITC- or TR-labeled aptamers (125 ng/ml) and/or ABCG2- or CD44-specific mAbs on ice for 30 minutes in the dark. The stained cells were washed three times with ice-cold aptamer wash buffer, subjected to cytospin (for mammosphere cells grown in suspension) to attach the cells to the charged slides, mounted with anti-fade mounting media containing DAPI and then evaluated with a Carl-Zeiss confocal microscope. To separate the FITC-ABCG2/A12 aptamer-bound and unbound BHK/ABCG2 cells (Figure 1C and 1D), BHK and BHK/ABCG2 cells grown in 100 mm plates were detached from the plates by rubber-policeman scratch, stained with the FITC-ABCG2/A12 aptamer and sorted out by a BD FACSAria flow cytometer. To test whether trypsin digestion (to destroy the extracellular portions of the membrane surface proteins) would abrogate the aptamer or antibody binding, cells were digested with 0.25% trypsin at 37°C for 5 minutes and then stained with aptamers or antibodies as described above.
Inhibition of endocytosis
Cells were washed three times with PBS and incubated in optiMEM culture media in the absence or presence of clathrin-dependent endocytosis (CDE) inhibitors, such as sucrose (250 mM) or MDC (50 µM), or caveolin-dependent endocytosis (CVDE) inhibitors, such as genistein (25 µM) or MCD (2.5 µM), at 37°C for 30 minutes. The cells were washed with ice-cold aptamer binding buffer once, incubated in the same buffer for 5 min, stained with aptamers on ice for 30 minutes in the dark and then incubated with endocytosis markers, such as Alexa fluor 633-conjugated hTF (500 ng/liter) or BODIPY-labeled LacCer (500 nM), at 37°C in the dark for 15 minutes. The stained cells were washed three times with ice-cold aptamer wash buffer, subjected to cytospin to attach the cells to the charged slides, mounted with anti-fade mounting media containing DAPI and then evaluated with a Carl-Zeiss confocal microscope.
Mammosphere formation
The mammosphere formation assay was performed according to the method described [46]. Briefly, the cells sorted out by fluorescence-activated cell sorting (FACS) were plated in ultra-low attachment 96-well plates at a density of 1,000 cells per well in MammoCult media supplemented with growth factors. The numbers of mammospheres formed after 10-14 days of culture were counted under a light microscope.
Confocal microscopy
Confocal imaging was carried out on a Carl-Zeiss laser scanning microscope equipped with an Apochromat 63×/1.40 oil immersion objective lens suitable for evaluating fluorescence and differential interference contrast images. Excitation and emission wavelengths (nm) used for the fluorophores were: DAPI, 360/405; BODIPY and FITC, 488/500-530; Alexa fluor 633, APC and TR, 543/633. Representative confocal images were acquired, analyzed with the laser scanning microscope image browser and arranged in Adobe Photoshop CS3 software.
Statistical analysis
The results in Figure 3K and Figure S3C are presented as means ± SD. The two-tailed P values were calculated based on the unpaired t test using GraphPad Software QuickCalcs. By conventional criteria, if the P value is less than 0.05, the difference between two samples is considered statistically significant.
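The same unpaired two-tailed comparison can be reproduced with a standard library call; below is a minimal sketch using scipy rather than the GraphPad calculator named above, with synthetic illustrative numbers (not the study's data):

```python
# Sketch: unpaired two-tailed t test on two samples, applying the P < 0.05
# significance criterion described in the text. Numbers are illustrative only.
from scipy import stats

group_a = [17.8, 18.2, 18.5, 17.6, 18.1]   # e.g. % positive cells, sample A (hypothetical)
group_b = [0.6, 0.8, 0.7, 0.9, 0.5]        # e.g. % positive cells, sample B (hypothetical)

t_stat, p_two_tailed = stats.ttest_ind(group_a, group_b)  # unpaired, two-tailed by default
significant = p_two_tailed < 0.05
```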
Enrichment and cloning of ABCG2-specific aptamers
We had found that the parental BHK cells did not have the ability to transport methotrexate across biological membranes whereas the human ABCG2 cDNA-transfected BHK cells did [35], suggesting that: 1) the ABCG2 protein is located on the membrane surface; 2) the parental BHK cells do not express an endogenous counterpart transporter. Indeed, it had been proved that the majority of heterologously expressed human ABCG2 protein was located on the cell surface [47]. Therefore, the pair of BHK cells, i.e., the parental BHK cells and the BHK/ABCG2 cells, provided a good source of materials for us to enrich the ABCG2-specific aptamers. In order to do so, the ABCG2-expressing BHK/ABCG2 cells (used as binding materials) and the parental BHK cells (used as counter-selection materials) were used to enrich the human ABCG2-specific aptamers by employing the aptamer library and the protocol mentioned in Materials and Methods. After 5, 10 or 15 cycles of binding and counter-selection, the enriched aptamers were labeled with FITC and used to stain the parental BHK cells and the ABCG2-expressing BHK/ABCG2 cells. The results in Figure S1 indicated that, after 15 cycles of binding and counter-selection, the enriched aptamers bound to the ABCG2-expressing BHK/ABCG2 cells (Figure S1B), but not to the parental BHK cells (Figure S1A). In addition, upon short-time trypsin digestion, the enriched ABCG2-specific aptamers could no longer stain the ABCG2-expressing BHK/ABCG2 cells (Figure S1C), suggesting that trypsin digestion destroyed the aptamers' binding sites.
The enriched ABCG2-specific aptamers were cloned into the TOPO TA cloning vector and used to stain the parental BHK cells and the ABCG2-expressing BHK/ABCG2 cells. The results in Figure 1 clearly indicated that ABCG2-specific aptamer clone #12 (ABCG2/A12; its nucleotide sequence is shown in Figure S2) bound to the surface of the ABCG2-expressing BHK/ABCG2 cells (Figure 1B), but not to the parental BHK cells (Figure 1A). Flow cytometry analysis of the ABCG2/A12-labeled cells indicated that the majority of the ABCG2-expressing BHK/ABCG2 cells were labeled with this aptamer (Figure 1D), whereas the parental BHK cells were not (Figure 1C).
Enrichment and cloning of BCS cell-binding aptamers
To enrich BCS cell-binding aptamers, we first needed BCS cells. BCS cells had been enriched from breast cancer cell lines, such as MCF-7 or MDA-MB-231, or from fresh breast cancer specimens by using the Aldefluor assay, which is based on intracellular ALDH activity. The results shown in Figure S3 indicated that the ALDH-negative (ALDH−) cells could not form mammospheres in vitro (Figure S3A) whereas the ALDH-positive (ALDH+) cells could (Figure S3B), indicating that ALDH+ cells contain BCS cells.
Since we planned to use the cells derived from mammospheres as binding materials for the enrichment of BCS cell-binding aptamers, we wanted to know what percentage of the cells in mammospheres were ALDH+ cells. Interestingly, we found that only approximately 18% of the cells derived from mammospheres were ALDH+ cells (Figure S3C), implying that the majority of the cells in mammospheres were differentiated breast cancer cells. This result also suggested that we should separate the ALDH+ cells from the ALDH− cells and then use them for the enrichment of BCS cell-binding aptamers. However, since we had found that MCF-7 cells contain only approximately 0.7% ALDH+ cells (Figure S3C), we could use them as counter-selection materials to bypass the FACS step. Indeed, by using the same protocol as in the enrichment of ABCG2-specific aptamers, except that the cells derived from mammospheres were used as binding materials and the cells derived from MCF-7 were used as counter-selection materials, we enriched (after 15 cycles of binding and counter-selection) BCS cell-binding aptamers that bind to some of the cells derived from mammospheres (Figure S4B), but not to the cells derived from MCF-7 (Figure S4A). In addition, some of the cells recognized by BCS cell-binding aptamers were also detected by the mAb against ABCG2 (Figure S4B), suggesting that they might be ABCG2-expressing BCS cells. Furthermore, the enriched BCS cell-binding aptamers detected more cells (indicated by the arrowheads in Figure S4B) than the human ABCG2-specific mAb 5D3, implying that some of the mammosphere cell-enriched aptamers might detect early progenitor cells that did not over-express ABCG2 but were distinguishable from the differentiated breast cancer cells. Of note, both the aptamers and the mAb 5D3 bound to the cells were efficiently internalized (Figure S4B), suggesting that BCS cells might have a high rate of endocytosis.
Regardless of whether they were BCS cells or early progenitor cells, treatment with trypsin for a short time completely abolished the aptamer binding and the ABCG2-specific mAb binding (Figure S4C), suggesting that trypsin digestion completely destroyed the mAb 5D3 binding site and the aptamer binding sites on the BCS cell surface.
It has been demonstrated that human breast cancers contain a cell population with stem cell properties bearing the surface markers CD44+/CD24−/lin− [48]. To test whether these aptamers bind to the CD44+ BCS cells, the FITC-labeled BCS cell-enriched aptamers and the APC-conjugated CD44-specific mAb were used to co-stain the cells derived from mammospheres. As shown in Figure S5, the cells stained with BCS cell-enriched aptamers were also detected by the CD44-specific mAb, suggesting that the BCS cell-enriched aptamers might bind to human BCS cells.
The enriched BCS cell-binding aptamers were cloned into the TOPO TA cloning vector and used to stain the control MCF-7 cells and the cells derived from mammospheres. The results clearly indicated that BCS cell-binding aptamer #35 (BCS/A35; its nucleotide sequence is shown in Figure S6) stained, and was internalized by, some cells derived from mammospheres (Figure 2B), but not the cells derived from MCF-7 (Figure 2A). In addition, the results shown in Figure 2 indicate that BCS/A35 stained neither the parental BHK cells (Figure 2C) nor the ABCG2-expressing BHK/ABCG2 cells (Figure 2D), suggesting that BCS/A35 recognized a BCS cell surface protein distinct from ABCG2.
Aptamer-stained cells can form mammospheres
Since ABCG2 is universally expressed in undifferentiated cancer stem cells [29,30], we speculated that our ABCG2-specific aptamers should recognize undifferentiated BCS cells. To test this hypothesis, FITC-labeled ABCG2/A12 was used to stain the control MCF-7 cells and the cells derived from mammospheres. The results in Figure 3A showed that ABCG2/A12 could not stain the differentiated breast cancer MCF-7 cells, but clearly detected a low percentage of the BCS cells derived from mammospheres (Figure 3B). In addition, the cells stained with ABCG2/A12 were also recognized by the ABCG2-specific mAb 5D3 (Figure 3C). Of note, in contrast to the surface labeling of BHK/ABCG2 cells with ABCG2/A12 (Figure 1B), the bound ABCG2/A12 aptamers were efficiently internalized into the BCS cells (Figure 3B and 3C), suggesting that human BCS cells might have a much higher endocytosis rate than the ABCG2-expressing BHK/ABCG2 cells.
To test whether this internalization occurs via the binding of ABCG2/A12 or the ABCG2-specific mAb 5D3 to the extracellular loops of the ABCG2 protein, the cells derived from mammospheres were digested with trypsin for a short time before the binding analysis. As shown in Figure 3D, short trypsin digestion completely abrogated the co-staining with ABCG2/A12 and the ABCG2-specific mAb 5D3, suggesting that trypsin digestion completely destroyed the ABCG2/A12 and mAb 5D3 binding sites.
If the cells recognized by aptamers and/or ABCG2-specific mAb 5D3 are BCS cells, they should form mammospheres in vitro. Indeed, ABCG2/A12 negative cells ( Figure 3G and 3K) or 5D3 negative cells ( Figure 3I and 3K) sorted out by FACS barely formed mammospheres, whereas the ABCG2/A12 positive cells ( Figure 3H and 3K) or the ABCG2-specific mAb 5D3 positive cells ( Figure 3J and 3K) clearly formed multiple mammospheres, suggesting that the ABCG2/A12 bound cells or the ABCG2-specific mAb 5D3 bound cells are the undifferentiated BCS cells.
CDE inhibitors can inhibit the endocytosis in differentiated breast cancer cells, but not in BCS cells
The results in Figures 2, 3, S4B and S5 suggested that BCS cells might have a high rate of endocytosis. To test whether this high endocytosis rate proceeds through CDE, Alexa fluor 633-conjugated hTF and FITC-labeled BCS/A35 or FITC-labeled ABCG2/A12 were used to stain the cells derived from either MCF-7 or mammospheres in the presence or absence of CDE inhibitors. In the absence of CDE inhibitors, hTF clearly labeled the surface of MCF-7 cells, with a certain amount of hTF located in the cytoplasmic portion (Figure 4A). However, the labeling intensity in the cytoplasmic portion of MCF-7 cells in the presence of CDE inhibitors, such as MDC (amplified images in Figures 4B and S7B) or sucrose (amplified images in Figures 4C and S7C), was significantly lower than in their absence (amplified images in Figures 4A and S7A). In contrast, the labeling of the cells derived from mammospheres with FITC-labeled BCS/A35 or ABCG2/A12 in the presence of CDE inhibitors, such as MDC (amplified images in Figure 4E or S7E) or sucrose (amplified images in Figure 4F or S7F), was similar to the labeling in the absence of CDE inhibitors (amplified images in Figure 4D or S7D), suggesting that the internalization of the FITC-labeled aptamers in BCS cells may not be mediated by CDE. This conclusion was further confirmed by co-staining the cells derived from mammospheres with Alexa fluor 633-conjugated hTF and FITC-labeled BCS/A35 or ABCG2/A12 in the presence (Figures 4H and 4I or S7H and S7I) or absence of CDE inhibitors (Figure 4G or S7G), in which the cells heavily stained with aptamer BCS/A35 or ABCG2/A12 were also heavily stained with hTF, regardless of whether the CDE inhibitor was present.
In addition, the cells not stained with FITC-labeled aptamer BCS/A35 or ABCG2/A12 were only lightly labeled with hTF (Figures 4G, 4H and 4I or S7G, S7H and S7I), suggesting that BCS cells have a significantly higher rate of clathrin-independent endocytosis than the corresponding differentiated breast cancer cells.
CVDE inhibitors can inhibit the endocytosis in differentiated breast cancer cells, but not in BCS cells
To test whether the high rate of endocytosis in BCS cells proceeds through CVDE, BODIPY-labeled LacCer and TR-conjugated BCS/A35 or ABCG2/A12 were used to stain the cells derived from either MCF-7 or mammospheres in the presence or absence of CVDE inhibitors. In the absence of CVDE inhibitors, LacCer clearly labeled the surface and intracellular portion of MCF-7 cells (amplified images in Figure 5A or S8A). However, in the presence of CVDE inhibitors, such as MCD (amplified images in Figure 5B or S8B) or genistein (amplified images in Figure 5C or S8C), the labeling intensity in the cytoplasmic portion of MCF-7 cells was significantly lower than in their absence (amplified images in Figure 5A or S8A). Interestingly, the labeling of the cells derived from mammospheres with TR-conjugated aptamer BCS/A35 or ABCG2/A12 in the presence of CVDE inhibitors, such as MCD (amplified images in Figure 5E or S8E) or genistein (amplified images in Figure 5F or S8F), was similar to the labeling in the absence of CVDE inhibitors (amplified images in Figure 5D or S8D), suggesting that the internalization of the TR-conjugated aptamers in BCS cells was not mediated by CVDE. This conclusion was further confirmed by labeling the cells derived from mammospheres with BODIPY-labeled LacCer and TR-conjugated BCS/A35 or ABCG2/A12 in the presence (Figures 5H and 5I or S8H and S8I) or absence of CVDE inhibitors (Figure 5G or S8G), in which the cells heavily stained with aptamer BCS/A35 or ABCG2/A12 were also heavily stained with LacCer, regardless of whether the CVDE inhibitor was present. In addition, the cells not stained with TR-conjugated aptamer BCS/A35 or ABCG2/A12 were only lightly labeled with LacCer (Figures 5G, 5H and 5I or S8G, S8H and S8I), suggesting that BCS cells may have a significantly higher rate of caveolin-independent endocytosis than the corresponding differentiated breast cancer cells.
Discussion
As mentioned in the introduction, cancer stem cells over-express ABC transporters, such as ABCB1 and ABCG2, and play very important roles in self-renewal, growth and the generation of secondary tumors. Owing to the over-expression of ABC transporters, they become resistant to multiple anticancer drugs. Thus, conventional chemotherapy may kill the bulk of drug-sensitive differentiated breast cancer cells, but not the MDR BCS cells. Their self-renewal, growth, differentiation and epithelial-to-mesenchymal transition are regulated by the Wnt, Notch, Hedgehog and transforming growth factor β signaling pathways. Thus, knocking down the crucial factor(s) in these signaling pathways, for example with small interfering RNA (siRNA), might prevent their self-renewal and growth. However, the question is how to deliver these therapeutic agents selectively into cancer stem cells. The work described here aims to develop BCS cell-specific ligands so that we can selectively deliver therapeutic agents into the MDR BCS cells.
The following results clearly demonstrated that both BCS cell-binding aptamers and ABCG2-specific aptamers can be considered BCS cell-binding ligands: 1) BCS cell-enriched aptamers co-stained the potential BCS cells derived from mammospheres with either the ABCG2-specific mAb 5D3 (Figure S4B) or the CD44-specific mAb (Figure S5); 2) BCS/A35 (Figure 2B) or ABCG2/A12 (Figure 3B) can bind to some of the cells derived from mammospheres, but not to the differentiated MCF-7 breast cancer cells (Figure 2A or 3A); 3) ABCG2/A12 or BCS/A35 and the ABCG2-specific mAb 5D3 recognized the same cells derived from mammospheres (Figure 3C or 3E); 4) ABCG2/A12-positive cells or ABCG2-specific mAb 5D3-positive cells, sorted by FACS, can form mammospheres in vitro (Figure 3H, 3J and 3K), whereas ABCG2/A12-negative cells or 5D3-negative cells cannot (Figure 3G, 3I and 3K). Thus, these aptamers could be used as ligands to make aptamer-coated liposomes to selectively deliver therapeutic agents into the MDR BCS cells.
Interestingly, the bound ABCG2-specific mAb 5D3 or the bound aptamers, regardless of whether the ABCG2/A12 or the BCS/A35 aptamer was used, were efficiently internalized into the BCS cells derived from mammospheres (Figures 2B, 3B, 3C, 3E, S4B and S5). In addition, hTF or LacCer mainly bound to the surface of the differentiated breast cancer cells (Figures 4A and S7A or 5A and S8A), whereas the bound hTF or LacCer were efficiently internalized into the BCS cells derived from mammospheres (Figures 4G and S7G or 5G and S8G). These results strongly suggest that undifferentiated BCS cells may have a significantly higher endocytosis rate than differentiated breast cancer cells. This might be a common feature of cancer stem cells, regardless of whether they were derived from a cancer cell line (Figures 2-5), a fresh breast cancer specimen (Figure S5) or BT-12 neurospheres (data not shown). Unlike the endocytosis occurring in murine L cells [49] or in retinal pigment epithelial D407 cells [50], treatment of the BCS cells with either CDE inhibitors, such as MDC (Figures 4E, 4H, S7E and S7H) or sucrose (Figures 4F, 4I, S7F and S7I), or CVDE inhibitors, such as MCD (Figures 5E, 5H, S8E and S8H) or genistein (Figures 5F, 5I, S8F and S8I), did not inhibit the efficient internalization of the bound aptamers, hTF or LacCer, suggesting that BCS cells may have a high rate of clathrin-independent and caveolin-independent endocytosis [51].
ABCG2-specific aptamers, such as ABCG2/A12, clearly recognized the ABCG2-expressing BCS cells (Figures 3, S7 and S8). Thus, they could be used to make ABCG2-specific aptamer-coated liposomes to selectively deliver therapeutic agents (harbored in the liposomes) into ABCG2-expressing cells, such as BCS cells. However, since normal stem cells and other tissues, such as hepatic stem cells [32], lung stem cells [33], cardiac side population cells [52], mammary epithelial side population cells [53], skeletal muscle side population cells [54], neural stem cells [55], corneal side population cells [55], and placenta [34], also express ABCG2, treatment with ABCG2-specific aptamer-coated liposomes harboring therapeutic agents might cause side effects. In contrast, if the aptamers, such as BCS/A35, enriched from human BCS cells (Figure S4B) mainly bind to human BCS cells but not to other cells (a point that needs further confirmation), treatment with BCS cell-specific aptamer-coated liposomes harboring therapeutic agents might have minimal side effects. Taken together, we have cloned BCS cell-binding aptamers and shown that BCS cells efficiently internalize the bound ABCG2-specific mAb, aptamers, hTF protein and LacCer. Thus, our findings open the door to developing a novel therapeutic approach targeting the pluripotent and MDR BCS cells.
Integrating systematic biological and proteomics strategies to explore the pharmacological mechanism of danshen yin modified on atherosclerosis
Abstract This research utilized systematic biological and proteomics strategies to explore the regulatory mechanism of Danshen Yin Modified (DSYM) on the atherosclerosis (AS) biological network. A traditional Chinese medicine database and HPLC were used to find the active compounds of DSYM, the PharmMapper database was used to predict potential targets, and the OMIM and GeneCards databases were used to collect AS targets. The String database was utilized to obtain the proteins interacting with the proteomics proteins and the protein-protein interaction (PPI) data of DSYM targets, AS genes, proteomics proteins and other proteins. Cytoscape 3.7.1 software was utilized to construct and analyse the networks. The DAVID database was used to discover the biological processes and signalling pathways in which these proteins aggregate. Finally, animal experiments and proteomics analysis were used to further verify the prediction results. The results showed that 140 active compounds, 405 DSYM targets and 590 AS genes were obtained, and 51 differentially expressed proteins were identified in the DSYM-treated ApoE−/− mouse AS model. A total of 4 major networks and a number of their derivative networks were constructed and analysed. The prediction results showed that DSYM can regulate AS-related biological processes and signalling pathways. Animal experiments also showed that DSYM has a therapeutic effect in the ApoE−/− mouse AS model (P < .05). Therefore, this study proposed a new method based on systems biology, proteomics and experimental pharmacology, and analysed the pharmacological mechanism of DSYM. DSYM may achieve its therapeutic effects by regulating the AS-related signalling pathways and biological processes found in this research.
| INTRODUCTION
Atherosclerosis (AS) is the most common form of arteriosclerosis, mainly involving the elastic arteries and the large and medium-sized muscular arteries. The typical atherosclerotic lesion contains a large amount of gruel-like ('athero-') lipid and necrotic cells. 1 AS is a major disease that seriously endangers human health; its main fatal clinical events are coronary heart disease and stroke.
Cardiovascular and cerebrovascular diseases are the leading causes of death worldwide. 1,2 In 2008, more than 17 million people worldwide died of cardiovascular and cerebrovascular diseases, of whom 7.3 million died of coronary heart disease and 6.2 million died of stroke. It is estimated that by 2030, the number of deaths from cardiovascular and cerebrovascular diseases worldwide will reach 23.3 million. [1][2][3] Currently, atherosclerotic disease is believed to result from the interaction of a variety of complex factors. 4 Current studies have found that it is mainly related to lipid metabolism disorders, endothelial damage, inflammatory response, wall shear stress and intestinal microflora imbalance. 5,6 The therapeutic drugs for AS include hypolipidemic drugs, antiplatelet drugs, vasodilator drugs and treatments for coronary heart disease caused by ischaemia. [7][8][9] However, as these drugs require patients to take them for life, their side effects result in low patient compliance and reduced quality of life. 10,11 In addition, the treatment of AS-related cardiovascular and cerebrovascular diseases has entered a stage of diversified comprehensive treatment. Complementary and alternative medicine (CAM) has gradually entered mainstream medicine and has become a popular choice for patients. 12 Among CAM therapies, Danshen Yin (DSY) comes from the Shi Ge Fang Kuo and, in Chinese medicine theory, has the effect of promoting blood circulation and relieving pain. As a classic prescription, it has been used to treat coronary heart disease and angina since its establishment. 13 Our previous clinical studies 13,14 and other reports 15 showed that DSY Modified (DSYM) can reduce the total myocardial ischaemic load in patients with unstable angina (low-risk and intermediate-risk groups) and has anti-ischaemic effects.
Its specific mechanism may be related to lowering serum myocardial enzyme levels, reducing myocardial infarct size, improving myocardial cell ultrastructure and inhibiting cardiomyocyte apoptosis. 16 Our previous work also showed that the mechanism of DSYM intervention in coronary heart disease may be related to autophagy and oxidative stress. 17,18 More importantly, previous research has only studied single signalling pathways or a few targets, and it is not easy to reveal the synergistic 'multi-component, multi-target' effect of herbs, or how herbs regulate the biological network of a disease, from a holistic and comprehensive perspective. The mechanism of DSYM intervention in coronary heart disease has therefore not been elaborated, especially its role in AS, the basic lesion of coronary heart disease. Our previous research successfully used systematic biological methods (such as network pharmacology and systematic pharmacology) to explore the mechanisms of herbal formulae for treating diseases. [19][20][21][22][23] Hence, this study explores the mechanism by which DSYM regulates the AS biological network through integrated systematic biological and proteomics strategies, and provides new ideas for drug development.
The research processes are shown in Figure S1. 29 Compounds with OB ≥ 30%, Caco-2 > −0.4 and DL ≥ 0.18 were considered orally absorbable and pharmacologically active (namely, potential compounds). [19][20][21][22][23][24][25][26][27]29 Because of the limitations of predicting potential components from pharmacokinetic parameters alone, 30 and to avoid omitting components, we searched the literature extensively to supplement orally absorbable bioactive compounds. Finally, a total of 34 components were added from the references. 31-42
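As a sketch, the ADME screening rule stated above (OB ≥ 30%, Caco-2 > −0.4, DL ≥ 0.18) amounts to a simple three-condition filter; the compound records and their parameter values below are hypothetical, not taken from the databases used in the study:

```python
# Hypothetical compound records (names and values are illustrative only)
compounds = [
    {"name": "compound A", "ob": 49.9, "caco2": 1.05, "dl": 0.40},
    {"name": "compound B", "ob": 12.0, "caco2": -0.90, "dl": 0.05},
    {"name": "compound C", "ob": 35.0, "caco2": -0.50, "dl": 0.25},
]

def is_potential(c, ob_min=30.0, caco2_min=-0.4, dl_min=0.18):
    """Screening rule from the text: OB >= 30%, Caco-2 > -0.4, DL >= 0.18."""
    return c["ob"] >= ob_min and c["caco2"] > caco2_min and c["dl"] >= dl_min

potential = [c["name"] for c in compounds if is_potential(c)]
print(potential)  # only compound A passes all three thresholds
```

Compound C fails only on Caco-2 (−0.50 is not above −0.4), which is why the rule keeps all three conditions conjunctive.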
| DSYM's potential targets prediction and AS genes collection
The molecular structures of DSYM's potential components were collected from SciFinder (http://scifinder.cas.org) and PubChem (https://pubchem.ncbi.nlm.nih.gov/), drawn with ChemBioDraw and saved in the 'mol2' file format. They were then input into PharmMapper (http://lilab-ecust.cn/pharmmapper) to predict the potential targets of DSYM. 43 UniProtKB (http://www.uniprot.org/) was utilized to collect the official symbols of the potential targets' proteins, with the species limited to 'Homo sapiens' (see Table S1). Meanwhile, the OMIM database (http://omim.org/) and GeneCards (http://www.genecards.org) were utilized to collect the AS-related disease genes and targets. 44,45 The AS-related genes and their relevance scores are shown in Table S2.

K E Y W O R D S

ApoE−/− mouse, atherosclerosis, Danshen Yin Modified, proteomics, reverse transcription-PCR, systematic biology
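With the DSYM target symbols and the AS gene symbols collected as described above, the candidate proteins through which DSYM may act on AS are simply the overlap of the two sets. A minimal sketch with hypothetical gene symbols (standing in for the study's 405 targets and 590 genes):

```python
# Hypothetical official symbols, standing in for Table S1 and Table S2
dsym_targets = {"VEGFA", "MMP9", "ALB", "EGFR"}
as_genes = {"VEGFA", "MMP9", "APOE", "LDLR"}

# Proteins both targeted by DSYM and implicated in AS
shared = dsym_targets & as_genes
# Targets of DSYM with no direct AS annotation
dsym_only = dsym_targets - as_genes

print(sorted(shared))     # ['MMP9', 'VEGFA']
print(sorted(dsym_only))  # ['ALB', 'EGFR']
```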
| Network construction and analysis methods
The protein-protein interaction (PPI) data were obtained from the String database (http://string-db.org/). 46 The networks were constructed with Cytoscape 3.7.1 software (https://cytoscape.org/), a tool for graphically displaying, analysing and editing networks. 47 Degree refers to the number of connections a node has.

Betweenness refers to the number of shortest paths between other nodes that pass through a given node. Degree and betweenness reflect the topological importance of a node in the network: the larger the value, the more important the node. 47 The networks were further analysed with MCODE, a Cytoscape plug-in, to find clusters. 48 The definition of clusters and the methodology for obtaining them were described in our previous work. [19][20][21][22][23]
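The two topological measures described above can be computed with NetworkX (used here as a stand-in for Cytoscape's built-in network analyser); the toy PPI edges are hypothetical, not String data:

```python
import networkx as nx

# Toy undirected PPI network with hypothetical interactions
g = nx.Graph()
g.add_edges_from([("VEGFA", "MMP9"), ("VEGFA", "ALB"), ("VEGFA", "EGFR"),
                  ("MMP9", "EGFR"), ("ALB", "APOE")])

degree = dict(g.degree())                   # number of direct connections
betweenness = nx.betweenness_centrality(g)  # shortest paths through each node

# Rank nodes by degree; high degree plus high betweenness marks hub nodes
for node in sorted(degree, key=degree.get, reverse=True):
    print(f"{node}: degree={degree[node]}, betweenness={betweenness[node]:.3f}")
```

In this toy graph VEGFA is the hub: it has the most neighbours and nearly all shortest paths between the other nodes run through it.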
| Gene ontology (GO) and pathway enrichment analysis
The genes and targets in the clusters were input into DAVID (https://david-d.ncifcrf.gov, ver. 6.8) to perform GO enrichment analysis. All of the genes and targets in the networks were also input into DAVID for Kyoto Encyclopedia of Genes and Genomes (KEGG) signalling pathway enrichment analysis. 48 The P-value is a modified Fisher exact P-value (the EASE score); 48 the smaller the value, the more enriched the term.

after 10-20 minutes after the start of boiling). The decoctions were combined, filtered and concentrated to 1 g of original medicinal material/mL. Finally, they were stored at 4°C.
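The enrichment P-value DAVID reports is based on a Fisher exact test on a 2 × 2 table of list genes versus background genes for a given pathway. A plain Fisher exact version can be sketched as follows (DAVID's EASE score additionally subtracts 1 from the overlap count; all counts here are hypothetical):

```python
from scipy.stats import fisher_exact

# Hypothetical counts: 8 of 40 network genes fall in one KEGG pathway,
# versus 120 of 20 000 genes in the background
list_in_pathway = 8
list_total = 40
bg_in_pathway = 120
bg_total = 20000

table = [
    [list_in_pathway, list_total - list_in_pathway],
    [bg_in_pathway - list_in_pathway,
     (bg_total - list_total) - (bg_in_pathway - list_in_pathway)],
]
odds_ratio, p = fisher_exact(table, alternative="greater")
print(f"enrichment P = {p:.2e}")  # the smaller, the more enriched
```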
| Experimental animal
Simvastatin + aspirin suspension: the tablets were dissolved in distilled water and diluted to a solution containing 0.2 mg/mL simvastatin and 1 mg/mL aspirin.
| Animal models, grouping and drug administration
Twelve (12) C57BL/6 mice were assigned to the blank group and fed a normal diet. Fifty (50) male ApoE(-/-) mice were fed a standard Western diet (containing 0.15% cholesterol and 21% fat, wt/wt). 49 After 12 weeks of modelling, they were randomly divided into the model group, the simvastatin + aspirin control group, the DSYM low-dose group and the DSYM high-dose group. The DSYM low-dose and high-dose groups contained 13 male mice each, while the remaining groups contained 12 male mice each.
After successful modelling, the drug intervention was started. Doses were calculated according to the weight ratio of a 60 kg human to a 30 g mouse: the control group was administered simvastatin at 2.52 mg/(kg·d) and aspirin at 12.60 mg/(kg·d); the DSYM low-dose group was administered a solution containing 8.31 g/(kg·d) of crude drug; and the DSYM high-dose group was administered a solution containing 24.93 g/(kg·d) of crude drug. The blank group and the model group were given equivalent volumes of ultrapure water daily according to body weight. Body weight was measured once a week to adjust the dose, and the intervention was continued for 8 weeks.
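The human-to-mouse dose scaling can be checked arithmetically. Assuming (this is not stated in the text) typical human daily doses of 20 mg simvastatin and 100 mg aspirin for a 60 kg adult, the stated mouse doses imply a conversion coefficient of about 7.56:

```python
HUMAN_KG = 60.0
FACTOR = 7.56  # human->mouse equivalent-dose coefficient implied by the doses

def mouse_dose_mg_per_kg_day(human_daily_mg):
    """Scale an assumed human daily dose (mg) to a mouse dose in mg/(kg·d)."""
    return human_daily_mg / HUMAN_KG * FACTOR

print(round(mouse_dose_mg_per_kg_day(20), 2))   # 2.52, matching simvastatin
print(round(mouse_dose_mg_per_kg_day(100), 2))  # 12.6, matching aspirin
```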
| Specimen preparation method
After 0.5 h of the last administration, a 2% sodium pentobarbital solution was prepared and the mice were anaesthetized by intraperitoneal injection at a dose of 180 mg/kg. After the anaesthesia took effect, the abdominal cavity was opened, and 1.5 mL of blood was drawn from the abdominal aorta and allowed to stand for 30 minutes. Within 1 hour, the blood samples were centrifuged for 10 minutes at 4°C and 3 000 r/min, and the supernatant was aspirated with a pipette and stored in a dry EP tube at −20°C for the detection of blood lipids, NO, ET, hs-CRP, VEGF, MMP-9, etc. The mice were then killed by cervical dislocation, and a small section of aortic specimen connected to the heart, about 1-2 cm, was quickly removed under sterile conditions. Some specimens were fixed with 4% paraformaldehyde for HE staining, some were fixed with 2.5% glutaraldehyde for electron microscopy, and some were stored in liquid nitrogen for Western blot, RT-PCR, etc.
| Determination of serum NO and ET
Serum NO was determined by the nitrate reductase method, performed according to the kit instructions; ET was determined according to the instructions of the ET kits. The radioactivity of the precipitate was measured on an automatic gamma counter, the concentration of ET was calculated from the standard curve, and the result was obtained automatically by computer.
The experiment was repeated three times, and the average value was taken.
| Determination of blood lipid
Serum total cholesterol (TC), triglyceride (TG), low-density lipoprotein (LDL-C) and high-density lipoprotein (HDL-C) levels were measured with an automatic biochemical analyzer by the biochemistry laboratory of the First Affiliated Hospital of Hunan University of Chinese Medicine. The experiment was repeated three times, and the average value was taken.
| Determination of serum hs-CRP
The serum hs-CRP level was determined by the chemiluminescence method by the biochemical immunology laboratory of the First Affiliated Hospital of Hunan University of Chinese Medicine. The experiment was repeated three times, and the average value was taken.
| Determination of serum VEGF and MMP-9
The serum VEGF and MMP-9 levels were determined by double-antibody sandwich enzyme-linked immunosorbent assay (ELISA), performed in strict accordance with the kit instructions.
The kit was allowed to warm to room temperature for at least 20 minutes before use, and all reagents and samples were prepared in advance.
Each sample was run in triplicate wells. Blank wells, standard wells and sample wells were set; 100 μL of the sample diluent was added to the blank wells, and 100 μL of the standard or the sample was added to the remaining wells. The plate was covered and incubated at 37°C for 120 minutes; after the incubation, the liquid was discarded. Then, 100 μL of biotin-labelled antibody working solution was added to each well and incubated at 37°C for 60 minutes; after the incubation, the liquid was discarded and the plate was washed. Next, 100 μL of horseradish peroxidase-labelled avidin working solution (prepared in the same way as the biotinylated antibody working solution) was added to each well and incubated at 37°C for 60 minutes; after the incubation, the liquid was discarded and the plate was washed. The substrate solution was then added to each well and the colour was developed at 37°C in the dark for 30 minutes. Finally, the reaction was stopped by adding 50 μL of stop solution to each well (the blue colour turned yellow at this point), and the optical density (OD) of each well was measured at 450 nm (reference wavelength 630 nm) with a microplate reader.
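Sample concentrations are then read off the standard curve from the blank-corrected OD values. A minimal piecewise-linear sketch (commercial ELISA software usually fits a four-parameter logistic curve instead; the standard points below are hypothetical):

```python
# Hypothetical standards: (concentration in pg/mL, blank-corrected OD450)
standards = [(0, 0.05), (31.25, 0.12), (62.5, 0.21), (125, 0.40),
             (250, 0.78), (500, 1.45)]

def od_to_concentration(od):
    """Linear interpolation between adjacent standard-curve points."""
    for (c0, o0), (c1, o1) in zip(standards, standards[1:]):
        if o0 <= od <= o1:
            return c0 + (od - o0) * (c1 - c0) / (o1 - o0)
    raise ValueError("OD outside the standard range; dilute and re-assay")

print(round(od_to_concentration(0.59), 1))  # 187.5 pg/mL for a sample OD of 0.59
```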
| Morphological changes of aortic roots in mice observed by HE staining
The specimens were fixed with 4% paraformaldehyde, dehydrated through an ascending gradient, cleared in xylene and embedded in paraffin. Each specimen was serially cross-sectioned at 5 μm from the heart toward the ascending aorta up to the aortic root (100 μm above the aortic valve). 50 Approximately 20 slices were taken in succession, and one of every 5 was kept. HE staining was then performed.
Under the light microscope (×100, ×200 and ×400), the sections were examined for whether the intima of the vessel wall was bulged, whether foam cells had accumulated under the intima and whether the internal elastic membrane was intact, and photographs were taken.
| Ultrastructural observation of aortic endothelial cells
Aortic root tissue blocks of 1 mm³ were taken from the mice of each group and fixed with 2.5% glutaraldehyde phosphate buffer for 2 hours or longer. The tissue was then rinsed with 0.1 mol/L phosphate rinsing solution and fixed with 1% citrate fixative for 1-2 h. After dehydration, soaking, embedding and curing, the specimens were cut into 50-100 nm (70 nm) ultrathin sections on an LKB-III ultramicrotome and double-stained with 3% uranyl acetate and lead nitrate. Finally, an FEI Tecnai G2 Spirit transmission electron microscope was used for observation and photography.
| Tissue protein extraction
The frozen tissue was taken out of the liquid nitrogen, and lysis solution was added at a ratio of 500 μL of lysis solution per 0.1 mg of tissue. The tissue was broken up with a high-speed dispersing cutter on ice, centrifuged at 12 000 rpm for 20 minutes in a refrigerated centrifuge, and the supernatant was collected.
| Complete drying of the slide chip
The slide chip was removed from storage and equilibrated at room temperature for 20-30 min. The package was then opened, the seal uncovered, and the chip placed in a vacuum desiccator or dried at room temperature for 1-2 h.
| Chip operation
This process was carried out in strict accordance with the instructions of the QAM-CAA-4000 antibody protein chip kit: 100 μL of sample dilution was added to each well (final protein concentration 500 μg/mL) and incubated for 30 minutes at room temperature on a shaker to block the quantitative antibody chip. After incubation, the buffer in each well was removed, and 100 μL of standard or sample (diluted to 500 μg/mL) was added to the wells and incubated overnight at 4°C on the shaker. Then, 1.4 mL of sample dilution was added to the antibody-mixture tube, mixed well and briefly centrifuged; 80 μL of detection antibody was then added to each well and incubated for 2 hours on a shaker at room temperature. Next, 1.4 mL of sample dilution was added to the Cy3-streptavidin tube, mixed well and briefly centrifuged; 80 μL of Cy3-streptavidin was then added to each well and incubated for 2 hours on a shaker in the dark. Finally, an Axon GenePix scanner was used to read the signal in the Cy3 (green) channel with the following parameters: PMT 600, wavelength 532 nm, resolution 10 μm. Data analysis was performed with the QAM-CAA-4000 data analysis software.
| Validation of VEGF, MMP-9 and bFGF by RT-PCR
Aortic tissue samples were taken, and total RNA was extracted following the instructions of the Trizol kit. After diluting 2 μL of RNA, the RNA purity and concentration were determined by UV spectrophotometry. Using total RNA as a template, cDNA was synthesized according to the instructions of the reverse transcription kit, and the obtained cDNA was amplified in a 25 μL system. The PCR amplification conditions were as follows: pre-denaturation at 95°C for 5 minutes, then 40 cycles of 95°C for 10 seconds and 60°C for 1 minute. The Ct value was read after the reaction was completed. The specificity of the PCR reaction of each sample was monitored by the melting curve, and β-actin was used as the internal reference gene for data analysis. The primers are shown in Table 1. The experiment was repeated three times, and the average value was taken.
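The source normalizes Ct values to β-actin but does not spell out the calculation; a common choice for relative quantification is the 2^(−ΔΔCt) method, sketched here under that assumption (the function name is ours):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative mRNA level by the 2^(-ddCt) method: normalize the target
    Ct to the reference gene (here beta-actin), then to a calibrator
    sample (e.g. the blank group)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)
```

With illustrative Ct values, a sample at Ct 24 (target) / 18 (β-actin) versus a calibrator at 26 / 18 gives a 4-fold relative expression.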
| DSYM sample preparation
According to the DSYM prescription, 66.28 g of dry herbal pieces was accurately weighed and placed in a 500 mL flat-bottomed flask.
The herbs were decocted under reflux twice, with 10 and then 8 volumes of water for 1 hour each, and filtered, and the filtrates were combined. The volume of filtrate was adjusted to 1000 mL with distilled water, and the solution was filtered through a 0.45 μm PTFE microporous membrane, sealed with tin foil and placed in the refrigerator until use.
| Standard sample preparation
According to the method in reference 51, tanshinone IIA, salvianolic acid B, tanshinol, rosmarinic acid and protocatechuic aldehyde were accurately weighed, and distilled water was added to prepare a mixed solution containing, per mL, 160 μg of tanshinone IIA, 140 μg of salvianolic acid B, 16 μg of tanshinol, 10 μg of protocatechuic aldehyde and 23 μg of rosmarinic acid.
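The weigh-in for such a mixed standard follows directly from target concentration times final volume; a small sketch (the 100 mL batch volume is an assumed example, not from the source):

```python
def stock_mass_mg(target_ug_per_ml, final_volume_ml):
    """Mass (mg) of a reference standard to weigh so that final_volume_ml
    of solution contains target_ug_per_ml of that component."""
    return target_ug_per_ml * final_volume_ml / 1000.0

# Target concentrations (ug/mL) from the mixed-standard recipe above,
# computed for a hypothetical 100 mL batch:
targets = {"tanshinone IIA": 160, "salvianolic acid B": 140,
           "tanshinol": 16, "protocatechuic aldehyde": 10,
           "rosmarinic acid": 23}
masses_mg = {name: stock_mass_mg(c, 100) for name, c in targets.items()}
```

For a 100 mL batch this calls for 16 mg of tanshinone IIA, 1.6 mg of tanshinol, and so on.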
| Statistical analysis
All data were processed using SPSS 22.0 statistical software. Measurement data were expressed as mean ± standard deviation, and one-way analysis of variance was used for comparisons among multiple groups. P < .05 indicated that a difference was statistically significant.
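The one-way ANOVA that SPSS performs reduces to an F statistic built from between-group and within-group variance; for completeness, a pure-Python sketch of that computation (the P value would then be read off the F distribution with k − 1 and n − k degrees of freedom):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups,
    each group being a list of measurements."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group sum of squares
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

For two groups [1, 2, 3] and [2, 3, 4] this yields F = 1.5.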
| Atherosclerosis PPI network
A total of 3729 AS-related genes were retrieved from the GeneCards and OMIM databases.
Table 1. Primers (forward primer, reverse primer, product length).

The PPI data of the 590 genes with relevance score ≥ 3 were obtained to construct the AS PPI network, which contains 531 nodes and 11 853 edges (Figure 1A). These genes were arranged in descending order of the relevance score, and the top 10 were identified.
| Biological Processes of AS PPI Network
The AS PPI network was analysed with MCODE, which returned 18 clusters (Table 2 and Figure S2). The genes in the clusters were input into DAVID for GO enrichment analysis (Table S3). The main biological processes in cluster 1 are shown as an example in Figure 1B.
| DSYM fingerprint
After comparison with the retention times and spectra of the standards, the five chemical components contained in DSYM were identified: tanshinone IIA, salvianolic acid B, tanshinol, rosmarinic acid and protocatechuic aldehyde (Figures S3 and S4).
| DSYM-AS PPI network
The DSYM-AS PPI network was constructed based on DSYM targets.
| Biological Processes of DSYM-AS PPI Network
The DSYM-AS PPI network was analysed with MCODE, which returned 18 clusters (Table 3; Figure S5). The genes in the clusters were input into DAVID for GO enrichment analysis (Table S5). The main biological processes in cluster 1 are shown as an example in Figure 2B.
| Pathway of DSYM-AS PPI network
After the pathway enrichment analysis, twenty-seven AS-related signalling pathways were obtained (Figure 2C). The P values, fold enrichment and counts of these signalling pathways are shown in Figure 2D. The details are described in Table S6.
| General observation
Before the experiment, there was no significant difference in bodyweight between the ApoE-/- mice and the same-strain C57BL/6L mice (P > .05), so the groups were comparable. During the modelling process, the coat colour, activity, eating and drinking of the mice in each group were normal, with no differences between groups. At the 12th and 20th weeks, the bodyweight of each group had increased significantly (P < .01), and the body mass gain of the ApoE-/- mice was more pronounced than that of the C57BL/6L mice (P < .01). At the end of the 20th week, the body mass of the blank group and the model group had still increased significantly (P < .01), whereas, compared with the end of the 12th week, the body mass of the other experimental groups showed no obvious trend. During the administration period after model establishment, the mice in each group ate less than before and their coats were less lustrous, while activity and water intake were normal. In the DSYM low dose and high dose groups, one mouse in each group died of improper intragastric administration. Immediately after death, the thoracic and abdominal aorta was dissected, and pale yellow lipid streaks were observed, which oxidized and dissolved soon afterwards. The number of mice that finally entered the statistics was 12 in each group (see Table 4).
| Effect of DSYM on serum NO and ET levels in ApoE-/- mice
The serum NO level in the model group was significantly lower, and the ET content significantly higher, than in the other groups (P < .01). The DSYM high dose group was comparable to the simvastatin + aspirin control group in increasing NO (P > .05), but superior to the western medicine group in reducing ET (P < .05) (Figure 3A).
| Effect of DSYM on serum lipid in ApoE-/- mice
In the model group, TC, TG and LDL-C increased significantly and HDL-C decreased significantly (P < .01), showing that the standard Western diet induced severe hyperlipidemia in ApoE knockout mice. Compared with the model group, DSYM and simvastatin + aspirin reduced TC and LDL-C and increased HDL-C (P < .05), but the differences in lipid-lowering effect among the three treatment groups were not statistically significant (P > .05), indicating that the DSYM groups were equivalent to the conventional western medicine treatment group. Increasing the DSYM dose had no significant effect on its lipid-lowering efficacy (P > .05).
| Effect of DSYM on serum hs-CRP in ApoE-/- mice
The hs-CRP level in the model group was significantly higher than that in the blank group (P < .01) and also higher than in the other groups (P < .05). The hs-CRP levels in the drug intervention groups decreased, particularly in the DSYM high dose group and the simvastatin + aspirin group, and there was no significant difference between these two groups (P > .05) (Figure 3D).

FIGURE 6 Results of experimental protein network analysis. A, Experimental protein network (the larger the node, the higher its degree; the thicker the line, the greater its edge betweenness). B, Bubble chart of biological processes (X-axis is fold enrichment). C, Pathway of the experimental protein network (blue circles stand for proteins, yellow circles for pathways; the larger the node, the higher its degree; the thicker the line, the greater its edge betweenness). D, Bubble chart of signalling pathways (X-axis is fold enrichment)
| Effect of DSYM on serum VEGF and MMP-9 in ApoE-/- mice
Serum VEGF and MMP-9 levels in the model group were significantly higher than those in the blank group (P < .01); drug intervention down-regulated VEGF and MMP-9 levels (P < .05 or P < .01). In reducing VEGF and MMP-9, the efficacy of the DSYM high dose group was comparable to that of the simvastatin + aspirin group (P > .05) but better than that of the DSYM low dose group (P < .05). The efficacy of DSYM was dose dependent.
| Effect of DSYM on pathomorphology of aortic root in ApoE-/- mice
Blank group: the aortic wall was smooth, of uniform thickness and without bulges; the intima, media and adventitia were unremarkable, and there were no AS lesions. Model group: the aortic wall was not smooth and its course was irregular; the thickness was uneven, the intima was thickened and the lumen was narrowed, although no obvious protruding plaques were seen; foam cell formation, smooth muscle cell proliferation and inflammatory cell infiltration were visible. The degree of aortic change in the remaining groups under HE staining was between that of the blank group and the model group (Figure 4A).
| Effect of DSYM on ultrastructure of aortic vascular endothelium in ApoE-/- mice
In the model group, the intimal endothelial cells of the aorta were detached and necrotic and the structure of the internal elastic membrane was irregular; smooth muscle cells proliferated markedly; mitochondria were extensively oedematous and vacuolated, with cristae significantly reduced or absent; nuclear structure was blurred; and the cytoplasm contained large numbers of lipid droplets, even arranged in strings, suggesting the fatty streak stage and indicating successful modelling.
After drug intervention, endothelial cell injury was alleviated to varying degrees, indicating that DSYM and simvastatin + aspirin can improve the ultrastructure of aortic endothelial cells in ApoE-/- mice and protect the injured endothelial cells. The DSYM high dose group was superior to the control group, while the control group was superior to the DSYM low dose group (Figure 4B).
| Arrangement of 200 cytokines on membrane chips and expression profile of antibody chips
The arrangement and results of the 200 cytokines in the model group and the DSYM group on the membrane chip are shown in Figure 5.
| Antibody protein chip results data analysis
The results of the RayBio Cytokine Antibody Arrays protein chip were analysed using the "Normalization 2 Positive Control Normalization without Background" data. The signal value ratio method was used for chip analysis, combining the signal value, fold change and t test. Cytokines with a signal value greater than 500 and a fold change greater than 2 or less than 0.5 were selected as differential factors.
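The selection rule described above (signal > 500, fold change > 2 or < 0.5) can be sketched as a simple filter. Which group the signal threshold applies to is our assumption (here: the larger of the two), and the function name is illustrative:

```python
def differential_cytokines(model, treated, min_signal=500.0,
                           fc_up=2.0, fc_down=0.5):
    """Flag cytokines whose model/treated signal ratio is above fc_up or
    below fc_down, requiring a signal above min_signal in at least one
    group. model and treated map cytokine name -> signal value."""
    hits = {}
    for name, m in model.items():
        t = treated.get(name)
        if t is None or t == 0 or max(m, t) <= min_signal:
            continue  # missing, undefined ratio, or below signal floor
        fc = m / t
        if fc > fc_up or fc < fc_down:
            hits[name] = fc
    return hits
```

For example, a cytokine at 1000 (model) versus 400 (treated) passes with fold change 2.5, while one at 100 versus 10 is rejected by the signal floor despite its large ratio.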
The analysis of the two groups of sample chips showed 51 factors with significant differences between the model group and the DSYM group (Table S7). 4-1BB, which had the largest fold ratio, showed a low signal value in the model group and was therefore excluded. This study selected bFGF (maximal signal value), Pro-MMP-9 (maximal fold-change ratio) and VEGF (closely related to vascular endothelial angiogenesis) for further study (Table S7).
| Expression of VEGF, MMP-9 and bFGF mRNA in aortic lysates of each group
The expression of VEGF, MMP-9 and bFGF mRNA in C57BL/6L mice was weak, and the expression levels of VEGF, MMP-9 and bFGF mRNA in ApoE-/-mice were enhanced; and the expression of model group was significantly higher than that of other groups (P < .01).
The difference in MMP-9 mRNA expression between the DSYM low dose group and the high dose group was statistically significant (P < .05) (Figure S6).
To perform deep mining of proteomics data, we used systematic biological methods to further analyse proteomics data.
| Experimental Protein Network
The experimental proteins and their PPI network were utilized to construct the experimental protein network. This network contains 46 nodes and 350 edges ( Figure 6A).
| Biological Processes of Experimental Protein Network
The experimental protein network was analysed with MCODE, which returned 2 clusters (Table 5 and Figure S7). The genes in the clusters were input into DAVID for GO enrichment analysis.
Cluster 1 is mainly involved in immune response (such as regulatory T cells, neutrophil chemotaxis), inflammatory response, angiogenesis and vascular endothelial barrier. Cluster 2 is associated with inflammatory responses, angiogenesis, cellular hypoxia and immune responses (Table S8). The main biological processes in cluster 1 were used as an example shown in Figure 6B.
| Biological processes of experimental protein-other proteins' PPI network
The experimental protein-other proteins' PPI network was analysed with MCODE, which returned 13 clusters (Table 6; Figure S8). The genes in the clusters were input into DAVID for GO enrichment analysis.
Cluster 1 is associated with immune reactions, inflammatory responses, angiogenesis, neutrophil chemotaxis, and proliferation of monocyte-macrophages and smooth muscle. Cluster 2 is associated with proliferation and migration of smooth muscle, angiogenesis, and proliferation of endothelial cells and their signalling pathways. Cluster 3 is mainly related to inflammatory responses.
Cluster 6 is also involved in the inflammatory response. Cluster 7 is associated with the Wnt signalling pathway. Cluster 8 is primarily involved in lipid metabolism. Clusters 5 and 13 did not return any human biological processes, and clusters 4, 9, 10, 11 and 12 failed to return AS-related biological processes (see Table S10). The main biological processes in cluster 1 are shown as an example in Figure 7B.
| Pathway of experimental protein-other proteins' PPI network
After the pathway enrichment analysis, twenty-two AS-related signalling pathways were obtained (Figure 7C). The P values, fold enrichment and counts of these signalling pathways are shown in Figure 7D. The details are described in Table S11. The process of erosion or rupture of AS plaques involves a variety of inflammatory mechanisms, including endothelial dysfunction, leucocyte migration, extracellular matrix degradation and platelet activation. 87 Among these mechanisms are cytokines, such as interleukins, that regulate leucocyte activity in acute-phase reactions. Platelets are also activated before significant ACS appears, which directly contributes to the development of AS. 96-98 Our research showed that DSYM can interfere with these inflammatory molecules and their mediated biological processes.
| CONCLUSION
This study proposed a new method based on systems biology, proteomics, and experimental pharmacology, and analysed the pharmacological mechanism of DSYM. DSYM may achieve therapeutic effects by regulating AS-related biological processes (such as coagulation pathway, inflammatory reaction, NO metabolism, vascular remodelling, lipid metabolism, neutrophil chemotaxis, and proliferation of macrophages and smooth muscle cells) and signalling pathways (such as PI3K-Akt, HIF-1, TNF, neurotrophin, adipocytokine signalling pathways, and complement and coagulation cascades).
ACKNOWLEDGEMENTS
This work is supported by the National Natural Science Foundation of China (No. 81774174).
CONFLICT OF INTEREST
We declare no competing interests.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available in supplementary materials.
The fate of the Konishi multiplet in the \beta-deformed Quantum Spectral Curve
We investigate the solution space of the $\beta$-deformed Quantum Spectral Curve by studying a sample of solutions corresponding to single-trace operators that in the undeformed theory belong to the Konishi multiplet. We discuss how to set the precise boundary conditions for the leading Q-system for a given state, how to solve it, and how to build perturbative corrections to the $\mathbf{P}\mu$-system. We confirm and add several loop orders to known results in the literature.
The spectrum of single-trace operators in β-deformed planar N = 4 SYM has previously been studied, both using conventional quantum field theory methods and integrability. The anomalous dimensions of one-magnon and two-magnon su(2) states were found from QFT calculations to four loops in [32], and for one-magnon states to the first wrapping order in [33]. The complete one-loop dilatation operator was studied in [34], see also [35,36]. Twist-2 and twist-3 operators in the sl(2) sector were treated up to four loops in [37] by using the asymptotic Bethe ansatz and Lüscher corrections. The su(2) Konishi operator was studied up to four loops using Lüscher corrections in [38], in agreement with [32]. In this paper we will reproduce some of these results and demonstrate the power of the QSC by going well beyond in loop order. Section 2 is an informal discussion of the (broken) symmetries of the β-deformed theory and the resulting splitting of the symmetry multiplets of single-trace operators of the undeformed theory. Section 3 contains a short recap of the QSC and the features that are relevant for our purposes. In section 4, we explain how to set the precise boundary conditions for the leading solution of the Q-system and give a strategy for how to solve it. Section 5 gives a summary of the algorithm used to construct perturbative corrections to the leading solutions. Section 6 presents a sample of solutions for different parts of the broken Konishi multiplet.
Symmetry and β-deformation
The field content and the multiplet structure of single-trace operators in N = 4 SYM is dictated by the global psu(2, 2|4) superconformal symmetry. By multiplet, we refer to an irreducible representation formed by a vector space of operators that are connected by the generators of the symmetry. In the deformations of the theory, the field content remains the same, though the multiplets split into smaller pieces due to the breaking of some of the symmetries.
In this section we briefly recall the basics of the full N = 4 superconformal symmetry and discuss the splitting of the Konishi multiplet in the β-deformation.
Symmetry of N = 4 SYM
Similarly to [12], we use the oscillator language to describe the symmetry and its representations. We start with a short recap of the basic concepts.
Oscillator construction for psu(2,2|4). At zero coupling, oscillators provide a convenient way to parametrize the psu(2,2|4) generators E_{mn} (2.1): the supersymmetry generators are of the form f†_a b†^α and f†_a a_α, the su(4) R-symmetry is generated by f†_a f_b, while the non-compact su(2,2) conformal symmetry is generated by combinations of a's and b's.
Field content. The field content of the theory can be constructed from these oscillators.

Quantum numbers. We use the conventions of [12] and describe single-trace operators by the oscillator content needed to construct them, i.e.

n = [n_{b1}, n_{b2} | n_{f1}, n_{f2}, n_{f3}, n_{f4} | n_{a1}, n_{a2}] ,  (2.2)

where the n_• are number operators, e.g. n_{a2} ≡ a†_2 a_2. We will also use the su(4) and su(2,2) weights λ_a and ν_i given by (2.3).
Young diagram. Following the practice of [12,39], one can use non-compact Young diagrams to characterize multiplets at g = 0 in the undeformed theory. We refer to figure 4 in [12] for the definition and to figure 2 below for the Young diagrams corresponding to the Konishi multiplet.
Leftover symmetry in the β-deformation
The β-deformation breaks the off-diagonal part of the R-symmetry and 12 out of the 16 supercharges. An overview of the psu(2,2|4) generators (2.1) that correspond to (un)broken symmetries is given in figure 1. The leftover continuous symmetry is su(2,2|1) ⊕ u(1)³. As the oscillators f_1, f_2 and f_3 are treated on an equal footing, there is an additional discrete S_3 symmetry that permutes these three oscillators.

Diagonal twist of su(4). The β-deformation twists the su(4) symmetry with twist parameters [26]

x_a = ( e^{iβ(n_{f2} − n_{f3})} , e^{iβ(n_{f3} − n_{f1})} , e^{iβ(n_{f1} − n_{f2})} , 1 ) .  (2.4)
Notice that the twist depends on the quantum numbers, i.e. it depends on the operator in question. Throughout the paper, we will use the shorthand notation x ≡ e i β .
Shifted weights. The concept of shifted weights, λ̂ and ν̂, is important because they govern the asymptotics of the QSC. They are given by [26]

λ̂_a = λ_a − Σ_{b≺a} δ_{x_a, x_b} + Σ_{i≺a} δ_{x_a, 1} + Λ ,  (2.5a)

where ≺ means that the oscillator corresponding to the left index is placed before the one corresponding to the right index in the grading for which the operator is a HWS, and Λ is an arbitrary integer shift that we will return to.
The Konishi multiplet
The Konishi multiplet is the archetypical example in the study of N = 4 SYM. We here review some facts about this multiplet in the undeformed theory and look at how it splits up due to the β-deformation.
Undeformed theory
In the undeformed theory, the Konishi multiplet contains the simplest operators not protected from quantum corrections. The operator of lowest dimension (∆ 0 = 2) within the multiplet is the two-scalar state often referred to as the "su(4) Konishi". It is the highest weight state in the grading 12123434/2222. We can act on the state (2.6) with the symmetry generators (2.1) to build an infinite tower of states. Throughout the paper, we will leave out the symbol "Tr[...]", so the reader should keep in mind that a trace is always implicit when discussing operators. Also, we will loosely refer to the states by a representative of its field content, e.g. we will refer to (2.6) simply as ZZ.
Supercharges and gradings. Acting once on (2.6) with the supercharges a†f or b†f† produces operators containing a scalar and a fermion, i.e. of the type ΦΨ, with ∆_0 = 5/2. These states are of highest weight in different gradings. For example, acting with a†_1 f_4 results in a state with content of the type ZΨ_31 and takes us to the HWS grading 12123344, i.e. simply the replacement 43 → 34. Acting with a†_2 f_2, we get a state of the kind Z̄Ψ_12, which is a HWS in 12134423, where we also needed to make rearrangements within the fermionic and bosonic oscillators, respectively.
Shortening. An important feature of the Konishi multiplet is that it is composed of operators of different lengths. The superconformal algebra at g = 0 does not connect the full Konishi multiplet; this effect is known as shortening. Only when quantum corrections to the superconformal algebra are taken into account is it possible to connect the complete Konishi multiplet. At zero coupling the multiplet splits up into four short multiplets, one with L = 2, two with L = 3, and one with L = 4. The Young diagrams for each of these four short multiplets are given in figure 2. For example, if we act on the state (2.6) first with a†_1 f_4 and then with a†_2 f_4, we annihilate the state, but the quantum corrections to the generators would in fact produce a state of length three with field content ZXȲ. Figure 3 gives a sample of states in the Konishi multiplet. Besides the "su(4) Konishi" (2.6), other popular members of the Konishi multiplet are the "su(2) Konishi" with field content Z²X², the HWS in 0224, and the "sl(2) Konishi" with content D²_{12}Z², the HWS in 1133.
R-symmetry structure. The su(4) Konishi (2.6) is a singlet under the su(4) R-symmetry. However, e.g. the su(2) and sl(2) Konishi are not. We can act on the sl(2) Konishi, D 2 12 Z 2 , with f † 3 f 2 once to produce states with field content D 2 12 ZX and twice to get D 2 12 X 2 , and similarly for the other R-symmetry generators.
Conformal generators. The conformal generators are composed of a and b oscillators. Those of type a†a and b†b act similarly to the R-symmetry generators, and their action can lead to highest weight states in different gradings, corresponding to the permutations 12 ↔ 21 and 34 ↔ 43. The generators of type a†b†, corresponding to derivatives, are different: one can act with them infinitely many times, and they only produce descendants.
Deformed theory
The β-deformation breaks the Konishi multiplet into a large number of smaller symmetry multiplets, which we will refer to as submultiplets. The operators belonging to a submultiplet are related by the unbroken supercharges (a†_α f_4 and b_α f_4 at zero coupling) and the conformal generators. Due to the non-compact conformal symmetry, the submultiplets are all infinite-dimensional.
For example, the state (2.6) will remain in the same multiplet as the operator that is generated by acting with a†_1 f_4 (of the type ZΨ_31), but not with the ones generated by acting with a†_1 f_{a<4} (e.g. of type ZΨ_41), as these are no longer symmetries.
In this paper, we refrain from classifying all submultiplets of the Konishi multiplet and instead simply consider a sample of submultiplets that together illustrate some of the features of the solution space of the β-deformed QSC. The eight examples that we discuss are listed in table 1. Most notably, we will consider the submultiplets containing the su(4), su(2) and sl(2) Konishi operators (first, eighth and fifth entries in table 1, respectively), which can be compared to known results in the literature.
In fact, two of the eight operators in table 1 are in the same submultiplet: the third and eighth operators, ZXȲ and the su(2) state Z²X², are related by the action of the unbroken generators a†_1 f_4 and a†_2 f_4 (to be more precise, by the quantum-corrected versions of these generators). Using the freedom to choose any Λ in the shifted weights (2.5), we see that they have identical shifted weights λ̂ and ν̂, and the same twists x_a. Notice furthermore that the operator in the second row, ZΨ_22, also has the same λ̂ and ν̂, but twists differing by the replacement x → x^{1/2}.
Example: Ψ11F11, the HWS in 2333

Throughout the paper, we will exemplify our approach by considering the submultiplet containing the HWS of the undeformed Konishi multiplet in the 12132344 (2333) grading, with oscillator numbers n = [0, 0|1, 0, 0, 0|3, 0] and consequently field content of the type Ψ11F11. This is the fourth example in table 1. The grading path and the Young diagram corresponding to this operator in the undeformed theory follow from this oscillator content. For this operator (and the submultiplet that it belongs to), the twist (2.4) evaluates to {1, x^{-2}, x^{2}, 1}, while the shifted weights follow from (2.5).

Table 1. Representative operators from the submultiplets that we consider in this paper. We will refer to the operators by the highlighted examples listed in the possible field content column. Note that this does not refer to the precise structure of the operator.
QSC essentials
The Quantum Spectral Curve is a Riemann-Hilbert problem whose solutions, among other things, capture the spectrum of anomalous dimensions of single-trace operators. The generalization of the QSC to the twisted case does not change its algebraic structure, only the boundary conditions. In the following discussion of the twisted QSC, we closely follow the results and conventions of [26].
Q-system
A very elegant aspect of the QSC is the gl(4|4) Q-system [2], which we will use when finding the leading solutions of the QSC in section 4. It consists of a set of Q-functions, Q_{ab...|ij...}, with up to four antisymmetrized indices of each of two types, each taking values between 1 and 4. The Q-functions satisfy three types of QQ-relations, and we require that Q_{∅|∅} = Q_{1234|1234} = 1.
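The three types of QQ-relations referred to here take the standard form used in the QSC literature; the following is a sketch in common conventions (A and I are multi-indices, and f^± denotes f(u ± i/2)), not necessarily in the exact normalization of this paper:

```latex
\begin{aligned}
Q_{A|I}\,Q_{Aab|I} &= Q^{+}_{Aa|I}\,Q^{-}_{Ab|I}-Q^{-}_{Aa|I}\,Q^{+}_{Ab|I},\\
Q_{A|I}\,Q_{A|Iij} &= Q^{+}_{A|Ii}\,Q^{-}_{A|Ij}-Q^{-}_{A|Ii}\,Q^{+}_{A|Ij},\\
Q_{Aa|I}\,Q_{A|Ii} &= Q^{+}_{Aa|Ii}\,Q^{-}_{A|I}-Q^{-}_{Aa|Ii}\,Q^{+}_{A|I}.
\end{aligned}
```

The first two relations involve two indices of the same type (bosonic), while the third mixes one index of each type (fermionic).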
Distinguished Q-functions. We will call a Q-function distinguished if its indices take the lowest possible values, i.e. Q_{12...a|12...i}.

Asymptotics. The large-u asymptotics of a general Q-function Q_{A|I} is governed by its quantum numbers and twists [26]. Of particular importance are the single-indexed functions P_a and P^a defined in (3.5), whose normalizations A and B satisfy the constraints (3.6) (no sums over a or i). We have the freedom to choose the A_a and B_i freely as long as the products (3.6) are satisfied, but to maintain Q_{1234|1234} = 1, the choice should satisfy the additional constraint (3.7).
Pµ-system
The full Q-system carries a lot of redundant information, which can be reduced to a much more compact formulation, the Pµ-system. This consists of the 2 × 4 functions P_a and P^a introduced in (3.5), and six additional functions of the spectral parameter arranged in the antisymmetric symbol µ_{ab}. These, in turn, build the upper-index functions µ^{ab} (3.8), where the Pfaffian of µ, defined in (3.9), is in fact a constant determined by the normalization of the Q-system. The upper- and lower-indexed functions satisfy the relations

P_a P^a = 0 ,  µ_{ab} µ^{bc} = δ_a^c .  (3.10)

They are all multivalued functions of the spectral parameter u with a very precise analytic structure, as we will see below: an infinite number of branch cuts, all of square-root type, while the QSC functions are required to be analytic everywhere else. Let us state the analytic properties of the functions individually.
Analytic structure of P. The multivalued functions P have one Riemann sheet with only a single branch cut,¹ between the points ±2g. We denote the function values on this sheet by P(u). The analytic continuation onto the second sheet is denoted P̃(u), and on this sheet there is an infinite number of cuts at ±2g + iZ. This is illustrated to the left in figure 4.
Analytic structure of µ. The functions µ have an infinite number of cuts at ±2g + iZ on all sheets, but with the very special property

µ̃ = µ^{[2]} ,  (3.11)

i.e. the analytic continuation through the cut at ±2g is the same as the values on the first sheet, only shifted by i. An important consequence of this is that both combinations of µ and µ̃ formed in (3.12) are regular on the real axis, which is exploited in the perturbative algorithm below.

¹ In this paper we exclusively choose short cuts. One could of course choose different branch cuts, e.g. long cuts that connect the branch points through infinity, but the short cuts are the natural choice in the weak coupling limit, g → 0.

Solutions corresponding to single-trace operators. An important constraint is that for the solutions to the QSC that correspond to single-trace operators, µ needs to have power-like large-u asymptotics, up to an overall exponential twist factor, i.e. µ_{ab} ∼ u^{M_{ab}} up to the twist,  (3.13)
where M_{ab} is some integer. Furthermore, µ should satisfy the zero-momentum condition (3.14).

Relations. The analytic continuation P̃ is given through µ and P by

P̃_a = µ_{ab} P^b ,  P̃^a = µ^{ab} P_b ,  (3.15)

and these functions are further related by the equation

µ̃_{ab} − µ_{ab} = P_a P̃_b − P_b P̃_a .  (3.16)

Using equations (3.11) and (3.15), it can be rewritten as

µ^{[2]}_{ab} − µ_{ab} = P_a µ_{bc} P^c − P_b µ_{ac} P^c .  (3.17)

These are in fact the same difference equations that are satisfied by the central Q-functions Q^−_{ab|ij}. Thus each µ_{ab} can be written as a linear combination of the six corresponding Q^−_{ab|ij}.

Symmetries. The Pµ-system is subject to two important symmetries [2]. First, there are the gauge transformations related to the freedom in the shifted weights (2.5); x is defined below in equation (3.20). Second, one can use the H-symmetry, where H is a constant matrix, to rotate the basis of P's and µ's.
All-loop ansatz for P
It is possible to construct an ansatz for P thanks to their simple analytic structure, which is central to the perturbative algorithm described in section 5. This procedure was described thoroughly in [13], and we simply make the natural generalization to the twisted scenario. The crucial idea is to use the Zhukowsky map to write an expansion that converges everywhere on the first sheet P(u), and also in a finite region on the second sheet P̃(u).
The Zhukowsky map. The first two sheets of P can be brought together into one by introducing the Zhukowsky variable x, defined through u = g(x + 1/x). The single cut on the first u-sheet is mapped to the unit circle in the x-plane. It is hence dissolved, as the first sheet is mapped to the region |x| > 1 while the second sheet is mapped into the interior of the unit circle. As x is a double-valued function of u, we always choose the branch |x| > 1 and substitute x → 1/x for values on the second u-sheet. The map is illustrated in figure 4. Note that expanding the Zhukowsky variable in g gives x = u/g − g/u − g 3 /u 3 − . . . , and that the large u-asymptotics is x ∼ u g .
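To make the branch choice concrete, here is a small numerical sketch (not taken from the paper's code) of the Zhukowsky map u = g(x + 1/x), its inversion on the branch |x| > 1, and the first terms of its weak-coupling expansion:

```python
import cmath

def zhukowsky(u, g):
    """Zhukowsky variable x with u = g*(x + 1/x), on the branch |x| >= 1
    corresponding to the first u-sheet."""
    x = (u + cmath.sqrt(u * u - 4 * g * g)) / (2 * g)
    if abs(x) < 1:      # principal sqrt picked the other branch: flip it
        x = 1 / x
    return x

g = 0.3
for u in (5.0, -2.7, 1.0 + 0.4j):
    x = zhukowsky(u, g)
    assert abs(x) >= 1                        # first sheet -> |x| > 1
    assert abs(g * (x + 1 / x) - u) < 1e-12   # inverse relation holds

# Weak-coupling expansion x = u/g - g/u - g**3/u**3 - ... and the
# large-u asymptotics x ~ u/g
u, g = 10.0, 0.1
assert abs(zhukowsky(u, g) - (u / g - g / u - g**3 / u**3)) < 1e-9
```

Flipping x → 1/x, as in the last assertion's expansion, is exactly the substitution used to reach the second u-sheet.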
Explicit ansatz for P. By generalizing the ansatz of [13] and by testing it in explicit calculations, we propose the following ansatz for the functions P: Note that the combinationsλ a − Λ andλ a − Λ are independent of the choice of Λ, cf. the definitions (2.5) and (3.5b). We here introduced two new numbers, L and δ λ . The former is a modified version of the operator length. The pattern that we find is that L corresponds to the lowest operator length with which the quantum numbersλ andν can be achieved. The number δ λ is an offset that we, for the states from the Konishi multiplet, find to be 0 for all gradings except those that end with ...4, where it takes the value 1. However, the 0233 HWS ZΨ 2 11 with length 3 has quantum numbers that cannot be achieved by a length-2 state, and consequently it has modified length L = 3.
Furthermore, for the states from the Konishi multiplet we assume 2 that the coefficients c and d in the ansatz (3.22) have regular expansions in g 2 : Naturally, the coefficient of the highest power in u is fixed by the chosen normalizations A.
The beauty of the expansion in x is that it converges for all |x| > 1 but can be extended through the resolved cut into a finite region inside |x| < 1 (until the first singularity at x(u ± i )). As such, it also covers a region on the second u-sheet, where the ansatz for P close to |x| = 1 (u = 0 for small g) can be obtained by replacing x → 1/x in (3.22). For the purpose of perturbation theory, the ansatz (3.22) should be expressed in terms of u and expanded in g, giving us a g-expansion of P order by order. A crucial feature is that each perturbative contribution P (n) contains only a finite number of unknown coefficients c and d. As we will see in section 5, this gives us a starting point for doing perturbation theory.
Example: Ψ 11 F 11 -2333 For this example, the asymptotics of P and Q are dictated by the weights. The products (3.6) follow, where we make the indicated choices. For this state, we have L = 2 and δ λ = 0, and setting Λ = 0 we can write down the ansatz (3.22) for P. Two examples of the explicit g-expansion to second order are given, where we have substituted the asymptotic coefficients A 2 and A 2 .
The leading Q-system
For each symmetry multiplet, there is a distinct solution to the QSC. In the undeformed theory, the infinite set of operators in the Konishi multiplet correspond to just a single solution. In the β-deformed theory, this solution splits up into several solutions corresponding to the submultiplets. These solutions should all reduce to the undeformed solution in the limit β → 0.
To find the solution, we generalize the strategy of [12] to find the leading Q-system. The idea of the method is to first find the subset of distinguished Q-functions (3.2), which are related only by one type of QQ-relation (3.1c), by imposing so-called zero-remainder conditions on them. To do this, we first have to understand the boundary conditions of the problem, i.e. the precise structure of the distinguished Q-functions, given the quantum numbers and HWS grading of the operator in question.
Explicit boundary conditions
For the undeformed theory, the precise boundary conditions for the leading Q-system were discussed in [12,40]. In that work, the concept of a larger Young diagram Q-system was introduced, from which the gl(4|4) Q-system could be picked out as a subset. In the deformed theory, the concept of extended Young diagrams is not immediately applicable, so in our context we will work directly with the gl(4|4) Q-system.
The boundary conditions only change slightly for the twisted case. First of all, exponential twist-dependent factors appear. Furthermore, it is well-known that the number of Bethe roots, i.e. the number of zeros in the Q-functions, is affected by twisting. The number of roots that appear in the Bethe equations is unchanged, but the number of roots in other Q-functions 3 will be altered. Before describing the boundary conditions for the twisted psu(2, 2|4) spin chain, we take a look at the su(2) Q-system to see these features.
Example: boundary conditions of twisted su(2) Q-system For example, eigenstates of the su(2) spin chain of the form Z L−M X M correspond to polynomial solutions to the single QQ-relation where Q 1 and Q 2 have the form In the twisted case, the QQ-relation remains the same, but the Q-functions change structure to where z is some twist. The degree of Q 2 is lowered by one, as a consequence of the fact that the leading powers in u of the two terms on the left hand side of (4.1) do not cancel. The important point is that the change in the number of roots happens only in the Q-functions that do not appear in the Bethe equations, which are 3 Note that one can in principle also write down Bethe equations for these functions by performing so-called duality transformations.
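The degree-drop mechanism can be checked with a toy computation. The sketch below uses made-up polynomial coefficients and a schematic normalization of the twisted QQ-relation, z Q1⁺Q2⁻ − z⁻¹ Q1⁻Q2⁺; it only demonstrates the leading-power cancellation, not the actual spin-chain Q-functions:

```python
from math import comb

def shift(p, a):
    """Coefficients (ascending in u) of p(u + a) for a complex shift a."""
    out = [0j] * len(p)
    for k, c in enumerate(p):
        for j in range(k + 1):
            out[j] += c * comb(k, j) * a ** (k - j)
    return out

def mul(p, q):
    """Product of two coefficient lists."""
    out = [0j] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def qq_combination(q1, q2, z):
    """Schematic twisted QQ-combination z*Q1^+ Q2^- - (1/z)*Q1^- Q2^+,
    with f^(+-) = f(u +- i/2)."""
    lhs = mul(shift(q1, 0.5j), shift(q2, -0.5j))
    rhs = mul(shift(q1, -0.5j), shift(q2, 0.5j))
    return [z * a - b / z for a, b in zip(lhs, rhs)]

q1 = [1j, 2.0, 1.0]   # monic degree-2 polynomial (made-up coefficients)
q2 = [0.5, -1j, 1.0]  # another monic degree-2 polynomial

# Untwisted (z = 1): the leading powers cancel and the degree drops.
w = qq_combination(q1, q2, 1.0)
assert abs(w[-1]) < 1e-12

# Twisted: the leading coefficient z - 1/z survives, so one of the
# Q-functions must lose a polynomial degree to compensate.
zt = 0.6 + 0.8j       # a unimodular twist, |zt| = 1
w = qq_combination(q1, q2, zt)
assert abs(w[-1] - (zt - 1 / zt)) < 1e-12
```

The surviving factor z − 1/z for z ≠ 1 is exactly why the twisted ansatz lowers the degree of Q 2 by one.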
In analogy with this example, we can make an educated guess for a concrete ansatz for the larger psu(2, 2|4) Q-system. Our main requirement is that the number of Bethe roots in the Q-functions on the HWS grading path of the operator remains unchanged from the twisted to the untwisted case. Importantly, this requirement is in agreement with the asymptotic structure of the Q-functions (3.3).
where f a,s (u) is a trivial "fusion factor" containing shifted powers of u ±L and where q a,s (u) is a polynomial factor carrying the Bethe roots.
Fusion factor f a,s . The fusion factors have the form Note that this corresponds to a particular fixing of the gauge symmetry (3.18). The values of f a,s for the psu(2, 2|4) Q-system are illustrated in figure 5.
Figure 5. The value of f a,s in the psu(2, 2|4) Q-system.
Counting Bethe roots. One can use the asymptotics (3.3) to deduce the number of roots in Q a,s , by subtracting the powers coming from the fusion factors. The pattern that one finds is that on the grading path where the operator in question is a HWS, the number of roots is the same in the undeformed and deformed theory. We will discuss the case where the operator is not a HWS in any grading in the undeformed theory in the example in section 6.3. For states that are HWS in some grading in the undeformed theory, we can then simply use the counting rules from the undeformed theory [12] to find the number of roots in the Q-functions on the HWS grading path. The number of roots for operators from the four short multiplets forming the Konishi multiplet are depicted in figure 6. Knowing the number of roots on the grading path, one can deduce the number of roots in the remaining Q a,s . One has to count powers in the QQ-relation (3.1c), while also taking the fusion factors into account. Importantly, one should take into account whether the Q-functions have differing exponential factors or not. We give an example of the counting procedure at the end of the section. Figure 6. Number of Bethe roots in the undeformed distinguished Q-functions for the four short multiplets that form the Konishi multiplet. As described in [12], the four Q-systems are compatible due to certain roots being placed at u ∈ (i/2)Z and due to symmetry transformations that suppress different Q-functions in g.
Explicit solutions from zero-remainder conditions
Knowing the number of roots in the distinguished Q-functions, we can write a precise ansatz for them in terms of a finite number of coefficients, i.e. where M a,s is the number of Bethe roots in q a,s . To determine the coefficients c, we choose a particular path through the Q-system from Q 0,0 to Q 4,4 (not necessarily the grading path!) and make an ansatz there. It is preferable to choose a path with as few roots as possible. This gives us a concrete ansatz for seven Q's (and also Q 0,0 = Q 4,4 = 1) in terms of a number of unknown c's. The remaining Q's can then be generated from those on the chosen path through the QQ-relation (3.1c), i.e.
The polynomial part q a,s can be found as the quotient of this polynomial division, while the remainder gives us constraints on the coefficients c, as it has to vanish. An efficient strategy is to first generate all Q a,s and then impose the zero-remainder conditions simultaneously.
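As an illustration of how zero-remainder conditions constrain ansatz coefficients, consider the following toy polynomial division (the polynomials are invented for the example and are not actual Q-functions):

```python
def polydiv(num, den):
    """Long division of polynomials given as ascending coefficient lists.
    Returns (quotient, remainder) with num = quotient*den + remainder."""
    num = list(num)
    q = [0.0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] / den[-1]
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    return q, num[: len(den) - 1]

# Toy zero-remainder condition: divide u^3 + a*u + b by u^2 + 1.
# The quotient is u and the remainder is (a - 1)*u + b, so demanding a
# vanishing remainder fixes the ansatz coefficients to a = 1, b = 0.
a, b = 1.0, 0.0
quot, rem = polydiv([b, a, 0.0, 1.0], [1.0, 0.0, 1.0])
assert quot == [0.0, 1.0]                    # the quotient polynomial u
assert all(abs(r) < 1e-12 for r in rem)      # remainder vanishes

# With generic coefficients the remainder does not vanish:
_, rem = polydiv([2.0, 3.0, 0.0, 1.0], [1.0, 0.0, 1.0])
assert any(abs(r) > 1e-12 for r in rem)
```

In the actual calculation, the same kind of remainder-vanishing equations are imposed simultaneously for all generated Q a,s.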
Generating the full Q-system and the leading Pµ-system
With the distinguished Q-functions at hand, it is rather straightforward to generate the remaining Q-functions. This was discussed in detail in [12], and we use the same strategy:
• First derive the functions Q a|∅ ≡ P a from Q a,0 and Q ∅|i ≡ Q i from Q 0,i .
• Then generate the 16 functions Q a|i through where Ψ is the inverse of the difference operator ∇, Ψ (∇F (u)) = F (u) + P, and P is an i -periodic function, since any such function belongs to the kernel of ∇.
• Generate the remaining Q-system from P a , Q i and Q a|i through the determinant relations given in [2]. In particular, we need the 36 functions Q ab|ij and the four functions Q abc|1234 .
where ω is a constant.
• Then construct P̃ a . Finally, compare these expressions with the ansatz for P̃ to fix remaining unknowns.
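The Ψ-operation on polynomials can be sketched as follows, assuming the convention ∇F(u) = F(u + i/2) − F(u − i/2); the i-periodic ambiguity then reduces, on polynomials, to the free constant term, which we simply set to zero:

```python
from math import comb

def nabla(p):
    """(nabla p)(u) = p(u + i/2) - p(u - i/2) for a polynomial p given
    as an ascending list of complex coefficients."""
    out = [0j] * max(len(p) - 1, 1)
    for m, c in enumerate(p):
        for j in range(m):
            out[j] += c * comb(m, j) * ((0.5j) ** (m - j) - (-0.5j) ** (m - j))
    return out

def psi(f):
    """Psi on polynomials: returns g with nabla(g) = f, one degree higher
    than f. The periodic ambiguity is fixed by setting g[0] = 0."""
    g = [0j] * (len(f) + 1)
    for d in range(len(f) - 1, -1, -1):
        cur = nabla(g)[d]           # contribution of already-fixed terms
        g[d + 1] = (f[d] - cur) / ((d + 1) * 1j)
    return g

f = [3.0 + 0j, 0j, 2.0 + 0j]        # f(u) = 2u^2 + 3
g = psi(f)
assert len(g) == len(f) + 1          # Psi raises the degree by one
assert all(abs(a - b) < 1e-12 for a, b in zip(nabla(g), f))
```

This matches the statement in section 5 that Ψ maps a polynomial in u to another polynomial of one degree higher; the full algorithm additionally handles η-functions and exponential factors.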
Example: Ψ 11 F 11 -2333 To find the precise boundary conditions for the solution corresponding to the submultiplet containing this operator, we need to look at the 2333 path in the leftmost diagram in figure 6. This path traces out the functions Q 0,1 , Q 0,2 , Q 1,2 , Q 1,3 , Q 2,3 , Q 3,3 and Q 4,3 . There is only a single Bethe root in Q 1,2 , and the ansatz for these seven functions can thus be written as The number of roots in the other Q a,s can be found by power counting in the QQ-relations, and we find these numbers to be: where the green highlighting signals that the Q's carry a twist-factor. The symbol • signals that the corresponding Q vanishes at the leading order in g.
Note that we could in fact have made a completely trivial ansatz on the path 3333, but to illustrate the zero-remainder conditions, we stick with the starting point (4.11).
Using (4.8) to find Q 0,3 , we get from which it is obvious that we have to set c 1,2,0 = 0 to kill the remainder term. Going in the other direction, we can generate the other Q's, e.g.
Notice that when sending x → 1, the function reduces to the well-known Konishi solution. The full set of leading distinguished Q-functions is listed in the following table. It is arranged in analogy to the 4 × 4 diagrams. The lower left corner holds Q 0,0 while the upper right contains Q 4,4 , and the grading line between them is indicated with the colored frames. The exponential twist factors, being the same across each row, are relegated to the right. The highlighting indicates the momentum carrying Q-function Q 2,2 . The presented normalization is chosen for brevity, where in general the leading u-power has unit coefficient.
Deriving P a and Q i . Moving on to the rest of the Q-system, we use the relations between Q a|∅ , Q ∅|i and the distinguished Q-functions to get P a and Q i , as explained in [12]. Exemplified for P 2 = Q 2|∅ , we make the ansatz in accordance with (3.22) and solve for the coefficients c k through the determinant relation The solution is For the other relations, we refer to [12] as the procedure is entirely analogous. The leading P a and Q i are , Generating Q a|i .
With P a and Q i we can now generate the Q a|i through (4.9). As an example we have an expression involving η 2 , where η 2 is a Hurwitz η-function, defined below in equation (5.4). An important detail is that Ψ returns a constant for the arguments that do not contain an exponential twist factor, as the only allowed periodic function. These constants will be fixed together with the other coefficients when comparing the expressions for P̃; for instance, we have one such constant here. Obtaining Q ab|ij , Q abc|1234 and µ (0) . The determinant relations given in [2] are straightforward to apply and we omit these intermediate steps. Through equations (4.10) and (3.8) we then get µ (0) . This introduces the unknown coefficients ω and Pf(µ) (0) .
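The shift property that makes η-functions useful inside the Ψ-operation can be verified numerically. The sketch below assumes the standard single-index definition η_s(u) = Σ_{k≥0} (u + ik)^(−s):

```python
def eta(s, u, terms=4000):
    """Truncated Hurwitz eta-function: sum_{k=0}^{terms-1} 1/(u + i k)^s."""
    return sum(1.0 / (u + 1j * k) ** s for k in range(terms))

u, s, n = 0.7 + 0.2j, 2, 4000

# Shift identity eta_s(u) - eta_s(u + i) = u^(-s): with a common
# truncation the difference telescopes exactly, up to one tail term.
lhs = eta(s, u, n) - eta(s, u + 1j, n)
assert abs(lhs - (1.0 / u**s - 1.0 / (u + 1j * n) ** s)) < 1e-10
assert abs(lhs - 1.0 / u**s) < 1e-6   # the tail term is O(n**-s)
```

It is this finite-difference identity that lets η-functions absorb the inverse powers of u produced when solving the difference equations.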
Fixing coefficients through matching the expressions for P̃.
We now have all we need to define P̃ (0) a = (µ ab P b ) (0) , and similarly for P̃ a (0) , such that we can compare with the ansatz (3.22). Let us fix a few coefficients in this way, by matching the expressions calculated from (3.15) against the ansatz (3.22).
The equation system for these coefficients are solved by while all the others vanish. We see that we already obtain the 1-loop anomalous dimension. Repeating this for allP (0) , we can fix all introduced coefficients. As a check and an example of the difference equation (3.17), we can look at which is adequately satisfied by the found expression We continue this example in section 5.1, where we will look at the perturbative corrections.
Perturbative corrections to the Pµ-system
In the perturbative solution of the QSC, we adapt the algorithm of [13]. It streamlines the redundant information in the QSC into a small set of steps that is carried out repeatedly order by order, only involving the quantities P and µ in the Pµ-system. An overview of the steps is given in figure 7 and further described below in section 5.1.
Difference equation on µ. The most central equation of the Pµ-system is the difference equation (3.17) for µ, here repeated: µ [2] ab − µ ab = P a P̃ b − P b P̃ a , which couples all six component functions of µ. It can be rephrased for perturbative calculations as an inhomogeneous equation for each order, where all terms involving lower orders of µ are collected into the source term U (n) ab . This equation can be solved iteratively, order by order, by using the ansatz (3.22) for P and exploiting the relationships and analytic structures of the involved quantities to fix the constants. Normally, all introduced constants are fixed at the end of each iterative step, such that the only unknown parts inside U (n) ab are built from:
• Hurwitz η-functions η s 1 ,...,s k (u) = Σ n 1 >...>n k ≥0 Π j (u + i n j ) −s j , (5.4)
• i -periodic functions with at most constant u-asymptotics and poles only at i Z, written in the basis of i -periodic functions P m , (5.5)
• and overall exponential factors of x i u a . All these functions form algebraic rings such that any expression can be written as quadrolinear combinations of them. This property ensures the closure of the Ψ-operation and allows for a fast and simple computer implementation, as discussed thoroughly in [5,27,41]. Note that for the fully twisted QSC, one has to extend the above basis to include twisted η-functions, see e.g. [27]. These would arise from Ψ-actions like Ψ x i u (u+i n) m , Ψ x i u η A , but such expressions never appear in our calculation. We do not have a proof of this property, but it strongly hints that these functions only appear for twists of the su(2, 2) part of the symmetry. Conceptually, the perturbative computations in the β-deformed theory are thus very similar to those in the undeformed theory, and the Ψ-operation needs only a mild generalization. Whereas Ψ maps a polynomial in u to another polynomial of one degree higher, an overall x i u -factor times such a polynomial is mapped to a product of the same exponential factor and a polynomial of the same degree.
The functions P m enter through the i -periodic ambiguity in the solution of equation (5.3), i.e.
where the coefficients φ (n) k,m are fixed later in the algorithm. In practice, the infinite sum in (5.6) is truncated rather soon.
Numbers. Practically, the fact that the coefficients in the QSC functions contain the twist instead of just being numbers as in the undeformed case is a computational challenge, as we will see. As in the undeformed case, the numbers that appear in the functions are the algebraic numbers arising when solving the zero-remainder conditions for the leading Q-system, and multiple zeta values (MZVs) ζ A that arise in the power expansion of η-functions at u = 0, e.g.
The MZVs also appear in the corresponding expansion of the i -periodic functions P m .
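A minimal numerical illustration of an MZV identity is Euler's relation ζ(2,1) = ζ(3), evaluated here by brute-force truncation (the truncation length is arbitrary):

```python
def zeta21(n):
    """Truncated double sum for the MZV zeta(2,1) = sum_{m>k>=1} 1/(m^2 k)."""
    total, harmonic = 0.0, 0.0
    for m in range(1, n + 1):
        total += harmonic / (m * m)   # harmonic = sum_{k < m} 1/k
        harmonic += 1.0 / m
    return total

ZETA3 = 1.2020569031595943   # Apery's constant, zeta(3)

# Euler's identity zeta(2,1) = zeta(3), checked to truncation accuracy
assert abs(zeta21(200_000) - ZETA3) < 1e-3
```

In the actual algorithm the MZVs of course enter symbolically, as exact coefficients in the u = 0 expansions, rather than as floating-point numbers.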
The perturbative algorithm
The algorithm for the perturbative calculation of the Pµ-system [13] consists of five steps, repeated at each order. We describe them here in chronological order, while a pictorial overview is given in figure 7.
Step 1 Define P (n) a and P a (n) through the ansatz (3.22). This introduces a finite number of coefficients, including the perturbative corrections to the anomalous dimension γ (n) , which the following steps aim to fix.
It is convenient to impose equation (3.10), P a P a = 0, at this point to already fix a few constants.
Step 2 Construct µ (n) ab through equation (5.3). This automatically defines µ ab (n) through the relation (3.8) and also introduces a few more constants φ (n) k,m due to the i -periodic ambiguity. In our implementation, this is the most computationally expensive step.
Step 3 Impose the regularity conditions (3.12) on µ ab . This amounts to expanding the expressions µ (n) ab + µ (n) ab [2] and (µ ab − µ [2] ab )/√(u 2 − 4g 2 ) at u = 0 and imposing that all poles vanish. This fixes many of the introduced constants. It is also where the MZVs first appear.
Step 4 Calculate µ ab (n) and P̃ (n) through the relations (3.8) and (3.15).

Step 5 Match the expressions defined in step 4 with the ansatz (3.22). This again requires power expanding at u = 0, and introduces more MZVs. This normally fixes the last introduced constants such that all quantities at order g 2n are fixed and can be used as input in the calculation of the next order.
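The five steps can be summarized in a purely structural sketch; every physics operation is replaced by a stub, so this shows only the data flow of the iterative loop, not an actual implementation:

```python
def solve_pmu_system(max_order):
    """Structural sketch of the five-step loop of section 5.1. All physics
    is replaced by stubs; only the data flow between steps is shown."""
    state = {"order": 0, "constants": {}}
    for n in range(1, max_order + 1):
        P = ansatz_P(n, state)                     # step 1: ansatz (3.22)
        mu = construct_mu(n, P, state)             # step 2: solve eq. (5.3)
        state["constants"].update(regularity(mu))  # step 3: poles vanish
        P_tilde = continuation(mu, P)              # step 4: (3.8), (3.15)
        state["constants"].update(match(P_tilde))  # step 5: match ansatz
        state["order"] = n
    return state

# Stubs standing in for the actual computer-algebra operations:
def ansatz_P(n, state):        return {"order": n}
def construct_mu(n, P, state): return {"source": P}
def regularity(mu):            return {("phi", id(mu)): 0}
def continuation(mu, P):       return {"mu": mu, "P": P}
def match(P_tilde):            return {("c", id(P_tilde)): 0}

result = solve_pmu_system(3)
assert result["order"] == 3
```

The key structural point is that the constants introduced in steps 1, 2 and 4 are consumed by the constraint steps 3 and 5 before the next order begins.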
Example: Ψ 11 F 11 -2333 We return to our example to illustrate the perturbative algorithm. Having all quantities at leading order already, we go through the algorithm step by step at the subleading order.
Step 1 We define P through the ansatz, substituting all coefficients we already fixed at leading order: (5.8) The condition P a P a = 0 immediately fixes two of the four d-coefficients such that only d 3,1 remains.

Figure 7. An overview of the five steps in the perturbative algorithm. Steps 1, 2 and 4 introduce constants, while they are fixed in steps 3 and 5.
Step 2 We move on to calculate µ (1) ab . These functions become bulky already at subleading order so we will only sketch the procedure.
Introducing the notation pol k u (coeff.s) for polynomials in u of degree k that contain specified, undetermined, coefficients, we can display the structure of the Ψ-operation in step 2. It acts on one of the terms in equation (5.3) as In accordance with equation (5.3), this is then multiplied by f ab|1 and summed with the other five terms to yield the full µ (1) ab .
Step 3 We impose the regularity conditions for µ (1) ab at u = 0, first by expanding µ , here for ab = 14, and demanding that all negative powers vanish. For example, we immediately see that φ ab . Secondly, we do the same for the second regularity condition, again here for µ (1) 14 : We have substituted the solutions from equation (5.9) in the second step. Most remaining coefficients are again very easy to identify and are, still at this loop order, very simple rational expressions in x. A few coefficients survive until step 5.
Generically, we would expect MZVs to appear in this step, but that only happens when the η-functions multiply negative powers of u, which does not occur at this order.
Step 4 Next, we calculate µ ab (1) and P̃ (1) from relations (3.8) and (3.15). The smallest examples are given explicitly. Note that the Pfaffian in the definition of µ ab is introduced as another constant to fix, with its own g-expansion.
Step 5 In the final step, we match the obtained expressions for P̃ with the ansatz (3.22) (with x → 1/x). We expand the expressions from step 4 around u = 0 to the relevant order and compare them with the expanded ansatz, where again we have substituted all coefficients known from previous steps in the second equalities. We have here truncated the ansatz in a way consistent with this loop order being our final aim.
Here we see the MZVs entering, although the anomalous dimension is still a simple integer at this loop order. The non-zero coefficients that explicitly appeared in this example are fixed to The full results of our perturbative calculations for this solution are shown in the conclusion of this example in section 5.2.
Results, performance and challenges
We have applied a Mathematica-implementation of the described algorithm to the examples in table 1. The success varies significantly, however, depending on the operator in question. For the simplest cases, we have reached 7-and 8-loop results on a standard laptop while for the most challenging one only the 2-loop anomalous dimension could be fixed within reasonable time. In this section, we discuss the general features and challenges, while we present the individual calculations in section 6.
As expected, the results for the anomalous dimensions contain MZVs, while the dependence on the twist comes in the form cos(nβ), with n being integer. They all agree with former results where such were known and they all reduce to the known result for the Konishi anomalous dimension in the undeformed theory in the limit β → 0.
Example: Ψ 11 F 11 -2333 For this example, we were able to complete eight perturbative loops with our implementation. The result for the anomalous dimension is: with the shorthand notation c k = cos(kβ). As a nice check, this result reduces to the known 8-loop result for the untwisted Konishi multiplet [41], which is given below in (6.2).
Computational challenges. How far we have been able to push the calculations is highly solution dependent. The main complication is the difficulty of dealing with the symbolical expressions involving twists and algebraic expressions arising in the solution of the zero-remainder conditions at the leading order. The appearance of a square root in a solution poses a practical complication, as Mathematica has more difficulties simplifying such expressions. Much of this is by design, in order to avoid any assumptions of branch cuts, but even in simple cases (such as √ 6), there are significant slowdowns. Our attempts to remedy this have been to first manipulate all expressions such that the square root appears in the numerators and not the denominators, whenever it is possible. Secondly, it may be beneficial at certain points in the algorithm to replace the square root with a placeholder variable that squares to the square root argument. In the end, after various timing tests, we only used this in an iterative solver for the regularity conditions in step 3. It is still possible though that switching back and forth between the placeholder and the explicit square root would improve performance in other places too.
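The placeholder trick for square roots can be illustrated with a tiny quadratic-extension arithmetic; the class below is a hypothetical sketch, not the code used for the paper:

```python
class SqrtExt:
    """Numbers a + b*r with a placeholder r satisfying r**2 = 6. Working
    with (a, b) pairs avoids repeatedly simplifying explicit surds."""
    D = 6   # the radicand; 6 matches the sqrt(6) appearing in the text

    def __init__(self, a, b=0):
        self.a, self.b = a, b

    def __add__(self, o):
        return SqrtExt(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a + b r)(c + e r) = (a c + b e D) + (a e + b c) r
        return SqrtExt(self.a * o.a + self.b * o.b * self.D,
                       self.a * o.b + self.b * o.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt({self.D})"

r = SqrtExt(0, 1)                     # stands in for sqrt(6)
assert ((r * r).a, (r * r).b) == (6, 0)
x = SqrtExt(1, 1) * SqrtExt(1, -1)    # (1 + r)(1 - r) = 1 - 6
assert (x.a, x.b) == (-5, 0)
```

Since every product reduces r² back to the integer radicand, no surd simplification is ever invoked; converting back to explicit square roots is only needed when printing final results.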
Although the β-deformation is arguably the mildest of all γ-deformations and only depends on a single parameter, it seems it is enough to seriously bog down the perturbative algorithm. Rapidly growing rational expressions of the variable x can start to accumulate at each order, in particular in the case where all the x a are different. In fact, even the leading Q-system can be rather complicated (as can be seen in the example for ZΨ 2 11 in appendix A).
Performance. As discussed, the computation times vary a lot depending on the solution. Overall, the computation time seems to scale roughly exponentially with the loop order, illustrated for four examples in figure 8. The costliest step is by far the construction of µ (n) ab in step 2, which also has the worst scaling. It is followed by the regularity conditions in step 3. The total times for the illustrated calculations at O(g 8 ) were 1.6 min for ZZ, 7 min for D 2 12 Z 2 , 31 min for D 2 12 ZX and 248 min for ZΨ 22 which shows what an impact the twist and the square root expressions have.
The memory required to store the Pµ-system and replacement rules for the constants is once again solution dependent. Typically, the computations have a roughly exponential memory scaling with the loop order, as is the case for the Ψ 11 F 11 solution at each g-order.

Figure 8. The scaling of computation times for four example operators: ZZ, which has no twist; the simple D 2 12 Z 2 ; D 2 12 ZX, which contains √ 6; and ZΨ 22 , which contains the more complicated square root √(x (2x 2 + 5x + 2)). The total computation time is plotted as the thick black line in accordance with the logarithmic time scale on the y-axis. The shaded regions below that line indicate the fractional computation times (in percent) spent at each step in the algorithm and are hence independent of the scale on the axis. The step coloration follows section 5.1: step 1 (which is almost instant), step 2, step 3, step 4 and step 5. Note how the computation times increase significantly with the presence of the square roots.
Examples
In this section we analyze the solutions for the submultiplets given in table 1, except for Ψ 11 F 11 which was treated along the way.
The su(4) operator ZZ -HWS in 2222
Following the procedure of section 4, this operator (and any other operator from the Konishi multiplet with n f 1 = n f 2 = n f 3 ) corresponds to the same boundary as the Konishi multiplet in the undeformed theory. Consequently, the solution is exactly the same, and it has been treated extensively in [5,13], so we simply state the results to compare its simplicity to the other Konishi solutions in the β-deformation.
The boundary conditions for the solution can be summed up as z z z z z z z z z z z z z z z z z z z z z z z z z where the diagram shows the number of roots in the distinguished Q-functions Q a,s . Note that it is possible to choose a path from Q 0,0 to Q 4,4 with no Bethe roots, so the solution is in fact trivial. The distinguished Q-functions are from which one finds Going through the prescribed procedure, we can reproduce the first eight loop corrections to the anomalous dimensions: This result was first found in [41] and has been extended to 11 loops in [13], so we mainly include it for reference.
The sl(2) operator D 2 12 Z 2 -HWS in 1133
For the sl(2) Konishi, the boundary conditions are again summarized by a root-counting diagram (omitted). We see that there is a path without any Bethe roots, so the Q's can again be trivially generated and are rational in the twist. With the dependence on the twist being rational and rather simple, we have been able to run this example through seven loop orders in a few hours, yielding the result for the anomalous dimension. Again, this result nicely reduces to (6.2) in the limit β → 0, and the first four orders are in agreement with the known result [37].
The operator D 2 12 ZX -R-symmetry descendant in the undeformed theory
This is an example of an operator that is not a HWS in any grading in the undeformed theory. It is obtained by acting on the sl(2) Konishi operator with content D 2 12 Z 2 with the R-symmetry generator f † 3 f 2 . This symmetry is broken in the β-deformed theory, and thus these two types of operators are no longer in the same multiplet.
As the action of the R-symmetry does not correspond to a fermionic duality transformation, we keep the grading of the sl(2) operator, 1133, with the corresponding charges and boundary conditions (root-counting diagram omitted). Loosely, one can think of the action of the R-symmetry as deforming the Young diagram, since then the rule for determining the number of Bethe roots by counting boxes in the diagram [12] can be applied to get the number of roots on the 1133 path. This time, the lowest number of Bethe roots on a path from Q 0,0 to Q 4,4 is five, and the solution of the corresponding zero-remainder conditions in fact gives rise to two solutions that both reduce to the untwisted Konishi solution in the limit β → 0. Correspondingly, there must be two operators of this type in the Konishi multiplet. The second solution is obtained from the first by the replacement √ 6 → − √ 6. For the first solution, the above distinguished Q-functions lead to the single-indexed Q-functions, and again P a and Q (0) i for the second solution are obtained by the replacement √ 6 → − √ 6. The appearance of the √ 6 slows down the perturbative calculation somewhat, such that reaching the 7-loop anomalous dimension with our code requires about 25 hours on a standard laptop. The anomalous dimension is the same for both solutions and is given explicitly.

The solution of the distinguished Q-system contains the square root, which significantly slows down the computer calculations, as discussed in section 5.2. The distinguished Q-functions, using this notation, and the leading P and Q are given explicitly. Despite the challenge of working with such expressions, the anomalous dimension up to five loops is nevertheless accessible within a few hours on a standard laptop. As we will now discuss, this solution is, up to the replacement β → 2β, the same for the two examples X YZ and Z 2 X 2 .
The operator X YZ -HWS in 0222
This operator has the same solution as ZΨ 22 in the example above, with the only difference that x → x 2 . The grading, oscillator content, twist factors and the shifted quantum numbers follow accordingly (diagram omitted). Though X YZ has length 3, it still has shifted weights identical to the operator ZΨ 22 treated above, and we thus set the modified length used in the ansatz (3.22) to L = 2.
Our procedure for finding the leading solution leads to the distinguished Q-functions given in appendix A due to their large size. They differ from the distinguished Q-functions of ZΨ 22 , but the difference only lies in which Q-functions are suppressed in g. By using the freedom to choose A a and B i , Q 1,0 ∝ 1/u 3 can be traded for Q 0,1 ∝ u 2 through a redefinition of the asymptotic normalization. With the rescaling, which obviously respects (3.6), we can bring the entire set of distinguished Q-functions into the ones for ZΨ 22 , again with the substitution x → x 2 . Naturally, both the anomalous dimension and P and Q are the same as for ZΨ 22 , with the mentioned change of power for the twist factor x, corresponding to β → 2β.
6.6 The su(2) operator Z 2 X 2 - HWS in 0224

We have already discussed that the su(2) Konishi operator Z 2 X 2 is in the same submultiplet as ZX Y, so they should correspond to the same solution to the QSC. The boundary conditions for this solution follow from the corresponding root counting. The solution for the distinguished Q-functions is displayed in appendix A. Again they differ from those of ZΨ 22 and ZX Y, but can be brought into the same form; the normalization then requires certain rescalings. The powers of u still do not match after such a rescaling, but can be made to do so by a gauge transformation (3.18) with Λ = 1. In the ansatz, we need to set L = 2 and δ λ = 1 for it to match that of ZΨ 22 . Performing these manipulations, it is evident that the P, Q and the anomalous dimensions are the same as for ZΨ 22 , with the change x → x 2 . We can thus conclude that the 5-loop anomalous dimension of the su(2) Konishi operator is given by (6.7) with the replacement β → 2β, which is in agreement with the known 4-loop result [32,38].
The fact that the twist factors are distinct does not make any difference conceptually and, in principle, this operator should be treatable with the presented algorithm just as the previous examples were. In practice, however, the twist dependent expressions grow very rapidly, making computer calculations very slow. Already the leading order Q-system is very bulky and is put in appendix A; the leading P a and the leading Q are given explicitly. Notice that for this operator, the length L = 3 and the modified length are in correspondence, since no operator with L = 2 can give rise to the same boundary conditions.
We have so far not found an efficient way of dealing with the large x-expressions beyond the first correction in the perturbative algorithm, which fixes the 2-loop anomalous dimension. The fully simplified expressions of the objects at this first step require the same amount of memory as the simpler operators do at much higher orders, and the memory scaling is much worse. Our results for this operator are thus limited to the 2-loop result: γ (1) = 4(cos(β) + 2), γ (2) = −4(5 cos(β) + 7). (6.11)
Conclusion
In this paper, we discussed how to find explicit solutions to the twisted QSC in one of the simplest possible cases, the β-deformation. We considered several operators that in the undeformed theory belong to the Konishi multiplet, and we were able to produce a range of new results. Our main results are summed up in table 2.
Though we only scratched the surface by considering operators from the Konishi multiplet, the strategy is general and should be generalizable to the remaining spectrum. It would be interesting to have a classification of all submultiplets of the Konishi multiplet in the β-deformed theory. Furthermore, it would be very interesting to study operators that in the undeformed theory belong to the L = 2 BMN vacuum multiplet, in particular 1-magnon states, e.g. ZX . Our preliminary studies 5 indicate that it might be necessary to expand the QSC functions in odd powers of g for such states, similar to the special cases found in [13]. We may return to this question in future work, but also encourage others to attack it.
As we saw, it quickly becomes technically challenging to handle calculations with twist variables. The ultimate challenge would be to use the procedure to construct perturbative corrections to Q-operators [44][45][46][47][48]. A procedure to explicitly construct the 1-loop Q-operators was given in [49], but the technicality of constructing perturbative corrections in the fully twisted theory will probably require a courageous computational effort. However, this could lead to new results about perturbative corrections to eigenstates, the dilatation operator, and perhaps about the still mysterious integrable model that underlies the AdS/CFT spectral problem.
A Systematic Way to Infer the Regulation Relations of miRNAs on Target Genes and Critical miRNAs in Cancers
MicroRNAs (miRNAs) are a class of important non-coding RNAs, which play important roles in tumorigenesis and development by targeting oncogenes or tumor suppressor genes. One miRNA can regulate multiple genes, and one gene can be regulated by multiple miRNAs. To promote the clinical application of miRNAs, two fundamental questions should be answered: what is the regulatory mechanism of a miRNA on a gene, and which miRNAs are important for a specific type of cancer. In this study, we propose a miRNA influence capturing (miRNAInf) method to decipher the regulation relations of miRNAs on target genes and identify critical miRNAs in cancers in a systematic approach. With pair-wise miRNA/gene expression profile data, we consider the assignment problem of a miRNA on target genes and determine the regulatory mechanisms by computing the Pearson correlation coefficient between the expression changes of a miRNA and those of its target gene. Furthermore, we compute the relative local influence strength of a miRNA on its target gene. Finally, we integrate the local influence strength and the target gene's importance to determine the critical miRNAs involved in a specific cancer. Results on breast, liver, and prostate cancers show that positive regulations are as common as negative regulations. The top-ranked miRNAs show great potential as therapeutic targets driving cancer to a normal state, and they are demonstrated to be closely related to cancers based on biological functional analysis, drug sensitivity/resistance analysis, and survival analysis. This study will be helpful for the discovery of critical miRNAs and the development of miRNA-based clinical therapeutics.
INTRODUCTION
MicroRNAs (miRNAs) are a class of small non-coding RNAs and have been proved to play important roles in regulating more than two thirds of human genes (Bandyopadhyay et al., 2010; Song et al., 2017). They usually regulate their target genes by binding to the complementary seed sequence at the 3′ untranslated region. The binding of miRNAs usually leads to the translation repression or degradation of the target mRNAs and ultimately affects the production of the corresponding proteins (Bartel, 2009; Fabian et al., 2010; Hata and Lieberman, 2015). For example, miR-21 was demonstrated to negatively regulate the expression of SAV1 and Smad6 (Xu et al., 2016) in colorectal cancer. One single miRNA usually targets many genes and one gene might be regulated by multiple miRNAs. To decipher the relationships between miRNAs and their target genes and unveil miRNAs' biological functions, many miRNA target databases, such as Targetscan (Agarwal et al., 2015), miRDB (Wong and Wang, 2015), miRanda (Betel et al., 2010), and mirTarbase (Chou et al., 2018), have been built based on various biological experiments and/or different computational methods.
The dysfunctions of miRNAs have been reported to be involved in the tumorigenesis of various cancers (Bartel, 2004; Gotte, 2010; Dela Cruz and Matushansky, 2011; Lovat et al., 2011; Liu W. et al., 2018; Xu et al., 2018). For this reason, miRNAs have become potential biomarkers in cancer diagnosis and treatment (Slack and Chinnaiyan, 2019). Furthermore, some miRNA-based therapeutics have entered clinical research, e.g., miR-16-based mimics in a phase I clinical trial for treating advanced non-small cell lung cancer, and antimiRs targeted at miR-122 in a phase II trial for treating hepatitis (Rupaimoole and Slack, 2017).
However, accumulating evidence indicates that miRNAs can also promote the expression of their target genes. For example, Vasudevan et al. found that miR396-3 could direct the AGO complex to bind the AU-rich elements and promote the translation of its target gene. They further demonstrated that let-7 and the synthetic miRcxcr4 could induce target mRNA up-regulation upon cell cycle arrest while repressing translation in proliferating cells. In addition to functioning in the cytoplasm, mature miRNAs are also found in the nucleus. Xiao et al. demonstrated that miR-24-1 in the nucleus can activate gene transcription by targeting their enhancers. Up to now, more than 200 positive regulations of miRNAs on genes have been experimentally identified in the literature.
It has become a fundamental problem in systems biology to elucidate the regulatory relations between miRNAs and their target genes. Specifically, we need to know which genes are positively regulated by a miRNA and which genes are negatively regulated by it. The answer to this problem will provide a foundation for studying the critical roles of miRNAs in tumorigenesis. Recently, Tan et al. first investigated this problem based on the Pearson correlation coefficients between the expression of miRNAs and that of their target genes in pan-cancer datasets. Surprisingly, they found many positively correlated miRNA-gene pairs. This demonstrates that miRNAs could exert their important roles in various cancers by positively regulating many genes.
Another important issue is to determine the critical miRNAs with the potential to affect the over-expression or under-expression of cancer-related genes. The answer to this question will help to determine a few "level point" miRNAs for designing miRNA-based therapeutic strategies. Cui et al. combined the miRNA sequence features and the miRNA disease spectrum width (DSW) to define the importance of miRNAs (Cui et al., 2019). However, this static definition cannot well reflect the different regulatory mechanisms between miRNAs and genes involved in a specific cancer.
In this paper, we propose a novel miRNA influence capturing (miRNAInf) method to decipher the regulation relations of miRNAs on target genes and identify critical miRNAs in cancers in a systematic approach. We study miRNA-gene regulations by assuming that the expression of one gene is determined by its upstream miRNAs. We model the expression of a gene as a function of the expression of the miRNAs targeting it. Through the Taylor expansion, we employ the first partial derivative with respect to a miRNA to denote its regulatory effect on the target gene. The first partial derivative is then approximated by the Pearson correlation coefficient between the expression change of a miRNA and that of its target gene between disease and normal control.
We finally define the global influence of a miRNA by combining its local influence strength in an individual cancer and the degree of its target gene in a PPI network. Our results on the breast cancer, prostate adenocarcinoma, and liver cancer datasets further demonstrate that positive regulations are as common as negative ones in miRNA-gene interactions. We also find that only a few miRNAs have significant influences on the cancer-related differentially expressed genes. The identified top miRNAs in the three datasets are not only highly correlated in a functional network, but also significantly enriched in some important functions such as inflammation, cell proliferation, apoptosis, and cell cycle. This demonstrates that they are very likely to play essential roles coherently in tumorigenesis. More importantly, we find that the intervention of a few critical miRNAs may alleviate the abnormal expressions of most genes according to the regulatory effect and differential expression situation between miRNAs and their target genes. Besides, the identified important miRNAs influence patients' survival time of prognosis as well as the sensitivity/resistance of some anti-cancer drugs. In sum, our study provides a systematic way to understand the key roles of miRNAs in cancers and to screen potential intervention miRNA biomarkers for future miRNA-based therapy and diagnosis in precision medicine.
Data Acquisition and Preprocessing
We collected three types of data: miRNA and gene expression data for three cancers, miRNA-gene interaction data, and protein-protein interaction (PPI) data from three different databases: TCGA, miRTarbase and STRING (Chou et al., 2016;Szklarczyk et al., 2017).
The Cancer Genome Atlas (TCGA) Data
We collected datasets with both miRNA and gene expression profiles for cancer samples and corresponding normal samples in this study. Three cancers with abundant gene expression and miRNA expression "pair datasets" were acquired from TCGA (http://tcga-data.nci.nih.gov/tcga/). They included 102 samples for breast cancer, 52 samples for prostate adenocarcinoma, and 49 samples for liver cancer.
MiRNA-Gene Interaction Data
Besides gene expression and miRNA expression data, we further downloaded miRNA-gene targets data from miRTarbase (Chou et al., 2016), a widely-used state-of-the-art database for miRNA-gene targets. miRTarbase includes 502,652 high quality experimentally validated miRNA-gene interactions between 2,599 miRNAs and 15,064 genes for the human species.
Protein-Protein Interaction (PPI) Data
The STRING database, which includes 10,048,286 interactions between 19,576 human proteins, integrates experimentally validated and computationally predicted protein-protein interactions (Szklarczyk et al., 2017). In order to use highly confident interactions, we selected the interactions with a combined score > 150. The distribution of protein degrees indicates that a small number of proteins interact with hundreds of other nodes, while most proteins interact with only a few other proteins, which satisfies a power-law distribution (see Figure S1 for details).
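The filtering and degree-computation step described above can be sketched as follows. This is a minimal illustration, not the actual STRING pipeline: the edge list, protein names, and scores below are made up, and only the score threshold (> 150) comes from the text.

```python
# Sketch of the PPI preprocessing step: keep only interactions with
# combined score above the threshold, then compute each node's degree.
from collections import Counter

def filter_and_degree(edges, min_score=150):
    """edges: iterable of (protein_a, protein_b, combined_score)."""
    kept = [(a, b) for a, b, s in edges if s > min_score]
    degree = Counter()
    for a, b in kept:
        degree[a] += 1
        degree[b] += 1
    return kept, dict(degree)

edges = [("TP53", "MDM2", 999), ("TP53", "EP300", 400),
         ("MDM2", "UBE2D1", 120),  # dropped: score <= 150
         ("TP53", "CDK2", 200)]
kept, degree = filter_and_degree(edges)
print(degree)  # TP53 ends up with the highest degree, as hub proteins do
```

A degree histogram over the real `degree` dictionary would exhibit the power-law shape mentioned above.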
Methods
In this study, we propose the miRNAInf method to identify critical miRNAs involved in cancer and to conduct comprehensive functional analysis of miRNAs. The flowchart of the proposed miRNAInf method is illustrated in Figure 1. The proposed method consists of three steps. First, we determine the significantly differentially expressed miRNAs and genes based on their expression data from TCGA. Second, we compute the local influence strength of a miRNA on its target gene. Finally, we evaluate the global influence of a miRNA in a specific cancer by integrating the local influence strength and the gene's importance.
Identify Differentially Expressed miRNAs and Genes
We conduct normalization of the miRNA and gene expression data before identifying differentially expressed miRNAs and genes. For each miRNA expression sample, we apply the RPM (Reads Per Million) method for normalization as described in Equation (1) for its simplicity and efficiency (Faraldi et al., 2019).
$$\mathrm{RPM}_i = \frac{E_i^r}{\mathrm{Total}_r} \times 10^6, \qquad (1)$$

where $E_i^r$ denotes the read counts of miRNA $i$ and $\mathrm{Total}_r$ the total counts of all miRNAs in a specific sample. RPM scales all read counts by the ratio of one million to the library size. Similarly, we normalize gene expression data as

$$\mathrm{RPM}_j = \frac{E_j^g}{\mathrm{Total}_g} \times 10^6, \qquad (2)$$

where $E_j^g$ denotes the read counts of gene $j$ and $\mathrm{Total}_g$ the total counts of all genes in a specific sample. We then apply DESeq2 (Love et al., 2014) to identify differentially expressed genes (DEGs) and differentially expressed miRNAs (DEmiRs) by collecting all genes and miRNAs with FDR-adjusted $p < 0.05$ in the differential expression analysis of the normalized data. Finally, we get 546, 457, and 313 DEmiRs and 5,057, 3,665, and 3,064 DEGs for breast, liver, and prostate cancer, respectively.
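The RPM normalization just described can be sketched in a few lines. The miRNA names and counts below are made up for illustration; only the formula (count divided by library size, times one million) comes from the text.

```python
# Minimal sketch of RPM (Reads Per Million) normalization: each read
# count is scaled by 1e6 over the sample's total read count.
def rpm_normalize(read_counts):
    total = sum(read_counts.values())
    return {k: v / total * 1e6 for k, v in read_counts.items()}

sample = {"hsa-miR-21-5p": 5000, "hsa-miR-155-5p": 3000, "hsa-miR-7-5p": 2000}
normalized = rpm_normalize(sample)
print(normalized["hsa-miR-21-5p"])  # 500000.0 (half the library, half a million)
```

The same function applies unchanged to gene-level counts, matching Equation (2).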
Compute miRNA Local Influence Strength on a Target Gene
The expression of a gene can be assumed to be a function of the expression of the miRNAs targeting it. Given a gene $j$ regulated by $m$ miRNAs $r_1, r_2, \dots, r_i, \dots, r_m$, its expression $g_j = f_j(r_1, r_2, \dots, r_i, \dots, r_m)$ in the disease state can be approximated by a first-order Taylor expansion as

$$g_j = f_j(r_1^0, r_2^0, \dots, r_m^0) + \sum_{i \in R_j} \frac{\partial f_j}{\partial r_i}\, \Delta r_i, \qquad (3)$$

where $f_j(r_1^0, r_2^0, \dots, r_m^0)$ is the expression value of gene $j$ in the normal state, $R_j$ is the index set of the miRNAs regulating gene $j$, and $\Delta r_i$ is the change of miRNA $i$ between the disease state and the normal state.
The gene expression change $\Delta g_j$ of gene $j$ can be calculated by moving the gene expression in the normal state $f_j(r_1^0, \dots, r_m^0)$ to the left side of Equation (3):

$$\Delta g_j = \sum_{i \in R_j} \frac{\partial f_j}{\partial r_i}\, \Delta r_i, \qquad (4)$$

where the right side represents the sum of the expression changes of gene $j$ induced by the perturbation of each miRNA $i$. The partial derivative $\partial f_j / \partial r_i$ reflects the influence strength of miRNA $i$ on gene $j$. A given miRNA $i$ may target many other genes; we assume that $\Delta r_{ij}$, the portion of the miRNA expression difference $\Delta r_i$ between the disease and normal states that affects gene $j$, is positively proportional to the expression change $\Delta g_j$. Given the target gene index set $G_i$ of miRNA $i$, we define $\Delta r_{ij}$ as the product of the absolute change ratio of gene $j$ and the change $\Delta r_i$ of miRNA $i$ between the disease and normal states:

$$\Delta r_{ij} = \frac{|\Delta g_j|}{\sum_{j' \in G_i} |\Delta g_{j'}|}\, \Delta r_i. \qquad (5)$$

Based on this, we can calculate the Pearson correlation coefficient $\rho_{ij}$ between $\Delta r_{ij}$ and $\Delta g_j$. $\rho_{ij} > 0$ indicates that miRNA $i$ upregulates gene $j$; otherwise, it suppresses gene $j$. The partial derivative $\partial f_j / \partial r_i$ can then be approximated by

$$\frac{\partial f_j}{\partial r_i} \approx k_{ij}\, \rho_{ij}, \qquad (6)$$

where the coefficient $k_{ij}$ can be approximated by a constant $k$ for all miRNA-gene pairs. A given miRNA $i$ may have different influence strengths on each target gene; we compute its local influence strength $I_{ij}$ as

$$I_{ij} = \rho_{ij}\, \overline{\Delta r}_{ij}, \qquad (7)$$

where $\overline{\Delta r}_{ij}$ is the average difference between disease and normal tissues over all patients. The local influence strength $I_{ij}$ considers both the correlation coefficient $\rho_{ij}$ and the average difference $\overline{\Delta r}_{ij}$: the higher the correlation $\rho_{ij}$, the larger the influence on gene $j$; similarly, the larger the average difference $\overline{\Delta r}_{ij}$, the larger the influence on gene $j$.
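The local-influence computation described above can be sketched per patient: the miRNA change is apportioned to gene j by the gene's share of absolute expression change among all targets, then the Pearson correlation between the apportioned changes and the gene's changes across patients is combined with the average apportioned change. The product form of the final score is our reading of the text, the `pearson` helper is a plain implementation, and all numbers are synthetic.

```python
# Sketch of the local influence strength of one miRNA on one gene,
# computed across a cohort of patients (synthetic data).
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def local_influence(dr_i, dg_j, dg_all_targets):
    """dr_i: per-patient change of miRNA i; dg_j: per-patient change of
    gene j; dg_all_targets: per-patient changes of all targets of i."""
    # Apportion the miRNA change to gene j by its share of absolute change.
    dr_ij = [abs(gj) / sum(abs(g) for g in gs) * r
             for r, gj, gs in zip(dr_i, dg_j, dg_all_targets)]
    rho = pearson(dr_ij, dg_j)           # sign gives the regulation direction
    return rho * sum(dr_ij) / len(dr_ij) # combine with the average difference

dr_i = [2.0, 3.0, 1.5, 2.5]
dg_j = [1.0, 1.6, 0.7, 1.3]
dg_all = [[1.0, 1.0], [1.6, 1.4], [0.7, 0.8], [1.3, 1.2]]
print(local_influence(dr_i, dg_j, dg_all) > 0)  # True: positive regulation here
```

A positive score marks the pair as an upregulation, matching the sign convention of the correlation coefficient.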
Evaluate the Global Influence of Each miRNA
In order to describe the importance of a miRNA in a specific disease, we consider both the number of its target genes and the importance of each gene. Here, we define the global influence of miRNA $i$ for the disease by weighting its local influence by the importance of its target genes:

$$GI_i = \sum_{j \in G_i} \frac{d_j}{d_{max}}\, |I_{ij}|, \qquad (8)$$

where $d_j$ and $d_{max}$ represent the degree of gene $j$ and the maximum degree in the PPI network, respectively. The importance of gene $j$ is modeled as the ratio $d_j / d_{max}$ of its degree to the maximum degree in the PPI network. The global influence of miRNA $i$ involves both the local influence strengths and the importance of its target genes: the more targets regulated by miRNA $i$, the larger its global influence; simultaneously, the larger the degrees of its target genes in the PPI network, the larger the global influence.
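The degree-weighted aggregation described above can be sketched as follows. The additive combination of absolute local strengths weighted by relative degree is our reading of the text; the gene set, degrees, and influence values are synthetic (the listed genes are known miR-21 targets, used only as labels).

```python
# Sketch of the global influence score of one miRNA: each target's
# local influence is weighted by the target's PPI degree relative to
# the maximum degree in the network, then summed.
def global_influence(local_strengths, degrees, d_max):
    """local_strengths: {gene: I_ij}; degrees: {gene: PPI degree}."""
    return sum(degrees[g] / d_max * abs(i) for g, i in local_strengths.items())

I = {"PTEN": -0.8, "PDCD4": -0.5, "TPM1": 0.2}   # local influence strengths
deg = {"PTEN": 400, "PDCD4": 120, "TPM1": 80}    # PPI degrees
print(round(global_influence(I, deg, d_max=500), 4))  # 0.792
```

Ranking miRNAs by this score yields the top-ranked candidates discussed in the results.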
Positive Regulations Are as Common as Negative Ones in Cancers
As mentioned before, the evidence suggests that positive regulations also play important roles in cancers. We first study the distribution of the Pearson correlation coefficient $\rho_{ij}$ between the change of a miRNA and that of its target genes. Figure 2 shows the distribution of $\rho_{ij}$ in breast cancer, liver cancer, and prostate cancer, respectively. First, the number of pairs with $\rho_{ij} > 0.3$ and that with $\rho_{ij} < -0.3$ are very similar. Our observation further supports the results in Tan et al. (2019) that miRNAs exert both positive and negative regulations on their target genes. Second, most of the absolute values of $\rho_{ij}$ are smaller than or equal to 0.5, which indicates that most of the regulatory strengths are relatively weak, because one miRNA may target even hundreds of genes; on the other hand, only a few absolute values of $\rho_{ij}$ are larger than 0.8, which indicates that several individual genes may be strongly regulated by very few miRNAs.
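The tally behind this kind of distribution plot can be sketched in a few lines; the correlation values below are synthetic, and the 0.3/0.8 thresholds come from the text.

```python
# Small sketch of the correlation-sign tally: count miRNA-gene pairs
# with rho_ij above 0.3, below -0.3, and with |rho_ij| above 0.8.
rhos = [0.45, -0.38, 0.12, 0.85, -0.52, 0.31, -0.31, 0.05]
positive = sum(1 for r in rhos if r > 0.3)
negative = sum(1 for r in rhos if r < -0.3)
strong = sum(1 for r in rhos if abs(r) > 0.8)
print(positive, negative, strong)  # 3 3 1
```

On the real data the `positive` and `negative` tallies come out nearly balanced, which is the observation supporting Figure 2.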
As an example, Figure 3 shows the scatter plots between the change of five miRNAs and that of their target gene KLF4, which regulates many critical physiologic and cellular processes. The X-axis of Figures 3A-E represents the expression change of a specific miRNA binding to target gene KLF4, the X-axis of Figure 3F represents the total change of the five miRNAs regulating KLF4, and the Y-axis of Figure 3 represents the expression change of KLF4. We can see that three miRNAs (hsa-miR-10b-5p, hsa-miR-145-5p, and hsa-miR-335-5p) positively correlate with KLF4, while the other two (hsa-miR-32-5p and hsa-miR-7-5p) negatively correlate with it. This demonstrates that miRNAs targeting one gene may affect it differently. Their total expression changes positively correlate with that of KLF4, as shown in the last subplot of Figure 3. It indicates that some impacts of the negatively correlated miRNAs can be offset by those of the dominant ones.
miRNAs Regulate Their Target Genes in a Complex Way
In this section, we select a portion of miRNAs and their target genes and display their local influence relations in a bipartite graph in Figure 4. We can see that one miRNA may promote the expression of some genes while repressing that of others. On the other hand, one gene may be upregulated by some miRNAs while being downregulated by others. Furthermore, one miRNA may have a larger influence (wide lines) on some genes and a relatively smaller influence (thin lines) on others. These observations demonstrate that miRNAs interact with their target genes in a complex way. The inference of these complex interactions forms the basis for understanding the detailed role of each miRNA on a specific gene. For example, PIK3CA is regulated by hsa-miR-10b-5p (influence strength 0.9509), hsa-miR-335-5p (influence strength 2.83E-4), hsa-miR-17-5p (influence strength -7.75E-4), hsa-miR-19a-3p (influence strength -7.47E-5), and hsa-miR-155-5p (influence strength -7.29E-4). Its expression is thus mainly upregulated by hsa-miR-10b-5p.
Only a Few miRNAs Have Significant Global Influences on Cancers
From the perspective of systems biology, we are more interested in the most critical miRNAs, i.e., dominant miRNAs that have the greatest influence on the whole regulatory network. Identifying the dominant miRNAs will answer the key question: which miRNAs are regulators of the most cancer-related genes? Figure 5 illustrates the global influence of the top-ranked 20 miRNAs in breast, liver, and prostate cancers, respectively. We can see that some miRNAs appear in all three cancers whereas others show up in only one or two. It suggests that some miRNAs play a common important role in many cancers while others are more related to specific cancers. Moreover, there are only a few miRNAs whose global influences are much larger than those of the others, which indicates their dysfunction may have very crucial impacts on the development of cancers. For instance, miR-21, ranked as the top one in all three cancers, has been confirmed to be highly involved in cancer proliferation and metastasis (Liu H. et al., 2018; Wang et al., 2019). On the other hand, we find that the genes most influenced by the dysfunctional miRNAs are highly related to cancers, such as CDK2, TP53, HRAS, and NFKB1 (Carroll et al., 2000; Normanno et al., 2009; Xu et al., 2016).

[Figure 2 | The distribution of the Pearson correlation coefficients ρ_ij in breast cancer, liver cancer, and prostate cancer, respectively. Frontiers in Genetics | www.frontiersin.org]
Intervention of a Few Critical miRNAs May Help to Alleviate the Abnormal Expression of Most Cancer-Related Genes
One miRNA may regulate multiple downstream genes, and an intervention on it may have different effects on the expression of its target genes. From the perspective of miRNA-based therapy, it is very crucial to figure out how the intervention of one miRNA may affect the abnormal expression of its target genes. If a miRNA promotes the expression of a gene and they are both overexpressed or both under-expressed, then the intervention of the miRNA will exert a positive effect on the target gene to alleviate its abnormal expression; if a miRNA represses the expression of a gene and they have opposite abnormal expression situations, then the intervention of the miRNA will also exert a positive effect on the target gene to alleviate the abnormal expression. Conversely, if a miRNA promotes the expression of a gene and they have opposite abnormal expression situations, then the intervention of the miRNA will exert a negative effect on the target gene and deteriorate its abnormal expression. Likewise, if a miRNA represses the expression of a gene and they are both overexpressed or both under-expressed, then the intervention of the miRNA will also exert a negative effect on the target gene and deteriorate its abnormal expression. Figure 6 shows the subnetwork of hsa-miR-21-5p in the three cancers; upon intervention on hsa-miR-21-5p, the left-hand genes are those positively affected while the right-hand ones are those negatively affected. This reveals that the intervention of one miRNA may have complex effects on cancer-related genes.

[Figure 4 | The regulatory networks between miRNAs and genes in breast, liver, and prostate cancer, respectively. Red and green lines indicate upregulation and downregulation, respectively; line width indicates the absolute value of the local influence strength. Red and green nodes indicate overexpression and under-expression, respectively. (A-C) are the subnetworks for breast, liver, and prostate cancer, respectively.]
Specifically, the abnormal expression of some genes can be alleviated while that of others may be further deteriorated.
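The four-case rule above reduces to a single condition: the intervention alleviates the gene's abnormal expression exactly when the regulation direction (promote vs. repress) matches the co-directionality of the two dysregulations. The function name and the +1/-1 encoding are illustrative, not from the paper.

```python
# Sketch of the intervention-effect rule for one miRNA-gene pair.
def intervention_effect(rho_ij, mirna_dysreg, gene_dysreg):
    """rho_ij > 0 means the miRNA promotes the gene; dysreg is +1 for
    overexpressed, -1 for under-expressed. Returns 'positive' if the
    intervention alleviates the gene's abnormal expression."""
    promotes = rho_ij > 0
    same_direction = mirna_dysreg == gene_dysreg
    return "positive" if promotes == same_direction else "negative"

# promotes + both overexpressed -> intervention alleviates
print(intervention_effect(0.6, +1, +1))   # positive
# represses + opposite dysregulation -> intervention alleviates
print(intervention_effect(-0.4, +1, -1))  # positive
# promotes + opposite dysregulation -> intervention deteriorates
print(intervention_effect(0.6, +1, -1))   # negative
```

Tallying this label over all dysregulated miRNA-gene pairs of one miRNA gives the "+"/"-" counts of the kind reported in Table 1.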
Based on the regulation relations and the abnormal expression situations, we summarize the numbers of positively and negatively affected miRNA-gene pairs of the top five miRNAs in Table 1. The "+"/"-" symbols in Table 1 represent the numbers of positively/negatively affected miRNA-gene pairs after intervention. We make the following observations from the table. First, the interventions of the five miRNAs may affect about 1,000 abnormal miRNA-gene pairs, which indicates that they regulate many downstream genes. Second, the number of positively affected genes is much larger than that of the negatively affected ones. Third, the absolute relative influence strengths of most of the positively affected pairs are larger than 0.5. These observations indicate that the intervention of the top five miRNAs may significantly drive the cancer-related genes to normal levels. Therefore, they are potential intervention biomarkers for miRNA-based therapy.
Most of the Critical miRNAs Involve in Some Important Biological Functions
We conduct functional analysis for the top-ranked miRNAs by integrating the co-expression similarity, co-GO similarity, co-literature similarity, and co-similar disease similarity using the miRNA functional analysis tool MISIM (Li et al., 2019). There are 14, 15, and 13 miRNAs annotated by MISIM among the top 20 important miRNAs of the three cancer datasets, respectively. Figure 7A shows the functional similarity network of the top 20 miRNAs, where red lines denote correlation coefficients larger than 0.5. It suggests most of the miRNAs are highly correlated in their biological functions. Figure 7B shows the top 10 enriched biological functions (FDR < 7.0E-02) of the miRNAs. These biological functions, such as inflammation, cell proliferation, apoptosis, and cell cycle, have been verified to be closely related to different cancers (Evan and Vousden, 2001; Taniguchi and Karin, 2018; Xu et al., 2020). This indicates the critical miRNAs might interact in a highly coherent way to drive the biological system from the normal to the disease state in the three cancers.
As the overlapping critical miRNA in the three cancers, hsa-mir-21 was reported to be involved in various cancers such as colorectal cancer, breast cancer, and lung cancer (Xu et al., 2016; Liu W. et al., 2018).

[Table 1 note: the bold values are the numbers of interactions with influence strength greater than 0.5.]

It has been reported that overexpression of miR-21
could promote cellular proliferation, colony formation, and invasion, and also inhibit cell death, in a wide variety of cancerous cells by regulating various targets including PTEN, TPM1, and PDCD4 (Najjary et al., 2020).
Some Critical miRNAs Also Impact the Resistance/Sensitivity of Drugs
Some non-coding RNAs (ncRNAs), especially miRNAs, can promote the sensitivity of drugs or produce resistance to them by regulating their target genes. To evaluate the impacts of the critical miRNAs on the resistance/sensitivity of drugs, we submit the top 20 miRNAs to two state-of-the-art miRNA-drug interaction databases: ncDR (Dai et al., 2017) and mTD (Chen et al., 2017). The two databases include 1,056, 384, and 127 records of miRNA-drug interactions in breast, liver, and prostate cancers, respectively, and curate many resistance/sensitivity-related ncRNAs. We find that, among the top 20 miRNAs, 10, 6, and 4 miRNAs have drug resistance/sensitivity records in breast, liver, and prostate cancer, respectively. The main reason for the small numbers in liver and prostate cancers is that there are relatively few records about them in the two databases. Figure 8 shows their abnormal expression and corresponding influence on drug sensitivity/resistance. On the one hand, one miRNA may influence multiple drugs with different effects. For example, the over-expression of hsa-mir-182 in breast cancer could induce drug resistance to both Olaparib and Cisplatin while promoting the sensitivity of Tamoxifen. On the other hand, some miRNAs may promote sensitivity to a drug while others induce resistance to it. The complex effects of these miRNAs on cancer-related drugs not only further demonstrate their importance in cancer development but also provide new insight for accurate drug selection.
The Expression of Critical miRNAs Is Highly Related to the Survival Time of Prognosis
We also find that critical miRNAs can seriously influence the survival time of prognosis. Figure 9 shows the Kaplan-Meier curves of the top three miRNAs in breast, liver, and prostate cancers, respectively. Most of them are significantly correlated with the overall survival time in breast and liver cancers, but not in prostate cancer. One major reason is that most prostate tumors are slow-growing and many of them are not lethal. Furthermore, some of the important correlations involving these miRNAs are supported by wet-lab experiments. For example, Yan et al. demonstrated that overexpression of miR-21 was associated with poor prognosis in human breast cancer (Yan et al., 2008). Ji et al. showed that liver cancer patients with low miR-26 expression had shorter overall survival time (Ji et al., 2009, 2013). These observations indicate that the identified critical miRNAs may also serve as potential biomarkers for the survival time of prognosis.
CONCLUSION
MiRNAs have been reported as a kind of important non-coding regulators influencing the expression of more than 60% of genes. In this paper, we proposed a novel miRNA influence capturing (miRNAInf) method to characterize the regulatory mechanisms of miRNAs on their target genes and to identify critical miRNAs that have dominantly important impacts on target genes. Our results on the breast, prostate, and liver cancer datasets further verify that miRNAs may either upregulate or downregulate their target genes instead of mainly repressing them. We identified some critical miRNAs involved in the three cancers by constructing a miRNA-gene regulatory network. Our biological functional analysis shows that those critical miRNAs are not only highly correlated with each other but also involved in many important biological functions such as apoptosis and proliferation. Furthermore, miRNA-gene interaction analysis reveals that the intervention of only a few top crucial miRNAs may potentially alleviate the abnormal expressions of many genes and push the cancer system toward a normal state. It suggests that the identified crucial miRNAs may serve as potential biomarkers for miRNA-based therapy as well as diagnosis. In addition, we find some critical miRNAs may influence the sensitivity/resistance of drugs as well as the survival time of prognosis. Our study provides a strong foundation to support the combination of miRNA-based therapy and cancer drugs to improve the treatment effect in precision medicine. To the best of our knowledge, this study is the first to provide a systematic approach to decipher the roles of miRNAs in the diagnosis and prognosis of complex diseases and will inspire future studies in this field.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material. The source codes for this study can be found at https://github.com/xupeng2017/MiRNAInf.
AUTHOR CONTRIBUTIONS
WL and PX designed the methods. PX wrote the codes. WL, PX, QW, JY, YR, GF, ZK, XS, and HH analyzed the data. WL, HH, and PX wrote the manuscript. All authors read and approved the final manuscript.
ACKNOWLEDGMENTS
We would like to thank the reviewers for their valuable comments and suggestions.
Copyright © 2020 Xu, Wu, Yu, Rao, Kou, Fang, Shi, Liu and Han. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
An Efficient Overflow Detection and Correction Scheme in RNS Addition through Magnitude Evaluation
Number systems are media for representing numbers; the popular ones are the Weighted Number Systems (WNS), which sometimes propagate carries during arithmetic computations. The other category, Un-Weighted Number Systems, to which the Residue Number System (RNS) belongs, do not assign positional weights but have not yet found widespread usage in general-purpose computing as a result of several challenges; one of the main challenges of RNS is overflow detection and correction. The presence of errors in calculated values due to factors such as overflow means that systems built on this number system will continue to fail until serious steps are taken to resolve the issue. In this paper, a scheme for detecting and correcting overflow during RNS addition is presented. The proposed scheme uses mixed radix digits to evaluate the magnitude of the addends in order to detect the occurrence of overflow in their sum. The scheme also demonstrates a simplified technique for correcting the overflow in the event that it occurs. An analysis of the hardware requirements and speed limitations of the scheme shows that it performs considerably better than similar state-of-the-art schemes.
Introduction
The Residue Number System (RNS) has gained prominence in recent years due to its inherent features such as parallelism and carry-propagation-free arithmetic computations. Although RNS is currently applied in Digital Signal Processing (DSP) intensive computations such as digital filtering, convolutions, correlations, Discrete Fourier Transform (DFT) computations, Fast Fourier Transform (FFT) computations and direct digital frequency synthesis [1] [2] [3], researchers in the area are still working toward making RNS viable as a general-purpose processor. These efforts have not completely come to fruition because of challenges, including conversion between RNS and decimal/binary number systems, the choice of moduli sets, overflow detection and correction, magnitude evaluation, and scaling.
An RNS number X is represented as $X = (x_1, x_2, \ldots, x_n)$, where $x_i = |X|_{m_i}$, and is uniquely represented provided X lies within the legitimate range $[0, M-1]$, where $M = \prod_{i=1}^{n} m_i$ is the Dynamic Range (DR) for the chosen moduli set. Let X and Y be two different integers within the DR; if $X \circ Y$ (where $\circ$ is one of the arithmetic operations $+, -, \times, \div$) results in a value that is outside the legitimate range, then overflow is said to have occurred.
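As a concrete sketch of this representation (an illustration, not from the paper; it borrows the small moduli set {3, 4, 5} that appears in the numerical illustrations later), forward conversion and the wrap-around effect of overflow can be expressed as:

```python
from math import prod

def to_rns(x, moduli):
    """Forward conversion: represent x by its residues x mod m_i."""
    return tuple(x % m for m in moduli)

moduli = (3, 4, 5)   # pairwise coprime example set
M = prod(moduli)     # dynamic range M = 60; legitimate range is [0, M-1]

# Residue-wise addition of 49 and 21: the true sum 70 lies outside
# [0, M-1], so the RNS result silently wraps to 70 - 60 = 10.
s = tuple((a + b) % m for a, b, m in
          zip(to_rns(49, moduli), to_rns(21, moduli), moduli))
print(s == to_rns(10, moduli))   # → True
```

This is exactly the failure mode the paper addresses: the residue channels themselves give no indication that the wrap-around happened.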
Overflow in general computing occurs if a calculated value is greater than what its intended storage location in memory can hold [4] [5]; in RNS this corresponds to exceeding the DR, a situation that usually arises during addition and multiplication operations, and failure to detect it will lead to improper or wrong representation of numbers and calculated results. Thus, detecting overflow is one of the fundamental issues in the design of efficient RNS systems [6].
The conversion of an RNS number into its decimal/binary equivalent (a process called reverse conversion) has long been based mainly on the Chinese Remainder Theorem (CRT) and the Mixed Radix Conversion (MRC) techniques, with recent modifications being variants of these. While the former requires the modulo-M operation, the latter does not, but computes sequentially, which tends to reduce the complexity of the architecture. Computations can be done using the MRC as follows:
$X = e_1 + e_2 m_1 + e_3 m_1 m_2 + \cdots + e_n m_1 m_2 \cdots m_{n-1}$ (1)
where $e_i$, $i = 1, 2, \ldots, n$, are the Mixed Radix Digits (MRDs), computed sequentially. The MRDs $e_i$ lie within the range $0 \le e_i < m_i$, and a positive number X in the interval $[0, M)$ can be uniquely represented. The magnitude of a number becomes crucial in the determination of overflow in RNS. The sign of an RNS number is determined by partitioning M into two parts: $[0, M/2)$ for positive integers and $[M/2, M)$ for negative integers.
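The sequential MRC computation described above can be written out directly (a hedged illustration; `pow(a, -1, m)` is Python's modular inverse, available from Python 3.8, and the moduli are assumed pairwise coprime):

```python
def mixed_radix_digits(residues, moduli):
    """Compute the MRDs e_i sequentially, as in the MRC technique."""
    e = []
    for i, (x, m) in enumerate(zip(residues, moduli)):
        t = x
        for j in range(i):
            # peel off digit e_j, then divide by m_j modulo the current modulus
            t = ((t - e[j]) * pow(moduli[j], -1, m)) % m
        e.append(t)
    return e

def from_mrds(e, moduli):
    """Re-weight the MRDs: X = e1 + e2*m1 + e3*m1*m2 + ..."""
    x, w = 0, 1
    for ei, m in zip(e, moduli):
        x += ei * w
        w *= m
    return x

moduli = (3, 4, 5)
e = mixed_radix_digits([49 % m for m in moduli], moduli)
print(from_mrds(e, moduli))   # → 49
```

The round trip confirms the uniqueness of the representation within $[0, M)$.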
Recently, some techniques have been developed to detect overflow without necessarily completing the reverse conversion process. In [7], an algorithm was proposed in which the numbers representable in the moduli set are distributed among several groups; using the groupings, the scheme is able to diagnose, in the process of adding two numbers, whether overflow has occurred or not. The scheme in [3] evaluated the sign of the sum of two numbers X and Y and used it to detect overflow, but adopted the residue-to-binary converter proposed by [9]. The scheme in [10] presented an Operands Examination Method for overflow detection. All these schemes either relied on a complete reverse conversion process, as in the case of [3], or on other costly and time-consuming procedures such as base extension, group numbers and sign detection, as in [8] and [10].
In this paper, a new technique for detecting and correcting overflow during the addition of two RNS numbers for the moduli set $\{2^n - 1, 2^n, 2^n + 1\}$ is presented; the technique evaluates the sign of an RNS number by performing a partial reverse conversion using the mixed radix conversion method. The sign of each addend is evaluated using only the MRDs, which is then used to detect the occurrence of overflow during RNS addition. The rest of the paper is organized as follows: Section 2 presents the proposed method; an anticipated hardware implementation (albeit theoretical) is presented in Section 3, with its realization in Section 4. Numerical illustrations are shown in Section 5, whilst the performance of the proposed scheme is evaluated in Section 6. The final part of this paper is the conclusion in Section 7.
Proposed Method
Given the moduli set $\{2^n - 1, 2^n, 2^n + 1\}$, Lemma 1 and its proof (Equations (2) through (10)) derive simplified closed forms for the MRDs of this set. Using Equations (9) and (10), it is possible to determine the sign of an RNS number X: whether $X \ge M/2$ (for a negative number) or $X < M/2$ (for a positive number).
The proposed method uses comparison by computing the MRDs of each of the addends to determine which half of the RNS range it belongs to, rather than performing a full reverse conversion. To detect overflow during the addition of two addends X and Y based on the moduli set $\{2^n - 1, 2^n, 2^n + 1\}$, a bit that indicates the sign of each addend is defined. Based on this bit, three cases are then considered: 1) Overflow will definitely occur if both of the addends are equal to or greater than half of the dynamic range (M/2).
2) Overflow will not occur if both of the addends are less than M/2.
3) Overflow may or may not occur if only one of the addends is equal to or greater than M/2, and further processing is required to determine whether overflow will occur or not.
Let the magnitude evaluation of an addend in (X, Y) be represented by β, such that β = 1 or β = 0 represents a positive number or a negative number, respectively, as shown in Equation (11). The evaluation of the undetermined case 3) is represented by a single bit λ in (12). The proposed method then detects overflow according to Equation (13), where (+, ⋅, ⊕) refer to the logical operations (OR, AND, XOR), respectively. For clarity, "1" means overflow occurs whilst "0" means no overflow.
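Behaviourally, the three cases reduce to the following decision rule (a sketch using integer magnitudes in place of the paper's MRD-based hardware comparison; the function and variable names are illustrative):

```python
def detect_overflow(x, y, M):
    """Return True iff x + y exceeds the legitimate range [0, M-1]."""
    beta_x, beta_y = x >= M / 2, y >= M / 2   # upper-half flags for each addend
    if beta_x and beta_y:          # case 1: overflow is certain
        return True
    if not beta_x and not beta_y:  # case 2: overflow cannot occur
        return False
    return x + y >= M              # case 3: resolved by further processing

M = 60   # dynamic range of the moduli set {3, 4, 5}
print(detect_overflow(49, 21, M), detect_overflow(10, 11, M))  # → True False
```

Only the mixed case ever needs the extra processing step; the other two are settled by the magnitude flags alone.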
Correction Unit
Let Z be the sum of the two addends. By substituting the individual MRDs of both addends (X and Y), Z can be obtained as follows:
$Z = X + Y = [e_1(X) + e_2(X) m_1 + e_3(X) m_1 m_2] + [e_1(Y) + e_2(Y) m_1 + e_3(Y) m_1 m_2] = [e_1(X) + e_1(Y)] + [e_2(X) + e_2(Y)] m_1 + [e_3(X) + e_3(Y)] m_1 m_2$
Letting $\psi_i = e_i(X) + e_i(Y)$, we have $Z = \psi_1 + \psi_2 m_1 + \psi_3 m_1 m_2$ (14). Thus, by adding the individual MRDs of the two addends, we obtain the sum Z according to (1) without having to compute its MRDs separately. The value of Z obtained from (14) is the correct result of the addition whether overflow occurs or not. In the case of overflow occurrence, the redundant modulus is employed by shifting M one bit to the left in order to accommodate the value.
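The correction idea — adding the addends' MRDs digit-wise and re-weighting, so that the true sum is recovered even past the dynamic range — can be sketched as follows (an illustration; the helper names are mine, and `pow(a, -1, m)` is Python's built-in modular inverse):

```python
from math import prod

def mrds(residues, moduli):
    """Sequential MRC: mixed radix digits of an RNS number."""
    e = []
    for i, (x, m) in enumerate(zip(residues, moduli)):
        t = x
        for j in range(i):
            t = ((t - e[j]) * pow(moduli[j], -1, m)) % m
        e.append(t)
    return e

def corrected_sum(x, y, moduli):
    """Add MRDs digit-wise (psi_i = e_i(X) + e_i(Y)) and re-weight;
    the result equals x + y even when it exceeds M."""
    ex = mrds([x % m for m in moduli], moduli)
    ey = mrds([y % m for m in moduli], moduli)
    psi = [a + b for a, b in zip(ex, ey)]
    weights = [prod(moduli[:i]) for i in range(len(moduli))]
    return sum(p * w for p, w in zip(psi, weights))

print(corrected_sum(49, 21, (3, 4, 5)))   # → 70 (correct despite overflow, M = 60)
```

Because the digits $\psi_i$ are allowed to exceed their radix $m_i$, the weighted sum can represent values beyond M — which is precisely what the widened (redundant-modulus) range accommodates in hardware.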
Hardware Implementation
From Equation (8), the MRDs $e_1$, $e_2$ and $e_3$ can be represented in binary as in Equations (15) to (17), which can be further simplified as in (18). Since $x_1$ is a number smaller than $2^{n+1}$, two cases are considered for $x_1$: first, when $x_1$ is smaller than $2^n$, and second, when $x_1$ is equal to $2^n$, for which the corresponding binary vector is obtained and $t_3$ calculated. Thus, from Equations (19) to (21), the partial results are combined, and Z is implemented according to Equations (22) to (27).
Hardware Realization
The hardware realization of the proposed scheme is divided into four parts, as shown in Figures 1-4. First is the Partial Conversion Part (PCP) shown in Figure 1, which evaluates the MRDs based on Equations (18), (19) and (21), with their parameters defined according to Equations (20) and (21)-(27). The PCP begins with an Operands Preparation Unit (OPU), which prepares the operands in (20), (22) and (26) by simply manipulating the routing of the bits of the residues. Also, an n-bit 2:1 multiplexer (MUX) is used for obtaining (26). ADD1 is an n-bit Carry Propagate Adder (CPA) and is used to compute (19), whilst (21) is obtained by using an (n-1)-bit CPA as ADD2, whose save ($s_1$) and carry ($c_1$) are then added using ADD3, which is also an (n-1)-bit CPA. These MRDs are used to determine the sign of the RNS number in Figure 2. Thus, the critical path of the PCP unit is made up of one modulo-$2^n$ adder and two (n-1)-bit CPAs. Second is the Magnitude Evaluation Part (MEP) shown in Figure 2, which evaluates whether an RNS number is positive or negative according to Equation (11). The MEP uses one AND gate and one OR gate, both two-input monotonic gates. Next is the Overflow Detection Part (ODP), which compares the sign bits of the two addends using an AND gate according to (13); the result is then ORed with the evaluated bit of the undetermined case in (12), as shown in Figure 3. This is where the scheme detects the occurrence of overflow during the addition of two numbers.
Lastly, in Figure 4 is the Overflow Correction Part (OCP). The OCP evaluates the individual MRDs of the two addends separately to obtain the sum Z in (14). This is done using five adders: four regular CPAs and one Carry Save Adder (CSA), computed according to (29)-(36). ADD4, ADD5 and ADD6 separately add the MRDs $e_1$, $e_2$ and $e_3$, respectively, of the two addends. The result of ADD4 is of particular importance because it is used in evaluating the undetermined case in (12). $z_2$ is a result of concatenation, as is $z_3$, so neither requires any hardware. ADD7 is a CSA which combines $z_1$, $z_2$ and $z_3$, whose save ($s_2$) and carry ($c_2$) are added using ADD8, a CPA, in order to obtain the accurate sum Z whether overflow occurs or not. The schematic diagrams for the proposed scheme are presented in Figures 1-4.
The area (A) and time (D) requirements of the proposed scheme are estimated based on the unit-gate model as used in [12] and [13] for fair comparison. In this model, each two-input monotonic gate such as AND, OR, NAND or NOR has an area of one unit and a delay of one unit. The area and delay of an inverter is a negligible fraction of a unit, and it is thus assumed to require zero units of area and delay [14]. A 2:1 multiplexer has an area A = 3 and delay D = 2; a full adder has an area of seven gates and a delay of four gates, but a CSA has a constant delay. Also, the adder requirements based on this model as presented in [14] are adopted for the comparison, since the adopted adders are similar to those used in the proposed scheme. On this basis, the hardware requirements and estimated delay of the scheme are derived. In order to make an effective comparison, the proposed scheme is divided into two: Proposed Scheme I, for when the OCP is not included in the comparison, and Proposed Scheme II, for when the OCP is included. The delay of the OCP overrides the delays of the MEP and the ODP when Proposed Scheme II is considered, since they are all computed in parallel and the critical path in that case is dictated by the OCP. The area of the PCP and the MEP is doubled for the two numbers X and Y, but this is not the case for the delay, since the two numbers are computed in parallel. Thus, the total area and delay of the proposed schemes are obtained accordingly.
Numerical Illustrations
This subsection presents numerical illustrations of the proposed scheme.
Checking overflow in the sum of 49 and 21 using the RNS moduli set {3, 4, 5} (M = 60): the RNS addition yields the representation of the decimal number 10, whilst the true sum of 49 and 21 is 70, so overflow has obviously occurred; applying the proposed technique, the scheme detects after processing that overflow occurs. Checking overflow in the sum of 10 and 11 using the same moduli set: the RNS addition yields the decimal number 21, which is the correct result of 10 + 11. Thus, from (13), overflow = 0, which implies no overflow has occurred according to the proposed scheme after processing.
Performance Evaluation
The performance of the proposed scheme is compared to the schemes in [3] and [8]. The scheme in [8] does not contain a correction unit; the scheme by [3] has a correction unit, but it is not included in the comparison, so neither scheme has the correction component in the comparison. Table 1 shows the analysis of the proposed scheme against similar state-of-the-art schemes.
As shown in Table 1, the proposed scheme for detecting overflow (Proposed Scheme I) in the given moduli set is very cheap in terms of hardware resources and faster than the scheme by [8]; it requires slightly more hardware resources than the scheme by [3], which is, however, slower than Proposed Scheme I. The complete proposed scheme (Proposed Scheme II) for detecting and correcting overflow requires more hardware resources than the other compared schemes but is faster than both the schemes by [3] and [8].
Clearly, Proposed Scheme I completely outperforms the similar state-of-the-art scheme by [8] for detecting overflow; but the thrust of this work is to detect and correct overflow whenever it occurs, and in so doing it has made tremendous gains in speed, as shown in Table 1.
Table 2 shows a detailed analysis of the complexities and delay of the proposed scheme against those of similar state-of-the-art schemes.
Table 2 reveals interesting results: theoretically, the analysis makes it clear that Proposed Scheme I requires fewer resources than the scheme by [8].
From the table, for smaller values of n, Proposed Scheme II requires more resources than the scheme by [8], but it drastically improves on this requirement, becoming over 51% better than [8] for higher values of n (i.e., n > 4); this is clearly shown in the graph in Figure 5. The analysis also shows that whilst for smaller values of n (say n = 1) Proposed Scheme I is better than the scheme by [3] in terms of hardware resources, it tends to require up to about 18% more resources than [3] as n grows.
From Figure 5, the resource requirement of the scheme by [8] increases sharply for higher values of n, followed by Proposed Scheme II, whilst the scheme by [3] requires the least resources. Regarding the delay, the proposed schemes (Proposed I and Proposed II) completely outperform both schemes: over 35% faster than the scheme by [8] and over 90% faster than the scheme by [3], as shown in Table 2 and Figure 5. It is worth noting that whilst the scheme by [3] performs better in terms of hardware resources, it is the worst performer for speed, and the percentage difference shows that Proposed I is more efficient. It is clear from the graphs that, in terms of delay, the scheme by [3] increases sharply with increasing values of n whilst the marginal increase of the remaining schemes is minimal.
Conclusion
Detecting overflow in RNS arithmetic computations is very important but can be difficult, more so if it has to be corrected. In this paper, an ingenious technique for detecting overflow by use of the MRC method through magnitude evaluation, as well as correcting the overflow when it occurs, was presented. The technique does not require full reverse conversion but uses the MRDs to evaluate the sign of a number in order to detect the occurrence of overflow. With this technique, the correct value of the sum of two numbers is guaranteed whether overflow occurred or not. The scheme has been demonstrated theoretically to be much faster than similar state-of-the-art schemes, though it requires a little more hardware. However, Proposed Scheme I, the variant without the correction component, completely outperformed the scheme in [8] in terms of both area and delay requirements. Also, results from Table 2 and Figure 5 showed that for higher values of n, Proposed Scheme II also outperformed the scheme by [8]. Future work will focus on simulating the theoretical results and implementing them on FPGA boards.
Figure 5. Graphs of area and delay analysis of the various compared schemes.
Table 2. Area, delay analysis for various values of n.
Length of Hospital Stay Prediction at the Admission Stage for Cardiology Patients Using Artificial Neural Network
For hospitals' admission management, the ability to predict length of stay (LOS) as early as the preadmission stage can help monitor the quality of inpatient care. This study develops artificial neural network (ANN) models to predict LOS for inpatients with one of three primary diagnoses: coronary atherosclerosis (CAS), heart failure (HF), and acute myocardial infarction (AMI), in a cardiovascular unit of a Christian hospital in Taipei, Taiwan. A total of 2,377 cardiology patients discharged between October 1, 2010, and December 31, 2011, were analyzed. Using an ANN or linear regression model, LOS was predicted correctly for 88.07% to 89.95% of CAS patients at the predischarge stage and for 88.31% to 91.53% at the preadmission stage. For AMI or HF patients, the accuracy ranged from 64.12% to 66.78% at the predischarge stage and 63.69% to 67.47% at the preadmission stage when a tolerance of 2 days was allowed.
Introduction
The demand for health care services continues to grow as the population in most developed countries ages. To make health care more affordable, policy makers and health organizations try to align financial incentives with the implementation of care processes based on best practices and the achievement of better patient outcomes. The length of stay (LOS) in hospitals is often used as an indicator of efficiency of care and hospital performance. It is generally recognized that a shorter stay indicates less resource consumption per discharge and costsaving while postdischarge care is shifted to less expensive venues [1]. It motivates the endeavor to develop a diagnosisrelated group (DRG) for patient classification based on the type of hospital treatments in relation to the costs incurred by the hospital. This quality assurance scheme was then linked to the prospective payment system (PPS) and adopted by the federal government in the United States for the Medicare program in 1983. This payment system was found to moderate hospital cost inflation due to a significant decline in the average length of stay (ALOS), which refers to the average number of days that patients spend in hospital [2]. Under the assumption that patients sharing common diagnostic and demographic characteristics require similar resource intensity, the aim of DRG is to quantify and standardize hospital resource utilization for patients [3].
Other than diagnostic attributes, most research focuses on two types of factors to explain the variation in LOS: patient characteristics and hospital characteristics. In examining data for the National Health Service (NHS) in the United Kingdom, the variation in LOS for those over age 65 was consistently larger across all regions [4]. It was observed that the variation in LOS between hospitals was larger compared to that between doctors in the same hospital [5]. Hospital policy in treatment management can also determine LOS. It was found that psychiatrists were able to predict LOS with significant accuracy, but only for patients they treated. Moreover, the prediction by a hospital coordinator involved in all patient treatments was significantly more correlated to the true LOS than psychiatrists' predictions [6]. A comparison of data from 24 hospitals in Japan showed that inpatient capacity and the ratio of involuntary admissions correlated positively to longer LOS [7]. A higher level of caregiver interaction among nurses and physicians, such as communication, coordination, and conflict management, was significantly associated with lower LOS [8].
The ability to predict LOS as an initial assessment of patients' risk is critical for better resource planning and allocation [9], especially when the resources are limited, as in ICUs [10,11]. Yang et al. considered timing for LOS prediction in three clinical stages for burn patients: admission, acute, and posttreatment. Using three different regression models, the best mean absolute error (MAE) in the LOS predictions was around 9 days in both the admission and the acute stage and 6 days in the posttreatment stage. With three more treatment-related variables, the results showed that the prediction accuracy was significantly improved in the posttreatment stage [11]. An accurate prediction of LOS can also facilitate management with higher flexibility in hospital bed use and better assessment in the cost-effectiveness treatment [12,13].
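The two accuracy notions used in this line of work — the mean absolute error quoted above, and the share of patients predicted within a fixed tolerance (this study's evaluation allows ±2 days) — are straightforward to compute. A sketch on hypothetical LOS values:

```python
def mae(actual, predicted):
    """Mean absolute error in days."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def accuracy_within(actual, predicted, tol=2):
    """Fraction of cases whose predicted LOS is within +/- tol days."""
    hits = sum(abs(a - p) <= tol for a, p in zip(actual, predicted))
    return hits / len(actual)

los_true = [2, 3, 5, 9, 4]   # hypothetical LOS values (days)
los_pred = [3, 3, 8, 7, 4]
print(mae(los_true, los_pred), accuracy_within(los_true, los_pred))  # → 1.2 0.8
```

The tolerance metric is more forgiving than MAE for skewed LOS distributions, where a few long-stay outliers can dominate the mean error.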
This prediction can even stratify patients according to their risk for prolonged stays [14,15]. Spratt et al. used a multivariate logistic regression method to identify factors associated with prolonged stays (>30 days) for patients with acute ischemic stroke. In addition to advanced age (>65), diabetes and in-hospital infection were significantly associated with prolonged LOS [14]. Lee et al. analyzed LOS data on childhood gastroenteritis in Australia and, using either the robust gamma mixed regression or linear mixed regression method, found that both gastrointestinal sugar intolerance and failure to thrive significantly affected prolonged LOS [16]. Schmelzer et al. used the multiple logistic regression method and found that both the American Society of Anesthesiologists (ASA) scores and postoperative complications were significant in the prediction of prolonged LOS after a colectomy [17].
Rosen et al. studied the LOS variation for Medicare patients after coronary artery bypass graft surgery (CABG) in 28 hospitals. They found that including deceased patients did not significantly influence the results. Other than age and gender, the most powerful predictors were history of mitral valve disease or cerebrovascular disease and preoperative placement of an intra-aortic balloon pump. Different hospitals varied significantly in their LOS, and the readmission rate was linearly related to longer LOS [18]. Janssen et al. constructed a logistic regression model to predict the probability for patients requiring 3 or more days in ICU after CABG. Only 60% of the patients predicted to be high risks had a prolonged ICU stay [15]. Chang et al. identified that, among preoperative factors, age of more than 75 years and having chronic obstructive pulmonary disease (COPD) were associated with increased LOS for patients who underwent elective infrarenal aortic surgery [19].
Even though diagnosis had been considered the primary factor affecting hospital stays, patients' clinical conditions, such as the number of diagnoses and the intensity of nursing services required, might be as critical in determining LOS variations within some DRGs [20]. One study showed that only 12% of the variation could be explained by patient characteristics and general hospital characteristics for patients with a primary diagnosis of acute myocardial infarction (AMI) [21]. For heart failure patients, Whellan et al. studied data from 246 hospitals for admission predictors for LOS. Patients with longer LOS had a higher disease severity and more comorbidities, such as hypertension, cardiac dysrhythmias, diabetes mellitus, COPD, and chronic renal insufficiency or failure. However, the overall model based on characteristics at the time of admission explained only a modest amount of LOS variation [22].
The purpose of this study is to develop artificial neural network (ANN) models to predict LOS for inpatients with one of three primary diagnoses: coronary atherosclerosis (CAS), heart failure (HF), and acute myocardial infarction (AMI), in a cardiovascular unit of a Christian hospital in Taipei, Taiwan. A better recognition of the critical preadmission factors that determine LOS, or a capacity to predict an individual patient's LOS, could promote the development of efficient admission policies and optimize resource management in hospitals. Two stages of LOS prediction are presented: one uses all clinical factors and is designated the predischarge stage; the other uses only factors available before admission and is designated the preadmission stage. The prediction results obtained at the predischarge stage are then used to evaluate the relative effectiveness of predicting LOS at the preadmission stage.
The remainder of this paper is organized as follows. In Section 2, the method, including the steps in data collection and processing and prediction model construction, is introduced. Then, the prediction results of various artificial neural network (ANN) models are presented in Section 3. The discussion of the results and the conclusion of the research findings are given in Section 4, together with the limitations and future research directions.
Data Sources and Data Preprocessing.
This study was approved by the Mackay Memorial Hospital Institutional Review Board (IRB) for protection of human subjects in research. Clinical and administrative data were obtained for cardiology patients discharged between October 1, 2010, and December 31, 2011, in a Christian hospital with two locations in the metropolitan area of Taipei, Taiwan: the Taipei branch and the Tamshui branch. A total of 2,424 admission cases were collected for patients with one of three primary diagnoses: CAS, HF, and AMI. Then 47 admissions were identified as outliers, being more than three standard deviations from the mean when fitting both forward addition regression and backward elimination regression models. Of the remaining 2,377 cases, 933 were coronary atherosclerosis (CAS) patients, 872 heart failure (HF) patients, and 572 acute myocardial infarction (AMI) patients. An admission case might have zero to multiple comorbidities, and similar medical histories were aggregated into comorbidity factors. For example, the history of hypertensive disease includes four types of diseases as identified by ICD-9 codes 401 (essential hypertension) to 404 (hypertensive heart and chronic kidney disease). Each case might have zero to multiple interventions during the admission. Out of a total of 46 types of intervention or diagnostic ancillary services found in the dataset, only the top 6 interventions with more than 5% occurrence in the entire dataset were adopted in this study. The last characteristic, TW-DRG pay, indicates whether the admission case was reimbursed by the pay-per-case (i.e., TW-DRG) system implemented by the National Health Insurance Administration (NHIA). The NHIA in Taiwan provides a universal health insurance system and covers approximately 99% of the population [23]. In addition to the fee-for-service payment system, the NHIA started introducing the first phase of TW-DRG, with 164 groups, from 2010.
Since cases in the same DRG are reimbursed with the same amount, the system encourages hospitals to improve their financial performance by better utilizing medical resources [24]. Among the data collected, 25% were reimbursed through TW-DRG payment by the NHIA.
Statistical Analysis.
Pearson's correlation coefficients were used to study the relationships between LOS and each inpatient characteristic. As summarized in Table 2, all characteristics were significantly correlated with LOS except for the comorbidity of chronic airway obstruction (ICD 496). As for the risk factors, the top three variables significantly positively correlated with longer LOS were having heart failure (ICD 428) as the main diagnosis, older age, and female gender. This was consistent with the findings about factors related to prolonged LOS in the literature: female gender, increasing age, and comorbidities such as cerebrovascular disease and diabetes mellitus [18,19,22]. The top three variables significantly negatively correlated with longer LOS were having coronary atherosclerosis (ICD 414) as the main diagnosis and undergoing either percutaneous transluminal coronary angioplasty (PTCA) or percutaneous coronary intervention (PCI). As shown in Figure 1, the distribution of the LOS data was skewed, with few cases staying longer than 14 days. The average and standard deviation of LOS for CAS patients were 2.63 days and 2.25 days, respectively. For AMI and HF patients, the average and standard deviation of LOS were 7.74 days and 5.93 days, respectively. The distribution of LOS for CAS patients was significantly different from that for patients with either AMI or HF (p value < 0.0001), which suggested that different prediction models should be built for CAS patients and for non-CAS patients, hereafter referred to as AMI and HF patients.
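This screening step amounts to computing Pearson's r between LOS and each candidate characteristic. A minimal sketch on hypothetical data (the real study's variables are those listed in Table 2):

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

age = [55, 80, 62, 71, 45]   # hypothetical patient ages
los = [2, 9, 4, 7, 2]        # hypothetical LOS values (days)
r = pearson_r(age, los)      # strongly positive on this toy sample
```

In practice the coefficient is paired with a significance test before a variable is flagged, as done for the p values reported in Table 2.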
Structure for Artificial Neural Networks (ANNs).
With the profound growth in clinical knowledge and technology, the development of more sophisticated information systems to support clinical decision making is essential to enhance quality and improve efficiency. Artificial neural networks (ANNs) are useful in modeling complex systems and have been applied in various areas, from accounting to school admission [25]. Walczak and Cerpa proposed four design criteria in artificial neural network (ANN) modeling: the appropriate input variables, the best learning methods, the number of hidden layers, and the quantity of neural nodes per hidden layer [26]. The learning method of ANN can be either supervised or unsupervised, depending on whether the output values should be known before or should be learned directly from the input values. For supervised learning, backpropagation is one of the most commonly used methods due to its robustness and ease of implementation [27].
The clinical benefits of using ANN had been notable in specific areas, such as cervical cytology and early detection of acute myocardial infarction (AMI) [28]. Compared with logistic regression, ANNs were found useful in predicting medical outcomes due to their nature of nonlinear statistical principles and inference [29]. Dybowski et al. adopted an ANN to predict the survival results for patients with systematic inflammatory response syndrome and hemodynamic shock. After improving the performance of ANN iteratively, the predicted outcome was more accurate than using a logistic regression model [30]. Gholipour et al. utilized an ANN model to predict the ICU survival outcome and the LOS for traumatic patients. The results showed that the mean predicted LOS using ANN was not significantly different than the mean of actual LOS [31]. Launay et al. developed ANN models to predict the prolonged LOS (13 days and above) for elder emergency patients (age 80 and over) [32]. Based on the biomedical literature from PUBMED, Dreiseitl and Ohno-Machado showed that the discriminatory performance of ANN models was better or not worse in 93% of the surveyed papers compared to the logistic regression method [33]. Grossi et al. found ANN models outperformed traditional statistic methods in accuracy in various diagnostic and prognostic problems in gastroenterology [34].
The selection of input variables used in an ANN model is critical. Li et al. found that the ANN model using all input variables yielded a slightly higher predictive accuracy than the one using a subset of variables filtered by correlation analysis [35]. Hence, we decided to consider all inpatient characteristics, including gender, age, location, main diagnosis, eight types of comorbidity, six types of intervention, and whether the case met the criteria for TW-DRG reimbursement. These input variables were then categorized into two stages, the preadmission stage and the predischarge stage, as shown in Table 3. Variables in the preadmission stage included information available prior to hospitalization, such as gender, age, hospital branch (location) to be admitted to, main diagnosis, and comorbidities. The predischarge stage includes, in addition to the variables in the preadmission stage, the interventions and whether the case was reimbursed by TW-DRG payment. Whether a case is reimbursed by TW-DRG payment, rather than the default fee-for-service, depends on the actual discharge conditions such as surgical procedure, treatment, and discharge status, according to the NHIA guideline [36].
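The input-size bookkeeping implied by Table 3 can be made explicit. In this sketch the feature names are my own stand-ins (only the counts are taken from the paper), and the counts line up with the input-layer sizes quoted for the ANN models: 11/12 inputs at the preadmission stage and 18/19 at the predischarge stage:

```python
# Stand-in feature names; only the counts come from the paper's description.
base = ["gender", "age", "branch"] + [f"comorbidity_{i}" for i in range(1, 9)]
preadmission_cas = base                    # 11 inputs (main diagnosis is fixed)
preadmission_amihf = base + ["is_hf"]      # 12 inputs (flag: HF vs AMI)
discharge_extra = [f"intervention_{i}" for i in range(1, 7)] + ["twdrg_pay"]

predischarge_cas = preadmission_cas + discharge_extra      # 18 inputs
predischarge_amihf = preadmission_amihf + discharge_extra  # 19 inputs
print(len(predischarge_cas), len(predischarge_amihf))      # → 18 19
```

The predischarge models thus add exactly seven variables (six interventions plus the TW-DRG indicator) to their preadmission counterparts.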
Separate ANNs were built to predict LOS: one for coronary atherosclerosis (CAS) patients and the other for acute myocardial infarction (AMI) and heart failure (HF) patients. Figure 2 shows the general structure of the backpropagation artificial neural networks in this research. The output layer has only one neuron, which generates a number ranging from 0 to 35 to represent the predicted LOS. The size of the input layer depends on the number of input variables. Here, the prediction model using input variables in the predischarge stage is referred to as the predischarge model; likewise, the model using variables in the preadmission stage is referred to as the preadmission model. For a predischarge model, the input layer has 18 neurons (n₀ = 18) for CAS patients. For AMI and HF patients, n₀ is 19, with one additional Boolean-valued neuron indicating whether the major diagnosis is HF. In preadmission models, n₀ is 11 and 12 for CAS patients and for AMI and HF patients, respectively. As for the hidden layer, more neurons were found to enable a better closeness-of-fit [37] with lower training errors [38]. However, a larger ANN also requires more training effort [39] and can result in overfitting [38]. Some research suggests setting the number of neurons in the hidden layer to between 2/3 and 2 times the size of the input layer [26,39,40].
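To illustrate the network shape described above, the following is a minimal sketch (not the authors' implementation) of a forward pass through such a network: one hidden layer with log-sigmoid activations and a single output neuron rescaled to the 0-35 day LOS range. The 18-input/13-hidden sizes mirror the predischarge CAS model, but the random weights are placeholders standing in for trained values.

```python
import numpy as np

def log_sigmoid(x):
    # Log-sigmoid activation: maps any input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def predict_los(x, w1, b1, w2, b2, max_los=35.0):
    """Forward pass: input -> hidden layer (log-sigmoid) -> single output
    neuron, rescaled to the 0-35 day LOS range described in the paper."""
    hidden = log_sigmoid(x @ w1 + b1)        # hidden activations, shape (n_hidden,)
    output = log_sigmoid(hidden @ w2 + b2)   # scalar output in (0, 1)
    return output * max_los                  # predicted LOS in days

# Illustrative sizes: 18 inputs (predischarge CAS model), 13 hidden neurons.
# The weights here are random placeholders, not values fitted by backpropagation.
rng = np.random.default_rng(0)
n_in, n_hidden = 18, 13
w1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0

pred = predict_los(rng.normal(size=n_in), w1, b1, w2, b2)
```

In a trained model the weights would be obtained by backpropagation; the sigmoid output guarantees the prediction stays inside the 0-35 day range by construction.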
Results
In this section, the LOS predictions of the predischarge and preadmission ANN models are benchmarked against linear regression (LR) models. All prediction models were implemented using IBM SPSS v.21 and IBM SPSS Neural Networks 21. As in the preliminary trial run, the original data was separated into a training dataset and a test dataset. The training dataset included 744 admissions for CAS patients and 1,155 admissions for AMI and HF patients, and the test dataset consisted of 189 admissions for CAS patients and 289 admissions for AMI and HF patients. When training any ANN model, 70% of the training dataset was randomly assigned to the training set and the remaining 30% to the validation set. Training stops when the number of training epochs reaches 2,000 or when there is no improvement in the validation error for 600 consecutive epochs. For the LR models, the entire training dataset was used to generate the linear regression functions.
For CAS Patients.
The performance of the prediction models is evaluated using the same test dataset. Since the LOS predictions obtained by the ANN or LR models are continuous numbers, we further define a prediction of LOS as accurate if it is within 1 day of the actual LOS for CAS patients. Moreover, the effectiveness of predictability was measured by the mean absolute error (MAE) and the mean relative error (MRE), defined as follows:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\tilde{y}_i - y_i\right|, \qquad \mathrm{MRE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\tilde{y}_i - y_i\right|}{y_i},$$

where $\tilde{y}_i$ and $y_i$ are the predicted LOS and actual LOS for the $i$th test case, $i = 1, 2, \ldots, n$, and $n$ is the number of test instances.
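The accuracy criterion and the MAE/MRE indices described above can be computed directly from paired LOS values. The following sketch uses invented toy numbers purely for illustration:

```python
def los_metrics(actual, predicted, tolerance=1.0):
    """Accuracy (|error| <= tolerance days), mean absolute error (MAE),
    and mean relative error (MRE) over paired LOS values, matching the
    definitions in the text."""
    n = len(actual)
    abs_err = [abs(p - a) for a, p in zip(actual, predicted)]
    accuracy = sum(e <= tolerance for e in abs_err) / n
    mae = sum(abs_err) / n
    mre = sum(e / a for e, a in zip(abs_err, actual)) / n
    return accuracy, mae, mre

# Toy example: five actual vs. predicted LOS values in days (invented numbers)
acc, mae, mre = los_metrics([2, 2, 3, 5, 8], [2.4, 1.8, 3.9, 4.2, 10.0])
```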
To account for the randomness in the data selected for training the ANNs, the results shown in Table 4 are 95% confidence intervals (95% CI) for accuracy, MAE, and MRE based on 30 runs. All models were quite effective in predicting LOS, with accuracy ranging from 88.07% to 91.53%, MAE from 1 to 1.11 days, and MRE from 0.44 to 0.47. Figure 3 gives a detailed look at the distribution of accurate LOS predictions in the test dataset. The LR model performed better than the ANN model for patients with an LOS of 2 days, who made up about 60% of the test dataset. However, both the LR and ANN models were unable to predict correctly for LOS of more than 5 days, which accounted for 3.7% of the test dataset.
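The 95% confidence intervals in Table 4 aggregate 30 independent training runs. One simple way to form such an interval is shown below; it uses the normal approximation (z = 1.96), and the accuracy values are invented for illustration:

```python
import math

def confidence_interval_95(samples):
    """Two-sided 95% CI for the mean of repeated runs, using the normal
    approximation (z = 1.96) and the sample standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Hypothetical accuracies from 30 repeated training runs (invented values)
runs = [0.90, 0.91, 0.89, 0.90, 0.92, 0.88] * 5
low, high = confidence_interval_95(runs)
```

For a sample of 30 runs, a Student's t multiplier (about 2.045) would give a slightly wider and more exact interval than the normal approximation used here.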
For AMI and HF Patients.
The same performance indices are used to evaluate the effectiveness of the prediction models for AMI and HF patients. The results summarized in Table 5 show that these models are not as effective in predicting LOS as for CAS patients, with accuracy ranging from 32.99% to 36.33%. The MAE of all models was quite stable, ranging from 3.76 to 3.97 days, and the MRE from 0.69 to 0.77. Further, considering the high variation in the LOS distribution, the definition of accuracy was extended to two more scenarios: a tolerance of 1 day (the difference between the predicted and actual LOS is less than 2 days) or a tolerance of 2 days (the difference is less than 3 days). However, even with a 2-day deviation allowed, the accuracy of these models increased only to between 63.69% and 67.47%, as shown in Table 5. Figure 4 shows the breakdown of accurate LOS predictions with no tolerance in the test dataset. Both the LR and ANN models performed better in predicting LOS between 8 and 11 days. In the predischarge model, the ANN performs better than the LR model for patients with an LOS of 3, 5, 6, or 7 days, who make up about 60% of the test dataset. Moreover, as shown in the resized charts in Figure 4, the ANN models were able to predict correctly for cases with LOS greater than 11 days, which account for 14.5% of the test dataset. However, both the LR and ANN models were unable to predict correctly for LOS greater than 18 days, which accounts for 5.9% of the test dataset.
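The extended accuracy scenarios, in which a prediction counts as correct if it differs from the actual LOS by less than 2 or 3 days, reduce to a single tolerance parameter. A small sketch, with invented LOS values for a few hypothetical admissions:

```python
def accuracy_with_tolerance(actual, predicted, max_diff_days):
    """Fraction of cases whose predicted LOS differs from the actual LOS
    by less than max_diff_days days."""
    hits = sum(abs(p - a) < max_diff_days for a, p in zip(actual, predicted))
    return hits / len(actual)

# Invented LOS values (days) for five hypothetical AMI/HF admissions
actual = [3, 5, 8, 12, 20]
predicted = [4.6, 5.5, 9.0, 14.4, 16.0]

tol_1_day = accuracy_with_tolerance(actual, predicted, 2)   # 1-day tolerance
tol_2_days = accuracy_with_tolerance(actual, predicted, 3)  # 2-day tolerance
```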
Validation of ANN Models.
To determine a proper structure for the ANNs used in this study, a preliminary trial run was first conducted, assuming that the activation function for each neuron in the hidden layer was the log-sigmoid function with outputs between 0 and 1 [41]. The original data was separated into a training dataset and a test dataset. The training dataset comprised the first 12 months of data, from October 1, 2010, to September 30, 2011, with 744 admissions for CAS patients and 1,155 admissions for AMI and HF patients. The test dataset consisted of the data from the last 3 months, with 189 admissions for CAS patients and 289 admissions for AMI and HF patients.
To avoid overfitting, the training dataset was further separated into two sets: a training set, used to update the weights and biases, and a validation set, used to stop training when the ANN might be overfitting. In this study, the training set and validation set were assumed to be 70% and 30% of the training dataset, respectively. The weights in an ANN were modified using a variable-learning-rate gradient descent algorithm with momentum [42]. Training stopped when the number of training epochs reached 2,000 or when there was no improvement in the validation error for 600 consecutive epochs. After an ANN was trained, the model was used to obtain the predicted LOS on the test dataset. Furthermore, to avoid the effect of randomness when comparing results, a fixed training set and validation set were used when training the backpropagation ANNs. Figure 5 shows the root mean squared error (RMSE) for the training set, validation set, and test dataset of the trained ANN models with different numbers of neurons in the hidden layer, ranging from 10 to 30. The training errors decreased slightly as more neurons were included in the hidden layer. However, no overfitting was observed, and the test errors were quite stable for both models.
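The stopping rule described above, at most 2,000 epochs or 600 consecutive epochs without validation improvement, is a standard early-stopping loop. The following generic sketch (an illustration, not the SPSS implementation) shows the rule together with the RMSE measure reported in Figure 5; the `step` and `val_error` callbacks and the demo error sequence are made up for the example:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error, the measure reported in Figure 5."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for a, p in zip(actual, predicted)) / n)

def train_with_early_stopping(step, val_error, max_epochs=2000, patience=600):
    """Run one training epoch at a time via step(); stop after max_epochs,
    or once the validation error has not improved for `patience`
    consecutive epochs. Returns (epochs run, best validation error)."""
    best, stale, epochs = float("inf"), 0, 0
    for _ in range(max_epochs):
        step()
        err = val_error()
        epochs += 1
        if err < best:
            best, stale = err, 0   # improvement: reset the patience counter
        else:
            stale += 1
            if stale >= patience:
                break              # no improvement for `patience` epochs
    return epochs, best

# Demo: validation error improves for 3 epochs, then plateaus; with
# patience=10 the loop stops after 3 + 10 = 13 epochs.
errors = iter([5.0, 4.0, 3.0] + [3.0] * 100)
epochs_run, best_err = train_with_early_stopping(
    step=lambda: None, val_error=lambda: next(errors),
    max_epochs=2000, patience=10)
```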
To balance the required training effort against the improvement in test errors, the number of neurons in the hidden layer, the value of n₁ in Figure 2, was set to 13 for all ANN models. Figure 6 shows the weight distribution between input neurons and hidden neurons; each dot indicates the weight from one of the input neurons to some hidden neuron, so each input neuron has a total of thirteen dots (weights) linked to the hidden layer. This further validates the size of the ANN used in this study, since the weights were scattered evenly from −1.5 to 1.5 with only a few dots (weights) close to zero.
Discussion and Conclusion
This study proposed the use of neural network techniques to predict LOS for patients in a cardiovascular unit with one of three primary diagnoses: coronary atherosclerosis (CAS), heart failure (HF), and acute myocardial infarction (AMI). The major observation was that the preadmission models were as effective in predicting LOS as the predischarge models; some preadmission models even performed slightly better than predischarge models, as shown in Tables 4 and 5. This indicates that whether a patient might be reimbursed under TW-DRG did not provide additional predictive ability for LOS, and the assumption that a shorter LOS would be preferred for the sake of a hospital's financial performance under DRG was not applicable in our case hospital.
The benefit of using ANN models was more significant when predicting prolonged LOS for HF and AMI patients. When predicting prolonged LOS, most literature formulated the prediction models to determine whether an admission might belong to a prolonged stay [14,15,17] or whether the LOS might fall within a fixed range of LOS days [16,22]. The study by Mobley et al. [43] predicted the exact LOS days for patients in a postcoronary care unit. With 629 and 127 admissions in the training and test file, a total of 74 input variables were used to predict 1 to 20 LOS days in ANNs. The mean LOS was 3.84 days and 3.49 days in the training file and the test file. They showed no significant difference between the distribution of the predicted LOS and that of the actual LOS in the test file. However, the ANNs with two or three hidden layers made no prediction of LOS beyond 5 days [43]. In this study, the mean LOS was 2.65 days and 2.53 days in the training dataset and the test dataset for CAS patients. With only 18 input variables, our models were able to predict correctly for patients with LOS up to 5 days, as shown in Figure 3. For AMI and HF patients, the mean LOS was 7.86 days and 7.23 days in the training dataset and the test dataset. Compared with the LR method, the ANN model was able to predict patient stays longer than 11 days, as shown in Figure 4.
In general, the LR model performed slightly better than the ANN models in terms of accuracy, as shown in Tables 4 and 5. This might be because each ANN model was built from only 70% of the training dataset, which consisted of the first 12 months of data, while the test dataset, the remaining 3 months, was highly consistent with the previous 12 months. This phenomenon implies that the clinical pathways were well established in our case hospital.
A limitation of this research is that the major diagnosis and comorbidities of patients are assumed to be known in the preadmission stage. Further study is suggested to fully assess the use of ANN models in LOS prediction, especially for patients who might require a longer LOS. Instead of predicting the actual LOS, it might be practical to first categorize LOS into risk groups. More patient characteristics, such as vital signs or lab readings at the time of admission, could be included to improve LOS predictability.
As the bed supply is limited, the utilization of hospital beds is considered economically critical for most hospitals, and any policy related to improving bed utilization has profound impacts on the perceived quality of care and the satisfaction of patients and physicians. Currently, hospitalists rely only on aggregated data, such as occupancy rates and average LOS, to assess the performance and competitiveness of clinics within the hospital. A reliable LOS prediction in the preadmission stage could further assist in identifying abnormalities or potential medical risks, triggering additional attention for individual cases. It might even allow bed managers to foresee bottlenecks in bed availability when admitting patients, avoiding unnecessary bed transfers between wards.