Statistics review 6: Nonparametric methods
The present review introduces nonparametric methods. Three of the more common nonparametric methods are described in detail, and the advantages and disadvantages of nonparametric versus parametric methods in general are discussed.
Available online http://ccforum.com/content/6/6/509

Many statistical methods require assumptions to be made about the format of the data to be analysed. For example, the paired t-test introduced in Statistics review 5 requires that the distribution of the differences be approximately Normal, while the unpaired t-test requires an assumption of Normality to hold separately for both sets of observations. Fortunately, these assumptions are often valid in clinical data, and where they are not true of the raw data it is often possible to apply a suitable transformation. There are situations in which even transformed data may not satisfy the assumptions, however, and in these cases it may be inappropriate to use traditional (parametric) methods of analysis. (Methods such as the t-test are known as 'parametric' because they require estimation of the parameters that define the underlying distribution of the data; in the case of the t-test, for instance, these parameters are the mean and standard deviation that define the Normal distribution.) Nonparametric methods provide an alternative series of statistical methods that require no or very limited assumptions to be made about the data. There is a wide range of methods that can be used in different circumstances, but some of the more commonly used are the nonparametric alternatives to the t-tests, and it is these that are covered in the present review.
The sign test
The sign test is probably the simplest of all the nonparametric methods. It is used to compare a single sample with some hypothesized value, and it is therefore of use in those situations in which the one-sample or paired t-test might traditionally be applied. For example, Table 1 presents the relative risk of mortality from 16 studies in which the outcome of septic patients who developed acute renal failure as a complication was compared with outcomes in those who did not. The relative risk calculated in each study compares the risk of dying between patients with renal failure and those without. A relative risk of 1.0 is consistent with no effect, whereas relative risks less than and greater than 1.0 are suggestive of a beneficial or detrimental effect of developing acute renal failure in sepsis, respectively. Does the combined evidence from all 16 studies suggest that developing acute renal failure as a complication of sepsis impacts on mortality? Fig. 1 shows a plot of the 16 relative risks. The distribution of the relative risks is not Normal, and so the main assumption required for the one-sample t-test is not valid in this case. Rather than apply a transformation to these data, it is convenient to use a nonparametric method known as the sign test.
The sign test is so called because it allocates a sign, either positive (+) or negative (-), to each observation according to whether it is greater or less than some hypothesized value, and considers whether this is substantially different from what we would expect by chance. If any observations are exactly equal to the hypothesized value they are ignored and dropped from the sample size. For example, if there were no effect of developing acute renal failure on the outcome from sepsis, around half of the 16 studies shown in Table 1 would be expected to have a relative risk less than 1.0 (a 'negative' sign) and the remainder would be expected to have a relative risk greater than 1.0 (a 'positive' sign). In this case only three studies had a relative risk of less than 1.0 whereas 13 had a relative risk above this value. It is not unexpected that the number of relative risks less than 1.0 is not exactly 8; the more pertinent question is how unexpected is the value of 3?
The sign test gives a formal assessment of this.
Formally the sign test consists of the steps shown in Table 2. In this example the null hypothesis is that there is no increase in mortality when septic patients develop acute renal failure.
Exact P values for the sign test are based on the Binomial distribution (see Kirkwood [1] for a description of how and when the Binomial distribution is used), and many statistical packages provide these directly. However, it is also possible to use tables of critical values (for example [2]) to obtain approximate P values.
The counts of positive and negative signs in the acute renal failure in sepsis example were N+ = 13 and N- = 3, and S (the test statistic) is equal to the smaller of these (i.e. N-). The critical values for a sample size of 16 are shown in Table 3. S is less than or equal to the critical values for P = 0.10 and P = 0.05. However, S is strictly greater than the critical value for P = 0.01, so the best estimate of P from tabulated values is 0.05. In fact, an exact P value based on the Binomial distribution is 0.02. (Note that the P value from tabulated values is more conservative [i.e. larger] than the exact value.) In other words, there is some limited evidence to support the notion that developing acute renal failure in sepsis increases mortality beyond that expected by chance.
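The exact Binomial calculation is straightforward to reproduce. The following sketch is illustrative; the `sign_test_p` helper is ours, not part of the original article:

```python
from math import comb

def sign_test_p(s, n):
    """Two-sided exact P value for the sign test.

    s is the smaller of the positive/negative sign counts and n the
    number of non-tied observations. Under the null hypothesis the
    number of, say, negative signs follows a Binomial(n, 0.5)
    distribution.
    """
    tail = sum(comb(n, k) for k in range(s + 1)) / 2 ** n  # P(X <= s)
    return min(1.0, 2 * tail)                              # two-sided

# Acute renal failure example: 3 of the 16 studies had relative risk < 1.0.
print(round(sign_test_p(3, 16), 2))  # 0.02, matching the exact value quoted
```

The same helper applied to 2 negative differences out of 10 reproduces the 0.11 quoted later for the paired SvO2 example.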
Note that the sign test merely explores the role of chance in explaining the relationship; it gives no direct estimate of the size of any effect. Although it is often possible in principle to obtain nonparametric estimates of effect and associated confidence intervals, the methods involved tend to be complex in practice and are not widely available in standard statistical software. This lack of a straightforward effect estimate is an important drawback of nonparametric methods.

Table 2 Steps required in performing the sign test
1. State the null hypothesis and, in particular, the hypothesized value for comparison
2. Allocate a sign (+ or -) to each observation according to whether it is greater or less than the hypothesized value. (Observations exactly equal to the hypothesized value are dropped from the analysis)
3. Determine: N+ = the number of observations greater than the hypothesized value; N- = the number of observations less than the hypothesized value; S = the smaller of N+ and N-
4. Calculate an appropriate P value

Table 1 Relative risk of mortality associated with developing acute renal failure as a complication of sepsis

The sign test can also be used to explore paired data. Consider the example introduced in Statistics review 5 of central venous oxygen saturation (SvO2) data from 10 consecutive patients on admission and 6 hours after admission to the intensive care unit (ICU). The paired differences are shown in Table 4. In this example, the null hypothesis is that there is no effect of 6 hours of ICU treatment on SvO2. In other words, under the null hypothesis, the mean of the differences between SvO2 at admission and that at 6 hours after admission would be zero. In terms of the sign test, this means that approximately half of the differences would be expected to be below zero (negative), whereas the other half would be above zero (positive).
In practice only 2 differences were less than zero, but the probability of this occurring by chance if the null hypothesis is true is 0.11 (using the Binomial distribution). In other words, it is reasonably likely that this apparent discrepancy has arisen just by chance. Note that the paired t-test carried out in Statistics review 5 resulted in a corresponding P value of 0.02, which appears at a first glance to contradict the results of the sign test. It is not necessarily surprising that two tests on the same data produce different results. The apparent discrepancy may be a result of the different assumptions required; in particular, the paired t-test requires that the differences be Normally distributed, whereas the sign test only requires that they are independent of one another. Alternatively, the discrepancy may be a result of the difference in power provided by the two tests. As a rule, nonparametric methods, particularly when used in small samples, have rather less power (i.e. less chance of detecting a true effect where one exists) than their parametric equivalents, and this is particularly true of the sign test (see Siegel and Castellan [3] for further details).
The Wilcoxon signed rank test
The sign test is intuitive and extremely simple to perform. However, one immediately obvious disadvantage is that it simply allocates a sign to each observation, according to whether it lies above or below some hypothesized value, and does not take the magnitude of the observation into account.
Omitting information on the magnitude of the observations is rather inefficient and may reduce the statistical power of the test. An alternative that does account for the magnitude of the observations is the Wilcoxon signed rank test. The Wilcoxon signed rank test consists of five basic steps (Table 5).
To illustrate, consider the SvO2 example described above. The sign test simply calculated the number of differences above and below zero and compared this with the expected number. In the Wilcoxon signed rank test, the sizes of the differences are also accounted for. Table 6 shows the SvO2 at admission and 6 hours after admission for the 10 patients, along with the associated ranking and signs of the observations (allocated according to whether the difference is above or below the hypothesized value of zero). Note that if patient 3 had a difference in admission and 6-hour SvO2 of 5.5% rather than 5.8%, then that patient and patient 10 would have been given an equal, average rank of 4.5.
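The ranking-and-signing step can be sketched as follows; the implementation and the sample differences are illustrative, not taken from the article:

```python
def signed_rank_statistic(diffs):
    """Wilcoxon signed rank sums for a list of paired differences.

    Zero differences are dropped; tied absolute values share their
    average rank. Returns (R_plus, R_minus, R) where R is the smaller
    of the two rank sums.
    """
    d = [x for x in diffs if x != 0]
    ordered = sorted(d, key=abs)
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        avg = (i + 1 + j) / 2  # mean of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks.setdefault(abs(ordered[k]), avg)
        i = j
    r_plus = sum(ranks[abs(x)] for x in d if x > 0)
    r_minus = sum(ranks[abs(x)] for x in d if x < 0)
    return r_plus, r_minus, min(r_plus, r_minus)

# Illustrative differences (not the article's data):
print(signed_rank_statistic([-2, 4, 6, -8, 10]))  # (10.0, 5.0, 5.0)
```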
Table 5 Steps required in performing the Wilcoxon signed rank test

The sums of the positive (R+) and the negative (R-) ranks are then calculated, and the test statistic R is the smaller of the two.
As with the sign test, a P value for a small sample size such as this can be obtained from tabulated values such as those shown in Table 7. The calculated value of R (i.e. 5) is less than or equal to the critical values for P = 0.10 and P = 0.05 but greater than that for P = 0.01, and so it can be concluded that P is between 0.01 and 0.05. In other words, there is some evidence to suggest that there is a difference between admission and 6-hour SvO2 beyond that expected by chance. Notice that this is consistent with the results from the paired t-test described in Statistics review 5. P values for larger sample sizes (greater than 20 or 30, say) can be calculated based on a Normal distribution for the test statistic (see Altman [4] for details). Again, the Wilcoxon signed rank test gives a P value only and provides no straightforward estimate of the magnitude of any effect.
The Wilcoxon rank sum or Mann-Whitney test
The sign test and Wilcoxon signed rank test are useful nonparametric alternatives to the one-sample and paired t-tests. A nonparametric alternative to the unpaired t-test is given by the Wilcoxon rank sum test, which is also known as the Mann-Whitney test. This is used when comparison is made between two independent groups. The approach is similar to that of the Wilcoxon signed rank test and consists of three steps (Table 8).
The data in Table 9 are taken from a pilot study that set out to examine whether protocolizing sedative administration reduced the total dose of propofol given. Patients were divided into groups on the basis of their duration of stay. The data presented here are taken from the group of patients who stayed for 3-5 days in the ICU. The total dose of propofol administered to each patient is ranked by increasing magnitude, regardless of whether the patient was in the protocolized or nonprotocolized group. Note that two patients had total doses of 21.6 g, and these are allocated an equal, average ranking of 7.5. There were a total of 11 nonprotocolized and nine protocolized patients, and the sum of the ranks of the smaller, protocolized group (S) is 84.5.
Again, a P value for a small sample such as this can be obtained from tabulated values. In this case the two individual sample sizes are used to identify the appropriate critical values, and these are expressed in terms of a range as shown in Table 10. The range in each case represents the sum of the ranks outside which the calculated statistic S must fall to reach that level of significance. In other words, for a P value below 0.05, S must either be less than or equal to 68 or greater than or equal to 121. In this case S = 84.5, and so P is greater than 0.05. In other words, this test provides no evidence to support the notion that the group who received protocolized sedation received lower total doses of propofol beyond that expected through chance. Again, for larger sample sizes (greater than 20 or 30) P values can be calculated using a Normal distribution for S [4].

Table 8 Steps required in performing the Wilcoxon rank sum (Mann-Whitney) test
1. Rank all observations in increasing order of magnitude, ignoring which group they come from. If two observations have the same magnitude, regardless of group, then they are given an average ranking
2. Add up the ranks in the smaller of the two groups (S). If the two groups are of equal size then either one can be chosen
3. Calculate an appropriate P value
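The ranking with averaged ranks for ties can be sketched as follows; the implementation and dose values are illustrative, not the study's data:

```python
def rank_sum(group_a, group_b):
    """Sum of ranks of the smaller group in the combined ranking,
    with tied values receiving their average rank (step 2 of the
    Wilcoxon rank sum test)."""
    combined = sorted(group_a + group_b)
    rank_of = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        rank_of[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    smaller = group_a if len(group_a) <= len(group_b) else group_b
    return sum(rank_of[x] for x in smaller)

# Illustrative doses; 21.6 appears in both groups, so both occurrences
# receive the average of the two ranks they span.
print(rank_sum([19.2, 21.6, 25.0], [18.1, 21.6]))  # 1 + 3.5 = 4.5
```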
Advantages and disadvantages of nonparametric methods
Inevitably there are advantages and disadvantages to nonparametric versus parametric methods, and the decision regarding which method is most appropriate depends very much on individual circumstances. As a general guide, the following (not exhaustive) guidelines are provided.
Advantages of nonparametric methods
Nonparametric methods require no or very limited assumptions to be made about the format of the data, and they may therefore be preferable when the assumptions required for parametric methods are not valid.
Nonparametric methods can be useful for dealing with unexpected, outlying observations that might be problematic with a parametric approach.
Nonparametric methods are intuitive and are simple to carry out by hand, for small samples at least.
Nonparametric methods are often useful in the analysis of ordered categorical data in which the assignment of scores to individual categories may be inappropriate. For example, nonparametric methods can be used to analyse alcohol consumption directly using the categories never, a few times per year, monthly, weekly, a few times per week, daily and a few times per day. In contrast, parametric methods require scores (i.e. 1-7) to be assigned to each category, with the implicit assumption that the effect of moving from one category to the next is fixed.
Disadvantages of nonparametric methods
Nonparametric methods may lack power as compared with more traditional approaches [3]. This is a particular concern if the sample size is small or if the assumptions for the corresponding parametric method (e.g. Normality of the data) hold.
Nonparametric methods are geared toward hypothesis testing rather than estimation of effects. It is often possible to obtain nonparametric estimates and associated confidence intervals, but this is not generally straightforward.
Tied values can be problematic when these are common, and adjustments to the test statistic may be necessary.
Appropriate computer software for nonparametric methods can be limited, although the situation is improving. In addition, how a software package deals with tied values or how it obtains appropriate P values may not always be obvious.
This article is the sixth in an ongoing, educational review series on medical statistics in critical care. Previous articles have covered 'presenting and summarizing data', 'samples and populations', 'hypotheses testing and P values', 'sample size calculations' and 'comparison of means'. Future topics to be covered include simple regression, comparison of proportions and analysis of survival data, to name but a few. If there is a medical statistics topic you would like explained, contact us on editorial@ccforum.com.
Optimization of back-end WEB server
The application's bottleneck is the database architecture. Applications developed more than ten years ago often suffer from poor database performance: company databases have grown substantially over that time and have become slow. This article describes how introducing master-master replication can optimize back-end operation to some extent without resorting to major changes. It allows requests to be distributed evenly across several (in this case, two) servers, regardless of the requested method.
Introduction
The most extensive optimization possibilities for the application come from SSI (Server Side Includes). SSI blocks are simply HTML comments containing instructions on how the server should handle them. This technology allows an HTML template with SSI blocks placed in it to be served by nginx, with the contents of those blocks fetched by nginx from other sources.
Creation of processor stacks
If, at the same time, we remove from the common content the few elements that are unique to each user (for example, the counters in task lists indicating the number of new messages), we can share a cache of lists and tasks between users. Tasks then become convenient to cache, provided the cache is flushed when their content changes.
If we add cookies that store the user's role in the system and information about access to projects, it becomes possible to use a cache of lists shared between certain user groups. For example, a group of managers sees the same lists, but the pages containing them also show information unique to each user.
Let us consider, stage by stage, what must be done to make this possible:

1. Prepare a common template that is the same for all users. It can be cached freely, or can even be a static HTML file that nginx serves from the directory with the application files.
2. Prepare the index.php script (the main and only entry point into the PMC application) so that we can request separate blocks of pages that do not contain unique content, rather than the entire page as now. These can be messages inside tasks, or lists of tasks selected and sorted according to certain criteria. We will also need the ability to separately request user-unique content.
3. Prepare a script that will receive the unique content and fill in the necessary places with it, via a separate request after the page loads. This is various metadata reflecting the number of new messages, unread messages, and specially marked messages from contractors. There is no problem filling it in after the list loads, by creating empty elements with corresponding identifiers in the HTML list template.
4. Configure nginx to work with SSI blocks, using the `ssi on;` directive in the server context and `ssi_types` if MIME types other than text/html will be used in SSI blocks. To include dynamic content in an SSI block, use the `<!--#include virtual="page_uri" -->` construct; to include a static file, use `<!--#include file="file_path" -->`.

The description alone may not make clear how this works; it is much better to illustrate the operation of this mechanism through an example.
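A minimal nginx server block wiring this up might look as follows. This is an illustrative sketch, not the article's Appendix 8: the root path and the upstream name are assumptions.

```nginx
server {
    listen 80;
    root /var/www/app;     # static index.html template with SSI comments
    ssi on;                # enable SSI processing of responses
    ssi_types text/html;   # the default; extend if other MIME types carry SSI

    # Dynamic blocks referenced by <!--#include virtual="..." --> in the
    # template are fetched from the PHP back end.
    location ~ \.php$ {
        proxy_pass http://apache_backend;   # assumed upstream name
    }
}
```

Inside index.html, a block would then be pulled in with something like `<!--#include virtual="/render.php?show=table" -->`.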
In order to apply the technologies described in this and subsequent sections, a lengthy and costly modification of the application would be required. To demonstrate the technology's operation, we can use a test bench instead.
Creation of test application
A separate stack for the application will be created using Docker Compose. Nginx will render one page: a simple HTML template created on the basis of one of the task lists of a real application. Instead of the main content of the page, an SSI block will be included in it. The contents of the block will be generated by a PHP script, for which a separate container with the Apache web server and PHP will be started. It will receive two elements from the MySQL database: task 1 and task 2. Each element will consist of two columns, name and time. PHP will enter the current date and time in the 'time' column, making it possible to identify whether a block has been taken from the cache or freshly received from the back end. Clicking on a task's name opens the task and shows a list of several messages, also received from the database. Messages can be added by sending a POST request to the page with an open task; the message is then written to the app database. The structure of its tables is not important here and will not be described [4].
Please note that in the figure there are two DBMS servers. Master-master replication is configured between them. The application could use a common alias for the two servers when resolving the SQL server name, but for simplicity mysql1 is used at this stage. We can click each of the headings in the name column and reach a page with messages and a form for sending them. Accordingly, we can write a message on that page by entering it in the text box and clicking the Enter button.
The nginx configuration file used in this example is shown in Appendix 8. The setting allows caching the contents of the table with tasks for a short time, but it prevents the contents of the tasks from getting to the cache.
The static index.html file includes several ssi blocks, the choice of which depends on the show argument. The full text of the file is given in Appendix 6.
Depending on the show argument, the render.php script is passed the arguments that form the page. In order to cache the index.html template, we need to make small changes to the configuration file: add a map directive that defines the $bypass variable, and enable the cache for location / (the configuration file in the application already contains these changes).
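A typical shape for these directives is sketched below. This is an assumption for illustration (the article's exact configuration is in its appendices); `proxy_cache_use_stale updating` matches the behaviour where an outdated list page is served from the cache while a background update runs:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

# Bypass (and do not store) the cache for POST requests.
map $request_method $bypass {
    default 0;
    POST    1;
}

server {
    location / {
        proxy_cache            app_cache;
        proxy_cache_valid      200 1m;      # short lifetime for the list page
        proxy_cache_bypass     $bypass;
        proxy_no_cache         $bypass;
        proxy_cache_use_stale  updating;    # serve the stale copy while
        proxy_cache_background_update on;   # refreshing in the background
        proxy_pass             http://apache_backend;  # assumed upstream
    }
}
```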
This example shows that:

1. To unload the Apache back-end server, caching of one of the types of dynamic pages is used. In the case considered, this significantly speeds up the application and removes part of the load from the back-end server for the main part of the work.
2. The back-end server no longer needs to generate the main page template each time. This takes an insignificant amount of time for PHP, but given that it happened on absolutely every access to any page, the gain can be significant.
3. Once the HTML template is cached, there is no need to fetch it from the back-end server at all. This is quite significant, since that operation was also performed every time a user requested a page.
For reference, the application is available at http://ssi.pictcut.com:8481
Performance analysis after the changes
Although no one really uses the test application, during data collection in this and the following sections we will use the following pattern describing a user's movement around the site. At the start, the server cache is completely cleared:

1) http://ssi.pictcut.com:8481/index.html?arg=table - loading the task list page from the back end, as there is no cache yet;
2) http://ssi.pictcut.com:8481/index.html?show=messages&num=1 - transition to the first task. It is not cached on nginx, but there is also a DB cache in which request results are cached. With such simple content this is unlikely to change the return time significantly, but it is still worth a look;
3) http://ssi.pictcut.com:8481/index.html?show=messages&num=1 - POST request, adding a message. After adding a message, the database cache is invalidated and the subsequent selection is fetched from the database again. Everything that can be cached on the page will go to the cache, since after executing the request the application redirects the user to the same page;
4) http://ssi.pictcut.com:8481/index.html?arg=table - an outdated page is returned from the cache and a background update is performed;
5) http://ssi.pictcut.com:8481/index.html?show=messages&num=2 - opening the page with task 2;
6) http://ssi.pictcut.com:8481/index.html?show=messages&num=2 - adding a message to task 2;
7) http://ssi.pictcut.com:8481/index.html?arg=table - return to the main page.

The last three steps are performed to collect additional data. To illustrate the work of the cache, `sleep(1);` is added at lines 14 and 30 of the render.php script. It slows the script down by a second; otherwise the pages load extremely fast due to their simplicity.
The access log confirms this pattern. Accordingly, in this case, for seven page visits, caching saved about 2 seconds. In the real application considered in this work, the list is the most frequently used page.
Failure stability
High availability of systems and their resistance to failures of individual nodes is an extremely important quality of web applications [2,3]. Load balancers allow distributing client connections between multiple back-end servers and monitoring their status. This section analyzes the failure stability of the Apache-hosted application behind the nginx balancer.
Testing will be carried out with a "warmed" cache, that is, one already filled with cached pages (in fact, only one page is cached for this section). In this and the following sections on stability testing, the procedure consists of the following actions:

1. Turn off the back-end container (for this example, the container with Apache; for subsequent examples, the one with PHP-FPM). Then try to load the main page, and then try to go to the task page.
2. If a page with messages and a form is displayed, try to leave a message.
3. If the page loads after trying to leave a message, try to return to the main page with the list.
Thus, for this example, we need to move to the stack folder where the docker-compose.yml file is located and run the `sudo docker-compose pause apache` command.
After that, nginx loaded the main page with the list from the cache, but when we tried to access the task page after waiting, we received a 504 Gateway Timeout response, meaning the timeout to the back-end server was exceeded, and a corresponding entry appeared in the error log. When we tried to access the list page again, a 504 error was also received, since more than a minute had passed and the page had been deleted from the cache.
Conclusion
We can conclude that with this architecture, nginx does practically nothing to improve the availability of the application in the case of a back-end failure. The only way to increase it is to deploy additional containers for the Apache layer.
Large size crystalline vs. co-sintered ceramic Yb3+:YAG disk performance in diode pumped amplifiers
A comprehensive experimental benchmarking of Yb3+:YAG crystalline and co-sintered ceramic disks of similar thickness and doping level is presented in the context of high average power laser amplifier operation. The comparison considers quantitative measurements and analysis of gain, depolarization and wave front deformation.
Introduction
Progress in recent years in ceramic laser gain media [1] triggered a particular interest within the Diode Pumped Solid State Laser (DPSSL) community for new gain media with improved performances and/or characteristics. Not only can ceramics be manufactured in potentially any shape and large dimensions, but also new structures are accessible due to the versatility of ceramics, like rooftops, composite rods and co-doped absorber claddings [2,3].
Co-sintered ceramics exhibit regions with a different doping ion or different doping concentration. This is especially beneficial due to the significant reduction of Amplified Spontaneous Emission (ASE) and/or deleterious thermal effects. This approach is typically found in anti-ASE caps or covers with a higher thermal conductivity in order to transport the heat into a heat sink more efficiently. Another interesting property of laser ceramics is their potentially better fracture toughness, as cracks tend to stop at grain boundaries, while the thermal conductivity is not much different from their crystalline counterpart when produced through specific manufacturing processes [4,5].
Most of the laser quality ceramics are restricted to the cubic lattice system. Prominent cases are especially rare-earth doped Yttrium Aluminum Garnet (YAG) ceramics. Recent results have successfully demonstrated the production of non-cubic ceramics, like Fluoro APatite (FAP), by applying strong magnetic fields during sintering [6]. Materials difficult to grow as a single crystal, e.g. sesquioxides due to their high melting temperature, are interesting candidates for ceramic laser gain media as well.
For high energy lasers, ceramics are potentially attractive candidates due to their large size, as laser damage is one of the restricting factors when scaling laser gain media. Typical laser damage thresholds for nanosecond pulses are found to be in the order of 3 J/cm2 in the near infra-red to ensure reliable laser operation. Consequently, large laser gain media with clear apertures of more than 5 cm diameter are needed to sustain pulsed laser energies of more than 100 J. The Lucia laser system, currently under development at the Laboratoire pour l'Utilisation des Lasers Intenses (LULI) at the Ecole Polytechnique, France, aims for pulse energies significantly above 10 J in the ns-pulse regime. Lucia is an Yb3+:YAG based Master-Oscillator Power-Amplifier (MOPA) DPSSL described in [7]. Amplifying stages rely on the active mirror principle, where the gain medium disk is cooled through an HR coated surface, while pump and extraction take place through the opposite AR coated surface.
The influence of ASE on the gain distribution within such large size, high gain Yb3+:YAG slabs has already been described [8]. Thermally induced wave front distortions generated by the Lucia main amplifier disk were detailed as well [9]. After describing the experimental samples (section 2) and giving comparative gain measurements (section 3), this paper focuses on the experimental cross evaluation of wave front distortion (section 4) and thermally induced depolarization loss (section 5) introduced by crystal and ceramic Yb3+:YAG disks when submitted to the significant heat load experienced in the high average-power active-mirror amplifier of the Lucia DPSSL.
Experimental platform
Experimental cross evaluations of wave front distortions and depolarization were performed on the Lucia room-temperature water cooled main amplifier head [7] with two 7 mm thick, 2 at.% Yb3+ doped YAG disks, as shown in Fig. 1. The first one is a 60 mm diameter single crystal grown using Horizontal Direct Crystallization [10] by Laserayin, CSC, Yerevan, Armenia. The second disk is a co-sintered ceramic where the Yb3+ doped 35 mm diameter central part is surrounded by a 5 mm wide, 0.25 at.% Cr4+ doped periphery. It was manufactured by Konoshima Ltd., Japan.
For operation near room temperature, index matching liquids are a solution of choice to reduce the reflectivity at lateral surfaces. For this work, water was circulating in contact with the cylindrical surface of the disk. Liquids like α-bromonaphtalene (n = 1.66) or diiodomethane (n = 1.74) [11,12] offer a better index match to YAG (n = 1.82) compared to water (n = 1.33). But neither of these fluids revealed any compelling benefit over water in our experimental case, as the main contribution to parasitic ray feedback into the gain medium was due to the disk's mount.
A low reflectivity treatment, black Nickel [13], was applied to the inner part of the disk mount facing the YAG lateral edges. The resulting low reflectivity helped to significantly reduce feedback into the laser gain medium.
The foreseen low-temperature (<200 K) operation of the 2nd Lucia power amplifier prohibits the use of liquids for temperature reasons. Cr4+-doped YAG, however, can easily absorb the ASE emission at 1030 nm, as depicted in Fig. 1. We measured an absorption of 5.5 cm⁻¹ for a doping level of 0.25 at.%. Processing a disk-shaped crystal of more than 10 cm³ with such a ring structure would be technically challenging. Fortunately, ceramic co-sintering is a manufacturing process offering the possibility to obtain such a composite structure, as depicted in Fig. 1.
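As a quick illustration (our back-of-the-envelope estimate, not a calculation from the paper), a Beer-Lambert estimate with the quoted 5.5 cm⁻¹ absorption shows that a single pass through the 5 mm wide Cr4+-doped ring already removes more than 90% of the incident 1030 nm light:

```python
import math

# Measured absorption coefficient of the 0.25 at.% Cr4+:YAG cladding at 1030 nm
alpha_cr = 5.5        # cm^-1 (value quoted in the text)
ring_width_cm = 0.5   # 5 mm wide Cr4+-doped periphery

# Beer-Lambert single-pass transmission through the ring
transmission = math.exp(-alpha_cr * ring_width_cm)
absorbed = 1.0 - transmission

print(f"single-pass transmission: {transmission:.1%}")  # ~6.4%
print(f"fraction absorbed:        {absorbed:.1%}")      # ~93.6%
```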
Small signal gain performance
Due to the temperature dependency of the laser gain medium cross sections [14,15], only a Small Signal Gain (SSG) measurement under single-shot conditions is a viable way to determine the impact of the gain medium structure on ASE feedback. Although the reflectivity of the outer gain medium mount was strongly reduced by applying a black nickel coating and avoiding any surfaces parallel to the gain medium, reflections from the YAG/water surface still exist. Neglecting residual reflections from the mount, about 2% reflectivity from the crystal/water interface remains. In the case of the composite ceramic, all feedback can be neglected thanks to the Cr4+ absorption. The test setup is sketched in Fig. 2. The collimator exit port of a 15 mW fiber-coupled cw laser at 1029.8 nm is imaged onto the backside of the HR-coated gain medium and then imaged onto a photodiode (Thorlabs DET36). The beam diameter is about 1 mm. Fig. 2. The small signal gain (SSG) measurement setup. A cw laser source is imaged onto the backside of the laser gain medium and is again imaged onto a photodiode. The SSG is calculated from the oscilloscope traces.
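The SSG extraction from the oscilloscope traces is not detailed in the text; a minimal sketch of the presumable procedure, the ratio of the transmitted probe signal with and without pumping after background subtraction, could look as follows (the function name and voltage values are illustrative assumptions, not data from the paper):

```python
def small_signal_gain(v_pumped, v_unpumped, v_background=0.0):
    """Double-pass SSG from photodiode voltages: ratio of the transmitted
    probe signal with and without pumping, after background subtraction.
    Hypothetical helper; the paper only states the gain is derived from
    the oscilloscope traces."""
    return (v_pumped - v_background) / (v_unpumped - v_background)

# Illustrative voltages only (not measured values from the paper)
print(small_signal_gain(v_pumped=3.30, v_unpumped=1.10, v_background=0.10))  # 3.2
```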
As pump source, the Lucia main amplifier diode array was used [16]. 41 laser diode stacks, each emitting up to 3 kW peak power in 1 ms long pulses at 940 nm, are guided onto a 7 cm² aperture. This aperture consists of a 30 mm diameter circle limited in height to 24 mm by guiding mirrors [9]. We reach up to 16 kW/cm² on average over the whole surface. Due to the light concentration system and modulations from the individual laser diode stacks, the intensity in the center is about 15% higher, reaching about 18.7 kW/cm² when driven at the highest diode current of 150 A.
Figure 3 (left) shows the resulting SSG at 16 kW/cm² for the crystal, the composite ceramic, a simulation taking ASE into account and a simulation without ASE. The simulations were performed using the HASEonGPU code [17], which includes reflections on the front and back surfaces as well as the spectra of the emission and absorption cross sections. Peripheral surfaces are treated as perfectly absorbing in this code.
Starting at ~600 µs pump duration, ASE has to be taken into account for the samples under test. From this point, the crystal and ceramic SSG curves start diverging due to feedback from the periphery. For a 30 mm diameter pumped area and a thickness of 7 mm, Total Internal Reflection (TIR) guided rays undergo ~7.4 reflections on average while passing through the pumped area. With an Angle Of Incidence (AOI) close to 33°, the path length between each reflection is ~20% longer than the thickness of the gain medium. Note that in the crystalline case an additional ~30 mm of non-pumped Yb3+:YAG has to be crossed for each round trip between the peripheral surfaces, leading to a transmission of about 33% for 2 at.% doped YAG at room temperature. A peripheral reflectivity of about 2% to 3% then corresponds to measured SSG values in the range of 3 to 3.4. Although this is a very simplified approximation, it is in very good agreement with the experimentally found starting point of the major differences between the SSG of the composite ceramic and that of the crystal.
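Two of the geometric figures quoted above can be checked in a few lines: the ~20% path lengthening follows directly from 1/cos(33°), and the quoted ~33% transmission over ~30 mm of unpumped material implies a reabsorption coefficient near 0.37 cm⁻¹. Both are rough sanity checks of our own, not results from the paper:

```python
import math

aoi = math.radians(33.0)   # internal angle of incidence of TIR-guided rays

# Path length per bounce relative to the disk thickness: ~20% longer, as stated
lengthening = 1.0 / math.cos(aoi)
print(f"path per bounce / thickness: {lengthening:.2f}")   # ~1.19

# Reabsorption coefficient of unpumped 2 at.% Yb:YAG at 1030 nm implied by the
# quoted ~33% transmission over the ~30 mm (3 cm) of unpumped material
alpha_yb = -math.log(0.33) / 3.0
print(f"implied absorption: {alpha_yb:.2f} cm^-1")   # ~0.37
```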
As a comparison of ceramic and crystal SSG performance, Fig. 3 (right) shows the SSG values of the crystal versus those of the ceramic under similar pump intensities at 940 nm over 1 ms duration. SSG values significantly larger than 3 show increased losses in the crystalline case, finally resulting in a 20% difference at the highest available pump intensities. The diverging behavior occurs at pump intensities close to 12 kW/cm².
Thermally induced focal lengths
The active mirror architecture leads to a parabolic temperature distribution along the thickness of the homogeneously doped YAG disk, since the HR-coated side is in contact with a coolant while the other side is not actively cooled. The associated heat exchange coefficient h_air is several orders of magnitude smaller than the h_coolant coefficient, measured in our setup to be 15 kW/m²/K. The resulting temperature difference between the front and back sides leads to the bending of the disk, as illustrated in Fig. 4. The active mirror can therefore be seen as a convex mirror, where an incoming beam with a flat wave front exits the amplifier as a divergent beam. Let us call f_mech the associated focal length. However, the thermally induced mechanical bending is not the only phenomenon encountered. For a disk pumped over its whole surface, no radial temperature gradients are expected. However, only a limited portion of the front surface is under pump action, and consequently a radially varying temperature distribution appears [10]. In the case of the composite structure of the ceramic gain medium, additional absorption of ASE takes place in the chromium-doped external ring. We therefore expect a completely different temperature distribution compared to the crystalline gain medium.
Since the thermo-optic coefficient dn/dT is positive for YAG [18], a positive thermal lens f_therm partially compensates for the convex active mirror in the case of the crystal [9]. The global focal length f_Tot satisfies Eq. (1), 1/f_Tot = 1/f_mech + 2/f_therm, where f_mech contributes half as much as f_therm since the thermal lens is encountered before and after reflection on the convex HR surface.
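Reading Eq. (1) as described, with the thermal lens counted twice (traversed before and after reflection) and the mechanical bending counted once, the combination can be sketched as follows. The numerical values are purely illustrative, not measurements from the paper:

```python
def combined_focal_length(f_mech, f_therm):
    """Combine dioptric powers: 1/f_tot = 1/f_mech + 2/f_therm.
    The thermal lens is traversed before and after reflection on the
    HR surface, so it counts twice (Eq. (1) as described in the text)."""
    return 1.0 / (1.0 / f_mech + 2.0 / f_therm)

# Illustrative values only: a divergent mechanical bending (f_mech < 0)
# partially compensated by a positive thermal lens
f_tot = combined_focal_length(f_mech=-40.0, f_therm=150.0)
print(f"{f_tot:.1f} m")   # about -85.7 m: still divergent, but weaker than f_mech alone
```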
In the case of the ceramic, f_therm is almost absent while f_mech is strongly increased due to the higher thermal load. Figure 5 confirms that the chromium periphery creates a hot ring due to its role in absorbing the energy of parasitic transverse rays. This phenomenon does not take place in the crystal case, since these rays are refracted into the peripheral circulating water and absorbed by the 2% reflecting mount (itself cooled with water). Taking into account that, in any case, all the light trapped by TIR will ultimately be absorbed in the Cr4+ cladding, one can estimate that an important amount of the pump power will be transformed into an additional heat source in the cladding. Calculations with our ASE code show about 40% of the heat load in the cladding.
Relying on the experimental setup described in [9], wave front measurements were performed with both disks over a large range of incident pump powers. The thermally induced defocus components were derived from the wave front maps using Zernike polynomials. The corresponding calculated focal lengths are illustrated in Fig. 6. These experimental data confirm the compensation of f_therm and f_mech in the case of the crystal, where the global focal length never exceeds 100 m even in the half-kilowatt average power regime. We observe that f_Tot is degraded by the increased thermal load for the ceramic: it differs by more than one order of magnitude from the self-compensated crystal case. Fig. 5. 3D thermo-mechanical model of both the crystal (left) and the co-sintered ceramic (right) 7 mm thick YAG disks. Water cooling with a 15 kW/m²/K heat exchange coefficient takes place from the bottom (observe the cold blue surface) while pumping occurs on the top surface. The color temperature distribution clearly reveals the opposite radial temperature gradients of the two cases. The heat source corresponds to 16 kW/cm² pumping at 2 Hz, 1 ms. Fig. 6. Global thermally induced negative focal length f_Tot versus average pump intensity for the crystal (diamonds) and the ceramic (triangles). The dashed line corresponds to the theoretical focal length [9]. The relative error bars for the ceramic case are of the same order as for the crystal; they were omitted for visibility reasons.
Depolarization measurements
In order to fully study the thermally induced consequences on the extraction beam quality, not only its wave front perturbations need to be evaluated; the way its polarization quality might be affected is also a key aspect. This has a considerable impact on the Lucia stage-to-stage amplifier isolation with polarization-dependent devices like Pockels cells or Faraday rotators. The efficiency of the foreseen frequency conversion will also strongly depend on the polarization quality of the amplified pulses.
Depolarization losses
Although cubic materials like Yb3+:YAG do not show any directional preference of the refractive index in the ideal, i.e. stress-free, case, they become anisotropic when stress is applied [19,20]. In a laser gain medium, the average pump power generates an average heat source leading to thermal expansion, and therefore stress. Such stress can add up and lead to depolarization losses so high that laser operation will generate beams with a significant portion of their energy in unintended states of polarization. On top of this, intrinsic stress within the material due to the manufacturing process will also contribute to depolarization losses, even for an un-pumped gain medium. For crystalline gain media, the orientation of the unit cell is therefore important, as shown in Fig. 7. For cubic crystalline laser gain media, the [111] orientation is the most interesting, as a rotation around its axis has no influence on depolarization losses as long as the angle of incidence is 0°. For angles significantly larger than 0°, a 120° symmetry is found, as shown in Fig. 7. Other crystal axis orientations will result in a different pattern when rotating around this particular axis. For cubic YAG, an orientation along [100] can potentially minimize depolarization losses [21,22], but this effect is very sensitive to misalignment. Other orientations may not reach rotation-dependent losses lower than the [111] orientation and are therefore somewhat poor choices (e.g. [110] in our setup). For ceramic media the situation is different: due to the random orientation of each grain of the ceramic, no orientation dependence is expected, except perhaps for stresses induced by coating, mounting, and other non-symmetric influences. The depolarization pattern is independent of crystal rotation and is of the same order as for a crystal in the [111] configuration under 0° [23].
As already mentioned in the previous section, ASE simulations of our co-sintered ceramics showed that we can expect about 40% of the total pump power to ultimately end up in the ceramic cladding. This additional heat source significantly increases the deformation and stress compared to the crystalline case, where ASE rays are basically allowed to freely exit the gain medium and be dumped in the specially designed peripheral cavity filled with water.
Experimental setup
A dedicated experimental bench was set up as shown in Fig. 8: a 50 mW cw laser source at 1064 nm was polarized and transformed into an almost flat-top beam using a beam expander and a serrated aperture of 22 mm outer diameter. This plane was image-relayed onto the HR-coated backside of the gain medium using a telescope with a slight magnification, resulting in a beam diameter of 24 mm. The beam was then imaged onto a CCD camera while passing through a rotating analyzer as well as a set of optical densities (OD) and two 1064 nm bandpass filters (Thorlabs FL1064-10). By carefully calibrating 2"x2" optical densities using a Varian Cary 500 UV-VIS spectrometer, an extinction ratio of 2×10⁻⁵ averaged over the whole beam was obtained when observed between crossed polarizers (Altechna, Glan-Laser type) while replacing the gain medium with a silver mirror. The gain medium was then carefully inserted into its holder and aligned into the test beam line. The gain medium mount could be rotated over 360°. The extraction beam angle was 24.3°, corresponding to a 13° propagation angle within the gain medium. The overall detection limit of this setup is 0.02% average depolarization loss.
With a test wavelength of 1064 nm sufficiently close to the design wavelength of the optical coatings, interference patterns were not visible, while the bandpass filters blocked the spontaneous emission around 1030 nm.
The Lucia main amplifier diode array was used as the pump source (see section 3). Guiding this pump light [9] through a circular aperture of 30 mm diameter led to about 110 J available for the gain medium. Considering the beam overlap, including the reflection on the HR-coated side (see top right insert of Fig. 8), a circular beam diameter of 24 mm left sufficient space for alignment while filling most of the pumped aperture and avoiding clipping at the gain medium mount. In order to adjust the average power on the target, we relied on the driving current of the laser diodes and/or the repetition rate of the laser diode drivers. The pump laser wavelength shift in the different average power regimes is controlled by applying a constant bias current of up to 1 A on the laser diode stacks, which is far below the lasing threshold of about 15 A [16]. Using this method we ensured that laser diode emission wavelength drifts did not affect the average absorbed pump power.
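As a consistency check of our own (not one made in the paper), 110 J delivered in a 1 ms pulse over a 30 mm circular aperture corresponds to a pump intensity close to the ~16 kW/cm² quoted in section 3:

```python
import math

energy_j = 110.0       # pump energy available at the gain medium (quoted in the text)
pulse_s = 1e-3         # 1 ms pump pulse
diameter_cm = 3.0      # 30 mm circular pump aperture

area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
intensity_kw_cm2 = energy_j / pulse_s / area_cm2 / 1e3
print(f"{intensity_kw_cm2:.1f} kW/cm^2")   # ~15.6, consistent with the quoted ~16 kW/cm^2
```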
Experimental results
Both the crystal and the ceramic were rotated around the axis perpendicular to the front surface. For an average intensity of 40 W/cm² in single pass, a strong modulation appears on the crystal angular depolarization loss map (see red triangles, right map of Fig. 9). The four-leaf symmetry observed is unexpected for a [111] oriented crystal. It was later discovered that this specific crystal was actually incorrectly oriented during the post-manufacturing process, its orientation being about 10° away from [110].
As expected, rotating the ceramic revealed almost no impact on the depolarization loss. The left graph of Fig. 9 illustrates the increasing single-pass losses (% in log scale) experimentally recorded when ramping up the average pump brightness at the gain medium level. Below 20 W/cm², only intrinsic birefringence combined with potential mounting-stress-induced birefringence can be observed. When increasing the brightness, the depolarization loss becomes proportional to I_avg² before ultimately reaching saturation. This quadratic behavior is indicated by the dashed lines in Fig. 9 (left). At 40 W/cm², the 1% loss value is already exceeded for the co-sintered ceramic, while it stays 5 times lower for the crystal when oriented at one of the four optimum angles. When doubling the pump average power to reach 80 W/cm², losses greater than 5% over the full beam aperture are recorded.
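The quadratic regime described above can be sketched as a simple scaling law anchored to the ~1% ceramic single-pass loss at 40 W/cm². This toy model is our illustration only; it ignores the saturation mentioned in the text, which is one reason its prediction at 80 W/cm² differs from the measured >5%:

```python
def depolarization_loss_pct(i_avg, loss_ref_pct=1.0, i_ref=40.0):
    """Pure quadratic scaling, loss ~ I_avg^2, valid only below saturation.
    Anchored to ~1% loss at 40 W/cm^2 (ceramic value quoted in the text)."""
    return loss_ref_pct * (i_avg / i_ref) ** 2

print(depolarization_loss_pct(80.0))   # 4.0 (%): the simple quadratic law
# underestimates the measured >5%, i.e. the real behavior deviates from it
```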
Yb3+:YAG cross sections are not sensitive to polarization. But the angular multiplexing extraction architecture in place on a laser chain like Lucia generates differential losses on the s and p polarizations after each reflection on optics oriented at non-normal incidence. This is obviously the case for the YAG disk (24°) but also for several mirrors (45° or 30°). Although special care is given to the optimization of the dielectric layers to minimize reflection losses, they are still not zero. Some applications of the amplified pulse train require frequency conversion. The non-linear processes involved are generally characterized by an efficiency strongly related to the polarization quality of the IR beam to be converted. Consequently, controlling the state of polarization of the amplified pulses appears mandatory. Fig. 9. Birefringence loss for the ceramic (blue circles) and the crystal (red triangles). For both diagrams the vertical scales give losses in % with respect to the incident beam energy. The evolution with the average pump intensity hitting the disks is depicted in the left logarithmic diagram, while the right angular map is given for a 40 W/cm² value. The dashed lines in the left graph indicate a quadratic behavior of the depolarization loss with increasing average intensity.
Conclusions
A comprehensive experimental benchmarking of Yb3+:YAG disks of similar thickness and doping level was performed with the Lucia MOPA DPSSL active mirror water-cooled amplifier head. The differences between the two samples under study were the nature of the host YAG matrix (ceramic vs crystal) as well as the radial dimension and structure of the disk.
Whereas the 60 mm crystalline disk was supplied specifically for an operating environment (300 K) where mitigation of parasitic ASE rays can be at least partially handled with a peripheral circulation of water, the co-sintered ceramic was designed to be used at a much lower temperature (<200 K), where parasitic ray absorption takes place in the Cr4+-doped surrounding ring.
This study focuses on the two major thermally induced consequences for an amplified beam: its wave front deformation (quantified here with the defocus) and its depolarization. This work reveals a clear advantage in favor of the crystal, to which its monolithic structure largely contributes. Indeed, the accumulation of heat in the periphery of the composite ceramic puts it in a considerably more intense thermally induced stress situation than the crystalline YAG disk. This is quantified by an order of magnitude difference in terms of global thermal lens in our cases.
Moreover, due to its polycrystalline nature, the ceramic is revealed to be much more sensitive to thermo-mechanical stresses. This is observed when comparing the ability of both disks to maintain the polarization quality of the beam to be amplified.
Following these observations, and considering the fact that there is no practical alternative to co-sintered ceramics for the Lucia low-temperature-operated 2nd amplifier head, a new generation of composite ceramics has been ordered. The Cr4+ doping level has been divided by 10, to 0.025 at.%, allowing the parasitic ray energy to be deposited much deeper in the outer ring; consequently, the thickness of this cladding layer was increased. Such an approach should prevent the heat load from being concentrated at the Yb3+/Cr4+ interface and therefore somewhat relax the internal stresses. Also, a cooling approach minimizing radial gradients will be implemented [24]. On the other hand, the efficiency of the Cr4+ cladding for ASE mitigation was demonstrated through the gain measurements.
Fig. 3. Left: ASE performance / SSG of the crystal (red), the ceramic (orange), the case of no ASE (black), and a simulation with ASE (blue). The greyed part shows the pump duration (also indicated by the arrow). Right: SSG of the crystal compared to the SSG of the composite ceramic. The dash-dot line indicates SSG parity. The numbers indicate the average pump intensity in kW/cm².
Fig. 4. Lucia water-cooled power amplifier mount (left) hosting the 7 mm YAG disk, pumped from the top and cooled from the other side. Thermally induced differential surface expansion (top right) leads to the bending of the disk (bottom right).
Fig. 7. Calculated angular depolarization loss maps for an Yb3+:YAG crystal with three types of orientation. The left map is obtained for a double-pass (active mirror architecture) extraction beam propagating through the disk at normal incidence (AOI 0°), whereas a 13° AOI (corresponding to the Lucia experimental case) is depicted in the right map.
Fig. 8. The experimental setup for the measurement of depolarization loss. A polarized cw laser beam is imaged onto the backside of the HR-coated laser gain medium and recorded by a CCD camera setup. By rotating the analyzer relative to the polarizer, the depolarization loss can be measured. The gain medium can be rotated as well.
Healthcare Intrusion Detection using Hybrid Correlation-based Feature Selection-Bat Optimization Algorithm with Convolutional Neural Network
— Cloud computing is popular among users in various areas such as healthcare, banking, and education due to its low-cost services alongside increased reliability and efficiency. However, security is a significant problem in cloud-based systems because cloud services are accessed via the Internet by a variety of users. Therefore, patients' health information needs to be kept confidential, secure, and accurate; moreover, any change in actual patient data potentially results in errors during diagnosis and treatment. In this research, the hybrid Correlation-based Feature Selection-Bat Optimization Algorithm (HCFS-BOA) based on the Convolutional Neural Network (CNN) model is proposed for intrusion detection to secure the entire network in the healthcare system. Initially, the data is obtained from the CIC-IDS2017 and NSL-KDD datasets, after which min-max normalization is performed to normalize the acquired data. HCFS-BOA is employed for feature selection to examine the appropriate features that not only have significant correlations with the target variable, but also contribute to the optimal performance of intrusion detection in the healthcare system. Finally, CNN classification is performed to identify and classify intrusions accurately and effectively in the healthcare system. The existing methods, namely SafetyMed, the Hybrid Intrusion Detection System (HIDS), and the Blockchain-orchestrated Deep learning method for Secure Data Transmission in IoT-enabled healthcare systems (BDSDT), are used to evaluate the efficacy of the HCFS-BOA-based CNN. The proposed HCFS-BOA-based CNN achieves a better accuracy of 99.45% when compared with the existing methods SafetyMed, HIDS, and BDSDT.
I. INTRODUCTION
Network Intrusion Detection Systems (NIDSs) identify malicious activities and safeguard vulnerable services by monitoring network traffic and providing alerts when anomalous events are recognized. Organizations that hold private user data are prime targets for cyber-attackers, which has established the foundation for modern-day detection and protection. Furthermore, the healthcare sector keeps growing, and most hospitals are integrating e-healthcare systems as quickly as feasible to fulfill the needs of their patients. IDSs based on cloud networks employ anomaly-based techniques to protect cloud-based applications [1]. In network security, there are two common detection techniques for NIDSs: anomaly-based detection and signature-based detection [2]. An anomaly-based IDS analyzes the network traffic and correlates it to a created baseline for unknown or known attacks, whereas a signature-based IDS can be employed when the attack patterns are established and predetermined [3,4]. To address numerous security issues, the cloud utilizes numerous cybersecurity techniques like IDS, Intrusion Prevention Systems (IPS), and firewalls [5]. The centralized processing technique used by cloud computing involves uploading every transaction and processing end-user service requests based on the transmission bandwidth, storage capacity, and computing resources [6]. Proactive network security defenses are required to protect essential assets and data because the cloud attack vector has the potential to result in successful security breaches [7].
Network security has always placed a high priority on intrusion detection, since it is crucial for identifying anomalous activity on secured internal networks [8]. The networks of intermediate, source, and endpoint nodes are used to identify Distributed Denial-of-Service (DDoS) attacks; the attack's endpoint is easily detected because of the massive volume of network traffic that is generated [9]. A significant number of traditional intrusion detection systems use either a port-based or a Deep Packet Inspection (DPI) technique. The port-based technique identifies traffic by using the ports established by the Internet Assigned Numbers Authority (IANA) [10]. Software Defined Networking (SDN) is an emerging design that is cost-effective, flexible, adaptable, and controllable, making it more suitable for presently employed complicated applications and bandwidth demands [11,12]. SDN's goal is to create a logically centralized hub for internet and networking architects so that they can quickly respond to evolving client demands [13]. Deep learning techniques, especially CNNs, show a remarkable capacity for automatically extracting features and intricate patterns from complex data, including network traffic [14]. By employing a deep CNN, an IDS efficiently recognizes anomalous behavior and emerging threats in real time [15]. Since cloud services are accessed via the Internet by a variety of users, security is a significant concern in cloud-based systems: the health information of patients must be kept confidential, secure, and accurate, as any change in actual patient data results in errors during diagnosis and treatment [31][32][33][34]. Therefore, the HCFS-BOA based on CNN is proposed in this research for intrusion detection to secure the entire network in the healthcare system. The main contributions of this research are as follows: The proposed HCFS-BOA approach is evaluated on the CIC-IDS2017 and NSL-KDD benchmark datasets, and the Min-max normalization technique is employed to normalize the raw data.
For feature selection, HCFS-BOA is employed to examine the appropriate features that not only have significant correlations with the target variable, but also contribute to the optimal performance of intrusion detection in the healthcare system.
Finally, CNN is employed for classification to identify and classify intrusions accurately and effectively. The efficacy of HCFS-BOA is analyzed based on the performance measures of accuracy, precision, recall, and F1-score.
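As a rough illustration of the classification step above, the sketch below implements a toy 1-D CNN forward pass (one convolutional kernel, ReLU, global max pooling, and a sigmoid output) in plain Python. The architecture, weights, and feature values are invented for illustration and are not the network proposed in this paper:

```python
import math

def conv1d(x, kernel, bias=0.0):
    """Valid 1-D convolution of a feature vector with a small kernel."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(x) - k + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def tiny_cnn(features, kernels, dense_w, dense_b):
    """Each kernel slides over the normalized feature vector; ReLU plus
    global max pooling condenses each feature map to one activation, and a
    sigmoid dense layer scores the record (score > 0.5 => intrusion).
    Toy hand-picked weights, not trained values."""
    pooled = [max(relu(conv1d(features, k))) for k in kernels]
    logit = sum(w * p for w, p in zip(dense_w, pooled)) + dense_b
    return sigmoid(logit)

# Toy example: a kernel that fires on a sharp jump between adjacent features
record = [0.1, 0.1, 0.9, 0.9, 0.1]   # min-max normalized feature vector
score = tiny_cnn(record, kernels=[[-1.0, 1.0]], dense_w=[4.0], dense_b=-2.0)
print(score > 0.5)   # True: the jump pattern pushes the score above threshold
```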
The rest of the paper is organized as follows: Section II presents the literature survey. The block diagram of the proposed method is discussed in Section III. The results are illustrated in Section IV, while Section V concludes the paper.
II. LITERATURE SURVEY
Faruqui et al. [16] presented SafetyMed, an IDS for the Internet of Medical Things (IoMT) employing a hybrid CNN-Long Short-Term Memory (CNN-LSTM) network. SafetyMed was the first IDS to include an optimization approach based on the trade-off between the Detection Rate (DR) and the False Positive Rate (FPR), and it enhanced the safety and security of medical devices and patient information. However, SafetyMed had no defense mechanism against Adversarial Machine Learning (AML) attacks.
Vashishtha et al. [17] implemented a HIDM for cloud-based healthcare systems to detect all kinds of attacks. The hybrid approach was a mixture of a Signature-based Detection Model (SDM) and an Anomaly-based Detection Model (ADM). The NSL-KDD, CICIDS2017, and UNSW-NB15 datasets were employed to evaluate the efficacy of the HIDM approach, which achieved a higher detection rate, with Type-I and Type-II errors reported for both ADM and SDM. However, combining various detection systems increased the risk of false negatives and false positives.
Kumar et al. [18] introduced BDSDT for secure data transmission in IoT-based healthcare systems. Initially, a blockchain architecture was created in which all IoT devices were identified and authenticated using a zero-knowledge proof and then connected to the blockchain network using a smart-contract-based ePoW consensus. A deep bidirectional LSTM was then employed for intrusion detection in the healthcare system. BDSDT enhanced privacy and security by combining deep learning and blockchain methods. However, BDSDT was not effective against web and Bot attacks, as there were fewer instances of these two classes, which led to changes in actual patient data resulting in errors during diagnosis and treatment.
Halbouni et al. [19] presented a CNN-Long Short-Term Memory (CNN-LSTM) IDS. The ability of the CNN to extract spatial features alongside the ability of the LSTM to extract temporal features were the highlights of this model. To improve performance, batch normalization and dropout layers were added to the presented method, which decreased the false alarm rate and improved the detection rate. However, CNN-LSTM failed to provide a high detection rate for specific kinds of attacks like web attacks and worms, which led to changes in actual patient data resulting in errors during diagnosis and treatment.
Han et al. [20] presented an Intrusion Detection Hyperparameter Control System (IDHCS) that controlled and trained a Deep Neural Network (DNN) feature-extraction module and a k-means clustering module using Proximal Policy Optimization (PPO). The most valuable network features were extracted by the DNN under the control of the IDHCS, which also used k-means clustering to detect intrusions. The IDHCS performed effectively on each dataset as well as on the combined dataset. However, a more diverse dataset needs to be examined to represent a more realistic network environment. Bakro et al. [21] introduced a hybrid feature selection approach that combined filter techniques such as Particle Swarm Optimization (PSO), Chi-Square (CS), and Information Gain (IG). Combining these three techniques was a novel method that generated a more reliable feature selection process by using each technique's strength to increase the chances of selecting the most associated features. The introduced method had the benefits of flexibility, time complexity, interpretability, and scalability. However, the feature selection was not done properly, which resulted in overfitting.
Sudar et al. [22] implemented a Machine Learning (ML) approach based on a Decision Tree (DT) and a Support Vector Machine (SVM) to detect Distributed Denial of Service (DDoS) attacks. The classification approach was established in a Software Defined Network (SDN) environment, with the DT and SVM approaches deployed to distinguish between malicious and normal traffic data. This approach provided better accuracy and a better detection rate. Nonetheless, it struggled to adapt to evolving attack strategies.
Praveena et al. [23] developed a Deep Reinforcement Learning approach optimized by Black Widow Optimization (DRL-BWO) for intrusion detection in Unmanned Aerial Vehicles (UAVs). The BWO approach was deployed for parameter optimization of the DRL method, which helped enhance the intrusion detection performance in UAV networks. This approach was fit for information extraction tasks in high-dimensional spaces. Nonetheless, the intricate nature of the DRL-BWO approach resulted in reduced interpretability.
Chinnasamy et al. [24] presented a blockchain-based detection of DDoS flooding attacks with dynamic path detectors. An ML approach was established to identify attacks, focusing on the DDoS assault. The primary essential traits were employed to predict DDoS attacks accurately by utilizing a different attribute selection approach. Nevertheless, this approach led to severe network congestion, which hindered the processing of transactions and slowed down the overall system performance.
Chinnasamy et al. [25] developed an ML approach for effective phishing attack detection. Based on input features such as the Uniform Resource Locator (URL) and web traffic, a link was classified as phishing or non-phishing. The approach was evaluated on datasets of ML and phishing cases employing SVM, Random Forest (RF), and Genetic algorithms. Nevertheless, ML approaches in phishing detection struggled to keep pace with constantly evolving phishing tactics, which led to potential delays in identifying new attacks.
Anupriya et al. [26] implemented an ML approach for fraudulent account detection. To compute buddy similarity criteria, an adjacency matrix graph of the network was employed, and new features were then acquired by utilizing Principal Component Analysis (PCA). These features were used to balance the data and feed the classifier in the subsequent cross-validation phase for training and testing. Nevertheless, due to imbalanced datasets, the approach struggled with evolving fraud patterns and generated false positives and negatives.
The existing methods mentioned above have some limitations: they are not effective at detecting attacks that alter actual patient data, which results in errors during diagnosis and treatment. To overcome these issues, the HCFS-BOA-based CNN is proposed for intrusion detection to secure the entire network in the healthcare system.
III. PROPOSED METHODOLOGY
In this research, a hybrid CFS-BOA-based CNN approach is proposed for intrusion detection in healthcare systems using deep learning. It includes datasets, min-max normalization, feature selection using HCFS-BOA, classification using CNN, and performance evaluation. The overview of the proposed method is represented in Fig. 1.
A. Datasets
The proposed HCFS-BOA approach is evaluated on the CIC-IDS2017 [27] and NSL-KDD benchmark datasets. The CIC-IDS2017 dataset includes malicious and normal traffic data, is considered up to date, and does not include an enormous amount of redundant data. It includes eleven new attack types, namely PortScan, Brute Force, DoS, and web attacks such as SSH-Patator, FTP-Patator, SQL injection, and XSS. It was created by the Canadian Institute for Cybersecurity in 2017, and its 80 features are employed to monitor malicious and benign traffic. The NSL-KDD dataset is an extension of the KDD Cup 99 database and contains 41-dimensional vectors with numerical and categorical values. The intrusion attacks in the NSL-KDD database are Probe, Remote to Local (R2L), Denial of Service (DoS), and User to Root (U2R) attacks. NSL-KDD is an IoT dataset used for model training purposes in healthcare applications.
B. Pre-processing
After data collection, the normalizing process is established by rescaling the attributes so that each contributes uniformly. Typically, the data normalization technique addresses two key problems: the presence of outliers and the presence of dominant features. The various methods for normalizing data based on statistical measures are examined. Consider a dataset with N records, expressed numerically in Eq. (1):

D = {(x_i, y_i)}, i = 1, …, N (1)

where y_i indicates the class label and x_i represents the data to be learned via a learning process. The min-max normalization technique [28], one of the various normalization techniques, is employed to normalize the raw data. This approach greatly minimizes the impact of outliers on the data. It scales the obtained data within the range of 0 to 1, as numerically expressed in Eq. (2):

x' = (x − x_min) / (x_max − x_min) (2)

where x_max and x_min represent the attribute's maximum and minimum values. By employing x_max and x_min, the acquired data are rescaled between the upper and lower boundaries. The normalized data are then passed as input to feature selection.
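As a minimal sketch (not the authors' code), the column-wise min-max scaling of Eq. (2) can be written in Python; the function name `min_max_normalize` and the toy matrix are illustrative assumptions:

```python
import numpy as np

def min_max_normalize(X):
    """Rescale each column of X to the range [0, 1] (Eq. 2)."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # guard constant columns
    return (X - x_min) / span

X = np.array([[2.0, 100.0],
              [4.0, 200.0],
              [6.0, 400.0]])
X_norm = min_max_normalize(X)
```

Each column is scaled independently, so features measured on very different scales (e.g. packet counts versus durations) contribute comparably to the downstream feature selection.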
C. Feature Selection
After normalizing the acquired data, the hybrid CFS-BOA approach is implemented for feature selection. In CFS-BOA, features are selected using a nature-inspired optimization technique to enhance the optimization process. The goal of CFS-BOA is to choose the most useful feature subset for detecting and avoiding security vulnerabilities while minimizing redundancy and computational complexity. Compared to other optimization algorithms such as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO), the BOA tunes the optimization process for maximum efficiency when combined with CFS. The HCFS-BOA examines appropriate features that not only have significant correlations with the target variable but also contribute to optimal intrusion detection performance in the healthcare system. This hybrid method has the potential to yield a more efficient and effective IDS that is tailored to the unique characteristics of healthcare data and its security requirements.
Merit_S = (k · r̄_cf) / √(k + k(k − 1) · r̄_ff) (3)

where Merit_S is the heuristic evaluation of a feature subset S containing k features, r̄_cf is the average degree of correlation between the category label and the features, and r̄_ff is the average degree of inter-correlation between features. A correlation technique based on feature subsets is used for the CFS evaluation. During the procedure, the feature set with the greatest merit value is selected to decrease the training and testing set size. Among the subsets obtained by the approach, a larger r̄_cf or a smaller r̄_ff yields a greater evaluation value.
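The CFS merit heuristic can be sketched directly from its standard form (Hall's correlation-based merit); the function name and the example correlation values are assumptions for illustration:

```python
import math

def cfs_merit(k, r_cf, r_ff):
    """CFS heuristic merit: k features, mean feature-class correlation
    r_cf, mean feature-feature inter-correlation r_ff."""
    return (k * r_cf) / math.sqrt(k + k * (k - 1) * r_ff)

# a subset whose features correlate with the class but not each other
good = cfs_merit(k=5, r_cf=0.6, r_ff=0.1)
# a redundant subset: same class correlation, highly inter-correlated
redundant = cfs_merit(k=5, r_cf=0.6, r_ff=0.9)
```

The comparison shows the behavior described in the text: holding the class correlation fixed, higher inter-correlation among the features (redundancy) lowers the merit score.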
2) Bat Optimization Algorithm (BOA): The BOA is an optimization and computational intelligence algorithm inspired by the echolocation behavior of microbats. In a d-dimensional search space, every bat i flies at random with velocity v_i, location x_i, and frequency f_i at iteration t. The current best solution is archived across the bat population through an iterative search process.
The procedures for updating the frequency, velocity, and location at each time step are mathematically presented in Eq. (4), Eq. (5), and Eq. (6):

f_i = f_min + (f_max − f_min) · β (4)
v_i^t = v_i^(t−1) + (x_i^(t−1) − x*) · f_i (5)
x_i^t = x_i^(t−1) + v_i^t (6)

where β is a vector selected at random from a uniform distribution in [0, 1], and x* is the current global best location.
Once a solution is chosen from the existing best solutions, a new solution for every bat is produced via a local random walk, which is numerically expressed in Eq. (7):

x_new = x_old + ε · Ā^t (7)

where ε is a random vector generated from a uniform or Gaussian distribution in the range [−1, 1], and Ā^t is the average loudness of all bats at time step t.
Furthermore, the pulse emission rate and loudness are modified as the iterations progress. They are updated using Eq. (8) and Eq. (9):

A_i^(t+1) = α · A_i^t (8)
r_i^(t+1) = r_i^0 · [1 − exp(−γ · t)] (9)

where α and γ are constants.
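The bat-update rules can be sketched as follows. This is a minimal illustration of the standard bat-algorithm updates, not the authors' implementation; the function names, the frequency bounds, and the decay constants are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0):
    """One move per the frequency/velocity/position updates: draw a
    frequency, pull velocity toward the best-known position, then move."""
    beta = rng.random(len(x))                 # uniform draw in [0, 1]
    f = f_min + (f_max - f_min) * beta        # frequency update
    v_new = v + (x - x_best) * f              # velocity update
    x_new = x + v_new                         # position update
    return x_new, v_new

def local_walk(x_best, loudness_avg, dim):
    """Local random walk around a selected solution."""
    eps = rng.uniform(-1.0, 1.0, dim)
    return x_best + eps * loudness_avg

def update_loudness_rate(A, r0, t, alpha=0.9, gamma=0.9):
    """Loudness decays geometrically; pulse-emission rate rises with t."""
    return alpha * A, r0 * (1.0 - np.exp(-gamma * t))

x, v = np.array([1.0, 2.0]), np.zeros(2)
x_best = np.array([0.0, 0.0])
x1, v1 = bat_step(x, v, x_best)
w = local_walk(x_best, loudness_avg=0.5, dim=2)
A1, r1 = update_loudness_rate(A=1.0, r0=0.5, t=10)
```

As iterations progress, the falling loudness and rising pulse rate shift the search from exploration toward exploitation of the best-known region.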
3) HCFS-BOA method for feature selection: The significance and correlation of the chosen feature subset are evaluated using the HCFS-BOA-based feature selection method. The correlation-based feature method is used in HCFS-BOA to create a fitness function and assess the reliability of the reduced feature subset. CFS evaluates the mean feature-class correlation and the average inter-correlation between features for a feature subset using Eq. (3). CFS is a classical filter method that selects relevant features through correlation-based evaluation, accounting for feature redundancy. Inspired by the echolocation activity of microbats, the BOA stores candidate solutions in each bat's position vector, eliminating redundant features and reducing dimensionality. When a bat moves, it archives the best solution found so far. During the iterative search, the population scans for the optimum arrangement by updating the position of each bat based on Eq. (4), Eq. (5), and Eq. (6). An ideal intrusion detection approach has a higher detection rate and a lower false positive rate; hence, a weighted fitness function is used, as shown in Eq. (10):

Fitness = w_1 · DR − w_2 · FPR (10)

where w_1 and w_2 are the weights for the Detection Rate (DR) and False Positive Rate (FPR), respectively. A higher fitness means higher intrusion detection performance. In one iteration of the HCFS-BOA, the algorithm chooses a feature subset based on its correlation coefficients with the target variable. The bat optimization process updates the virtual bats' positions in the search space, with each bat representing a potential feature subset. The technique iteratively refines the feature selection by adjusting the positions of the bats and evaluating their performance via correlation-based metrics during both the training and testing phases. Thus, the rescaled data passed into the feature selection phase is reduced sufficiently for intrusion detection classification.
D. Classification
The selected features are classified using the CNN model, which produces excellent results in domains such as Natural Language Processing (NLP), image processing, and healthcare diagnosis systems. CNN classification is employed to recognize patterns and anomalies in network traffic or system logs, thereby improving intrusion detection in healthcare systems. Using CNN classification for IDS in healthcare helps to protect sensitive patient data, ensure the integrity of healthcare information systems, and avoid security breaches. It is an essential component of healthcare cybersecurity measures to protect electronic health records and vital healthcare infrastructure.
In contrast to the Multi-Layer Perceptron (MLP), the CNN reduces the number of neurons and parameters, resulting in rapid adaptability and minimal complexity. The CNN model offers an extensive number of clinical classification applications. CNN models are a subset of Feed-Forward Neural Networks (FFNN) [29, 30] and Deep Learning models. The convolution operation shares its weights, which implies that the same filter is applied at every position, thereby reducing the number of parameters. Pooling, convolution, and fully connected layers are the three types of layers used in the CNN method. These layers are required for performing feature extraction, dimensionality reduction, and classification. In the forward pass, the filter is slid over the input, and the point-wise products are summed to obtain each entry of the activation map. The sliding filter implements a linear convolution operator, stated as a repeated dot product. Consider w as the kernel function and x as the input; the convolution at time t is formulated as in Eq. (11):

s(t) = (x * w)(t) (11)
The discrete form of the convolution, evaluated for each t, is presented in Eq. (12):

s(t) = Σ_a x(a) · w(t − a) (12)

When a 2D image I is given as input and K is a 2D kernel, the convolution is formulated as in Eq. (13):

S(i, j) = Σ_m Σ_n I(m, n) · K(i − m, j − n) (13)
To introduce non-linearity, two activation functions, ReLU and softmax, are utilized. The ReLU is mathematically represented as in Eq. (14):

f(x) = max(0, x) (14)

The gradient is 1 for x > 0 and 0 for x < 0. The convergence ability of ReLU is better than that of sigmoid non-linearities. The softmax layer is preferable when the result must cover two or more classes and is mathematically formulated as in Eq. (15):

softmax(z)_j = exp(z_j) / Σ_k exp(z_k) (15)

The pooling layers compute a summary statistic of the input, rescaling the structure of the output without losing the essential information. There are various types of pooling layers; this paper utilizes max pooling, which outputs the largest value in a rectangular neighborhood of each point of the 2D input for every input feature map. The fully connected (FC) layer is the last layer. The parameters of the output layer are stated as a weight matrix W with m rows and n columns and a bias vector b. Considering the input vector x, the fully connected layer output with an activation function f is formulated as in Eq. (16):

y = f(W · x + b) (16)

where W · x is the matrix product and the function f is applied component-wise. The fully connected layer is applied to classification problems and is commonly involved at the topmost level of the CNN; the CNN output is flattened and displayed as a single vector.
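The three layer operations described above can be sketched in a few lines of numpy. This is an illustrative sketch of ReLU, softmax, and 2x2 max pooling, not the paper's CNN; all function names are assumptions:

```python
import numpy as np

def relu(x):
    """f(x) = max(0, x); gradient is 1 for x > 0 and 0 for x < 0."""
    return np.maximum(0.0, x)

def softmax(z):
    """Exponentiate and normalize so the outputs sum to 1."""
    e = np.exp(z - z.max())        # shift by the max for numerical stability
    return e / e.sum()

def max_pool2d(x, size=2):
    """Max pooling: keep the largest value in each size x size block."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

a = relu(np.array([-1.0, 0.5]))
p = softmax(np.array([1.0, 1.0]))
m = max_pool2d(np.arange(16, dtype=float).reshape(4, 4))
```

Pooling a 4x4 map down to 2x2 keeps one representative value per block, which is the dimensionality reduction the text attributes to the pooling layer.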
Table I shows the notation description.
IV. EXPERIMENTAL RESULTS
In this research, the HCFS-BOA-based CNN is simulated in a Python environment with a system configuration of 16 GB RAM, an Intel Core i7 processor, and the Windows 10 operating system. Parameters such as accuracy, precision, recall, and f1-score are utilized to estimate the performance of the model. The mathematical representation of these parameters is shown in Eq. (17) to Eq. (20).
Accuracy: Accuracy is the proportion of accurate predictions to all input samples and is calculated using Eq. (17):

Accuracy = (TP + TN) / (TP + TN + FP + FN) (17)

Precision: Precision measures the proportion of correctly predicted positive records among all records predicted positive. The performance of the classification model is greater if the precision is higher:

Precision = TP / (TP + FP) (18)

Recall: Recall is calculated as the ratio of true positives to all samples of the positive class:

Recall = TP / (TP + FN) (19)

F1-Score: Also known as the harmonic mean, it seeks a balance between recall and precision:

F1-Score = 2 · Precision · Recall / (Precision + Recall) (20)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
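The four metrics follow directly from the confusion-matrix counts. A minimal sketch (the function name and the example counts are illustrative, not the paper's results):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=90, tn=95, fp=5, fn=10)
```

Because F1 is the harmonic mean of precision and recall, it penalizes a model that trades one heavily for the other, which matters for imbalanced intrusion datasets.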
A. Quantitative and Qualitative Analysis
This section shows the quantitative and qualitative analysis of the proposed CFS-BOA-based CNN model in terms of precision, accuracy, f1-score, and recall, as presented in Tables II, III, and IV. Table II illustrates the performance of feature selection on the CIC-IDS2017 dataset. The performances of ACO, PSO, CFS, and BOA are measured and compared with the proposed HCFS-BOA. Fig. 2 represents a graphical illustration of the feature selection methods. The obtained results show that the proposed HCFS-BOA algorithm attains an accuracy of 95.98%, precision of 94.23%, recall of 93.62%, and f1-score of 94.96%, which is better than the existing optimization algorithms.
Table III illustrates the performance of classification with default features using the CIC-IDS2017 dataset. The performances of the Support Vector Machine (SVM), Artificial Neural Network (ANN), K-Nearest Neighbor (KNN), and Recurrent Neural Network (RNN) are measured and compared with the proposed HCFS-BOA. Fig. 3 represents a graphical illustration of the classification performances. The obtained results show that the proposed HCFS-BOA algorithm attains an accuracy of 93.68%, precision of 92.92%, recall of 91.69%, and f1-score of 92.73%, which is superior to the existing algorithms. Table IV illustrates the classification outcomes with optimized features using the CIC-IDS2017 dataset. The performances of SVM, ANN, KNN, and RNN are measured and compared with the optimized-feature CNN. Fig. 4 illustrates a graphical representation of the classification performances with optimized features. The obtained outcomes prove that the CNN algorithm accomplishes an accuracy of 99.45%, precision of 98.89%, recall of 98.67%, and f1-score of 97.98%, and is therefore superior to the existing algorithms. The ACO, PSO, CFS, and BOA consume 25, 29, 31, and 35 seconds, respectively, while the HCFS-BOA with CNN demands a training time of 20 seconds, making it more efficient than ACO, PSO, CFS, and BOA on the CIC-IDS2017 dataset. Table V shows the performance of classification with optimized features on the NSL-KDD dataset. Fig. 5 shows that the optimized CNN algorithm accomplishes an accuracy of 98.13%, precision of 97.36%, recall of 97.07%, and f1-score of 95.34%, thereby proving more robust than the previous optimization algorithms. The ACO, PSO, CFS, and BOA require 22, 25, 28, and 34 seconds, respectively, while the HCFS-BOA with CNN needs a training time of 15 seconds, which is less than that of the previous optimization techniques on the NSL-KDD dataset.
C. Validation of Real-Time Applications
The NSL-KDD dataset is commonly deployed for intrusion detection in the IoT to ensure reliability and security for healthcare systems. This research uses the NSL-KDD dataset for training and validation on real-time applications in the cloud. The NSL-KDD dataset is split into training, testing, and validation sets in the ratio of 70:15:15. An IDS is created to detect different types of attacks by evaluating system logs, network traffic, and behavioral patterns. The attack types include malware attacks, DoS attacks, Cross-Site Scripting (XSS), and others; such attacks are performed when patient information is blocked or stolen by attackers. Therefore, the NSL-KDD dataset is employed for model training to reduce attacks in real-time healthcare applications.
D. Discussion
The CIC-IDS2017 dataset is beneficial for intrusion detection because of its comprehensive representation of realistic network traffic scenarios with different types of attacks and normal activities. It provides a labelled, large-scale dataset that supports the evaluation and enhancement of intrusion detection with improved robustness and accuracy. The NSL-KDD dataset is beneficial for intrusion detection as it addresses limitations of the original KDD Cup dataset by minimizing redundancy and maintaining a more balanced class distribution. It represents more realistic modern network traffic that contains normal behavior and a wider range of attacks, which maximizes intrusion detection robustness. By using these two datasets, the generality of the proposed approach is analyzed. Moreover, the advantages of the proposed method and the limitations of existing methods are discussed. The existing methods have some limitations: the SafetyMed method [16] has no defense mechanism against AML attacks; combining various detection systems increases the risk of false negatives and false positives in HIDM [17]; and BDSDT [18] is not effective against web and Bot threats since there are fewer instances of these two attack classes. The proposed HCFS-BOA-based CNN model overcomes these limitations.
To overcome the problem of AML attacks, CFS is used to identify highly informative features, minimizing the risk of adversarial manipulation compared to other algorithms. BOA assists in identifying an optimal subset of features that maximizes detection accuracy and reduces the risk of false positives and false negatives. By focusing on the informative features provided by CFS, the model's ability to discriminate between attack classes such as web and Bot threats is enhanced. Combining CFS with BOA selects appropriate features that not only have significant correlations with the target variable but also contribute to the optimal performance of intrusion detection in the healthcare system, in contrast to the other methods. The CNN is deployed to identify and classify intrusions accurately and effectively; new attacks such as web and Bot threats are classified effectively using the CNN. The proposed HCFS-BOA-based CNN achieves a superior accuracy of 99.45% when compared with the existing methods, namely SafetyMed, HIDS, and BDSDT.
V. CONCLUSION
In this research, the HCFS-BOA-based CNN model is proposed for intrusion detection to secure the entire network in the healthcare system. The proposed method mainly comprises four stages: datasets, min-max normalization, feature selection, and classification. Initially, the data are obtained from the CIC-IDS2017 and NSL-KDD datasets, after which min-max normalization is performed to normalize the acquired data. For feature selection, HCFS-BOA is employed for optimal intrusion detection performance in healthcare systems. Finally, the CNN is deployed to identify and classify intrusions accurately and effectively. The proposed HCFS-BOA-based CNN achieves a better accuracy of 99.45% when compared with existing methods such as SafetyMed, HIDS, and BDSDT. In the future, hyperparameter tuning can be applied in feature selection to improve the model's performance.
Fig. 1. Block diagram for the proposed method.
1) Correlation-based Feature Selection (CFS): One of the best-known filter algorithms is CFS, which selects features based on the output of a heuristic (correlation-based) evaluation function. It seeks to choose subsets whose attributes are highly correlated with the class but uncorrelated with one another. Redundant features are identified by their high correlation with at least one other feature, while features with low correlation to the class are ignored. The CFS feature subset assessment function is mathematically expressed in Eq. (3).
TABLE II. PERFORMANCE OF FEATURE SELECTION USING CIC-IDS2017 DATASET
TABLE III. PERFORMANCE OF CLASSIFICATION WITH DEFAULT FEATURES USING CIC-IDS2017 DATASET
TABLE IV. PERFORMANCE OF CLASSIFICATION WITH OPTIMIZED FEATURES USING CIC-IDS2017 DATASET
"Computer Science",
"Medicine"
] |
Potential Output in Asia: Some Forward-Looking Scenarios
This paper presents estimates of potential output growth for a sample of 26 Asian economies and projects potential output growth through 2040 under several scenarios. Results suggest that in the absence of further capital deepening, and assuming continued total factor productivity growth at recent rates, potential output growth across economies could slow from a median of 4.6% during 2010–2015 to 2.7% between 2035 and 2040. Demographic trends and an assumed stabilization in capital–output ratios account for most of the slowing. Much better outcomes are possible if trends are supported by policy. Better total factor productivity growth could raise potential output by between 11% and 24% by 2040, while lower unemployment and higher participation rates could boost potential output by 10% or more in some South Asian economies. An improved investment climate could add between 6% and 10% to potential output in most economies, while accelerating structural convergence (moving labor from lower to higher productivity sectors) could raise potential output by 10% or more in half of the examined countries.
utilization of existing resources given the existing institutions and technology of an economy. Even with this definition, there are many different approaches that can be taken to identify the unobservable true level of potential output. Common identification strategies include (i) equating potential output as the level of output consistent with a statistically average growth rate of output (e.g., Hodrick-Prescott or Kalman filters, and other frequency or time-domain filters); (ii) using nonaccelerating inflation as an indicator of potential (see, for example, Lanzafame 2016); and (iii) using a notion of full utilization of the factors of production at trend levels of factor productivity to identify potential output (as this paper does).
II. Methodology
The estimates of potential output presented here are based on the production function approach similar to that used by, among others, the World Bank in its Macro-Fiscal Model (World Bank 2016b); the United States (US) Congressional Budget Office (Congressional Budget Office 2001); the Organisation for Economic Co-operation and Development (Beffy et al. 2006); the European Commission (Economic Policy Commission 2001, D'Auria et al. 2010, Denis et al. 2006); and the US Federal Reserve in its Federal Reserve Board model (Brayton, Laubach, and Reifschneider 2014). In this approach, the supply side of gross domestic product (GDP) is described by a simple Cobb-Douglas function of the form given below: 1

GDP_t = TFP_t · K_t^α · L_t^(1−α) (1)

where GDP is gross domestic product, K is the capital stock, L is labor employed, TFP represents total factor productivity, α is the income share of capital (assumed to be 0.3), and the subscript t denotes time. 2 This paper's measure of labor differs from that of Burns et al. (2014) by using labor market data (labor force participation, sectoral employment, and unemployment) produced by the International Labour Organization to measure labor inputs. 3 Burns et al. (2014) and other earlier work used the working-age population as an alternative proxy. It is recognized that the labor market data capture labor market behavior imperfectly, especially in economies characterized by a sizable informal labor sector.

1 There exist variants of this form. For example, the Federal Reserve Board model includes trend energy services as an independent factor of production.
2 Data appear to support the use of the Cobb-Douglas production function. In this paper, an economy-by-economy estimation of a constant elasticity of substitution function yields a mean elasticity of substitution of 0.95. For a sample of 157 developing economies, the mean freely estimated capital share is 0.4, with the modal value lying between 0.23 and 0.34. For the Asian subsample, the mean freely estimated capital share is 0.55.
Equation (1) can be rewritten by expressing employment L as the product of the working-age population, P1564; the labor force participation rate, Pr; and 1 minus the unemployment rate, UNR (employment as a percentage of the labor force), which gives

GDP_t = TFP_t · K_t^α · (P1564,t · Pr_t · (1 − UNR_t))^(1−α) (2)

The above decomposition is widely used in macroeconomic analysis because it is simple, intuitive, and lends itself to straightforward interpretation. However, its application to developing economies is complicated by data limitations. While the majority of economies publish time series of GDP and the size of the working-age population, data on the capital stock are not widely available and labor market data (labor force participation and unemployment) are often not measured. When measured, labor market data are often ill-defined in economies characterized by widespread informal employment and subsistence agriculture. 4 The following discussion describes how these limitations have been dealt with in this paper.
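Equation (2) can be sketched directly in Python. This is an illustrative implementation of the decomposition, with hypothetical input values; only the functional form comes from the paper:

```python
def output_cobb_douglas(tfp, K, pop_1564, part_rate, unemp_rate, alpha=0.3):
    """Eq. (2): GDP = TFP * K^alpha * (P1564 * Pr * (1 - UNR))^(1 - alpha)."""
    L = pop_1564 * part_rate * (1.0 - unemp_rate)  # employment
    return tfp * K**alpha * L**(1.0 - alpha)

gdp = output_cobb_douglas(tfp=1.0, K=100.0, pop_1564=50.0,
                          part_rate=0.7, unemp_rate=0.05)
```

Because TFP enters multiplicatively, doubling TFP doubles output for given factor inputs, which is the property exploited later when potential output is built from trend TFP.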
A. Estimating the Capital Stock
Most developing economies do not have official estimates of their capital stock. This shortcoming is overcome by estimating the capital stock using a highly simplified version of the perpetual inventory method from investment data, dating back to 1960 in many cases, and assuming a depreciation rate of 7%. 5 The same basic methodology was employed for estimating capital stocks in developing economies by Nehru and Dhareshwar (1993) and is used by the Organisation for Economic Co-operation and Development in its Interlink Model for economies where the statistical agency does not produce an independent measure of the capital stock.

3 The International Labour Organization data set is derived from economy-level sources but data for some years and economies contain gaps (International Labour Organization 2011). Missing data are estimated by various methods. Even when data are derived directly from well-defined surveys, the surveys are not always comparable across economies.
4 GDP and investment data are sourced from the World Bank's Macro-Fiscal Model (World Bank 2016b), which in turn relies on World Development Indicators as a primary source and is supplemented by the International Monetary Fund's World Economic Outlook database and national source data. Population data are sourced from the World Development Indicators, and United Nations (2015) population forecasts are spliced on for the forecast period.
5 The Organisation for Economic Co-operation and Development (2001) provides a comprehensive manual of methods for calculating the capital stock, mainly relying on disaggregated sectoral investment data, sectoral differentiation in depreciation rates, and a careful accounting of the cohort structure of the capital stock while also accounting for price changes in the capital stock. The method employed here assumes the same depreciation rate for all forms of capital and abstracts from the obsolescence implied by relative price changes over time. See Wolf (1997) for an exposition of simplified capital stock calculations that are nevertheless much more sophisticated than the procedure employed here.
Using this methodology, a capital stock series, K^i, is generated for each economy using the following capital accumulation equation, where i denotes the initial estimate: 6

K^i_t = K^i_(t−1) · (1 − δ) + Inv_t (3)

Because at the starting point (t = 0) the capital stock is zero, this method underestimates the capital stock in early years. To get around this problem, a two-step procedure is employed. An initial estimate of an economy's capital stock is calculated and then divided by GDP to derive a preliminary estimate of the capital-output ratio for each economy. 7 Taking this initial estimate of the capital stock after 15 years, K^i(15), and dividing by GDP in the same year (t = 15), gives an estimate of the economy's steady-state capital-output ratio. 8 In the second step, this estimate of the capital-output ratio in t = 15 is multiplied by GDP in t = 0 to derive a nonzero starting point for the capital stock of each economy, as shown in equation (4):

K_0 = (K^i(15) / GDP(15)) · GDP(0) (4)

The capital stock for t = 1…n was then recalculated using equation (3), resulting in a much more accurate estimate of the capital stock: 9
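The two-step procedure can be sketched as follows. This is an illustrative implementation under the paper's stated assumptions (7% depreciation, a 15-year anchor); the function names and the constant steady-state series are hypothetical:

```python
def capital_stock(investment, gdp, delta=0.07, anchor=15):
    """Two-step perpetual inventory estimate of the capital stock.

    Step 1 accumulates K_t = K_{t-1}*(1 - delta) + Inv_t from K_0 = 0.
    Step 2 re-anchors K_0 at the step-1 capital-output ratio in year
    `anchor` times GDP in year 0, then re-accumulates."""
    def accumulate(k0):
        K = [k0]
        for inv in investment:
            K.append(K[-1] * (1.0 - delta) + inv)
        return K[1:]

    K_initial = accumulate(0.0)                      # step 1: start from zero
    k0 = (K_initial[anchor] / gdp[anchor]) * gdp[0]  # step 2: re-anchor K_0
    return accumulate(k0)

# stylized steady state: investment = delta * K_ss with K_ss = 100
inv = [7.0] * 30
gdp_series = [50.0] * 30
K = capital_stock(inv, gdp_series)
```

In this stylized steady state the re-anchored series converges toward the true stock of 100 far faster than the zero-start series, which is the point of the second step.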
B. Accounting for the Influence of Structural Change
After accounting for labor and capital, the unexplained part of GDP is TFP. An expression for TFP can be obtained by rewriting equation (1) in terms of output per worker and the capital-labor ratio raised to the capital share:

TFP_t = (GDP_t / L_t) / (K_t / L_t)^α (5)

6 Arnaud et al. (2011) cite an alternative method following Kohli (1982) that sets the initial capital stock equal to the level of investment in t = 0 and divides by the depreciation rate plus the long-run growth rate of investment.
7 In a steady-state model with a 7% annual depreciation rate, 85% of the initial capital, Inv(t = 0), will have depreciated after 25 years and the initial estimate of the capital stock will be 92% of the actual. After 15 years, the capital stock will have reached 80% of its long-term equilibrium level. Mathematically, the amount of the capital stock that existed at t = 0 will equal K0_t = K0 · 0.93^t at any given time t.
8 To deal with outliers, if the estimated capital-output ratio for an economy fell outside the 25th and 75th percentiles of its income cohort, the estimated capital-output ratio was set equal to either the 25th or 75th percentile level.
9 Assuming a steady-state model with 3% GDP growth per annum, the error in estimation of the capital stock would be 8% in year 0, 3.2% in year 10, and less than 1% in year 25. Of course, in most developing economies, GDP growth and investment rates have accelerated significantly over the past 20 years, suggesting that the actual estimation error is significantly smaller than suggested by the steady-state model.
Output per worker can be decomposed into the change in sectoral output per worker (w_i) and the change in the share of workers in each sector (s_i):

Δw_t = Σ_i s_(i,t−1) · Δw_(i,t) + Σ_i w_(i,t) · Δs_(i,t)

Expressing the two terms in the above expression as ww (the change in within-sector output per worker) and wb (the change due to the relative size of the sectors) gives

w = ww + wb

B_t can then be defined as the cumulative summation of earlier changes in the sectors' influence on output per worker:

B_t = Σ_(τ ≤ t) wb_τ

Equation (5) above can then be rewritten in terms of within-sector productivity and B_t. Rewriting (6) gives a new expression for output as a function of TFP net of structural change, labor supply, and structural change.
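The within/between split can be sketched numerically. This uses one common shift-share variant (base-period share weights for the within term, end-period productivity weights for the between term); the paper's exact weighting is not shown, so this is an assumption, and all names and numbers are illustrative:

```python
def shift_share(w0, w1, s0, s1):
    """Decompose the change in aggregate output per worker into a
    within-sector term (ww) and a between-sector reallocation term (wb).

    w0/w1: sectoral output per worker in the two periods;
    s0/s1: sectoral employment shares in the two periods."""
    within = sum(s0_i * (w1_i - w0_i)
                 for s0_i, w0_i, w1_i in zip(s0, w0, w1))
    between = sum(w1_i * (s1_i - s0_i)
                  for w1_i, s0_i, s1_i in zip(w1, s0, s1))
    return within, between

# two sectors: labor moves toward the higher-productivity sector,
# while productivity within each sector is unchanged
w0, w1 = [1.0, 4.0], [1.0, 4.0]
s0, s1 = [0.8, 0.2], [0.6, 0.4]
ww, wb = shift_share(w0, w1, s0, s1)
total = sum(s * w for s, w in zip(s1, w1)) - sum(s * w for s, w in zip(s0, w0))
```

With sectoral productivity flat, the entire gain in aggregate output per worker shows up in the between term, which is exactly the structural-convergence channel the paper examines.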
C. Estimating Trend Productivity Growth
After the capital stock and the contribution of structural change to the evolution of output per worker have been estimated, TFP net of structural change, QTFP_t, can be quantified by rearranging the production function shown in equation (6) and solving for QTFP_t as a residual. Trend net TFP, QTFP*_t, which is necessary to estimate potential output, can be calculated by passing the spot estimate of QTFP_t through the Hodrick-Prescott filter. The endpoint problem (Mise, Kim, and Newbold 2005) is resolved by assuming that, for each economy, TFP growth from the endpoint of actual data through 2040 is equal to the economy's average rate of growth of QTFP_t during the period 1995-2015 (or to 2014 where 2015 data are not yet available). 10
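The trend extraction relies on the Hodrick-Prescott filter with λ = 100 (the paper's annual-data setting). A minimal numpy implementation of the standard penalized-least-squares solve, not the authors' code:

```python
import numpy as np

def hp_filter(y, lam=100.0):
    """Hodrick-Prescott trend: solve (I + lam * D'D) tau = y, where D is
    the (T-2) x T second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)

t = np.arange(20, dtype=float)
trend = hp_filter(2.0 + 0.5 * t, lam=100.0)   # a purely linear series
```

A purely linear series has zero second differences, so the filter returns it unchanged; for noisy TFP series, larger λ values produce smoother trends, which is why the choice of λ discussed in the paper's footnotes matters.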
D. Calculating Potential Output
Assuming that (i) the labor force is fully employed (UNR and Pr are at their equilibrium values UNR* and Pr*, such that L*_t = P*_(1564,t) · Pr*_t · (1 − UNR*_t)), (ii) all of the services of the available capital stock are used, and (iii) TFP net of structural change is at a level consistent with its long-term trend, QTFP*_t, gives an expression for potential output, GDP*_t. 11 Unlike labor, there is no separate estimate of the value of the capital stock at full employment, because the relevant input here is capital services, which at full utilization rates are the services from the total capital stock raised to the power α, K_t^α. Armed with actual GDP and the estimate of potential GDP, GDP*_t, it is possible to calculate the output gap, OG_t, which is defined as the percentage difference between the actual output observed and the estimated potential output:

OG_t = 100 · (GDP_t − GDP*_t) / GDP*_t

10 Historical data for GDP in 2015 were not available for all economies. For those economies where such data were unavailable, trend TFP growth was calculated using data for the period 2000-2014.
11 The equilibrium unemployment rate and participation rate are estimated using the Hodrick-Prescott filter, assuming that future levels of these variables are equal to their average level in 2000-2015.

If actual output rises above its potential (a positive output gap), then capacity constraints begin to bind and one would expect to see inflationary pressures build, and perhaps also an increase in the current account deficit. On the other hand, if the output gap is negative, resources are underutilized and inflationary pressures subside. Normally, actual GDP growth will fluctuate around its estimated potential growth path.
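The potential-output and output-gap calculations can be sketched as follows. This is an illustrative sketch assuming potential output takes the same Cobb-Douglas form as Eq. (2) with trend TFP and equilibrium labor inputs; the function names and numbers are hypothetical:

```python
def potential_output(tfp_trend, K, pop, part_star, unr_star, alpha=0.3):
    """Potential GDP: trend TFP with capital and equilibrium labor inputs."""
    L_star = pop * part_star * (1.0 - unr_star)   # full-employment labor
    return tfp_trend * K**alpha * L_star**(1.0 - alpha)

def output_gap(gdp_actual, gdp_potential):
    """Output gap in percent of potential."""
    return 100.0 * (gdp_actual - gdp_potential) / gdp_potential

gap = output_gap(gdp_actual=102.0, gdp_potential=100.0)
```

A positive gap (actual above potential) signals binding capacity constraints; a negative gap signals underutilized resources, matching the interpretation in the text.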
III. Baseline Results
Using the methodology described above, Table 1 reports historical growth rates and estimates of potential output growth, TFP growth, the natural rate of unemployment, and the natural labor force participation rate for 23 Asian economies. Due to data limitations, labor's share of income in output is assumed to be 70% for all economies, the rate of depreciation of capital is 7%, and a relatively tight smoothing parameter (lambda equals 100) is used for the Hodrick-Prescott filter when calculating both the natural rates of unemployment and trend TFP. 12 Burns et al. (2014) report sensitivity analysis for alternative assumptions regarding labor's share of income in output (30%, 50%, and 70%); the capital depreciation rate (6%, 7%, and 8%); and different levels for the TFP smoothing parameter lambda. While historical estimates are impacted by the different assumptions, the extent of the influence is small.
A. Historical Trends
For the region as a whole, potential output growth per annum has accelerated markedly from around 4.1% in the early 1990s to around 5% in the 2000s, before easing somewhat during the first half of the 2010s (Figure 1). 13 Excluding the People's Republic of China (PRC), where potential output growth has been relatively stable until recently, the acceleration is less evident and potential output grows at just under 3% per annum, which is more or less the same rate as just before the 1997/98 Asian financial crisis.
Notwithstanding frequent concerns voiced in the international press about the slowing of developing economy growth after the recent global financial crisis, potential output grew faster during the postcrisis period (2009-2014) than in the preboom period (1993-1998) in 13 of the 23 Asian economies for which sufficient data exist (Table 1a). Overall, the contribution of capital accumulation and TFP growth to potential output, excluding the PRC, has increased over time, while the contribution of labor to growth has declined in most of the economies covered. 12 Ravn and Uhlig (2002) show that a lambda value of 6.25 for annual data is consistent with a value of lambda of 1,600 as first proposed by Hodrick and Prescott for quarterly data. However, they do not show that 1,600 is the appropriate value for quarterly data. That value was proposed originally on the basis of the somewhat arbitrary assumption that "a 5% cyclical component is moderately large, as is a one-eighth of 1% change in the growth rate." An equally arbitrary but plausible assumption about the influence of the cycle on quarterly growth of 0.5, for example, would result in a quarterly lambda of 25,600, which in the Ravn-Uhlig methodology would give rise to an annual lambda of 100, the number used by the European Commission (Economic Policy Commission 2001). 13 Most tables and figures presented in this report focus on the period after 1990 because the labor market data necessary for the structural change decomposition are only available in the post-1990 period. However, the TFP (inclusive of structural change) and potential output calculations themselves are not dependent on this decomposition. As a result, calculations of TFP inclusive of structural change and potential output are available as far back as 1970 for many economies.
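The lambda arithmetic in footnote 12 can be checked directly: Ravn and Uhlig scale the Hodrick-Prescott smoothing parameter by the fourth power of the observation-frequency ratio, so a quarterly lambda of 1,600 maps to 1,600 / 4^4 = 6.25 annually, while 25,600 maps to the 100 used in this paper (the function name below is illustrative):

```python
def annualize_lambda(quarterly_lambda, freq_ratio=4):
    """Ravn-Uhlig rule: divide the quarterly HP smoothing parameter by the
    fourth power of the frequency ratio (4 quarters per year)."""
    return quarterly_lambda / freq_ratio ** 4
```

So `annualize_lambda(1600)` gives 6.25 and `annualize_lambda(25600)` gives 100.0, matching the two annual values discussed in the footnote.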
The largest accelerations were observed in the economies of the former Soviet Union, many of which underwent profound structural adjustments and reform in the 1990s that set the groundwork for stronger growth in the 2000s. Table 1a groups the 13 economies in which potential output growth in the most recent 5-year period (2009-2014) was higher than during the 1990s by their most important source of acceleration. Of these economies, only Kazakhstan saw labor force growth as the largest contributor to the acceleration in potential output growth. Although labor was the largest contributor to the acceleration in Kazakhstan, the contribution that labor made to the acceleration of growth in Georgia was actually larger. However, Georgia is not included in this group of economies because the contribution of capital to the acceleration in its potential output growth was even larger. In addition to Georgia, accelerated capital deepening was also the largest driver of improved potential output in Mongolia and Papua New Guinea, likely reflecting a boost in resource-related investment associated with the commodity boom. As global commodity prices have eased and are expected to remain low for some time (World Bank 2016a), it is unlikely that these economies' strong capital deepening will be sustained over the medium term.
Improved TFP growth (net of structural change) was the largest contributor to accelerating potential output growth in Bangladesh, India, Indonesia, Pakistan, the Philippines, Solomon Islands, and Sri Lanka. Importantly, TFP is growing strongly (close to 2% or more per annum) in all of these economies except Pakistan. Continued TFP growth, and therefore a sustained acceleration of potential output in
these economies, will depend on maintaining the reform process and technological progress. This may be particularly challenging in economies like Sri Lanka where the recent large gains in TFP growth likely reflect a temporary boost following the cessation of hostilities. In Azerbaijan, the Lao People's Democratic Republic (Lao PDR), and Nigeria, TFP growth from structural change has been the biggest driver of growth acceleration. The contribution was particularly large in Azerbaijan for both TFP net of structural change and structural change. Partly because of base effects, the contribution of each to potential output growth during 1993-1998 was actually negative.
The contribution of employment to potential output growth weakened in every economy where potential output growth slowed in the latest period relative to 1993-1998, but only in New Zealand was this the largest factor in explaining the slowdown. Weaker capital accumulation was the largest factor in four of the 10 economies-Japan, the Republic of Korea, Malaysia, and Singapore-partly resulting from an end to the rapid capital accumulation that occurred in these economies in the 1990s prior to the 1997/98 Asian financial crisis. Except for Japan, where capital accumulation did not contribute to potential output growth during 2009-2014, capital accumulation continued to be a major factor in explaining growth in each of these economies.
Weaker productivity growth (net of structural change) was the main factor behind the deceleration in potential output growth in Armenia, Australia, and the PRC, although in the cases of Armenia and the PRC, TFP continued to expand relatively quickly. In Cambodia, Nepal, Thailand, and Viet Nam, the largest factor driving the slowdown in potential output appears to have been weaker TFP growth due to structural change, which in the case of Thailand appears to be reflected in the stabilization of the employment share of agriculture in the economy.
IV. Long-Term Projections
The future of potential output in Asian economies will depend on a wide range of factors, including initial conditions, improvements in education policies (human capital), health outcomes, regulatory reforms, industrial policies, and demographics. The identification of the potential impact that individual polices may have on unemployment, labor participation, TFP growth, and investment lie well outside the scope of this paper. However, it is possible to examine the likely impact on potential output from convergence toward best practice along each of these dimensions.
To do so, a two-step procedure is followed. First, a business-as-usual or baseline scenario grounded in specific assumptions as to how each of the principal drivers of potential output is expected to behave over the next 25 years (2016-2040) is generated. In the second step, a series of alternative scenarios are generated to examine the influence that better performances in terms of capital, labor, TFP, and structural change might have on potential output.
For the purposes of constructing the baseline, it is assumed that (i) demographics proceed in a manner consistent with the baseline assumption of the United Nations' population projections, (ii) labor market efficiency is unchanged (constant natural unemployment and participation rates), (iii) investment continues at a rate consistent with current capital-output ratios (no capital deepening), (iv) the sectoral transformation of an economy continues along the same path as during the past 15 years, and (v) TFP growth continues to grow at the same average pace as during the past 15 years. Table 2 shows potential output growth rates for Asian economies during 2010-2015 and projected potential output growth rates for the period 2035-2040 based on these five assumptions. It presents the change in potential growth between these two periods and breaks down the individual contributions of employment, capital, TFP, and structural change. Figure 2 shows the same changes graphically, with the contributions for each economy sorted from the largest negative contribution to the smallest (or largest positive contribution).
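Baseline assumption (iii), a constant capital-output ratio, together with the Cobb-Douglas production function pins down potential growth: with g_K = g_Y, growth accounting g_Y = g_TFP + α·g_Y + (1−α)·g_L rearranges to g_Y = g_TFP/(1−α) + g_L. A minimal sketch, assuming α = 0.30 per the 70% labor share (function name and sample numbers are illustrative):

```python
def baseline_potential_growth(tfp_growth, labor_growth, alpha=0.30):
    """Balanced-growth rate of potential output when K/Y is constant:
    capital contributes alpha * g_Y, so the TFP term is amplified
    by 1 / (1 - alpha)."""
    return tfp_growth / (1.0 - alpha) + labor_growth
```

For example, TFP growth of 1.4% per annum with labor input growth of 1% per annum yields potential growth of about 3% per annum under these assumptions.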
These results are not a forecast but rather a projection of what might occur should the assumptions described above hold. In some cases, the projected change in potential growth and its sources may say more about the 2010-2015 period than it does about the forecast period. For example, in the case of the Lao PDR, where recently there has been rapid capital deepening, the sharp slowdown projected in the business-as-usual scenario mainly reflects the assumption of a stable capital-output ratio, and therefore an end to the rapid capital deepening that has driven recent growth. While probably not a short-term concern, the very slow long-term growth in this scenario highlights the challenge that authorities in the Lao PDR will face in transitioning the economy toward a more sustainable TFP-led growth model. On the other hand, the reduced contribution to potential growth from labor in the baseline scenario reflects a real influence.
With these important caveats in mind, Table 2 shows that median potential growth among Asian economies is projected to fall by 2 percentage points by the period 2035-2040, with potential output growth in virtually every economy slowing to some degree. Slower growth of the working-age population (driven entirely by demographics) and the stabilization of capital-output ratios each account for -0.8 percentage points of the median slowdown. The median decline from slower TFP growth is a relatively small -0.2 percentage points, while structural change contributes -0.4 percentage points.
Almost every economy is likely to see the contribution of labor to potential growth decline during the review period, assuming no further declines in equilibrium unemployment or in the rate of labor participation (Figure 2b). Only Sri Lanka is projected to see the growth rate of its working-age population pick up between 2015 and 2040; therefore, the baseline contribution of employment to output growth rises marginally in Sri Lanka. Elsewhere, working-age population growth slows, with the contribution of labor to potential output growth declining the most in Azerbaijan, the Republic of Korea, and Singapore. Capital's largest contribution to slowing potential growth is observed in economies, such as the Lao PDR and Viet Nam, that have undergone an intense process of capital deepening in recent years. In these and similar economies, the assumption of a stable capital-output ratio implies that the rapid capital deepening of recent years will end, resulting in a substantial decline in the potential growth rate (e.g., 1.9 percentage points in the case of the PRC). Figure 2c provides a breakdown of which economies are most affected by this assumption.
Insofar as the recent pace of capital deepening is unsustainable, these slowdowns point to a real growth challenge in the medium term. If these economies want to maintain recent potential output growth rates, policies will either have to continue to create conditions that support the very high investment rates of recent years or substitute this investment with faster TFP growth and/or increased labor utilization.
For most economies, the assumption to hold TFP growth at the average rate observed during 2000-2015 has only a small impact on the contribution of TFP growth, with the notable exceptions of Georgia and Kazakhstan, where TFP growth in the postcrisis period (2010-2015) was much lower than in the boom period (2000-2010). As a result, the baseline assumption of average TFP growth implies a substantial boost to potential output growth for these economies. In contrast, TFP growth in Bangladesh, Cambodia, and Viet Nam was higher in the most recent period. As a result, assuming TFP grows at the slower-period average implies a significant decline in potential growth.
Slower structural change, largely a function of the assumption that the pace of structural change is held constant, implies growth that is 0.4 percentage points slower for the median Asian economy, with impacts in excess of 1 percentage point in economies where structural change picked up in the postcrisis period (2010-2015), including Georgia, India, the Lao PDR, and Papua New Guinea. In contrast, structural change in Thailand slowed sharply in the postcrisis period, and a return to more normal rates would imply faster potential output growth.
V. Convergence Scenarios
As discussed above, the baseline scenario is not a forecast but rather a mechanism to help identify potentially untapped sources of growth. In this section, the baseline projection is compared with scenarios employing different assumptions consistent with success in advancing more quickly than under the baseline scenario in one or more aspects of economic convergence. Each of these alternative scenarios assumes that an economy will put in place policies that allow different drivers of potential output growth (e.g., employment to working-age population ratio, capital-output ratio, level of TFP, and economic structure) to converge with the path followed by the Republic of Korea, an economy that made the transition from low-to high-income status relatively rapidly. As such, these alternative scenarios indicate in which area lie the greatest latent possibilities for sustained improvement in economic performance. While the Republic of Korea is a somewhat arbitrary convergence point at which to aim, it has the merit of being concrete and is rooted in one of the more successful Asian development stories of the past 100 years.
A. Employment Convergence
The first alternative scenario examines the impact on potential output if authorities succeed in reducing the unemployment rate and increasing the equilibrium labor participation rate in their respective economies to the levels observed in the Republic of Korea in 2014. This scenario is implemented by reducing the unemployment rate by 0.2 percentage points per annum until it reaches the Republic of Korea's 2014 level, and by raising the participation rate by the same 0.2 percentage points per annum until it reaches the Republic of Korea's levels. 14 Figure 3 reports for each economy the impact on potential GDP in 2040 expressed as a percent of baseline potential GDP.
The largest benefits from improved labor market efficiency, with around a 10% increase in potential GDP in 2040, are for India, Pakistan, and Sri Lanka, which are economies with relatively low participation rates due mainly to low rates of female labor force participation. The second-biggest improvements come from economies with high unemployment rates like Georgia, Indonesia, and the Philippines. Bringing more of the working-age population into employment could raise potential output by as much as 5.5% in these economies by 2040.
14 The modeled convergence rate is based on the average reduction or improvement observed in these rates among economies with falling unemployment and rising participation rates over the past 15 years. It implies a maximum improvement in both rates of 5 percentage points between 2015 and 2040.
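The convergence rule in this scenario can be sketched as follows (a simplified reading of the text: each rate moves 0.2 percentage points per year toward the Republic of Korea's 2014 level, subject to the footnote's 5-point maximum over 2015-2040; the function and argument names are illustrative):

```python
def converge_rate(current, korea_2014, step_pp=0.2, years=25, cap_pp=5.0):
    """Move a rate (in percent) toward the Korean 2014 benchmark by
    step_pp per year, never overshooting the target and never moving
    more than cap_pp in total over the horizon."""
    max_move = min(step_pp * years, cap_pp, abs(korea_2014 - current))
    return current + max_move if korea_2014 > current else current - max_move
```

For example, an unemployment rate of 10% converging toward 3% only reaches 5% by 2040, because the 5-point cap binds before the target is reached.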
B. Capital Convergence
In the second alternative scenario, the capital-output ratios in Asian economies slowly converge to the same level currently observed in the Republic of Korea. Figure 4 indicates that the Republic of Korea's capital stock was 3.2 times potential output in 2014, with the capital-to-potential output ratio among economies in the sample ranging from 1.2 to just under 3 in the case of the PRC.
In this scenario, economies were assumed to grow their capital-output ratios during the projection period until they reached that of the Republic of Korea in 2014, at which point the capital-output ratio would be held constant. The growth rate used was the greater of 2% per year or the average growth rate of an economy's capital-output ratio during 2000-2015. The 2% growth rate is roughly equal to the mean plus one standard deviation of the rate of growth of the capital-output ratio for all developing economies during the period 1980-2000, implying that the rate of growth of the capital-output ratio exceeded 2% in only roughly 15% of economies during this period. Figure 5 shows the percentage change in potential output in Asian economies in 2040 resulting from convergence with the Republic of Korea's 2014 capital-output ratio. For most economies, faster capital deepening adds 6-8 percentage points to potential output by 2040. For economies such as the PRC where the capital-output ratio was already close to the Republic of Korea's levels in 2014, the gains are minimal. Gains of as much as 13% are captured by economies such as Cambodia and Pakistan where the pace of capital deepening has been particularly rapid in recent years.
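The capital-convergence rule just described might be sketched like this (grow the capital-output ratio at the greater of 2% per year or the economy's own 2000-2015 average, capped at the Republic of Korea's 2014 ratio of 3.2; function and argument names are illustrative):

```python
def capital_output_path(ky0, own_avg_growth, years=25, korea_2014=3.2):
    """Yearly K/Y path: grow at max(2%, own 2000-2015 average growth)
    until reaching the Korean 2014 benchmark, then hold it constant."""
    g = max(0.02, own_avg_growth)
    path, ky = [], ky0
    for _ in range(years):
        ky = min(ky * (1.0 + g), korea_2014)
        path.append(ky)
    return path
```

An economy starting near the bottom of the sample range (K/Y of 1.2) never hits the cap within 25 years at the 2% floor, while one starting near 3.0 with rapid recent deepening reaches and holds the 3.2 benchmark early.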
C. TFP Convergence
The third scenario examines the impact on potential GDP of faster TFP growth (net of restructuring). Here the challenges and potential gains are immense. Figure 6 reports three productivity growth rates for each economy in the sample. The first represents the average growth rate of TFP during 2000-2015, which is used as the baseline projection. The second reports the productivity growth that each economy would need to attain to converge to the PRC's baseline TFP level in 2040. The third shows the productivity growth rate that each economy would need to attain the Republic of Korea's baseline TFP level in 2040. Figure 6 confirms that TFP levels in most economies in the region lie well below the Republic of Korea's levels; even the PRC would need to increase the pace of its TFP growth by 50% if it were to close the gap with the Republic of Korea by 2040. Figure 6 shows that there is substantial variation in TFP growth across Asian economies, with top performers like the PRC recording TFP growth of 4.5% or more per annum during 2000-2015, while others such as Bangladesh, the Lao PDR, and Pakistan had TFP growth of less than 1% per annum during the same period. Overall, the median and mean TFP growth rates for the sampled economies are about 2.1%, and the standard deviation across economies is about 1.25%. Figure 6 also illustrates that the kind of sustained increases in TFP growth required to converge to the Republic of Korea's (or even the PRC's) levels of TFP do not appear attainable for most Asian economies. To reach the Republic of Korea's or the PRC's TFP levels in 2040, most economies would need to attain TFP growth of more than 5% per annum, and substantially more in some cases, which is more rapid TFP growth than any economy recorded during 2000-2015. 
Figure 7 reports the results of two simulations that evaluate the potential gains from exceeding baseline TFP growth, which is no simple task given that 2000-2015 was a period of record growth for most developing economies. This suggests that simply maintaining TFP growth rates from this period would be an achievement. The first scenario estimates the impact on potential output in 2040 of increasing the TFP growth rate of all economies by 0.5 percentage points per annum. The second scenario estimates the effect on potential output in 2040 of increasing TFP growth by the amount needed to match the developing economy mean for TFP growth during 2000-2015 (2.1%), or of raising those economies already above the mean by 0.5 percentage points.
In the first scenario, raising TFP growth by 0.5 percentage points per annum generates end-of-period increases in potential output ranging from 11% to 24% of GDP. In the second scenario, those developing economies where TFP growth during 2000-2015 was well below the median could see potentially huge increases in output of as much as 110% by 2040. The large increases recorded for high-income economies like Australia and New Zealand almost certainly overstate their prospects as their TFP levels in 2014 were already higher than the Republic of Korea's. The sharp increase in potential GDP among some developing economies in the second scenario reflects how weak TFP growth was during 2000-2015 for these economies, which in turn reflects how reliant GDP growth in these economies was on capital deepening. Switching from a growth model dependent on high investment rates to one more reliant on improved efficiency will not be easy, though even attaining the median TFP growth rate among developing economies could generate huge benefits.
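As a back-of-the-envelope check on the first simulation, the compounding of a TFP boost into an end-of-period output level can be sketched as follows (this is an assumption-laden sketch: the 1/(1−α) amplification presumes the baseline's constant capital-output ratio carries over, which the paper does not state explicitly, and α = 0.30 matches the 70% labor share):

```python
def tfp_level_gain_pct(extra_tfp_growth, years=25, alpha=0.30):
    """Approximate percent gain in end-of-period potential output from
    raising TFP growth by extra_tfp_growth per year; with K/Y constant
    the growth boost is extra_tfp_growth / (1 - alpha)."""
    boost = extra_tfp_growth / (1.0 - alpha)
    return 100.0 * ((1.0 + boost) ** years - 1.0)
```

A 0.5-percentage-point boost compounds to a gain on the order of 19% after 25 years under these assumptions, inside the 11%-24% range reported above; the spread in the text reflects economy-specific detail this sketch omits.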
D. Convergence of Economic Structure
The final scenario analyzes structural change (Figure 8). An important contributor to income growth in a developing economy is the gradual movement of firms and workers from lower to higher productivity sectors (Lewis 1954). For the baseline, it was assumed that economies maintained the same rate of structural change for the period 2015-2040 as they had during 2000-2015. For the alternative scenario, economic structure was assumed to follow more or less the same pattern of structural evolution as occurred in the Republic of Korea. This hypothesis fits the data surprisingly well for many economies. For example, simple regressions of the employment share of the service sectors for the PRC, Indonesia, and Malaysia on the Republic of Korea's service sector's employment share generated R-squared values of 0.98, 0.74, and 0.96, respectively. 15 In the structural change scenario, sectoral employment shares were assumed to follow the pattern of development in the Republic of Korea from that point in time when its agriculture employment share was closest to an individual economy's. 16 Using this assumption for structural change, economies such as the Lao PDR and Papua New Guinea, which saw a great deal of structural change during 2000-2015, would see less structural change than under the baseline scenario where it was assumed that structural change would continue at the same rapid pace as during 2000-2015. For most economies, however, the pace of structural change increases under the alternative scenario, with strong positive impacts on potential output in 2040, including GDP increases of 20% or more in several cases. 15 The regression is based on a simple model of eserv_it = α + β·eserv_kor,t−lag, where the lag is determined by the period with the best fit (27 years, 27 years, and 10 years for the PRC, Indonesia, and Malaysia, respectively) and where the lags are selected based on a rolling regression designed to find the lag year with the best fit.
16 More explicitly, a log linear regression of the Republic of Korea's agricultural employment share against time was run and then inverted to solve for the lag to be used by substituting economy x's agricultural share of employment for the Republic of Korea's in the equation, such that t_x = e^((eagr_x − α)/β).
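Footnote 16's inversion can be sketched directly. Assuming the fitted relationship is eagr_kor(t) = α + β·ln t (which is what the inverted formula implies; the function name is illustrative), the matching year solves as:

```python
import math

def korea_match_year(eagr_x, alpha, beta):
    """Invert the fitted eagr_kor(t) = alpha + beta * ln(t) to find the
    year t at which Korea's agricultural employment share equaled
    economy x's share eagr_x."""
    return math.exp((eagr_x - alpha) / beta)
```

A round-trip check: plugging a share generated from the fitted equation at year t back into the inversion recovers t.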
E. Overall Impact
Figure 9 presents the cumulative impacts of the four convergence scenarios examined. The alternative TFP scenario has the most consistently large impact, mainly because, in contrast to the labor market and capital-deepening scenarios, the TFP scenario assumes continued improvement every year. In the capital-deepening and labor market scenarios, once convergence with the Republic of Korea's levels has been achieved, no further improvement occurs and the contribution of convergence to annual potential growth along these dimensions falls to zero. While reforms in earlier years boost the level of potential output, they no longer contribute to raising the rate of growth of potential output in later years. Consistent with the Lewis (1954) turning point, the contribution to potential output growth of structural change is larger in less-developed economies because as income levels and economic structure converge, productivity gains from labor reallocation across sectors decline.
The structural change scenario yields large improvements in those economies where there has been relatively little structural change in recent years as the baseline scenario assumes structural change continuing at the average pace observed during 2000-2015.
VI. Concluding Remarks
In this paper, a relatively simple methodology for estimating potential output is presented and used to project potential output over the next 25 years. The first set of simulations, which roughly translate into a business-as-usual scenario, employs a set of baseline assumptions consistent with United Nations population projections; stable labor market efficiency; long-term trends of TFP growth; and a constant capital-output ratio, which implies capital growth consistent with the long-term equilibrium properties of the Cobb-Douglas production function and the pace of TFP and labor force growth.
These simulations reveal that, all things being equal, potential output growth across the 26 Asian economies in the sample would fall from a median of 4.6% per annum during 2010-2015 to 2.7% per annum by the period 2035-2040. In the baseline scenario, about 0.8 percentage points of the slowing is due to a decline in working-age population growth, while a stabilization of capital-output ratios at current levels would contribute a similar amount to the slowdown. TFP growth (net of structural change) in the baseline is assumed to stabilize at the average rate observed between 2000 and 2014. As a result, slower TFP growth contributes a relatively modest 0.2 percentage points to the slowdown, while assumed stagnation in the pace of structural change (the movement of labor from lower to higher productivity sectors) cuts 0.4 percentage points from potential output growth between the two periods.
The alternative scenarios presented illustrate the potential to improve on these results by implementing better polices that improve labor market efficiency, prompt more rapid capital deepening, boost structural change, and generate faster TFP growth. The specific policies that could yield these benefits at the economy level lie well outside of the scope of this paper. Nevertheless, the reported scenarios give a sense of the size of potential gains.
In particular, the scenarios suggest that raising TFP growth in economies in which TFP growth has been weak to the developing economy average of 2% per annum, and in all other economies by 0.5 percentage points per annum, could increase potential output in 2040 by between 11% and 24% over the baseline. Reducing unemployment rates and boosting labor force participation rates to the same levels as observed in the Republic of Korea in 2014 would raise end-of-period potential output by less than 6% for two-thirds of economies (as labor utilization rates are already relatively high in many economies), but could boost potential output by 10% or more in some South Asian economies where female labor participation rates are low. Capital deepening, either raising capital-output ratios by 2% per annum over an economy's average for the period 2000-2014 or attaining the sample's capital-output ratio average for the period 2000-2014, could add between 6% and 10% to potential output in most economies. In the alternative scenarios, the largest potential benefit comes from a pick-up in structural convergence toward the average rate observed in the Republic of Korea during the last 50 years. The removal of policies that may be restricting structural change, such as rural income support schemes and limits on rural-urban migration, could help boost potential output in 2040 in almost half of all economies by 10% or more above the baseline.
Virtual and Augmented Reality in Cardiac Surgery
Virtual and augmented reality can be defined as a three-dimensional, real-world simulation that allows the user to interact with it directly. Over the years, virtual reality has gained great popularity in medicine and is currently being adopted for a wide range of purposes. Owing to its dynamic anatomy, its constant drive toward less invasive procedures, and its push for innovation, cardiac surgery presents a unique environment for virtual reality. Despite substantial research limitations in cardiac surgery, the current literature has shown great applicability of this technology and promising opportunities.
INTRODUCTION
Virtual and augmented reality (VR and AR) can be defined as a three-dimensional (3D) real-world simulation allowing the user to interact with it directly (Figure 1) [1]. Through the integration of imaging data and input from users [2], VR delivers a 3D graphical output which can then be visualized through a wearable headset (Figure 2). Over the years, VR has gained great popularity in medicine and is currently being adopted for a wide range of purposes including medical education, stroke rehabilitation, and the teaching of surgical techniques, particularly laparoscopy [3,4]. Despite its great advances in numerous areas of medicine, the potential future impact of VR in cardiac surgery has not yet been extensively discussed, and the technology has not been formally integrated into the specialty. Nevertheless, owing to its dynamic anatomy, its constant drive toward less invasive procedures, and its push for innovation, cardiac surgery presents a unique environment for VR.
COMMENTS The Role of VR in Undergraduate and Postgraduate Cardiac Teaching
See one, do one, teach one: the method of "learning by doing" paired with on-the-job training remains one of the most popular approaches to surgical training. Surgeons have long used real-life scenario simulations, practical skills sessions, and video sessions [5] to improve their dexterity, and the desire for more precise and more engaging methods has kept growing. As work to develop robots and interactive technologies to support surgical performance progressed [6], VR and other active-engagement techniques emerged. Initially, they were expected simply to provide a 3D view and better insight into anatomy and procedures, but since the introduction of 3D visualisation many further uses of these techniques have been developed. Real-time interaction, support for less invasive procedures, and reliable assessment of surgical skills are only a few of the opportunities offered by VR training in medicine and surgery.
The Role of VR in Preoperative, Intraoperative, and Postoperative Cardiac Surgery
Perhaps one of the greatest potential future applications of VR systems in cardiac surgery will be their assistance and support for the much-desired shift from open sternotomy procedures to minimally invasive ones. Reducing patients' intraoperative trauma and allowing faster postoperative recovery have been priorities of cardiac surgery over the past decades, leading to the development of endoscopically- and robotically-assisted minimally invasive cardiac procedures [7]. Nevertheless, the shift towards minimally invasive cardiac surgery (MICS) has been taking place at a slow pace due to numerous limitations. These include its learning curve and the shortage of safe and structured training methods, the difficulty of locating ports so as to enable effective X-ray and angiography coverage, and the limited access to anatomical and surgical targets [8][9][10][11][12]. Indeed, even with wide-angled panoramic views, the use of an endoscope for the visualisation of complex anatomical structures in a 3D and dynamic environment proves complicated and can be considered a limitation of MICS. In light of these limitations, VR might offer unique opportunities to improve the visualisation of surgical targets and enhance the outcomes of beating-heart intracardiac surgery (Figure 3) [13,14].
Is VR and AR Implementation in Cardiac Surgery Cost-Effective?
Since the conception of VR and AR in healthcare settings, the cost of implementation has been a significant barrier to their use in surgery. This is particularly true for cardiac surgery, which requires VR and AR simulations to be of high fidelity and precision regardless of the purpose of the simulation. For this reason, the sheer processing power of a computer suitable for such VR simulations alone once rendered the technology poorly cost-effective [15]. However, over the past decade, the economic barriers to the use of VR and AR in surgical settings have diminished as cheaper technology with significantly stronger processing power becomes commercialized and the opportunities for VR to improve patient safety, surgical training, and audit quality become evident.
In particular, implementation costs for VR and AR technology in a surgical setting are becoming more affordable through the use and adaptation of commercially available, general-purpose hardware, such as recently developed VR headsets that provide high-quality visuals and realistic surgeon hand interactions [16]. It would not be unreasonable to assume that significant advances in reducing the cost of simulating cardiothoracic surgery can be made, particularly in comparison with almost a decade ago.
Moreover, VR has shown promising potential cost-effectiveness in presurgical and interventional planning in congenital cardiac surgery. Whilst in the short term VR incurs significant initial set-up and implementation costs relative to 3D-printed heart models, in the long term it has been suggested to control costs, reduce material wastage, and allow for a more immersive and detailed experience for the multidisciplinary team through improved depth perception and visualization [17]. In fact, it can also be argued that relatively similar start-up costs exist for 3D printing heart models, including the recruitment of appropriate technicians and specialists to facilitate the operation [18]. However, beyond the initial fixed set-up costs, VR may reduce costs compared with 3D printing, as the need for regular purchases of materials and disposal of plastic waste is removed [18].
Within surgical teaching, VR can eliminate the even greater costs of cadaveric and animal tissue models whilst providing a wider range of anatomical variation [19] . Furthermore, VR allows for repetitions of the learning experience, which not only improves the efficacy of the curricula, but it can also lead to
Fig. 3 -Augmented reality in cardiac surgery (by Sadeghi et al. [14] ). A and B) Coronary angiography with proximally calcified aneurysm and an occlusion of the left anterior descending artery (LAD) with collateral retrograde filling from the right coronary artery (RCA) and no abnormalities in the left circumflex (Cx) artery. C) Reconstructions of a computed tomography (CT) scan were made by rendering three-dimensional virtual reality (VR) images. D and E) Reconstruction of the CT scan. G-J) Immersive VR was used to plan for the insertion location of thoracoscopic ports (for left internal mammary arteries [LIMA] harvesting) and for determining the ideal location for anterior mini-thoracotomy.
further cost savings in the long-term. Particularly in regard to robotic surgery, which is associated with significant costs in its use and implementation, VR has been shown to be a valuable alternative to operating room learning sessions.
CONCLUSION
From our perspective, VR and AR bring opportunities to rapidly develop the field of cardiac surgery. VR has gained growing popularity and adoption in different medical and surgical fields, being embedded in everything from medical education to preoperative planning, operative assistance, and even postoperative support of patients. The drive for innovation in cardiac surgery has been growing over the past years in search of methods to maximize patient outcomes and quality of life and to improve training pathways for young surgeons. Although substantial research limitations persist in the field of VR and AR application to cardiac surgery, the current literature has shown great applicability of this technology and promising opportunities.
Remarks on numerical experiments of Allen-Cahn equations with constraint via Yosida approximation
We consider a one-dimensional Allen-Cahn equation with constraint from the viewpoint of numerical analysis. Our constraint is the subdifferential of the indicator function on a closed interval, which is a multivalued function; therefore, it is very difficult to perform numerical experiments on our equation. In this paper we approximate the constraint by its Yosida approximation and study the resulting approximating system numerically. In particular, we give criteria for the standard forward Euler method to yield stable numerical experiments for the approximating equation. Moreover, we present some numerical experiments for the approximating equation.
Introduction
In this paper, for each ε ∈ (0, 1] we consider the following Allen-Cahn equation with constraint from the viewpoint of numerical analysis: where 0 < T < +∞ and u ε 0 is a given initial datum. Also, ∂I [−1,1] (·) is the subdifferential of the indicator function I [−1,1] . More precisely, ∂I [−1,1] (·) is a set-valued mapping defined by (1.5). The Allen-Cahn equation was proposed to describe the macroscopic motion of phase boundaries. In the physical context, the function u ε = u ε (t, x) in (P) ε := {(1.1), (1.2), (1.3)} is the nonconserved order parameter that characterizes the physical structure. For instance, let v = v(t, x) be the local ratio of the volume of pure liquid relative to that of pure solid at time t and position x ∈ (0, 1), defined by where B r (x) is the ball in R with center x and radius r and |B r (x)| denotes its volume.
Note that the constraint ∂I [−1,1] (·) is a multivalued function. Therefore, it is very difficult to perform numerical experiments on (P) ε . Recently, Farshbaf-Shaker et al. [11] gave results on the limit, as ε → 0, of a solution u ε and of an element of ∂I [−1,1] (u ε ), called the Lagrange multiplier, for (P) ε . Moreover, Farshbaf-Shaker et al. [12] gave numerical experiments for (P) ε via the Lagrange multiplier in one space dimension for sufficiently small ε ∈ (0, 1]. They also considered an approximating method. In fact, for δ > 0, they used the following Yosida approximation (∂I [−1,1] ) δ (·) of ∂I [−1,1] (·) defined by: where [z] + is the positive part of z. For each δ > 0, they considered the following approximation problem of (P) ε : x ∈ (0, 1). Then, they gave the following numerical result for (P) ε δ by the standard explicit finite difference scheme applied to (P) ε δ (see [12, Remark 5.3]): From Figure 1, we easily see that we have to choose suitable constants ε, δ and mesh sizes of time △t and space △x in order to get stable numerical results for (P) ε δ . So, in this paper, for each ε > 0 and δ > 0, we give criteria for the standard explicit finite difference scheme to yield stable numerical experiments for (P) ε δ . To this end, we first consider the following ODE problem, denoted by (E) ε δ : Then, we give criteria to obtain stable numerical experiments for (E) ε δ , together with some numerical experiments for (E) ε δ . Moreover, we show criteria to obtain stable numerical experiments for the PDE problem (P) ε δ . Therefore, the main novelties of this paper are the following: (a) We give criteria for stable numerical experiments for the ODE problem (E) ε δ . Also, we give numerical experiments for (E) ε δ for sufficiently small ε ∈ (0, 1].
(b)
We give criteria for stable numerical experiments for the PDE problem (P) ε δ . Also, we give numerical experiments for (P) ε δ for sufficiently small ε ∈ (0, 1].
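For reference, the Yosida approximation (∂I [−1,1] ) δ (·) of the subdifferential ∂I [−1,1] (·) has the standard closed form (1/δ)([z − 1] + − [−1 − z] + ), i.e. (1/δ)(z − proj [−1,1] (z)); the short sketch below assumes this standard form:

```python
def yosida(z, delta):
    """Yosida approximation of the subdifferential of the indicator
    function of [-1, 1]: zero inside the interval, linear penalty of
    slope 1/delta outside it."""
    pos = max(z - 1.0, 0.0)   # [z - 1]^+ , active when z > 1
    neg = max(-1.0 - z, 0.0)  # [-1 - z]^+, active when z < -1
    return (pos - neg) / delta

# Inside [-1, 1] the approximation vanishes; outside it grows like
# (z - 1)/delta above 1 (or (z + 1)/delta below -1), recovering the
# hard constraint as delta -> 0.
print(yosida(0.5, 0.1))   # 0.0
print(yosida(1.2, 0.1))   # ~2.0
print(yosida(-1.3, 0.1))  # ~-3.0
```

As δ shrinks, the penalty slope 1/δ steepens, which is exactly why the time step of an explicit scheme must shrink with δ as well.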
The plan of this paper is as follows. In Section 2, we recall the solvability and convergence results for (E) ε δ . In Section 3, we consider (E) ε δ numerically. Then, we prove the main result (Theorem 3.1) corresponding to item (a) listed above. Also, we give numerical experiments for (E) ε δ for sufficiently small ε ∈ (0, 1] and δ ∈ (0, 1]. In Section 4, we recall the solvability and convergence results for (P) ε δ . In the final Section 5, we consider (P) ε δ from the viewpoint of numerical analysis. Then, we prove the main result (Theorem 5.1) corresponding to item (b) listed above. Also, we give numerical experiments for (P) ε δ for sufficiently small ε ∈ (0, 1] and δ ∈ (0, 1].
Notations and basic assumptions
Throughout this paper, we put H := L 2 (0, 1) with the usual real Hilbert space structure, and denote by (·, ·) H the inner product in H. Also, we put V := H 1 (0, 1) with the usual norm. In Sections 2 and 4, we use some techniques of proper (that is, not identically equal to infinity), l.s.c. (lower semi-continuous), convex functions and their subdifferentials, which are useful in the systematic study of variational inequalities. So, let us outline some notation and definitions. Let W be a real Hilbert space with inner product (·, ·) W . For a proper, l.s.c. and convex function ψ : The subdifferential of ψ is a possibly multivalued operator in W and is defined by For various properties and related notions of the proper, l.s.c., convex function ψ and its subdifferential ∂ψ, we refer to the monograph by Brézis [4].
, if the following conditions are satisfied: (ii) The following equation holds: We easily see that the problem (E) ε δ can be rewritten as in an abstract framework of the form: Therefore, applying the Lipschitz perturbation theory of abstract evolution equations (cf. [5,14,21]), we can show the existence of a solution u ε δ to (EP) ε δ on [0, T ] for each ε ∈ (0, 1] and δ ∈ (0, 1] in the sense of Definition 2.1. Thus, the proof of Proposition 2.1 has been completed. Next, we recall the convergence result of (E) ε δ as δ → 0. To this end, we recall a notion of convergence for convex functions, developed by Mosco [17]. Definition 2.2 (cf. [17]). Let ψ, ψ n (n ∈ N) be proper, l.s.c. and convex functions on a Hilbert space W . Then, we say that ψ n converges to ψ on W in the sense of Mosco [17] as n → ∞, if the following two conditions are satisfied: It is well known that the following lemma holds. Therefore, we omit the detailed proof. [17] (2.3) as δ → 0.
By Lemma 2.1 and the general convergence theory of evolution equations, we easily get the following result.
and u ε is the unique solution of the following problem (E) ε on [0, T ]:
Stable criteria and numerical experiments for (E) ε δ
In this section we consider (E) ε δ from the viewpoint of numerical analysis. Remark 3.1. Note from Proposition 2.2 that (E) ε δ is the approximating problem of (E) ε . Also note from (1.5) that the constraint ∂I [−1,1] (·) is a multivalued function. Therefore, it is very difficult to study (E) ε numerically.
In order to perform numerical experiments on (E) ε δ via the standard forward Euler method, we consider the following explicit finite difference scheme for (E) ε δ , denoted by (DE) ε δ : where △t is the mesh size of time and N t is the integer part of T /△t. We easily see that u n is the approximating solution of (E) ε δ at the time t = n△t. Clearly, the explicit finite difference scheme (DE) ε δ converges to (E) ε δ as △t → 0, since (DE) ε δ is the standard time discretization scheme for (E) ε δ . Here, we give an unstable numerical experiment of (DE) ε δ in the case when T = 0.002, ε = 0.003, δ = 0.01, the initial datum u ε 0 = 0.1 and the mesh size of time △t = 0.000001: From Figure 2, we easily see that we have to choose suitable constants ε, δ and a suitable mesh size of time △t in order to get stable numerical results for (DE) ε δ . Now, let us state our first main result in this paper, which is concerned with the criteria for stable numerical experiments of (DE) ε δ .
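The three stability regimes separated by the value δε 2 /(1 − δ) can be reproduced with a few lines of forward Euler iteration. The right-hand side f_δ(u) = u − (∂I [−1,1] ) δ (u) used below is an assumption consistent with the stationary value 1/(1 − δ) quoted in the text, and the parameters are illustrative rather than those used in the figures:

```python
def f_delta(u, delta):
    # Assumed Yosida-regularized right-hand side:
    # f_delta(u) = u - (1/delta) * ([u - 1]^+ - [-1 - u]^+)
    return u - (max(u - 1.0, 0.0) - max(-1.0 - u, 0.0)) / delta

def forward_euler(u0, eps, delta, dt, steps):
    # Standard explicit scheme u^{n+1} = u^n + (dt / eps^2) f_delta(u^n)
    u = u0
    for _ in range(steps):
        u += (dt / eps ** 2) * f_delta(u, delta)
    return u

eps, delta, u0 = 0.01, 0.1, 0.1
threshold = delta * eps ** 2 / (1.0 - delta)  # ~1.11e-5 for these parameters

# dt below the threshold: monotone convergence to 1/(1 - delta) ~ 1.1111
print(forward_euler(u0, eps, delta, 1.0e-5, 200))
# dt in (threshold, 2 * threshold): oscillating but still convergent
print(forward_euler(u0, eps, delta, 2.0e-5, 200))
# dt above 2 * threshold: the iterates keep oscillating without settling down
```

Linearizing the scheme at the fixed point gives the amplification factor 1 − (△t/ε²)(1 − δ)/δ, which stays in (−1, 1) exactly when △t < 2δε²/(1 − δ), matching the criteria discussed below.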
Also, by (3.2) and (3.5) we observe that f δ (u n ) ≤ 0 for all n ≥ 0. Therefore, we observe from (3.3) that Therefore, we infer from (3.5) and (3.8) that {u n ; n ≥ 0} is a bounded and increasing sequence with respect to n. Thus, there exist a subsequence {n k } of {n} and a point u ∞ ∈ R such that n k → ∞ as k → ∞ and By taking the limit in (3.3), we easily observe from the continuity of f δ (·) that u ∞ = 1/(1−δ), which is the zero point of f δ (·). Hence, taking into account the uniqueness of the limit point, the proof of (i) has been completed. Next, we show (ii). To this end, we assume that △t ∈ ( ) . Then, we can find the minimal number n 0 ∈ N so that Taking into account (3.11), u 0 ∈ (0, 1] and 1 + △t we can find the minimal number n 0 ∈ N so that u n0 > 1 and u i ∈ (0, 1] for all i = 0, 1, · · · , n 0 − 1. Also, by (3.4) we observe that thus, (3.10) holds.
To show (ii), we put Then, we observe from (3.2) and (3.3) that Therefore, we observe from (3.13) and τ ∈ (1, 2) that the zero point lies in the interval between u n0 and u n0+1 . Also, by (3.12) we observe that which implies that Therefore, by (3.10), (3.13) and by repeating the above procedure, we observe that for all n ≥ n 0 (3.14) and u n oscillates around the zero point 1/(1 − δ) for all n ≥ n 0 . Also, we observe from (3.12) and (3.14) that Therefore, by τ ∈ (1, 2), (3.14) and (3.15), there exists a subsequence {n k } of {n} such that u n k oscillates and converges to 1/(1 − δ) as k → ∞. Hence, taking into account the uniqueness of the limit point, the proof of (ii) has been completed.
Remark 3.3. By (3.3) we easily see that
By (ii) of Theorem 3.1, we observe that u n oscillates and converges to the zero point of f δ (·) in the case when △t ∈ ( . However, in the case when △t = 2δε 2 /(1 − δ), we have the following special case in which the solution to (DE) ε δ does not oscillate and coincides with the zero point of f δ (·) after a finite number of iterations.
Then, we easily observe that:
The case when △t = 0.000001
Now we consider the case when △t = 0.000001. In this case, we have: which implies that (i) of Theorem 3.1 holds. Thus, we have the following stable numerical result of (DE) ε δ . Namely, the solution to (DE) ε δ converges to the stationary
The case when △t = 0.000002
Now we consider the case when △t = 0.000002. In this case, we have: which implies that (ii) of Theorem 3.1 holds. Thus, we have the following numerical result of (DE) ε δ that the solution to (DE) ε δ oscillates and converges to the stationary Figure 4:
The case when △t = 0.000005
Now we consider the case when △t = 0.000005. In this case, we have: Therefore, we observe Remark 3.2. In fact, we have the following numerical result of (DE) ε δ that the solution to (DE) ε δ oscillates. (1 − δ). In this case, we observe Remark 3.2. In fact, we have the following numerical result of (DE) ε δ that the solution to (DE) ε δ oscillates between three zero points of f δ (·).
Numerical result of Corollary 3.1
In this subsection, we consider Corollary 3.1 numerically. To this end, we use the following initial datum: Then, we have the following numerical experiment of (DE) ε δ showing that Corollary 3.1 holds. Namely, we observe that (3.16) holds with n = 6: Figure 8:
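Under the same assumed right-hand side f_δ(u) = u − (∂I [−1,1] ) δ (u) (inferred from the stationary value 1/(1 − δ)), the special initial datum of Corollary 3.1 can be checked directly: with △t = 2δε²/(1 − δ) and u ε 0 = (1 − δ)^{n−1}/(1 + δ)^n, every Euler step taken while u ≤ 1 multiplies the iterate by (1 + δ)/(1 − δ), so the scheme lands exactly on the zero point 1/(1 − δ) after n iterations and then stays there. A sketch with n = 6 and illustrative ε, δ:

```python
def f_delta(u, delta):
    # assumed Yosida-regularized right-hand side of (DE)^eps_delta
    return u - (max(u - 1.0, 0.0) - max(-1.0 - u, 0.0)) / delta

n, eps, delta = 6, 0.01, 0.1
dt = 2.0 * delta * eps ** 2 / (1.0 - delta)        # the borderline step size
u = (1.0 - delta) ** (n - 1) / (1.0 + delta) ** n  # special initial datum
for step in range(1, 10):
    u += (dt / eps ** 2) * f_delta(u, delta)
    print(step, u)  # hits 1/(1 - delta) at step n = 6 and stays there
```

Once u equals 1/(1 − δ), f_δ vanishes, so no oscillation occurs even though △t sits exactly at twice the monotone-stability threshold.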
Conclusion of ODE problem (DE) ε δ
By Theorem 3.1 and numerical experiments as above, we conclude that (i) the mesh size of time △t must be smaller than δε 2 /(1 − δ) in order to get the stable numerical experiments of (DE) ε δ . (ii) we have the stable numerical experiments of (DE) ε δ with the initial data u ε 0 := (1 − δ) n−1 /(1 + δ) n , even if the mesh size of time △t is equal to 2δε 2 /(1 − δ).
The following variational identity holds: for all z ∈ V and a.e. t ∈ (0, T ).
Now, let us recall the solvability result of (P) ε δ on [0, T ]. Proposition 4.1. Let ε ∈ (0, 1] and δ ∈ (0, 1]. Assume the following condition: Also, we can show the existence of solutions to (P) ε δ on [0, T ] applying the abstract theory of evolution equations governed by subdifferentials. In fact, we define a functional φ ε δ on H by is the function defined in (2.1). Clearly, φ ε δ is proper, l.s.c. and convex on H with the effective domain D(φ) = V .
We easily see that the problem (P) ε δ can be rewritten as in an abstract framework of the form: Therefore, applying the Lipschitz perturbation theory of abstract evolution equations (cf. [5,14,21]), we can show the existence of a solution u ε δ to (PP) ε δ , hence, (P) ε δ , on [0, T ] for each ε ∈ (0, 1] and δ ∈ (0, 1] in the sense of Definition 4.1. Thus, the proof of Proposition 4.1 has been completed. Next, we recall the convergence result of (P) ε δ as δ → 0. Taking account of Lemma 2.1 (cf. (2.3)), we easily observe that the following lemma holds.
Then, φ ε δ (·) −→ φ ε (·) on H in the sense of Mosco [17] as δ → 0. By Lemma 4.1 and the general convergence theory of evolution equations, we easily get the following result. Proof. We easily observe that the problem (P) ε can be rewritten as in an abstract framework of the form: Therefore, by Lemma 4.1 and the abstract convergence theory of evolution equations (cf. [2,15]), we observe that the solution u ε δ to (PP) ε δ converges to the unique solution u ε to (PP) ε on [0, T ] as δ → 0 in the sense of (4.2). Note that u ε (resp. u ε δ ) is the unique solution to (P) ε (resp. (P) ε δ ) on [0, T ] (cf. Proposition 4.1). Thus, we conclude that Proposition 4.2 holds.
Stable criteria and numerical experiments for (P) ε δ
In this Section we consider (P) ε δ from the view-point of numerical analysis. Remark 5.1. Note from Proposition 4.2 that (P) ε δ is the approximating problem of (P) ε . Also note from (1.5) that the constraint ∂I [−1,1] (·) is the multivalued function. Therefore, it is very difficult to study (P) ε numerically.
In order to perform numerical experiments on (P) ε δ , we consider the following explicit finite difference scheme for (P) ε δ , denoted by (DP) ε δ : ε 2 for n = 0, 1, 2, · · · , N t and k = 1, 2, · · · , N x − 1, u n 0 = u n 1 , u n Nx = u n Nx−1 for n = 1, 2, · · · , N t , where △t is the mesh size of time, △x is the mesh size of space, N t is the integer part of T /△t, N x is the integer part of 1/△x and x k := k△x. We easily see that u n k is the approximating solution of (P) ε δ at the time t n := n△t and the position x k := k△x.
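A minimal realization of this explicit scheme can be sketched as follows. The reaction term f_δ(u) = u − (∂I [−1,1] ) δ (u) is an assumption consistent with the ODE analysis above, and the parameters are illustrative, not those of the paper's experiments:

```python
def f_delta(u, delta):
    # assumed Yosida-regularized reaction term
    return u - (max(u - 1.0, 0.0) - max(-1.0 - u, 0.0)) / delta

def step(u, dt, dx, eps, delta):
    """One forward Euler step of the discretized PDE, with the discrete
    homogeneous Neumann condition u_0 = u_1, u_Nx = u_{Nx-1}."""
    new = u[:]
    for k in range(1, len(u) - 1):
        lap = (u[k + 1] - 2.0 * u[k] + u[k - 1]) / dx ** 2
        new[k] = u[k] + dt * (lap + f_delta(u[k], delta) / eps ** 2)
    new[0], new[-1] = new[1], new[-2]
    return new

eps, delta, dx, dt = 0.01, 0.1, 0.1, 1.0e-5
u = [0.1] * 11   # constant initial data: the PDE then reduces to the ODE case
for _ in range(400):
    u = step(u, dt, dx, eps, delta)
# every grid value approaches the stationary value 1/(1 - delta)
```

With constant initial data the discrete Laplacian vanishes, so this run exercises only the reaction term; spatially varying data would additionally require the usual diffusion restriction △t ≲ △x²/2.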
From Figure 1, we easily see that we have to choose suitable constants ε, δ, a suitable mesh size of time △t and a suitable mesh size of space △x in order to get stable numerical results for (DP) ε δ . Now, let us state our second main result in this paper, which is concerned with the stability of (DP) ε δ .
By the similar arguments as above, we observe that the function is non-negative and continuous. In fact, it follows from (3.2) that the function attains a minimum value at z = −1. Therefore, we observe from (3.2) and (5.1) that . (5.11) Also, for any z ∈ [−1/(1 − δ), −1], we observe from (3.2) that Here we note from (5.1) that is non-decreasing and attains a minimum value at z = −1/ (1 − δ). Hence, we have: ] .
(5.13)
Thus, we observe from (5.11) and (5.13) that which implies from (5.3) and (5.10) that Taking into account the Neumann boundary condition, namely, u n+1 and u n+1 Nx = u n+1 Nx−1 , we observe from (5.9) and (5.14) that which implies that (5.3) holds for i = n + 1. Therefore, we conclude by mathematical induction that (5.2) holds. Hence, the proof of (i) of Theorem 5.1 has been completed. Next, we show (ii) by standard arguments. Namely, we reformulate (DP) ε δ in the following form: Here, taking into account the Neumann boundary condition and the initial condition, namely, u n 0 = u n 1 and u n Nx = u n Nx−1 for all n ≥ 0, we observe that (5.15) can be reformulated in the following form: where we put Noting from (3.2) that By using the matrix above, we observe that (5.15) can be rewritten in the following form: where we put By the general theory, we observe that the eigenvalue λ j of the matrix A is given by: and (5.24) Therefore, we observe from (5.23)-(5.24) that max 1≤k≤Nx−1 |u n k | is increasing with respect to n in the case when max 1≤k≤Nx−1 |u n k | ≤ 1. However, if u n k / ∈ [−1, 1] for some k = 1, 2, · · · , N x − 1, it follows from (5.1) and (5.18) that Therefore, the sum of all components in the k-th row of A + B is the following: 1] for some k = 2, 3, · · · , N x − 2. (5.26) Although max 1≤k≤Nx−1 |u n k | is increasing with respect to n in the case when max 1≤k≤Nx−1 |u n k | ≤ 1 (cf. (5.23)-(5.24)), we conclude from (5.22) and (5.25)-(5.26) that (ii) of Theorem 5.1 holds. Thus, the proof of Theorem 5.1 has been completed.
Taking Theorem 5.1 into account, we give numerical experiments of (DP) ε δ as follows. To this end, we use the following numerical data: Numerical data of (DP) ε δ .
Also, we consider the following initial datum u ε 0 (x) defined by Therefore, we have c 0 δε 2 1 − δ = 0.00001515151515 · · · > △t, which implies that the criterion (5.1) holds. Thus, we have the following stable numerical experiment of (DP) ε δ : Therefore, we have c 0 δε 2 1 − δ = 0.00000029696969 · · · < △t, which implies that the criterion (5.1) does not hold. Therefore, we have the following unstable numerical experiment of (DP) ε δ : which implies that the criterion (5.1) holds. Therefore, we have the following stable numerical experiment of (DP) ε δ : Remark 5.4. We observe from Theorem 5.1 that in order to get stable numerical results for (DP) ε δ , we have to choose suitable constants ε, δ and mesh sizes of time △t and space △x. Therefore, to perform a numerical experiment on (P) ε for sufficiently small ε, it is preferable to treat the original problem (P) ε by a primal-dual active set method as in [3], a Lagrange multiplier method as in [12], and so on.
Conclusion of PDE problem (DP) ε δ
By Theorem 5.1 and the numerical experiments above, we conclude that the mesh size of time △t and the mesh size of space △x must satisfy the stated condition for some constant c 0 ∈ (0, 1) in order to get stable numerical experiments of (DP) ε δ . Also, by Theorems 3.1 and 5.1, we conclude that the value δε 2 /(1 − δ) is very important for numerical experiments of (DE) ε δ and (DP) ε δ .
Maintaining Connectivity of MANETs through Multiple Unmanned Aerial Vehicles
Recently, Unmanned Aerial Vehicles (UAVs) have emerged as relay platforms to maintain the connectivity of ground mobile ad hoc networks (MANETs). However, existing deployment methods have not considered the situation in which some UAVs are already deployed in the field. In this paper, we study a problem that jointly addresses the motion control of existing UAVs and the deployment of new UAVs, so that the number of newly deployed UAVs required to maintain the connectivity of ground MANETs is minimized. We first formulate the problem as a Minimum Steiner Tree problem with Existing Mobile Steiner points under Edge Length Bound constraints (MST-EMSELB) and prove the NP-completeness of this problem. Then we propose three Existing UAVs Aware (EUA) approximate algorithms for the MST-EMSELB problem: Deploy-Before-Movement (DBM), Move-Before-Deployment (MBD), and Deploy-Across-Movement (DAM). Both the DBM and MBD algorithms decouple the joint problem and solve the deployment and movement problems one after another, while the DAM algorithm optimizes the deployment and motion control problems crosswise and solves them simultaneously. Simulation results demonstrate that all EUA algorithms outperform the non-EUA algorithm, and that the DAM algorithm performs better than the MBD and DBM ones in all scenarios. Compared with the DBM algorithm, the DAM algorithm can reduce the number of new UAVs by up to 70%.
Introduction
Unmanned Aerial Vehicles (UAVs) have emerged as promising relay platforms to improve networking performance (such as connectivity and throughput) for ground mobile ad hoc networks (MANETs) [1]. UAVs have some unique characteristics suitable for the relaying task. Firstly, the motion flexibility of UAVs can expand the scope of ground-based networks, especially in scenarios with obstacles. Secondly, UAVs can communicate with ground nodes in the line of sight, which can improve the link capacity between ground nodes. Last but not least, UAVs are often integrated with Communication, Computation, and Control (3C) systems and various sensors, so that UAVs can be aware of the environment and control their motion adaptively. The adaptability of UAVs makes them suitable for providing relay service to MANETs that have dynamic network topology.
A variety of efforts have been made to explore the benefits of using UAVs as communication relays for ground MANETs. Some works optimize the deployment of UAVs to improve the connectivity of ground nodes. Chandrashekar et al. presented a method for deploying the minimum number of UAVs to connect a disconnected MANET [2]. Other works study the motion control of UAVs to improve the link capacity of ground nodes. Jiang and Swindlehurst proposed a UAV heading control algorithm that can maximize the link capacity of the ground-to-air uplink channel using a multiantenna UAV [3]. Dixon and Frew proposed a motion control method that uses chains of UAVs to improve the link capacity between two isolated ground nodes [4]. Some work considers both the deployment and motion control of UAVs. For example, Han et al. considered both the deployment and motion control of a single UAV to improve the connectivity of ground MANETs [1].
However, existing works on the deployment of UAVs have not considered the situation in which some UAVs have already been deployed in the field. With the movement of ground MANETs, existing UAVs may fail to connect all ground nodes. New UAVs need to be supplied to maintain the connectivity of ground MANETs. In order to minimize the number of newly added UAVs, both the movement of existing UAVs and the deployment of new UAVs need to be considered. This is a joint optimization problem that optimizes both the deployment and motion control of multiple UAVs.
We consider the usage of existing UAVs by moving them to proper positions so that the number of new UAVs needed can be reduced. The existing UAVs have a limited motion range that depends on their speed. In order to support bidirectional communication between UAVs and ground nodes, we assume that UAVs have the same communication range as ground nodes. Figure 1 shows an example of how motion control of existing UAVs may reduce the number of needed new UAVs. Suppose that there are two ground nodes and two existing UAVs deployed in the field. Since the distance between the two ground nodes is larger than their communication range, the ground MANET is separated into two parts, as shown in Figure 1(a).
In order to maintain the connectivity of ground MANETs, methods that do not consider existing UAVs, such as [2], will add new UAVs to connect the partitioned parts, as shown in Figure 1(b). Here a new UAV is added and deployed in the middle of the two ground nodes, so these two ground nodes can now communicate with each other with the help of the new UAV. If we do not consider existing UAVs, at least one additional UAV needs to be deployed to maintain the connectivity of ground MANETs.
To reduce capital expenditure, users will try their best to reduce the number of newly deployed UAVs. In other words, they will exploit the existing UAVs instead of ignoring them. By moving existing UAVs to proper positions and using them as relays, the connectivity of ground MANETs can be improved. Figure 1(c) shows one proper movement: move the two existing UAVs directly towards the line connecting the two ground nodes until the distance between each existing UAV and one ground node is less than the communication range. Then a communication link is set up between the two ground nodes. Thus the connectivity of the ground MANET is maintained and no new UAVs need to be deployed.
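For the collinear two-node case of Figure 1, the number of new UAVs needed when existing UAVs are ignored follows a simple hop count. The helper below is an illustrative sketch of that baseline, not one of the paper's algorithms:

```python
import math

def new_uavs_needed(d, r):
    """Minimum number of relay UAVs placed on the segment between two
    ground nodes at distance d, when every hop is limited to range r."""
    if d <= r:
        return 0  # the nodes can already communicate directly
    return math.ceil(d / r) - 1

print(new_uavs_needed(0.8, 1.0))  # 0: a direct link exists
print(new_uavs_needed(1.5, 1.0))  # 1: one relay in the middle suffices
print(new_uavs_needed(2.5, 1.0))  # 2
```

Moving an existing UAV into one of these relay positions removes it from the count, which is exactly the saving the EUA algorithms exploit.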
In this paper, we study the joint optimization problem of deployment and motion control of multiple UAVs so that the number of newly added UAVs can be minimized. We first formulate this problem as a Minimum Steiner Tree problem with Existing Mobile Steiner nodes under Edge Length Bound constraints (MST-EMSELB) and prove the NP-completeness of the problem. Then we present a non-Existing UAVs Aware (non-EUA) algorithm and propose three Existing UAVs Aware (EUA) polynomial-time approximation algorithms: Deploy-Before-Movement (DBM), Move-Before-Deployment (MBD), and Deploy-Across-Movement (DAM). The first two algorithms decouple the joint problem into the deployment problem of new UAVs and the motion control problem of existing UAVs. The DBM algorithm optimizes the deployment of new UAVs before the movement of existing UAVs, and the MBD algorithm solves the problem in the opposite order. The DAM algorithm is a mixed algorithm that solves the movement and deployment problems crosswise. Simulation experiments show that all EUA algorithms outperform the non-EUA algorithm in terms of the number of new UAVs. The DAM algorithm is always better than the DBM and MBD algorithms and can improve the performance by up to 70% compared with the DBM algorithm.
The main contributions of this paper are as follows. (3) We compare the performance of the proposed algorithms with non-EUA algorithms in a simulation environment and demonstrate the effectiveness of the proposed algorithms.
The rest of this paper is organized as follows. Section 2 presents some related work. We present the system model in Section 3 and formulate the problem in Section 4. In Section 5, we present three polynomial-time approximation algorithms for the problem. We demonstrate the performance of the proposed algorithms through simulation in Section 6. Section 7 concludes this work.
Related Work
Related works lie in two research fields: static relay deployment problem and mobile relay motion control problem.
Static Relay Deployment Problem.
Static relay deployment problems have been widely studied in wireless sensor networks (WSN). In WSN, it is hard to recharge the sensors after deployment. Due to the unbalanced load of message routing, some sensors will run out of energy before others. Then network partition happens, and the whole network may become unavailable even though some sensors are still alive.
To extend the lifetime of WSN, some studies propose deploying static relay nodes in the field. Lloyd and Xue studied the problem of deploying the minimum number of relay nodes so that, for each pair of sensor nodes, there is a connecting path consisting of relay and/or sensor nodes [5]. They proved that the problem is NP-complete and proposed a polynomial-time 7-approximation algorithm for it. Zhang et al. studied four related fault-tolerant relay node placement problems, discussed their computational complexity, and presented a polynomial-time O(1)-approximation algorithm with a small approximation ratio [6]. Based on previous research, Misra et al. studied constrained versions of the relay node placement problem, where relay nodes can only be placed at a set of candidate locations [7]. They also proposed a polynomial-time O(1)-approximation algorithm for the problem. Lee and Younis extended the usage of relay nodes to federating disjoint segments of WSN and proposed a distributed cell-based optimized relay node placement (CORP) algorithm [8]. Marinho et al. studied the use of UAVs and cooperative multiple-input multiple-output (MIMO) techniques to keep the WSN connected [9].
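The steinerized-MST idea underlying several of the cited approximations (such as Lloyd and Xue's) can be sketched as follows: build a Euclidean minimum spanning tree over the ground nodes, then subdivide every edge longer than the communication range r with evenly spaced relays. This simplified version is an illustration of the idea, not the cited 7-approximation itself:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph over `points`;
    returns edges as (i, j, length) tuples."""
    n = len(points)
    in_tree = [True] + [False] * (n - 1)
    edges = []
    for _ in range(n - 1):
        best = None
        for i in range(n):
            if not in_tree[i]:
                continue
            for j in range(n):
                if in_tree[j]:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        in_tree[j] = True
        edges.append((i, j, d))
    return edges

def place_relays(points, r):
    """Evenly subdivide each MST edge so that no hop exceeds r."""
    relays = []
    for i, j, d in mst_edges(points):
        k = max(0, math.ceil(d / r) - 1)  # relays needed on this edge
        (x1, y1), (x2, y2) = points[i], points[j]
        for m in range(1, k + 1):
            t = m / (k + 1)
            relays.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return relays

# Three collinear nodes 2.5 apart with range 1 need 2 relays per MST edge.
print(len(place_relays([(0, 0), (2.5, 0), (5, 0)], 1.0)))  # 4
```

The MST-EMSELB formulation studied in this paper generalizes this picture by letting some Steiner points (the existing UAVs) move within a bounded radius before the subdivision is chosen.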
Mobile Relay Motion Control Problem.
Similar to static relays, mobile relays were also proposed to extend the lifetime of WSN. But mobile relays often have rich resources and are able to move in the field. Thus mobile relays are more flexible than static relays and can be reused in different positions.
For WSN, the main purpose of using mobile relays is to save the energy consumption of sensors and to extend the lifetime of the network. Wang et al. first studied the performance of a large dense network with one mobile relay and proposed a joint mobility and routing algorithm which can yield a network lifetime close to the upper bound [10]. Venkateswaran et al. proposed a novel relay deployment framework that utilizes mobility prediction of MANET nodes to optimally define the movement of the relay nodes [11]. Their simulation results indicate significant energy savings. El-Moukaddem et al. studied the problem of using mobile relays in data-intensive WSN to save the energy consumption of the whole network [12]. They consider the energy consumption of both mobility and wireless transmissions.
As an airborne platform for mobile relays, UAVs have been introduced into WSNs, MANETs, and other kinds of ground networks to improve the connectivity or link capacity between ground nodes. Since UAVs have relatively high mobility, sensing capability, and self-controllability, they are especially suitable for MANETs with dynamic topologies.
Chandrashekar et al. considered the problem of providing full connectivity to disconnected ground MANET nodes by dynamically placing Unmanned Aerial Vehicles (UAVs) to act as relay nodes [2], but they did not consider the mobility control problem of the deployed UAVs. Jiang and Swindlehurst considered using a multiantenna UAV to connect a collection of single-antenna ground nodes [3]. By dynamically adjusting the UAV heading, they maximize the approximate sum rate of the ground-to-air uplink channel. Han et al. considered using one UAV to improve the connectivity of ground-based wireless ad hoc networks [1]. The location and movement of the UAV are optimized to improve four types of network connectivity, including global message connectivity, worst-case connectivity, network bisection connectivity, and -connectivity. Both of these works consider using only a single UAV.
With the development of UAV manufacturing technology, UAVs have become smaller and cheaper. It is therefore possible to use a team of UAVs to provide network connections for ground nodes or to improve their link capacity. Zhan et al. investigated a communication system in which UAVs are used as relays between ground-based terminals and a network base station [13]. They developed an algorithm for optimizing the performance of the ground-to-relay links through control of the UAV heading angle. Cetin and Zagli studied UAV motion control to achieve a continuous long-range communication relay infrastructure [14]. They proposed a novel dynamic approach to maintain the communication between vehicles; besides dynamically keeping vehicles in range and in an appropriate position to maintain the communication relay, their artificial potential field based path planning also provides collision avoidance. Ponda et al. presented a cooperative distributed planning algorithm that ensures network connectivity for a team of heterogeneous agents operating in dynamic and communication-limited environments [15]. The algorithm predicts the network topology and proposes relay tasks to repair connectivity violations. Dixon and Frew considered using chains of UAVs to improve the connectivity of two isolated ground nodes [4]. The mobility of the UAVs is controlled to maximize the communication link capacity of the end-to-end connection, but they assume there are only two nodes in the ground MANET.
System Model
We assume a scenario in which multiple UAVs are used to maintain the connectivity of ground MANETs and some UAVs have already been deployed in the field. However, due to the movement of ground nodes and the limited communication range, the existing UAVs are not able to connect all ground nodes. Thus we need to add new UAVs to maintain the full connectivity of the MANETs. The system model is shown in Figure 2, in which vehicles represent ground nodes. As we can see, two UAVs are already deployed in the field, but, due to the long distance between vehicles, the ground network is partitioned into two parts. In order to keep full connectivity, a new UAV is added as a relay that connects these two partitioned subnetworks.
Mobility Model of UAVs.
We assume the UAVs used in this paper are small four-rotor UAVs. A four-rotor UAV can hover at a constant position, fly directly up and down, and spin 360 degrees about its own axis with zero turning radius. To simplify the system, we assume all UAVs fly at different altitudes, so collision avoidance between UAVs need not be considered in this paper.
Since four-rotor UAVs are small and battery-powered, their velocity is limited. Because ground nodes are continuously moving, the tasks of motion control for existing UAVs and deployment of new UAVs must be finished within a given deadline; this requirement is especially important in some military scenarios. We therefore assume in this paper that the mobility of existing UAVs is constrained: they can move in any direction, but the distance between their new and current positions must not exceed a constant length, which we call the motion range.
Communication Model.
Connectivity represents the communication capability between nodes in a network. Here, the link capacity is used to represent connectivity: maintaining the connectivity of ground MANETs means keeping the link capacity above a given threshold. When the link capacity between two nodes is higher than the threshold, the two nodes are connected; otherwise, they are disconnected.
Link capacity is the upper bound of the data rate when transmitting data. According to the Shannon equation, the link capacity can be computed as

C = B log2(1 + SNR),     (1)

where C is the link capacity in b/s, B is the bandwidth, and SNR is the signal-to-noise ratio. The signal-to-noise ratio (SNR) is the ratio of the received signal power to the noise power. The SNR at node i from node j is defined as [16]

SNR_ij = P d_ij^(-α) / (N_0 B),     (2)

where P is the sending power, d_ij is the distance between nodes i and j, α is the path loss exponent, and N_0 is the power density of the noise; here we consider Gaussian white noise, so N_0 = 4 × 10^-21 W/Hz. From (1) and (2) we can see that two factors affect the connectivity. One factor is the distance between two nodes: the larger the distance, the weaker the link capacity. When the distance is large enough, the link capacity drops below the given threshold and the two nodes become disconnected. The other factor is the path loss, which reflects the signal attenuation of the environment. Different environments have different path loss: a signal loses more power when transmitted near the ground than in the air. So at the same distance, the link capacity between two nodes is higher when both nodes are in the air than when they are on the ground.
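To make the two formulas concrete, the following sketch computes the link capacity from the Shannon equation and the SNR model above. The transmit power, bandwidth, and path loss exponent in the example are illustrative assumptions, not values taken from the paper.

```python
import math

N0 = 4e-21  # Gaussian white noise power density in W/Hz, as stated in the text

def snr(p_send, dist, alpha, bandwidth):
    # Equation (2): SNR = P * d^(-alpha) / (N0 * B)
    return (p_send * dist ** (-alpha)) / (N0 * bandwidth)

def link_capacity(bandwidth, snr_value):
    # Equation (1), Shannon capacity: C = B * log2(1 + SNR), in bit/s
    return bandwidth * math.log2(1 + snr_value)

# Illustrative parameters (assumed): 0.1 W transmit power, 1 MHz bandwidth,
# path loss exponent 3 for near-ground propagation.
near = link_capacity(1e6, snr(0.1, 100.0, 3.0, 1e6))
far = link_capacity(1e6, snr(0.1, 200.0, 3.0, 1e6))
```

Doubling the distance lowers the capacity, and lowering the path loss exponent (as for air-to-air links) raises it, matching the two factors discussed above.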
In this paper, we define the connectivity between any two nodes as a binary variable: if the distance between two nodes is not more than a constant length, we assume the two nodes are connected; otherwise, they are disconnected. We call this constant length the communication range. The communication range between two ground nodes is smaller than the communication range between two UAVs or between a UAV and a ground node.
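Under this binary model, a connectivity check reduces to a distance comparison against the range appropriate for the node pair. The sketch below uses hypothetical range values; the only property it relies on from the text is that links involving a UAV have a larger range than ground-to-ground links.

```python
import math

def connected(p, q, comm_range):
    # Binary connectivity: two nodes are linked iff their distance is within range
    return math.dist(p, q) <= comm_range

# Assumed example ranges (hypothetical values): the ground-to-ground range is
# smaller than any range involving a UAV, as required by the model above.
R_GROUND = 500.0   # m, ground node to ground node
R_AIR = 1000.0     # m, UAV-to-UAV or UAV-to-ground

def pair_range(a_is_uav, b_is_uav):
    return R_AIR if (a_is_uav or b_is_uav) else R_GROUND
```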
Problem Formulation
In our definitions, we assume that the current positions of all ground nodes and existing UAVs are known. We also assume there are no obstacles that affect the mobility of UAVs or their transmissions. Our problem can be described as follows: given a set of ground nodes and a set of existing UAVs, we want to find new positions for the existing UAVs and positions for newly added UAVs that form a tree spanning all ground nodes, such that the number of newly added UAVs is minimized. There are two constraints. One is that the distance between the new position and the current position of each existing UAV is not more than a given motion range. The other is that the length of each edge in the tree is not more than a given communication range. Given the number of newly added Steiner points and a topology specifying the edges of the final tree, a bottleneck tree under this topology and the edge bound constraint can be computed in polynomial time [19], and the mobility constraint of the mobile Steiner points can be checked in polynomial time. Therefore the MST-EMSELB problem belongs to the class NP. Combined with Theorem 1, this proves Theorem 2; a detailed proof of Theorem 1 can be found in our previously published paper [20].
Heuristic Solution
As previously mentioned, the MST-EMSELB problem is NP-complete; thus we try to find polynomial time approximation algorithms for it. In this section, we first present existing methods and then propose three heuristic algorithms for the MST-EMSELB problem.
5.1. Non-EUA Algorithm. Currently, there are no algorithms designed specifically for the MST-EMSELB problem. The most closely related problem is the MST-EMS problem. When we instantiate the MST-EMS problem in the scenario of using UAVs to maintain the connectivity of MANETs, it becomes the problem of using new UAVs to maintain connectivity without considering existing UAVs.
Lin and Xue presented a minimum spanning tree (MST) based heuristic algorithm for the MST-EMS problem whose worst-case approximation ratio is 4 [18]. The MST heuristic algorithm first generates a minimum spanning tree over the terminal points. It then divides each edge e of the tree into pieces of length at most the edge bound R by inserting ⌈len(e)/R⌉ − 1 degree-2 Steiner points, so that all pieces of edge e have equal length; here len(e) is the Euclidean length of edge e.
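The edge-splitting step can be sketched directly from the formula: for an edge of length len(e) and bound R, ⌈len(e)/R⌉ − 1 degree-2 Steiner points split it into equal pieces of length at most R.

```python
import math

def relays_needed(edge_len, bound):
    # ceil(len(e)/R) - 1 degree-2 Steiner points make every piece <= R
    return max(0, math.ceil(edge_len / bound) - 1)

def split_edge(p, q, bound):
    # Evenly spaced relay positions along segment p-q (all pieces equal length)
    k = relays_needed(math.dist(p, q), bound)
    return [(p[0] + (q[0] - p[0]) * i / (k + 1),
             p[1] + (q[1] - p[1]) * i / (k + 1)) for i in range(1, k + 1)]
```

For a 1000 m edge and a 400 m bound, two relays split the edge into three equal pieces of about 333 m, each within the bound.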
Since no mobile Steiner points are considered, the MST heuristic algorithm cannot be used directly for the MST-EMSELB problem; here we take the Lin and Xue method only as a baseline for comparison. Since this method does not consider the existing UAVs, we call it the non-Existing-UAVs-Aware (non-EUA) algorithm. The non-EUA algorithm simply computes the minimum number of new UAVs needed to connect all ground nodes; none of the existing UAVs is reused for connecting the ground MANETs. So the number of new UAVs computed by non-EUA should be an upper bound for the Existing-UAVs-Aware algorithms. The non-EUA algorithm is shown in Algorithm 1.
DBM Algorithm.
This is a joint optimization problem with two sets of variables: the new positions of the existing UAVs and the positions of the newly added UAVs. We therefore decouple the MST-EMSELB problem into two subproblems, the movement control problem of existing UAVs and the deployment problem of new UAVs, and solve them one after the other. The first algorithm we propose is the Deploy-Before-Movement (DBM) algorithm, which first optimizes the deployment of new UAVs and then optimizes the movement control of existing UAVs.
The main idea of the DBM algorithm is as follows. First we use the non-EUA algorithm to generate candidate positions for newly added UAVs without considering the existing UAVs. Then we match existing UAVs with these candidate positions. A match between an existing UAV and a candidate position means that the corresponding new UAV will be replaced by the existing UAV, which moves to the candidate position. Since the motion range of existing UAVs is limited, the set of possible matches is also constrained. We use the Hungarian algorithm to find a maximum matching so that the number of needed new UAVs is minimized. The DBM algorithm is shown in Algorithm 2.
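The matching step can be sketched as a maximum-cardinality bipartite matching: an existing UAV may be matched to a candidate position only if that position lies within its motion range, and every match saves one new UAV. The paper names the Hungarian algorithm; the augmenting-path routine below (Kuhn's algorithm) is one simple way to compute a maximum matching, and the feasibility sets are illustrative.

```python
def max_matching(feasible, n_candidates):
    """feasible[i]: candidate-position indices reachable by existing UAV i
    (i.e. within its motion range). Returns the maximum number of matches."""
    match = [None] * n_candidates  # candidate position -> matched UAV

    def try_assign(u, seen):
        for c in feasible[u]:
            if c in seen:
                continue
            seen.add(c)
            # take a free position, or re-route its current holder elsewhere
            if match[c] is None or try_assign(match[c], seen):
                match[c] = u
                return True
        return False

    return sum(try_assign(u, set()) for u in range(len(feasible)))
```

Each match replaces one planned new UAV with a moved existing UAV, so the number of new UAVs needed is the number of candidate positions minus the matching size.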
Although the DBM algorithm can utilize the mobility of existing UAVs, the candidate positions for moved existing UAVs are limited to the positions computed by the non-EUA algorithm. Due to the motion range limitation, existing UAVs may not be able to reach these candidate positions. Figure 3 shows a worst case for the DBM algorithm. The scenario is similar to the example shown in Figure 1, but the motion range of the existing UAVs is constrained. In Figure 3(a), the radius of the motion range of the existing UAVs is given. When using the DBM algorithm, the candidate position of the new UAV is the middle point between the two ground nodes, as shown in Figure 3(b). The distance from each existing UAV to the candidate position is larger than the motion range, so neither of the two existing UAVs can be matched to the candidate position, and the minimum number of new UAVs computed by the DBM algorithm is 1.
However, the best solution for this case is shown in Figure 3(c): both existing UAVs move directly towards the line segment between the two ground nodes until they reach their motion range limit. A communication link is then set up between the two ground nodes using the two existing UAVs as relays, and no new UAVs are needed. This worst case indicates that the DBM algorithm cannot make the best use of existing UAVs, due to its suboptimal motion control.
MBD Algorithm.
As previously mentioned, we decouple the joint optimization problem into two subproblems: the deployment of new UAVs and the motion control of existing UAVs. The DBM algorithm first solves the deployment problem and then the motion control problem. However, due to its suboptimal motion control, the DBM algorithm encounters worst cases in which none of the existing UAVs can be reused. So we reverse the order and propose the Move-Before-Deployment (MBD) algorithm, which first solves the movement problem and then the deployment problem.
The main idea of the MBD algorithm is as follows. First, we use a heuristic function to generate new positions for the existing UAVs. Then we merge the set of ground nodes and the set of existing UAVs at their new positions into one combined node set. After that, we generate a minimum spanning tree over this combined set, and the existing-UAVs cut process removes all degree-1 existing UAVs from the tree until every existing UAV in the tree has degree at least 2. For the remaining subtree, new UAVs are added to every edge whose length exceeds the communication range. (The listing of Algorithm 2, which appeared here, takes as input the positions of the ground nodes, the positions of the existing UAVs, the motion range, and the two communication ranges; it builds a cost matrix whose entry for an existing UAV and a candidate position is their distance if it is within the motion range and +∞ otherwise, and outputs the new positions of the existing UAVs and the positions of the newly added UAVs.)
The heuristic function used to generate new positions for the existing UAVs affects the performance of the whole MBD algorithm. We find that the DBM algorithm is an ideal heuristic for generating these positions if the motion constraints of the UAVs are relaxed: when the motion constraints are relaxed, the existing UAVs can move to any position, so every existing UAV can match any candidate position of a new UAV. Since these candidate positions are computed by the non-EUA algorithm, which minimizes the number of new UAVs needed to maintain the connectivity of the ground MANETs without considering existing UAVs, the DBM algorithm can then find the best positions for the existing UAVs. The MBD algorithm is shown in Algorithm 3.
In the MBD algorithm, the existing-UAVs cut process is important and necessary for minimizing the number of newly added UAVs. The listing of Algorithm 3 (MBD) can be summarized as follows.
Input: the positions of the ground nodes, the positions of the existing UAVs, the motion range, the communication range between ground nodes, and the communication range between ground nodes and UAVs.
Output: the new positions of the existing UAVs and the positions of the newly added UAVs.
(1) initialize the output sets;
(2) compute new positions of the existing UAVs using the DBM algorithm;
(3) generate a complete graph over the ground nodes and the moved existing UAVs;
(4) compute a minimum spanning tree of this graph;
(5) while true do
(6)   initialize the set of degree-1 existing UAVs and the set of their incident edges;
(7)   for each vertex v of the tree do
(8)     if v is an existing UAV and degree(v) == 1 then
(9)       add v and its incident edge to the two sets;
(10)    end if
(11)  end for
(12)  if the set of degree-1 existing UAVs is not empty then
(13)    remove these vertices and edges from the tree;
(14)  else
(15)    break;
(16)  end if
(17) end while
(18) add new UAVs to the remaining edges that are longer than the communication range.
When constructing the minimum spanning tree, all existing UAVs are treated as terminals. However, an existing UAV with degree 1 in the tree is a leaf and should not be considered when adding new UAVs in the next step. So we recursively cut degree-1 existing UAVs until every existing UAV has degree at least 2, after which the number of newly added UAVs can be minimized.
The existing-UAVs cut process is illustrated in Figure 4, in a scenario with three ground nodes and four existing UAVs. The minimum spanning tree generated over all ground nodes and existing UAVs is shown in Figure 4(a). One existing UAV has degree 1, so the cut process first deletes it, producing the subtree in Figure 4(b). Again, one existing UAV has degree 1, so it is also cut, producing the subtree in Figure 4(c). Now no degree-1 existing UAVs remain and every existing UAV has degree at least 2, so new UAVs are added to the edges of the subtree that are longer than the communication range, as shown in Figure 4(d).
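The cut process itself is a straightforward leaf-pruning loop over the tree's adjacency structure; a minimal sketch (the node names used are illustrative):

```python
def cut_leaf_uavs(adj, uav_nodes):
    """Recursively delete existing UAVs of degree 1 from the spanning tree.
    adj: dict mapping node -> set of neighbours; uav_nodes: existing-UAV ids."""
    while True:
        leaves = [v for v in adj if v in uav_nodes and len(adj[v]) == 1]
        if not leaves:
            return adj
        for v in leaves:
            if v in adj and len(adj[v]) == 1:  # guard: degree may have changed
                (nbr,) = adj[v]
                adj[nbr].discard(v)
                del adj[v]
```

Ground nodes are never pruned, and a UAV that sits on a path between two ground nodes keeps degree 2 and survives, exactly as in Figure 4.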
DAM Algorithm.
Although the MBD algorithm can minimize the number of newly added UAVs using the existing-UAVs cut process, the new positions of the existing UAVs may not be optimal. This is because the new positions in the MBD algorithm are generated using the DBM algorithm under relaxed motion range constraints. When the motion range of the existing UAVs is actually constrained, the performance of the MBD algorithm degrades.
Since the deployment of new UAVs and the motion control of existing UAVs affect each other, solving the two problems in an interleaved manner may yield a better solution to the joint problem. We therefore propose the Deploy-Across-Movement (DAM) algorithm, which solves these two problems simultaneously.
The main idea of the DAM algorithm is as follows. First, we generate a complete graph on the ground nodes and sort all its edges in increasing order of length. Then we consider every edge whose length is not more than the ground communication range and whose endpoints belong to different components; after this step, we obtain several components of connected ground nodes. Now we recursively move existing UAVs and add new UAVs to connect the partitioned components until they merge into a single component. In each loop, we try to connect every pair of vertices belonging to different components using two different methods. One method moves existing UAVs to certain positions to set up a communication chain between the two vertices; new UAVs are added to the edges of the chain that are longer than the communication range. The other method ignores existing UAVs and sets up a communication chain between the two vertices by adding new UAVs only. The numbers of new UAVs required by the two methods are compared, and the smaller one is recorded as the minimum number of new UAVs (MNN) for connecting the pair. The vertex pair with the minimum MNN is selected to connect two partitioned components in this loop, and the new positions of the existing UAVs and the positions of the new UAVs generated for this pair are recorded as part of the final result. The DAM algorithm is shown in Algorithm 4.
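The component bookkeeping in this loop can be sketched with a union-find structure: the first phase unions all ground-node pairs already within the ground communication range, leaving the partitioned components that the main loop then connects. The coordinates and range below are illustrative.

```python
import math

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def ground_components(points, comm_range):
    # First phase of DAM: sort edges by length, union every in-range pair
    dsu = DSU(len(points))
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i in range(len(points)) for j in range(i + 1, len(points)))
    for d, i, j in edges:
        if d <= comm_range:
            dsu.union(i, j)
    return dsu
```

The main loop then repeatedly picks the cheapest vertex pair across components (by MNN) and unions their components until a single component remains.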
The New-UAV-Chain method in the DAM algorithm is simply the non-EUA algorithm applied to the two-ground-node case. The Existing-UAV-Chain (EUC) method is an important part of the DAM algorithm, since it controls the movement of the existing UAVs; its details are shown in Algorithm 5. Given two ground nodes and the set of existing UAVs, it first generates a minimum spanning tree over the two ground nodes and all existing UAVs and extracts the chain of existing UAVs between the two ground nodes. For each UAV in the chain, it computes a new position that depends on the positions of its left and right neighbours in the chain; for the first UAV the left neighbour is the first ground node, and for the last UAV the right neighbour is the second ground node. There are three candidate new positions with different priorities. The first candidate, with the highest priority, is the midpoint of the two neighbours. The second candidate, with lower priority, is the projection of the UAV onto the line through the two neighbours. The last candidate, with the lowest priority, is used when the distance from the UAV to that line is larger than the motion range: it is the reachable point nearest to the line. The new position of the UAV is set to the highest-priority candidate that satisfies the motion range constraint. Afterwards, new UAVs are added to the edges of the chain that are longer than the communication range.
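The candidate selection for one chain UAV reduces to plain 2-D geometry: try the midpoint, then the orthogonal projection, and otherwise move as far toward the line as the motion range allows. The sketch below follows the priority order described above; the coordinates used are illustrative.

```python
import math

def project(u, a, b):
    # Orthogonal projection of point u onto the line through a and b
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((u[0] - a[0]) * ax + (u[1] - a[1]) * ay) / (ax * ax + ay * ay)
    return (a[0] + t * ax, a[1] + t * ay)

def euc_new_position(u, left, right, motion_range):
    mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
    proj = project(u, left, right)
    for cand in (mid, proj):  # highest priority first
        if math.dist(u, cand) <= motion_range:
            return cand
    # third candidate: the reachable point nearest to the line
    d = math.dist(u, proj)
    t = motion_range / d
    return (u[0] + (proj[0] - u[0]) * t, u[1] + (proj[1] - u[1]) * t)
```

Since the projection is the nearest point of the line to the UAV, the second candidate is always at least as reachable as the first, so the priority order degrades gracefully as the motion range shrinks.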
Figure 5 illustrates the three candidate positions in the EUC method, in a scenario with two ground nodes and five existing UAVs. The generated minimum spanning tree is shown in Figure 5(a), together with the UAV chain between the two ground nodes. The EUC method then computes the candidate positions for each UAV in the chain; due to the motion range of the UAV in this situation, the new position it chooses is the third candidate.
5.5. Discussion. In the proposed algorithms, we do not consider the power consumption of the UAVs. In fact, the battery power of each UAV is limited, and this limit will certainly affect the deployment and movement of UAVs.
Here, battery power is modeled as a constant factor: an existing UAV with very low battery power should land and recharge, and is therefore excluded from the set of existing UAVs. So all existing UAVs are assumed to have adequate battery power.
Battery power could also be modeled as a dynamic factor, in which case existing UAVs may have different battery levels depending on the duration of their tasks. The deployment and movement of UAVs should then account for the power cost and optimize the survival time of the whole network. However, that is beyond the focus of this paper, and we may consider it in future work.
Simulation Experiment
A simulation environment was set up to test the performance of the proposed algorithms. The simulation parameters are shown in Table 1.
To demonstrate the effectiveness of the proposed Existing-UAVs-Aware (EUA) algorithms, we compare their performance, in terms of the number of newly added UAVs, with the non-EUA algorithm and with the CBBA (Consensus-Based Bundle Algorithm) proposed by Ponda et al. [15]. Different from the non-EUA algorithm, CBBA considers existing UAVs when deploying new UAVs: it uses distributed planning strategies to control the movement of existing UAVs so that network connectivity is ensured. We also compare the three proposed EUA algorithms, DBM, MBD, and DAM, with each other. We change the testing scenario by varying five simulation parameters: the field size, the number of ground nodes, the number of existing UAVs, the communication range, and the motion range. For each scenario, we run the simulation on 100 randomly generated topologies and take the average as the final performance for that scenario. Our simulation scenarios fall into five cases, which we analyze one by one.
6.1. Varying Field Size. We use a square testing field. We set the number of ground nodes to 50, the number of existing UAVs to 5, the communication range to 500 m, and the motion range to 50 m. In this scenario, we vary the edge of the testing field from 1 km to 10 km in increments of 1 km and randomly generate 100 different topologies of ground nodes and existing UAVs.
Figure 6 shows the average number of new UAVs added to connect all ground nodes. The number of new UAVs computed by the EUA algorithms is clearly smaller than that of the non-EUA algorithm, and the number of new UAVs increases with the size of the field. This is because the larger the field, the larger the distances between ground nodes, since the number of ground nodes is constant; more relays are then needed to maintain the connectivity of the ground MANETs. The DAM method has the best performance, while the DBM, MBD, and CBBA methods perform similarly.
Varying Number of Ground Nodes.
In this scenario, we set the edge of the testing field to 5 km and keep the other three parameters the same, while varying the number of ground nodes from 10 to 100.
Figure 7 shows the average number of new UAVs added to connect all ground nodes. Again, the EUA algorithms require fewer new UAVs than the non-EUA algorithm, and the DAM algorithm is always the best. At the beginning, the DBM algorithm is slightly better than the MBD and CBBA algorithms, but as the number of ground nodes increases, the MBD algorithm outperforms the DBM algorithm. The number of new UAVs first increases, reaches a peak, and then decreases quickly: initially, a growing number of ground nodes requires more UAVs to maintain connectivity, but beyond a certain point a sufficient density of ground nodes increases the connectivity of the MANETs, so fewer UAVs are needed.
Varying Number of Existing UAVs.
In this case, we set the number of ground nodes to 50 and keep the field size, communication range, and motion range static. We vary the number of existing UAVs from 2 to 20 in increments of 2.
Figure 8 shows the average number of new UAVs for different numbers of existing UAVs. As expected, the EUA methods require fewer new UAVs than the non-EUA method in all scenarios. Another observation also demonstrates the importance of considering existing UAVs: the number of new UAVs decreases markedly as the number of existing UAVs increases when using the EUA algorithms, while it remains almost unchanged with the non-EUA algorithm. We also find that the CBBA and MBD algorithms are better than the DBM algorithm, and the DAM algorithm always performs best. From Figure 8, we can see that the MBD algorithm reduces the number of new UAVs by about 30% on average compared with the CBBA and DBM algorithms, and the DAM algorithm reduces it by about 70% on average compared with the DBM algorithm.
Varying Motion Range.
In this case, we set the number of existing UAVs to 5 and keep the field size, number of ground nodes, and communication range static. We vary the motion range from 10 m to 100 m in increments of 10 m and generate 100 different topologies for each motion range. Figure 9 shows the average number of new UAVs under different motion ranges of the existing UAVs. The EUA algorithms perform better than the non-EUA algorithm, and the DAM algorithm performs best. The CBBA method initially performs better than the DBM and MBD methods, but when the motion range exceeds 50 m its performance is worse than theirs. The behaviour of the DBM and MBD methods is similar to the results under a varying number of existing UAVs; however, the difference between the DBM and MBD algorithms shrinks as the motion range grows, and will vanish as the motion range keeps increasing, because the MBD algorithm uses the DBM algorithm to generate new positions for the existing UAVs, a heuristic based on relaxing the motion range constraint.
Varying Communication Range.
In this case, we set the motion range to 50 m, keep the number of ground nodes and the number of existing UAVs static, and vary the communication range of the ground nodes from 50 m to 100 m in increments of 50 m. The communication range between ground nodes and UAVs is set to twice the communication range of the ground nodes.
Figure 10 shows the average number of new UAVs under different communication ranges. For all algorithms, the number of new UAVs decreases as the communication range increases, because a larger communication range leads to better network connectivity, so fewer relays are needed to maintain the connectivity of the network. When the communication range grows large enough, the differences between the algorithms vanish. The DAM method has the best performance, while the CBBA, DBM, and MBD methods perform similarly.
Conclusion
This paper studies the problem of using UAVs to maintain the connectivity of ground MANETs. Different from existing works, it considers the condition that some UAVs have already been deployed in the field. Due to the movement of the ground MANETs and the limited communication range, the existing UAVs cannot connect all ground nodes, so new UAVs need to be deployed to maintain connectivity.
Figure 1: An example that illustrates the importance of existing UAVs in maintaining the connectivity of ground MANETs.
Figure 4: Existing UAVs cut process and new UAVs adding process of MBD algorithm.
Figure 5: The three candidate positions in the EUC method.
Figure 6: Comparison with varying field size.
Figure 7: Comparison with varying numbers of ground nodes.
Figure 8: Comparison with varying numbers of existing UAVs.
Figure 9: Comparison with varying motion range of existing UAVs.
Figure 10: Comparison with varying communication range.
Since this problem is similar to the Steiner tree problem with a minimum number of Steiner points, we formulate it as the Minimum Steiner Tree problem with Existing Mobile Steiner points under Edge Length Bound constraints (MST-EMSELB). The Steiner points stand for UAVs, and the edge length bound is the communication range. The formal definition of the MST-EMSELB problem is as follows.
Given. A set of ground nodes, a set of existing UAVs, a motion range, a ground-node communication range, and a ground-air communication range, where the ground-node communication range is smaller than the ground-air communication range.
Output. New positions of the existing UAVs, positions of the newly added UAVs, and a tree spanning all ground nodes.
2.2. Decision Version of the MST-EMSELB Problem. Given a set of terminal points and a set of mobile Steiner points in the two-dimensional Euclidean plane R^2, a nonnegative motion range constant, a positive edge length bound, and a nonnegative integer, the MST-EMSELB problem first asks whether there exists a motion of the mobile Steiner points such that the distance between the new position and the old position of each mobile Steiner point is no more than the motion range. It then asks whether there exists a tree spanning a point set containing all terminals such that the length of each edge in the tree is no more than the edge length bound and the number of newly added Steiner points is no more than the given integer.
Table 1: Simulation parameters.
The EUC method computes the candidate positions for the UAVs in the chain one by one. Take the first UAV as an example: its three candidate positions are shown in Figure 5(b). The first is the midpoint of its left and right neighbours; the second is its projection onto the line through the two neighbours; and the third is the point within its motion range that is nearest to that line. The priority order is first > second > third.
"Computer Science",
"Engineering"
] |
A study of positioning orientation effect on segmentation accuracy using convolutional neural networks for rectal cancer
Abstract Purpose Convolutional neural networks (CNN) have greatly improved medical image segmentation. A robust model requires training data that can represent the entire dataset. One source of variability comes from patient positioning (prone or supine) for radiotherapy. In this study, we investigated the effect of positioning orientation on segmentation using CNN. Methods Data of 100 patients (50 supine and 50 prone) with rectal cancer were collected for this study. We designed three sets of experiments for comparison: (a) segmentation using a model trained with data from the same orientation; (b) segmentation using a model trained with data from the opposite orientation; (c) segmentation using a model trained with data from both orientations. We performed fivefold cross-validation. Performance was evaluated on segmentation of the clinical target volume (CTV), bladder, and femurs with the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Results Compared with models trained on cases positioned in the same orientation, the models trained with cases positioned in the opposite orientation performed significantly worse (P < 0.05) on CTV and bladder segmentation, but had comparable accuracy for the femurs (P > 0.05). The average DSC values were 0.74 vs 0.84, 0.85 vs 0.88, and 0.91 vs 0.91 for CTV, bladder, and femurs, respectively. The corresponding HD values (mm) were 16.6 vs 14.6, 8.4 vs 8.1, and 6.3 vs 6.3, respectively. The models trained with data from both orientations have comparable accuracy (P > 0.05), with average DSC of 0.84, 0.88, and 0.91 and HD of 14.4, 8.1, and 6.3, respectively. Conclusions Orientation affects the accuracy for the CTV and bladder, but has a negligible effect on the femurs. A model trained on data combining both orientations performs as well as a model trained with data from the same orientation for all organs.
These observations can offer guidance on the choice of training data for accurate segmentation.
| INTRODUCTION
Segmentation of the organs-at-risk (OARs) and the tumor target is one of the key problems in the field of radiotherapy. Computer-assisted automated methods have the potential to reduce inter- and intraobserver variability and relieve physicians from the labor-intensive contouring workload. Such problems have been addressed in clinical applications using "atlas-based" automated segmentation software. [1][2][3] Despite the popularity of such software, the recent deep learning revolution, especially fully convolutional neural networks (CNN), [4][5][6][7][8] has turned the tables due to its significant improvement in segmentation accuracy, consistency, and efficiency. Lustberg et al. 9 and Lavdas et al. 10 showed that CNN contouring produced promising results in CT and MR image segmentation as compared with atlas-based methods. Ibragimov et al. 11 successfully applied CNN to OAR segmentation in head and neck CT images. The authors 12 previously reported a dilated CNN with high accuracy for segmentation of rectal cancer. With these promising learning tools and the enhancement of computer hardware, deep learning will dramatically change the landscape of radiotherapy contouring. 13 As is well known, data are one of the most important components of any machine learning system, 14 especially for deep networks. 15,16 Although these approaches substantially improve performance, training a CNN requires a large number of fine-quality contour annotations to achieve a satisfactory segmentation outcome.
The training data for modeling must be representative of the characteristics of the image sets in the study. Special attention should be paid to collecting and constructing an appropriate dataset for any segmentation system for CNN. Patients undergoing radiotherapy for rectal cancer are generally treated either in a prone position to reduce the volume of small bowel in the high-dose region 17 or in a supine position as it is much more stable. 18 A different positioning orientation (prone or supine) will result in variability 19 in location, shape, and volume of the structures of interest. These differences may affect segmentation performance when training and testing across different positioning orientations.
In this study, we investigated the effect of cross-orientation on segmentation for rectal cancer radiotherapy using CNN. This issue is highly relevant for the following reasons. First, whether a CNN model trained with patients positioned in one orientation performs poorly on cases in the opposite orientation has not been studied before. Although this may be subjectively true, there have been no experiments to support this assumption and no quantitative evaluation of such deterioration. Second, there has been no prior report on whether and how much training with data from both orientations would affect segmentation accuracy. More data can increase diversity, but mixing two very different types of data is likely to lead to confusion in model training. This is an open question whose answer may influence the training strategies of deep learning. Third, segmentation is often the prerequisite of medical image analysis. If the positioning orientation affects segmentation, it will also affect further quantitative analysis, e.g., radiomics, which is based on the segmentation. This study will therefore provide evidence and guidance for patient positioning orientation considerations.

The image data were pre-processed in MATLAB R2017b (MathWorks, Inc., Natick, MA, USA). A custom-built script was used to extract and label all the voxels that belonged to the specific contours from the DICOM structure files. We used a contrast-limited adaptive histogram equalization (CLAHE) 12,20 algorithm to pre-process the CT images for image enhancement. For the patients in the "supine" position, the images were rotated 180° clockwise to create the corresponding "virtual prone" images. This is to remove the effects that are entirely caused by the physical orientation of the image. The final data used for CNN were the 2D CT slices and the corresponding 2D labels. The process and the additional image pre-processing were fully automated.
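The orientation harmonization step can be sketched in a few lines; `to_virtual_prone` is our illustrative name, and the snippet assumes 2D axial slices stored as NumPy arrays (the paper's MATLAB pipeline is not public):

```python
import numpy as np

def to_virtual_prone(slice_2d: np.ndarray) -> np.ndarray:
    """Rotate a supine axial CT slice by 180 degrees to create a
    'virtual prone' slice, removing the purely geometric effect of
    patient orientation (function name is ours, not the paper's)."""
    return np.rot90(slice_2d, k=2)

# Rotating twice recovers the original slice (180 + 180 = 360 degrees).
img = np.arange(16, dtype=float).reshape(4, 4)
assert np.array_equal(to_virtual_prone(to_virtual_prone(img)), img)
```

A 180° rotation maps pixel (0, 0) to the opposite corner, which is exactly the supine-to-prone geometric flip.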
2.B | Convolutional neural networks implementation
We used the ResNet-101 7 as the deep learning network for segmentation. As illustrated in Fig. 1, the inputs of the network were the original 2D CT slices and the outputs were the corresponding maps with the segmentation labels. Table 1
| EXPERIMENTS
In order to evaluate the effect of positioning orientation on segmentation, we designed the following three sets of experiments for comparison.
1. Segmentation using the model trained with data from the same orientation;
2. Segmentation using the model trained with data from the opposite orientation;
3. Segmentation using the model trained with data from both orientations.
Subsequently, we chose subset j as the testing set and subsets i ≠ j as the training set to train the jth set of models. We repeated this step until we had trained five sets of models covering all the data.
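The fivefold scheme above can be sketched as follows (a minimal illustration; the actual subset assignment in the study may differ):

```python
def fivefold_splits(case_ids):
    """Partition case IDs into 5 disjoint subsets; for fold j, subset j
    is the test set and the remaining subsets form the training set."""
    folds = [case_ids[i::5] for i in range(5)]
    for j in range(5):
        test = folds[j]
        train = [c for i, f in enumerate(folds) if i != j for c in f]
        yield train, test

# With 50 cases per orientation, each fold tests on 10 and trains on 40.
for train, test in fivefold_splits(list(range(50))):
    assert len(test) == 10 and len(train) == 40
    assert not set(train) & set(test)   # train/test never overlap
```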
In order to avoid overfitting during the training phase, we adopted offline and online data augmentation schemes. The offline augmentation randomly transformed the training cases with noise and rotation (between −5° and 5°), which enlarged the training dataset tenfold. The online augmentation applied random scaling of the input images (from 0.5 to 1.5), random cropping, and random left-right flipping. With the data augmentation, the network rarely saw the same augmented image twice, as the modifications were performed at random each time. This greatly increased the diversity of samples and made the network more robust.
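A minimal sketch of the parameter sampling for these augmentations (the names and the use of NumPy are ours; rotation and scaling themselves would be applied with an image-processing library):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray):
    """Sample the augmentation parameters described above and apply the
    left-right flip; a real pipeline would also apply the rotation,
    scaling, and cropping to the image."""
    angle = rng.uniform(-5.0, 5.0)   # offline: random rotation in degrees
    scale = rng.uniform(0.5, 1.5)    # online: random scaling factor
    if rng.random() < 0.5:           # online: random left-right flip
        img = img[:, ::-1]
    return img, angle, scale
```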
We implemented the training and testing of our model using Caffe, 21 which is a publicly available deep learning platform. The model was trained in a 2D pattern. During the testing phase, all the 2D CT slices were tested one by one. In detail, the 2D CT slices were the inputs and the corresponding segmentation probability maps were the outputs. The model parameters for each network were initialized using the weights from the corresponding model trained on ImageNet. 22 In this case, the input channel of the "Conv1" layer should be three. However, our input was the gray image of CT, which has only one channel. We solved this problem by taking only the first channel of each filter in the "Conv1" pre-trained on ImageNet when loading the model. This was achieved by modifying the original code of Caffe, that is, comparing the channel number c1 of the current network and the channel number c2 of the pre-training model: if c1 is less than c2, only the first c1 channels of the filters are used. The training set was used to "tune" the parameters of the networks. The loss function and the training accuracy were computed with "SoftmaxWithLoss" and "SegAccuracy" built into Caffe, respectively. 21 The optimization algorithm of training used backpropagation with stochastic gradient descent (SGD). We used the "poly" learning rate policy, where the current learning rate equals the base rate multiplied by (1 − iter/max_iter)^power. In this study, we set the base learning rate to 0.001 and power to 0.9. The batch size was set to 1 due to the limitation of physical memory on the GPU card. The training iteration number was set to 90K. The momentum and weight decay were set to 0.9 and 0.0005, respectively. The training and testing phases were fully automated with no manual interaction. All computations were undertaken on an Amazon Elastic Compute Cloud with NVIDIA K80 GPU.
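The "poly" policy reduces to a one-line function; the values below match the stated settings (base learning rate 0.001, power 0.9, 90K iterations):

```python
def poly_lr(base_lr: float, iteration: int, max_iter: int, power: float = 0.9) -> float:
    """'Poly' learning-rate policy: lr = base_lr * (1 - iter/max_iter)**power."""
    return base_lr * (1.0 - iteration / max_iter) ** power

assert poly_lr(0.001, 0, 90_000) == 0.001        # starts at the base rate
assert poly_lr(0.001, 90_000, 90_000) == 0.0     # decays to zero at the end
assert poly_lr(0.001, 45_000, 90_000) < 0.001    # monotonically decreasing
```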
3.B | Performance evaluation
Physician-approved manual segmentation was used as the gold standard reference. The spatial consistency between the automated segmentation and the manual reference segmentation was quantified using two metrics: the Dice similarity coefficient (DSC) 23 and the Hausdorff distance (HD).
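As a sketch of the first metric, the DSC between an automated mask A and a manual reference B is 2|A ∩ B| / (|A| + |B|):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.array([[1, 1, 0], [0, 1, 0]])
manual = np.array([[1, 1, 0], [0, 0, 0]])
# |A| = 3, |B| = 2, |A ∩ B| = 2, so DSC = 4/5 = 0.8
assert dice(auto, manual) == 0.8
```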
| RESULTS
The results of the segmentation accuracy are summarized in Tables 2 and 3. The CNN segmentation models for CTV and bladder trained with cases positioned in the opposite orientation performed significantly worse (P < 0.05) than those trained with cases positioned in the same orientation. Transfer learning reuses knowledge from previous tasks to solve problems faster and more effectively; segmentation using CNN with transfer learning will be explored in the future.
| CONCLUSIONS
The experiments demonstrated that the orientation of the training dataset affects the accuracy of CNN-based segmentation for CTV and bladder but has a negligible effect on the femurs. The model trained on data combining both orientations works as well as a model trained on data from the same orientation for all the organs.
These observations provide guidance on how to choose training data for accurate segmentation.
ACKNOWLEDGMENTS
This project was supported by U24CA180803 (IROC) and CA223358 from the National Cancer Institute (NCI).
CONFLICT OF INTEREST
No conflicts of interest.

FIG. 3. Segmentation results on cases positioned prone using CNN models trained with different types of datasets.
An Effective Quantum Genetic Algorithm Based on Drama Resource Mining Using Wireless Sensing Technology
Wireless sensor technology has penetrated various domains of today's life and plays a vital role in advanced technology. Numerous researchers have combined this outstanding technology with other fields such as resource mining, industry, healthcare, automobile systems, the gaming industry, and dramas. However, in traditional resource mining, long mining time leads to incomplete mining results along with low accuracy. In order to improve the effect of resource mining, we have proposed an effective Quantum Genetic Algorithm (QGA) for drama resource mining using wireless sensor technology. In our proposed scheme, we have combined the RFID technology of wireless sensors with a wireless network protocol stack for the purpose of collecting drama resources on the network platform. We have classified the drama resources on the network platform by using QGA based on the results of resource collection. Additionally, we have mined the semantic association features of frequent patterns of the drama resources on the platform and combined them with the fuzzy attribute feature detection method. The experimental results show that this method is superior to the traditional methods in terms of resource mining time, comprehensiveness of mining results, and accuracy of mining results, which shows that this method has practical application value.
Introduction
Drama is derived from a Greek word that means "to do" or "to act." It is essentially a story that is acted out. Every play, whether serious or comedic, ancient or modern, conveys its tale through people in scenarios that are based on real life. Additionally, another key objective of drama is to generate a situation wherein the inferences, feelings, knowledge, and skills in education are liberated. In this connection, opera is a type of drama wherein music plays a central role and vocalists perform dramatic roles, but it differs from musical theater [1]. Opera is the product of combining Chinese opera art and TV media. The development and maturity of TV drama is the combination of TV as a modern means of communication and drama as a cultural resource [2,3]. Drama represents the integration of resources needed to develop drama and the implementation of the localization and characteristic strategy of TV drama [4,5]. How to effectively obtain drama resources on the network platform has become a significant problem to be solved [6].
Many authors have combined drama with today's modern technologies, such as big data, wireless sensor technology, and artificial intelligence, to benefit people through low cost, high speed, and low data consumption.
In this regard, the authors in [7] proposed a data mining method based on the FP-growth algorithm to collect and extract features from the power consumption data of smart watt-hour meters in operation, analyze abnormal power consumption data, apply machine learning to learn eigenvalues, and derive a judgment threshold for power consumption abnormality; the association rule data mining method is then used to fuse the results of independent detection, so as to realize the data mining of electricity theft. The experimental results show that this method can mine abnormal power consumption data in different periods, but it suffers from long mining time. In [8], the authors presented a data mining method for dam safety monitoring based on FP-growth. After pruning the preprocessed monitoring data, a priority tree is generated to mine frequent items.
This method not only has the characteristics of fast mining speed and concise results, but can also compare single factors or analyze the relationship between multiple coupled factors and target variables, which provides a good idea for dam safety monitoring data mining. However, the main limitation of their work is that the results obtained by this method are not comprehensive, and there is a problem of missing data. Similarly, the authors of [9] proposed a data mining method combining a genetic algorithm (GA) and association rules. In their work, firstly, the GA crossover operator and mutation operator are improved adaptively, so that they can adjust according to the fitness value of the function in the iterative process. Secondly, the improved adaptive GA is integrated into the association rules, making full use of GA's good global search ability to improve the efficiency of mining association rules from massive data. Thirdly, in order to avoid useless rules and reduce irrelevance, an intimacy degree is integrated to improve the reliability of the association rules. Finally, on the Hadoop big data platform, the optimized algorithm is verified by analyzing traffic data. The results show that the algorithm has the advantage of fast convergence, but the accuracy of its data mining results is not high. In [10], a big data mining method for ocean-going ship operation monitoring based on association rules is proposed. Firstly, the monitoring data source is obtained and the data are stored in a database. Secondly, the ocean-going ship operation monitoring data are preprocessed, so as to generate the ship operation monitoring big data mining model and complete the operation monitoring big data mining. The experimental results show that this method can ensure the accuracy of data mining, but it also has the problem of incomplete data mining results. All these works face limitations that need to be overcome.
Inspired by the current rise of wireless sensing technology in various fields, specifically in the field of dramas, this study aims to develop a network platform wireless sensing technology for drama resource mining. The traditional methods face numerous problems regarding dramas, such as long mining time, incomplete mining results, and low mining accuracy; in order to solve these problems, our proposed system provides efficient mining time and comprehensive resource mining results with high accuracy by using a Quantum Genetic Algorithm. In our proposed work, we first designed a network platform to collect drama resources; after that, we acquired drama resources on the network platform based on wireless sensing technology, explaining its circuit diagram. We also investigated a Quantum Genetic Algorithm based on our proposed drama resource mining and realized it. The remainder of the paper consists of the following sections. We provide a list of related works in Section 2. In Section 3, we discuss our work strategy. The experiments are discussed in Section 4. Finally, in Section 5, the conclusion of the study is presented.
Related Work
Drama is significant to a variety of academic areas, including cultural heritage transmission and multimedia repository classification and search. Tale ontologies [11][12][13][14] were proposed with two main purposes in mind: to identify story kinds and to provide an underlying model for narrative annotation. The authors used OWL to design several graphic kinds in [15]. The system employs the drama to execute case-based reasoning: given a story plan, the system searches the drama for a plot that is comparable, calculating the semantic similarity of the provided plot to the plots recorded in the drama. In a similar vein, [16] utilizes automatic classifiers to categorize plot kinds, while the opiate system [17] creates and populates story worlds using a Proppian model of tales. A computational approach is used in [18] to create new stories in the manner of Russian fairy tales using the formal model. Several authors have questioned the extension of Propp's concept as a general story model in recent times, particularly in regard to digital media [13,19].
One of the primary obstacles in the research on drama resource mining is resolving discrepancies between media kinds and genres. In [20,21], the authors present the OntoMedia drama, a medium-independent paradigm that may be used in a variety of projects to document the narrative content of various media objects, spanning from written literature to comics and television drama. The logical notion of procedures, as used in SUMO, is used to reason about stories and produce storylines in [1]. Although not directly applicable to narrative frameworks, this method demonstrates the importance of proper action representation for tale characterization and annotation. Many initiatives have looked into the use of ontologies in online access to cultural material throughout the last decades. Computational ontologies, as discussed by [22], are particularly well suited to encoding conceptual models for access to digital resources and structuring the interaction between the archive and its users. The cultural Sampo initiative [23] makes a groundbreaking contribution to the use of ontologies for culture and heritage accessibility. This project includes a set of domain ontologies that serve as a backdrop for exploring cultural items and monitoring their underlying relationships [24]. The system permits study of artifacts depending on their relationships with a reference tale at the narrative level; however, the story depiction is only functional for access to cultural items and is not meant as a standalone account of the narrative domain. The authors in [25] advance the wireless sensor network coverage model by studying the operational features of the wireless sensor network, though it has higher computational complexity because it casts the optimization relations into Particle Swarm Optimization (PSO). In [26], the authors proposed a 2-phase system for obtaining the best energy provision technique by dealing with the game equilibrium of the design.
Additionally, based on a connection model, the authors in [27] presented a node optimization coverage method for a passive monitoring scheme in 3D WSNs. Currently, Quantum Optimization Algorithms have been progressively used to improve the network effectiveness of WSNs. For the purpose of improving positioning accuracy, the authors in [11] proposed a positioning algorithm based on quantum particle swarm optimization, using the parallelism of quantum computing.
However, the quality of the solution cannot be effectively improved by using a Quantum Optimization Algorithm alone; therefore, it needs to be combined with other techniques to further optimize the search capabilities of the algorithm. In [4,5], the authors stated that drama represents the integration of resources needed to develop drama and the implementation of the localization and characteristic strategy of TV drama, while in [6], the authors explained how effectively obtaining drama resources on the network platform has become a significant problem to be solved. Many authors have combined drama with today's technologies, that is, big data, wireless sensor technology, and artificial intelligence, to benefit people through low cost, high speed, and low data consumption [28].
Proposed Work
In this section, we discuss our proposed network platform for drama resources along with the acquisition of drama resources on the network platform based on wireless sensor technology; after that, we discuss the Quantum Genetic Algorithm based on our proposed scheme, and at the end of this section, we combine drama resources with the fuzzy attribute feature detection method for the realization of our proposed system [29].
Network Platform Drama Resource Collection Platform.
In the process of drama resource mining on the network platform, drama resources are generally not indexed by search engines, and these high-quality drama resources cannot be obtained directly through search engines. This requires mechanisms that enable search engines to obtain drama resources efficiently; drama resource mining can improve the coverage rate of drama resources by search engines [21]. Keeping this in view, we have planned a drama resource collection platform in which effective links and the corresponding page contents are found and used as resources to be searched by search engines. The platform mainly includes the following modules: a page download and processing module, a drama resource query interface recognition and classification module, a link construction and validity verification module, and a storage module. The overall schematic diagram is shown in Figure 1. The following is a brief introduction to the functions implemented by each module in the drama resource collection platform of the network platform: (1) Page download and processing module: the main task of this module is to obtain the page source code, enabling page downloading and processing. Since the page source code contains a lot of impurity information, it needs to be cleaned up and converted into a DOM tree for easy operation; otherwise, it will affect the efficiency of resource collection and processing [1].
(2) Drama resource query interface recognition and classification module: this is another important module of our proposed scheme. This module is connected with the users who seek drama resources. The main task of this module is to recognize the desired query and classify the drama resource query interface by field, removing irrelevant query interfaces.
(3) Link construction and validity verification module: this is another key module of our proposed scheme, connected with the drama resource query interface recognition and classification module. In this module, URL links are constructed mainly from query keywords and drama resource keywords; at the same time, existing links are used to find more URL links, and the validity of all URL links is verified, filtering out queries whose results cannot be obtained. (4) Storage module: this is the last module of our proposed scheme, which enables our system to save the verified valid URL links and their corresponding page information, so as to obtain the network platform drama resource collection result.
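The link construction and validity check could look roughly like this (the base URL and the query parameter name are illustrative assumptions, and a real system would also fetch each link to verify that results are returned):

```python
from urllib.parse import urlencode, urlparse

def build_query_links(base_url: str, keywords):
    """Construct candidate query URLs from drama-resource keywords
    (base_url and the parameter name 'q' are hypothetical)."""
    return [f"{base_url}?{urlencode({'q': kw})}" for kw in keywords]

def is_valid_link(url: str) -> bool:
    """Cheap syntactic validity check before any network fetch; the real
    module would additionally verify that the query returns results."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

links = build_query_links("http://example.org/search", ["opera", "TV drama"])
valid = [u for u in links if is_valid_link(u)]
assert len(valid) == 2
```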
Acquisition of Drama Resources on the Network Platform Based on Wireless Sensor Technology.
Because of the diversity of drama resources, it is not comprehensive to obtain drama resources only through the network platform, so it is necessary to collect them further. As a new technology concept, the rapid development of wireless sensor technology has made it widely used in many fields such as consumer electronics, crop monitoring, livestock health monitoring, and medical services. In various factory environments, wireless local area network technology has been widely used as a communication and information transmission tool between workers [30]. At the same time, radio frequency identification (RFID) technology, as an electronic tag, is also widely used in public transportation systems and personnel identification systems in the service industry. The RFID system is explained in Figure 2. The RFID tags in our proposed system are able to identify every drama resource independently. RFID readers can read numerous tags concurrently and instantly and can cope with harsh and dirty environments. Furthermore, RFID tags can hold larger quantities of data, and the data on tags can be read or updated without line of sight. These tagged items are reusable and can be tracked automatically without worker input, which eliminates human errors, and they are not damaged as easily as barcodes.
Our proposed scheme uses RFID technology in wireless sensor technology as a solution to collect drama resources on the network platform [31]. Among them, the wireless communication chip of the radio frequency transceiver module is TRY6831. In order to meet the low power consumption requirements of the node, the SQ series embedded microcontroller produced by TI is used as the main control module of the node.
Scientific Programming

The TXD end of the embedded single-chip microcomputer is connected to the RXD end of the sensor, the RXD end of the single-chip microcomputer is connected to the TXD end of the sensor, the power ground is the GND end, and the VCC end is connected to a 5 V power supply. The data resources are transferred to the embedded single-chip microcomputer to obtain the resource collection results. The circuit diagram of the network platform drama resource acquisition sensor is shown in Figure 3. The resource collection nodes in Figure 3 are connected by a TRY6831 wireless communication chip. The chip has a transmission rate of 320 kb/s and a transmission distance of 11.2 m. It has the characteristics of high performance, low power consumption, and low cost. Wireless sensor network protocols can be logically divided into two types: voice-oriented and data-oriented. In many wireless networks based on data transmission, small, low-cost, low-complexity wireless sensor networks are widely used. The wireless sensor network protocol essentially implements the connection of the entire protocol through the interface between the user and the protocol entity. A user of a specific layer can call services provided by the current layer's protocol entity through service primitives; in the process, the current layer's protocol entity will also call service primitives to return status information to the user [32,33]. The IEEE 802.15.4 standard and the Zigbee alliance are committed to making low energy consumption, low-rate transmission, and low cost important goals. The IEEE 802.15.4 standard and the Zigbee protocol specification have standardized the functions that should be implemented at each layer in the form of service primitives. The work of implementing the protocol is to implement the various primitives in the standard, aiming to provide a unified standard for long-distance and low-speed interconnection between individuals and devices.
IEEE 802.15.4 defines 13 PHY layer service primitives and 35 MAC layer service primitives.
(1) Physical layer (PHY): this indicates the physical layer, which is mainly responsible for data modulation and demodulation, sending and receiving; it directly operates the physical transmission medium (radio frequency) below and provides services to the MAC layer above. (2) Media access control (MAC): this layer is also called the Data Link Layer and is responsible for single-hop data communication between adjacent devices. It is also responsible for establishing synchronization with the network, supporting association and disassociation, and MAC security; it can provide a reliable connection between two devices. (3) Network layer (NWK) (Figure 4): this layer determines the mechanism used when devices are connected to and disconnected from the network. This layer performs route discovery and route maintenance between devices, completes the discovery of neighboring devices within one-hop range and the storage of related information, creates a new network, and assigns network addresses to newly networked devices [17]. (4) Application support sublayer (APS): this layer provides services to all endpoints and connects to the device through the network layer and the security service provider layer; it provides services for data transmission, security, and binding, and can therefore adapt to different but compatible devices. (5) Application layer (APL): this is the topmost layer, connected with the application software or user. This layer can configure and access network layer parameters through Zigbee Device Objects (ZDO) and provides them to the application sublayer.
We have explained the acquisition of drama resources and established the wireless sensor network protocol stack; now, an algorithm is needed to classify the provided database and to mine the drama resources. In the next section, we explain the Quantum Genetic Algorithm used to achieve the desired goals.
Proposed Quantum Genetic Algorithm for the Classification of Drama Resources.
In this section, we discuss the classification algorithm (Quantum Genetic Algorithm) for our proposed drama resource platform. As we know, classification is a form of data analysis that can be used to extract and describe important data categories. This analysis helps us understand the data better and more comprehensively. There are many classification methods, such as decision tree classifiers, naive Bayes classifiers, Bayesian belief networks, rule-based classifiers, and quantum genetic algorithms [34]. In the Quantum Genetic Algorithm, individuals are coded with the probability amplitudes of qubits, the phase rotation of qubits based on quantum gates is used to realize individual evolution, and quantum NOT gates are used to realize individual mutation to increase the diversity of the population.
A qubit is a two-state quantum system that serves as an information storage unit. It is a unit vector defined in a two-dimensional complex vector space [35]. This space is spanned by a pair of orthonormal basis states {|0〉, |1〉}. A qubit can therefore be in a superposition of the two basis states at the same time, defined as |β〉 = φ|0〉 + λ|1〉, where φ and λ are complex numbers representing the probability amplitudes of the corresponding states and satisfying the normalization condition |φ|^2 + |λ|^2 = 1. A system containing n qubits can represent 2^n states at the same time; upon observation, the system collapses to one definite state. There are many ways to encode chromosomes in traditional genetic algorithms: binary, decimal, symbolic encoding, etc. In the quantum genetic algorithm, an encoding method based on qubits is used. A single qubit can be defined by its pair of probability amplitudes (φ, λ); similarly, k qubits can be defined as

[φ_1 φ_2 ... φ_k]
[λ_1 λ_2 ... λ_k]

where |φ_i|^2 + |λ_i|^2 = 1, i = 1, 2, ..., k. This coding method gives the population better diversity, and as |φ_i| and |λ_i| tend to 0 or 1, the chromosome converges to a single state.
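The normalization condition can be checked directly; this small helper (our own, for illustration) returns the measurement probabilities of a single qubit:

```python
import math

def qubit_probs(phi: complex, lam: complex):
    """Measurement probabilities of |β⟩ = φ|0⟩ + λ|1⟩; the amplitudes
    must satisfy |φ|² + |λ|² = 1."""
    p0, p1 = abs(phi) ** 2, abs(lam) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes are not normalized"
    return p0, p1

# An equal superposition, φ = λ = 1/√2, gives P(0) = P(1) = 1/2.
p0, p1 = qubit_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
assert math.isclose(p0, 0.5) and math.isclose(p1, 0.5)
```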
Our proposed Quantum Genetic Algorithm is based on the expression of quantum state vectors. It applies the probability amplitude representation of qubits to chromosome encoding, so that one chromosome can express the superposition of multiple state vectors; it uses quantum rotation gates to achieve chromosome update operations and introduces quantum mutation to overcome premature convergence, finally achieving the goal of an optimized solution [36].
In the proposed Quantum Genetic Algorithm, the probability amplitude of a qubit can be expressed as the column vector [φ, λ]^T; the probability amplitudes of k qubits can then be expressed as

[φ_1 φ_2 ... φ_k]
[λ_1 λ_2 ... λ_k]

where the amplitudes satisfy the normalization condition

|φ_i|^2 + |λ_i|^2 = 1, i = 1, 2, ..., k.

If there is a quantum system with 3 qubits and three pairs of probability amplitudes, it can be expressed as

[1/√2   1   1/2 ]
[1/√2   0   √3/2]

Then the state of the system can be described by

|ψ〉 = (1/(2√2))|000〉 + (√3/(2√2))|001〉 + (1/(2√2))|100〉 + (√3/(2√2))|101〉.

Therefore, the probabilities of the system appearing in states |000〉, |001〉, |100〉, and |101〉 are 1/8, 3/8, 1/8, and 3/8, respectively. The three-bit quantum system described by the above equation can therefore contain information about 4 states at the same time.
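The stated probabilities can be verified numerically; the per-qubit amplitude pairs below are our reconstruction, chosen to be consistent with the probabilities 1/8, 3/8, 1/8, and 3/8 given in the text:

```python
import math
from itertools import product

# Per-qubit (φ_i, λ_i) pairs for the three-qubit example (reconstructed).
amps = [(1 / math.sqrt(2), 1 / math.sqrt(2)),  # qubit 1
        (1.0, 0.0),                            # qubit 2
        (0.5, math.sqrt(3) / 2)]               # qubit 3

# Joint probability of each basis state is the squared product of amplitudes.
probs = {}
for bits in product((0, 1), repeat=3):
    amp = 1.0
    for (phi, lam), b in zip(amps, bits):
        amp *= lam if b else phi
    probs["".join(map(str, bits))] = amp ** 2

# Only |000⟩, |001⟩, |100⟩, |101⟩ are observable: 1/8, 3/8, 1/8, 3/8.
assert math.isclose(probs["000"], 1 / 8)
assert math.isclose(probs["001"], 3 / 8)
assert math.isclose(probs["100"], 1 / 8)
assert math.isclose(probs["101"], 3 / 8)
assert math.isclose(sum(probs.values()), 1.0)
```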
For the above equation, one chromosome can describe 4 states [37]. In traditional evolutionary algorithms, by contrast, 4 chromosomes are needed to describe 4 states, namely (000), (001), (100), and (101). Populations described based on quantum chromosomes therefore also have diversity. When |φ| and |λ| tend to 0 or 1, the diversity gradually disappears and the quantum chromosome converges to a certain state, which shows that the quantum chromosome has the ability to explore and exploit at the same time.
The layers of the wireless sensor protocol stack used by the platform are as follows:
- Application layer (APL): the topmost layer, connected with the user; it provides network facilities and gives instructions to the application sublayer.
- Application support sublayer (APS): connected with the application layer; it acts as a bridge between APL and NWK.
- Network layer (NWK): acts as a bridge between APS and MAC and provides routing facilities.
- Medium access control layer (MAC): acts as a bridge between NWK and PHY and provides error and flow control facilities.
- Physical layer (PHY): the bottom layer, which provides the physical facilities.
Our proposed Quantum Genetic Algorithm is similar to the traditional genetic algorithm in that it is also a probabilistic search algorithm. Suppose a quantum population W(t) is given as in the following equation, where t represents the generation number and w_l^t represents the l-th chromosome of the t-th generation, defined as shown in the following equation. Here, m represents the number of qubits, i.e., the length of each chromosome. According to the above analysis, the process of our proposed platform drama resource classification algorithm based on the quantum genetic algorithm is as follows.
Proposed Quantum Genetic Algorithm.
In this section, we present an effective quantum genetic algorithm for our proposed drama resource mining.
Explanation of the Proposed Quantum Genetic Algorithm.
In this section, we explain the procedure of the proposed Quantum Genetic Algorithm. The algorithm first initializes the population: all 2m probability amplitudes of every chromosome are initialized to 1/√2, which means that at generation t = 0 each chromosome is an equal linear superposition of all possible states, with the same probability for each state, as can be seen in the following equation. Here, h_c represents the c-th state, which is described by the binary string (y_1, y_2, ..., y_m).
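This initialization step can be sketched directly; the population size and chromosome length below are illustrative parameters, not values from the paper:

```python
import math

def initialize_population(pop_size, m):
    """All 2m amplitudes of every chromosome start at 1/sqrt(2), so each
    chromosome is an equal superposition over all 2^m basis states."""
    a = 1 / math.sqrt(2)
    return [[(a, a) for _ in range(m)] for _ in range(pop_size)]

pop = initialize_population(pop_size=4, m=8)
# each basis state of a chromosome then has probability (1/2)^m
```

With every pair equal to (1/√2, 1/√2), observing a chromosome yields each of the 2^m binary strings with probability 2^(-m), matching the equal-superposition condition stated above.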
Secondly, in the calculation process, a binary solution set K(t) is generated by observing the state of the population W(t − 1). Each solution is a binary string of length m, and the value of each bit is determined by the observation probability of the corresponding qubit. Then, the fitness of each solution is calculated in order to find the optimal solution.
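The observation step can be sketched as follows. The fitness function here is a deliberately simple stand-in (counting ones), since the paper's actual classification objective is not given in closed form:

```python
import random

def observe(chromosome):
    """Collapse each qubit: bit l is 1 with probability lam_l**2."""
    return [1 if random.random() < lam ** 2 else 0 for _, lam in chromosome]

def best_solution(population, fitness):
    """Observe every chromosome and return the fittest binary solution."""
    solutions = [observe(c) for c in population]
    return max(solutions, key=fitness)

# toy fitness: number of ones (a stand-in for the real classification objective)
fitness = lambda bits: sum(bits)
```

A chromosome whose pairs are all (0, 1) collapses to all ones with certainty, and one whose pairs are all (1, 0) to all zeros, which is how the algorithm's convergence to a single state manifests at observation time.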
In addition, in order to obtain a better chromosome, each binary solution is compared with the current optimal solution, and the population is updated with an appropriate quantum gate R(t). Specific quantum gates can be designed according to the specific problem. The commonly used quantum rotation gate is R(θ) = [cos θ, −sin θ; sin θ, cos θ], where θ represents the angle of rotation. Finally, the best solution in the binary solution set K(t) is selected. If it is better than the current optimal solution of the platform drama resource classification, it replaces the current optimal solution, thereby optimizing the platform drama resource classification.
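A minimal sketch of this update, assuming the standard 2×2 rotation matrix and a simplified rotation-direction rule (real QGA implementations use a problem-specific lookup table for the sign and magnitude of θ, which the paper does not reproduce here):

```python
import math

def rotate(phi, lam, theta):
    """Apply the 2x2 rotation gate R(theta) to one amplitude pair."""
    return (math.cos(theta) * phi - math.sin(theta) * lam,
            math.sin(theta) * phi + math.cos(theta) * lam)

def update(chromosome, best_bits, my_bits, delta=0.05 * math.pi):
    """Rotate each qubit toward the corresponding bit of the best solution.
    The fixed step delta and the sign rule below are illustrative assumptions."""
    out = []
    for (phi, lam), b_best, b_mine in zip(chromosome, best_bits, my_bits):
        if b_best == b_mine:
            out.append((phi, lam))            # already agrees: no rotation
        else:
            theta = delta if b_best == 1 else -delta
            out.append(rotate(phi, lam, theta))
    return out
```

Because R(θ) is orthogonal, every update preserves the normalization condition, so the population remains a valid set of quantum chromosomes after any number of generations.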
Realization of the Mining of Drama Resources on the Network Platform.
In this section, we discuss the mining of the proposed drama resources on the network platform. By mining the semantic association features of frequent patterns of platform drama resources, combined with the fuzzy attribute feature detection method, network platform drama resource mining is realized. Combined with the autocorrelation feature detection method, a statistical analysis of frequent pattern mining of platform drama resources is carried out, and a fuzzy correlation fusion model of frequent pattern mining of platform drama resources is established; the feature segmentation model of the platform drama resource frequent pattern data is then established [37], as expressed by the following equation.
Here, r_NM represents the global weighted value of the frequent pattern data mining of platform drama resources at the N-th point. A STARMA(1, 1) statistical analysis model of the frequent pattern data of the graph data is constructed, and optimization control of the frequent pattern data mining of platform drama resources is performed, as expressed by the following equation. Here, ρ represents the fuzzy rule feature quantity of the frequent pattern mining of the platform drama resource data; using the statistical information analysis method, the associated feature distribution set of platform drama resource frequent pattern data mining is established and expressed by the following equation. Here, C_ih represents the input space, C_oh represents the output space, and C_io represents the high-dimensional feature space. The calculation equations of these three parameters follow, where N_B represents the closed frequent item set and N_S represents the semantic segmentation domain.
Finally, the big data fusion method is used to perform pattern matching and information fusion clustering of frequent pattern mining of platform drama resources. At feature point a, the frequent pattern distribution set of platform drama resources is expressed as in the following equation: Here, v represents the number of frequent pattern data of platform drama resources, and f represents the weighting coefficient of frequent pattern mining of platform drama resources.
Through the semantic dynamic feature segmentation method, the standard error coefficient of platform drama resource mining is obtained as in the following equation. Here, x_i^max represents the fuzzy constraint feature quantity of platform drama resource frequent pattern mining and optimization. A storage module and an information query module for frequent pattern mining of platform drama resources are established, together with a feature extraction and classification model, to obtain the final mining output given in the following equation. The result calculated by equation (19) is the output of platform drama resource mining, thus completing the design of the network platform drama resource mining method based on wireless sensor technology.
Simulation and Experimental Work
In order to verify the effectiveness and comprehensiveness of the network platform drama resource mining method based on wireless sensor technology, simulation experiments are performed. For comparison with the methods used in [7, 8], we use the mining time, the comprehensiveness of the mining results, and the accuracy of the mining results as the experimental indicators.
Experimental Environment.
We have carried out the experimental work for the proposed scheme using 64-bit Ubuntu 10.10 Linux, an Intel Xeon E5606 machine with 4 GB of memory and a 1 TB hard disk, the VIM editor and Code::Blocks development tools, and C++ as the programming language.
Experimental Data.
The experimental data comes from a local drama database, which includes three sub-databases: an index database, a data database, and a video database. The data sub-database records information such as drama types, repertoires, characters, documents, and pictures, while the video sub-database records audio-visual data of the plays. The index sub-database provides indexes for the data sub-database and the video sub-database. Figure 5 shows the structure diagram of the database.
Experimental Results.
In this section, we first discuss the resource mining time, the comprehensiveness of the resource mining results, and the accuracy of the mining results. During our experimental work, we compare our scheme with the work of [7, 8].
The rest will be explained in the coming sections.
Analyzing Figure 6, it can be seen that the mining time consumed when mining resources using the method in this paper is significantly lower than that of the methods in references [7] and [8]. The mining time of the method in this paper shows a continuous downward trend. When the number of iterations is less than 6, the mining time decreases significantly; after that, the trend slows down, with the lowest mining time being only 0.8 s. By comparison, it can be seen that the mining time of this method is shorter, which shows that its resource mining efficiency is higher.
Comprehensiveness of Resource Mining Results.
Comparing the comprehensiveness of the resource mining results of different methods, the results are shown in Table 1.
Analysis of the data in Table 1 shows that the method in this paper can mine up to 19 types of drama resources, and at least 13 types, while the methods in references [7] and [8] can mine far fewer types of resources than the method of this research work. Among them, the method in reference [7] can mine at most 9 types of drama resources, and the method in reference [8] at most 8 types. Through comparison, it can be seen that the resource types obtained by the mining method in this paper are more comprehensive, which shows that its application effect is better.
Accuracy of Mining Results.
Comparing the accuracy of the mining results of the different methods, the results are shown in Figure 7. In order to obtain accurate mining results, we have compared our proposed model with the work of [7, 8].
According to Figure 7, the resource mining accuracy rate of this method is significantly higher than that of the traditional methods. When the number of iterations is 5, the mining accuracy rate of this method reaches 60%, while the accuracy rates of the methods in references [7] and [8] are 47% and 43%, respectively. When the number of iterations is 14, the mining accuracy of the method in this paper reaches 82%, against 56% for the method in [7] and 62% for the method in [8]. Through comparison, it can be seen that the mining results of the method in this paper are more accurate, indicating that they are more reliable.
Comprehensive analysis of the above experimental results shows that the minimum mining time of this method is only 0.8 s, that it can mine up to 19 types of drama resources, and that its mining accuracy rate reaches 82%. This method has obvious advantages in mining time, mining comprehensiveness, and mining accuracy, which shows that its mining results are more reliable and its mining efficiency is higher. The network efficiency of the proposed QGA technique in target allocation is then examined, and QGA is compared with the Particle Swarm Optimization (PSO) and Simulated Annealing (SA) algorithms in wireless sensor technology for target allocation. The efficiency of QGA is also compared for different numbers of target points and sensors. The entire testing procedure is carried out on a computer using the same hardware and software. Figure 8 shows how many iterations are necessary for SA, PSO, and our proposed QG algorithm to reach convergence under various simulated conditions. The range variation percentage judgment approach is used to determine convergence. The variation limit specified in this experiment is 1% to 2%; that is, the algorithm can be regarded as convergent if the rise percentage of network efficiency achieved by the algorithm is between 1% and 2%. As a result, our proposed Quantum Genetic Algorithm appears to have better convergence performance.
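The convergence judgment described above can be sketched as a simple check on the relative rise of network efficiency between successive iterations. The exact rule in the paper may differ; the 1% and 2% bounds below follow the stated variation limit:

```python
def has_converged(efficiency_history, low=0.01, high=0.02):
    """Judge convergence by the relative rise of network efficiency between
    successive iterations, following the 1%-2% variation limit described
    above (an assumption about the precise rule)."""
    if len(efficiency_history) < 2:
        return False
    prev, curr = efficiency_history[-2], efficiency_history[-1]
    rise = (curr - prev) / prev
    return low <= rise <= high
```

A large jump (the algorithm is still improving rapidly) or a near-zero rise (below the stated lower bound) both fall outside the 1%-2% window and are therefore not counted as convergence under this reading of the rule.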
Conclusion
In conclusion, this paper puts forward an efficient Quantum Genetic Algorithm (QGA) for drama resource mining based on wireless sensing technology. From the perspective of drama resources, this paper builds a mathematical model and then uses the proposed algorithm to solve it. The results show that the algorithm is effective in addressing resource mining time, comprehensiveness of the mining results, and accuracy of the mining results within the sensor target allocation problem, and they prove the effectiveness of QGA-based drama resource mining in optimizing the network efficiency of the wireless sensing technology. By applying QGA, a more efficient drama resource mining scheme can be obtained. Not only does the scheme obtain the maximum monitoring effect for the drama resources, but it also saves mining time and produces accurate results. Through suitable deployment of sensors, limited resources can be used as fully as possible. Future research should carefully consider other methods that can further improve the network efficiency of drama resource mining, such as combining routing optimization algorithms with clustering and machine learning techniques.
Conflicts of Interest
The author declares that he has no conflicts of interest regarding the publication of this paper.
Effects of vertical central stabilizers on nonlinear wind-induced stabilization of a closed-box girder suspension bridge with various aspect ratios
The aerodynamic shape of a closed-box girder plays an important role in the wind-induced stabilization of long-span suspension bridges. The purpose of this study is to investigate the effects of the combination of five aspect ratios and a downward vertical central stabilizer (DVCS) on the nonlinear flutter and aerostatic behaviors of a super long-span suspension bridge with closed-box girders. A series of wind-tunnel tests and nonlinear finite element analyses show that the nonlinear self-excited forces and the critical wind speed (Ucr) gradually increase with the aspect ratio (i.e. the width-to-depth ratio). Furthermore, the application of a DVCS of 20% deck depth could significantly increase the nonlinear self-excited forces and Ucr for the small aspect ratios of 7.9 and 7.1. In particular, the installation of the DVCS could change the flutter divergence pattern of the bridge from soft flutter to hard flutter, especially for a relatively small aspect ratio. In addition, the aerostatic force coefficients and torsional divergence critical wind speeds of the larger aspect ratio with DVCS are significantly larger than those without DVCS. A bridge with a relatively small aspect ratio has better aerostatic performance than one with a larger aspect ratio.
Introduction
The construction of super long-span suspension bridges with main spans longer than 1500 m has become increasingly common in recent years; examples include the 1915 Canakkale Bridge (2023 m main span) in Turkey, and the Nansha Bridge (1688 m main span) and Lingdingyang Bridge (1666 m main span) over the Pearl River in China. However, super long-span suspension bridges with closed-box girders are extremely sensitive to wind excitation due to their flexibility, lightness, and low damping ratio [1,2]. Wind-induced instability (e.g., flutter and aerostatic instability) under strong wind, which directly threatens the safety of the bridges, is one of the most difficult problems encountered when designing super long-span closed-box girder suspension bridges [3][4][5][6][7].
As one of the critical factors in the aerodynamic shape of a closed-box girder, the aspect ratio of deck width to depth (B/H) can significantly affect the wind-induced instability of suspension bridges by determining structural parameters [8][9][10][11]. To further improve the wind-induced stabilization of the bridge, a range of passive aerodynamic countermeasures can be implemented to modify the aerodynamic shape, such as a vertical central stabilizer [12], a central slot [13], and a guide plate [14,15]. Although it is known that the vertical central stabilizer (VCS) is one of the practical measures to alleviate the flutter instability of closed-box girder suspension bridges [16], the degree of its enhancement of the nonlinear flutter and aerostatic behaviors of super long-span suspension bridges with various aspect ratios under strong wind is still uncertain. In order to guarantee the wind-induced stabilization of super long-span suspension bridges, it is necessary to systematically study the effects of different aspect ratio and VCS combinations on the nonlinear flutter and aerostatic behaviors of these bridges with closed-box girders.
On the one hand, previous studies have provided good insight into the flutter instability of bridges concerning only the aspect ratio [17] or only the VCS [18], but the complex nonlinear behaviors of soft flutter [19,20] and post-flutter [21,22] for closed-box girder suspension bridges under strong wind remain challenging. During the last decades, a series of nonlinear aerodynamic force models, such as the band superposition model [23,24], Volterra model [25], polynomial model [26,27], nonlinear differential equations model [13,14], and neural-network-based model [28,29], were developed to predict the soft-flutter and post-flutter behaviors of bridges. Nevertheless, the effects of aerodynamic shape modification (e.g. the combination of DVCS and aspect ratio) on the nonlinear flutter behaviors of closed-box girder suspension bridges should be further investigated. On the other hand, the nonlinear three-component displacement-dependent wind loads and the geometric and material nonlinearities should be taken into account in evaluating the aerostatic stabilization of suspension bridges [30,31]. In recent years, the nonlinear aerostatic instability of suspension bridges has been investigated both experimentally [17,18,32] and theoretically [33][34][35]. However, the influence of the combination of aspect ratios and VCS on the nonlinear aerostatic behavior of closed-box girder suspension bridges has not been fully understood.
The purpose of this study is to conduct a series of wind-tunnel tests in conjunction with nonlinear numerical analysis to investigate the effects of various aspect ratios in combination with a downward VCS (DVCS) on the nonlinear flutter and aerostatic performance of a super long-span suspension bridge. Keeping the vertical frequency constant, a series of sectional-model flutter tests was carried out to obtain the critical wind speed (U_cr) of closed-box girders with five typical aspect ratios ranging from 7.1 to 10.4 and a DVCS of 20% deck depth. Subsequently, a three-dimensional (3D) nonlinear FE model of the closed-box girder bridge was developed based on the nonlinear self-excited force model, and the time-dependent displacements, frequencies, and oscillation modes of soft flutter were compared. Finally, static wind-load coefficients and nonlinear aeroelastic behaviors were evaluated based on the force-measured testing results of two important aspect ratios (i.e., 7.9 and 8.9) and a DVCS. The present study is helpful for optimizing the aerodynamic shape of super long-span suspension bridges under strong wind.
2 Nonlinear flutter behavior with various aspect ratios and DVCS combination
Structural parameters of a super long-span suspension bridge
As shown in Fig. 1a, a super long-span suspension bridge with the span arrangement of 580 + 1756 + 630 m was selected, in which the overall height of the two side towers is 247.5 m and the longitudinal distance between two adjacent suspenders is 18 m. The 3D nonlinear finite element models of the bridges with various aspect ratios were established using ANSYS software, and the natural frequencies and mode shapes of the bridge were calculated using the block Lanczos algorithm. Table 1 shows the geometric and mechanical parameters of a closed-box girder with five typical aspect ratios (i.e., B/H = 10.4, 8.9, 8.3, 7.9, and 7.1). The details of SEC2 and SEC4 of the bridge, with aspect ratios of 8.9 and 7.9, respectively, are shown in Fig. 1b, c. Since a change of aspect ratio leads to a change of the mechanical parameters (e.g., mass and stiffness), both the mass and the mass moment of inertia were adjusted to keep the vertical frequency constant, in order to isolate the effect of the aspect ratio as an aerodynamic shape parameter on the nonlinear wind-induced stabilization of a closed-box girder suspension bridge. In this study, the vertical frequencies for the five aspect ratios are kept constant (i.e., 0.112 Hz), and the torsional frequency increases as the aspect ratio decreases. A series of sectional-model flutter tests was conducted to study the effectiveness of aerodynamic shape modification for the five aspect ratios on the flutter instability of the closed-box girders without and with a DVCS. Figure 2 shows the experimental setup of the flutter testing, involving eight spring-supported rigid sectional models of closed-box girders with a DVCS of 20% deck depth, subject to wind attack angles of +3°, 0°, and −3°, respectively.
The geometric scale ratio of SEC1, SEC3, and SEC5 is given in Table 2. As shown in Table 2, the flutter tests in this study involve 30 testing cases in total, considering five different aspect ratios, without and with DVCS, under three wind attack angles (i.e., +3°, 0°, and −3°). The tests were conducted in the TJ-1 boundary layer wind tunnel at Tongji University. The critical flutter wind speeds of the sectional models with the five aspect ratios, without and with DVCS, under the three wind-attack angles are presented in Fig. 3. It shows that the wind attack angle of α = 0° is the most unfavorable for the closed-box girders among the three angles in most of the cases. In particular, the critical wind speed U_cr gradually increases as the aspect ratio increases. For example, the flutter performance of the bridge with SEC1 (aspect ratio = 10.4) is the best among the five aspect ratios, followed by SEC2 with an aspect ratio of 8.9, while the flutter performance of SEC5 (aspect ratio = 7.1) is the worst. The ratio of torsional frequency to vertical frequency of the closed-box girders is in inverse proportion to the critical flutter wind speed. However, none of the sectional models with these five aspect ratios meets the requirement of wind-resistant design (U_cr is less than the required minimum critical wind speed of 83.1 m/s). Thus, the application of a DVCS becomes necessary to further modify the aerodynamic shape. The results in Fig. 3b show that, after installing the DVCS, the wind attack angle of α = +3° is the most unfavorable for the closed-box girders among the three angles in most of the cases. They indicate that the application of a DVCS of 20% deck depth can significantly increase the minimum U_cr for all the aspect ratios, particularly for the flutter performance of the closed-box girders with the smaller aspect ratios of SEC4 and SEC5.
Flutter testing of the aspect ratios and DVCS combination
Based on the nonlinear self-excited force model (NSFM) previously developed for an upward VCS [12], the NSFM of the closed-box girders with various aspect ratios and a DVCS in Fig. 4 was further developed in terms of nonlinear differential equations, which are listed in Eqs. (1)-(20) of reference [13]. The relative instantaneous wind velocity (u), the relative instantaneous wind attack angle (θ), and their derivatives are used as the input variables. u_x and u_y are the horizontal and vertical components of the wind velocity, respectively. ẋ, ẏ, and α̇ are the horizontal velocity, vertical velocity, and torsional angular velocity of the movement, respectively. F_H, F_V, and M_Z represent the drag force, lift force, and lift moment, respectively. B and H are the width and depth of the closed-box girder, respectively.
The expression of the nonlinear self-excited force can be written as in Eq. (1). φ_m, K_m and φ_a, K_a denote the contribution matrix variables associated with the structural translational acceleration and rotational acceleration, respectively. G_m, H_m, G_a, and H_a denote the contribution matrix variables associated with the input variables of the structural movement.
In order to identify the parameters of the NSFM for the closed-box girders with the five different aspect ratios and DVCS, the initial values of the input variables under various wind velocities and oscillation displacements are first obtained from forced-oscillation CFD simulations. Then, the model parameters are identified by implementing the Levenberg-Marquardt algorithm and the fourth-order Runge-Kutta algorithm to minimize the difference between the time-dependent aerodynamic forces (i.e. F_H, F_V, and M_Z) predicted by the model and those obtained from the CFD simulation [12][13][14]. A series of forced oscillations was computed using the large eddy simulation (LES) method in conjunction with a Smagorinsky subgrid-scale model in the CFD model to determine the initial parameters of the NSFM [34,35]. A rectangular computational region of 75B × 50B was used in the CFD simulation, and the Reynolds number was set to 5.0 × 10^5 based on the depth of the closed-box girders. Figure 5a, b shows the simulation results without and with DVCS, respectively. Seven representative reduced wind velocities (U_r = U/fB = 2, 4, 6, 8, 10, 12, 14), six representative maximum vertical displacements of the deck (y = ±1, ±2, ±3, ±4, ±5, ±6), and six representative maximum torsional displacements of the deck (h = ±1°, ±5°, ±10°, ±15°, ±20°, ±25°) were used as the input variables of the NSFM. Figure 5c, d compares the predicted dimensionless time-dependent outputs, hysteresis loops, and amplitude spectra of the self-excited forces (e.g., lift force F_L and lift moment F_M) of the closed-box girders with the five different aspect ratios. It shows the dimensionless time-dependent F_L and F_M of a closed-box girder without DVCS under a large reduced wind speed (U_r = 10) and heaving amplitude (h = 6). The absolute amplitude of F_L for SEC1 is the largest among the five aspect ratios.
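The fourth-order Runge-Kutta time stepping used in this identification loop can be sketched generically; the scalar test equation below is an illustrative stand-in for the actual coupled differential equations of the NSFM, which are only given in reference [13]:

```python
def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

In the identification procedure, a step like this integrates the candidate force model forward in time for each trial parameter set, and the Levenberg-Marquardt algorithm then adjusts the parameters to shrink the mismatch between the integrated forces and the CFD time histories.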
The absolute amplitude of F_L gradually decreases with a decrease in aspect ratio. Similarly, the amplitude variation of F_M is relatively large for a large aspect ratio. In addition, the area of the hysteresis loops of the lift force F_L for SEC1 is the largest among the five aspect ratios, and the area of the hysteresis loops decreases with a decrease in aspect ratio. This indicates that the unsteady features increase with the decrease of wind speed. As shown in Fig. 5e, f, there is a nonlinear relationship between the multi-frequency phenomenon of F_L and the self-excited force for all five aspect ratios, and their first frequencies are close to 0.2 Hz.
The dimensionless time-dependent F_L and F_M of the closed-box girder with DVCS under the same reduced wind speed (U_r = 10) and heaving amplitude (h = 6) are illustrated in Fig. 6. It shows that the amplitudes of F_L for SEC4 and SEC5 with DVCS are larger than those for the other three aspect ratios, and the absolute values of F_M for SEC4 and SEC5 with a DVCS are larger than those for the other three aspect ratios. In addition, both the amplitudes of F_L and F_M of the closed-box girder with a DVCS are much larger than those without a DVCS. The areas of the hysteresis loops of the lift force F_L for SEC5 and SEC1 are the largest and smallest, respectively. The area of the hysteresis loops for a closed-box girder with a DVCS decreases with an increase in the aspect ratio but is larger than that without DVCS. The frequencies of F_L for the closed-box girder with DVCS in Fig. 6d show that the DVCS can significantly change the frequency characteristics of the F_L of SEC5 and SEC3. Therefore, the nonlinear aerodynamic force decreases with the increase of the aspect ratio after installing the DVCS, and the DVCS can significantly influence the nonlinear characteristics of the self-excited force.
Nonlinear flutter responses of the aspect ratios and DVCS combination
A three-dimensional closed-box girder bridge model with a total of 853 elements was developed based on the NSFM to investigate the nonlinear flutter responses of the bridge with different aerodynamic shapes. In the self-developed C# program, 3D beam elements with 14 degrees of freedom (6 nodal translational degrees of freedom, 6 nodal rotational degrees of freedom, and 2 transverse shear degrees of freedom) are used to simulate the closed-box girder and main towers, while 3D truss elements with 7 degrees of freedom (6 nodal translational degrees of freedom and 1 axial force degree of freedom) are used to simulate the main cables and hangers. Furthermore, nonlinear aerodynamic force elements with 6 nodal degrees of freedom and a set of self-excited force subsystem degrees of freedom are used to apply the NSFM to the nodes of the main girder. The boundary conditions of the main cables are constraints in the vertical, lateral, and longitudinal directions. The boundary conditions between the main girder and the left tower are constraints in the vertical, lateral, and torsional directions, while those between the main girder and the right tower are constraints in the vertical, lateral, longitudinal, and torsional directions. Figure 7 shows the time-dependent torsional displacement responses at the mid-span point of the closed-box girder with the five aspect ratios without and with DVCS. Soft flutter with a torsional displacement of about α = 2° developed for the five aspect ratios before flutter divergence over the critical wind speed. Specifically, small limit cycle oscillation (LCO) occurred at a wind speed of U = 71 m/s for SEC1, U = 70 m/s for SEC2, U = 64 m/s for SEC3, U = 58 m/s for SEC4, and U = 56 m/s for SEC5. The torsional displacement of the soft flutter for SEC3 was close to ±1.5° at the beginning at U = 64 m/s, and then reached more than 20° at the extreme wind speed of U = 66 m/s.
However, all the torsional displacements of the bridge without DVCS increase rapidly to divergence at the flutter onset wind speeds of the closed-box girders with DVCS. Figure 8 shows that all the torsional displacements of the bridge with DVCS decrease rapidly at the flutter onset wind speeds of the closed-box girders without DVCS. After the installation of the DVCS, the critical wind speeds at which soft flutter became flutter divergence were 81 m/s, 89 m/s, 85 m/s, 95 m/s, and 93 m/s for SEC1 to SEC5, respectively. In addition, the torsional displacement responses of the hard flutter initially become larger and then suddenly increase to a threshold (more than about 6°) which leads to structural failure. This indicates that there are different displacement responses in soft flutter for closed-box girders with various aspect ratios, and that the installation of the DVCS can change the flutter divergence pattern of a bridge from soft flutter to hard flutter. As shown in Fig. 7, there is no obvious LCO in the nonlinear response of the cross section with the stabilizer, as the DVCS can significantly change the vertical DOF participation in the flutter divergence, leading to the significant increase in the torsional displacements of the bridge shown in Figs. 8 and 9. Figure 9a, b compares the frequencies of the torsional displacement responses of the closed-box girders with the five aspect ratios without and with DVCS. The results demonstrate a single-frequency phenomenon (i.e., around 0.16 Hz) for all responses. In addition, the frequency increases modestly with the increase of the aspect ratio. Figure 9c, d shows the RMS of the torsional displacements for the five aspect ratios without and with a DVCS. The results show that the torsional amplitudes generally increase smoothly, and the post-flutter instability is a nonlinear soft flutter in nature. Specifically, the largest torsional displacements range from about 4° to 9° after installing the DVCS.
The reduced wind speeds of SEC1 and SEC2 are relatively high, ranging from about 6.5 to 8, while the reduced wind speeds of SEC2, SEC4, and SEC5 with a DVCS are higher, ranging from 8 to 10. Furthermore, the simulated displacement responses of the main girders in three directions are shown in Fig. 10. The torsional displacement responses of SEC2 are the largest of the three. Most importantly, the first and second anti-symmetric torsional modes play the dominant role in the coupled bending-torsional oscillations of the main girder with and without a DVCS, respectively. The failure modes of the overall bridge at flutter divergence are shown in Fig. 10c, d. In summary, the installation of the DVCS can significantly change the vertical DOF participation in the flutter divergence of the bridge and thereby increase its critical flutter wind speed.
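The soft/hard distinction discussed above can be checked mechanically from a simulated displacement time history. The following is a hedged sketch (not the authors' code): it flags divergence when the envelope of the final portion of a torsional record keeps growing, and otherwise treats the response as a saturated limit cycle. The window size and growth tolerance are illustrative assumptions, not values from the paper.

```python
import math

def envelope_ratio(theta, window=200):
    """Peak amplitude of the final window divided by the preceding one."""
    late = max(abs(x) for x in theta[-window:])
    early = max(abs(x) for x in theta[-2 * window:-window])
    return late / early if early > 0 else float("inf")

def classify(theta, grow_tol=1.05):
    """Sustained envelope growth beyond grow_tol flags divergence."""
    return "divergent" if envelope_ratio(theta) > grow_tol else "limit cycle"

# Synthetic demos at the ~0.16 Hz torsional frequency reported for Fig. 9:
dt, n = 0.05, 2000
lco = [1.5 * math.sin(2 * math.pi * 0.16 * i * dt) for i in range(n)]
growing = [0.1 * math.exp(0.02 * i * dt) * math.sin(2 * math.pi * 0.16 * i * dt)
           for i in range(n)]
```

On the synthetic records, the saturated sine is classified as a limit cycle and the exponentially growing one as divergent; on real FE output the thresholds would need tuning to the sampling rate and record length.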
3 Nonlinear aerostatic behavior with various aspect ratios and DVCS combination
Force-measured tests of the aspect ratios and DVCS combination
Force-measured testing of aerostatic forces on the closed-box girders with different aerodynamic shapes was performed in the TJ-2 boundary layer wind tunnel at Tongji University. Two sectional models (SEC4 and SEC2, with aspect ratios of 7.9 and 8.9, respectively) were used; both were 0.55 m wide (B), with depths (H) of 0.061 m and 0.069 m. In addition, the 20% depth DVCS for SEC2 and SEC4 was also experimentally investigated. In total, 100 test cases covering the two aspect ratios, without and with a DVCS, under 25 wind-attack angles were tested. As shown in Fig. 11, a force-balance system was installed on one side of the tunnel to vertically support the two ends of a sectional model. A wind speed of 10 m/s was adopted in the tests, with the wind-attack angle ranging from -12° to +12° in increments of 1°. The geometric details of the SEC2 and SEC4 models without and with the DVCS are shown in Fig. 11b and c, respectively. [Fig. 11: The force-balance system used in this study. a Experimental set-up; b SEC4 without DVCS; c SEC2 with DVCS.]
Static wind loading of the aspect ratios and DVCS combination
The drag-force coefficient (C_D), lift-force coefficient (C_L), and lifting-moment coefficient (C_M) of the girders with different aerodynamic shapes were measured in the force-measured tests. As illustrated in Fig. 12a, the value of C_D is asymmetric about a = 0°, and C_D at negative wind-attack angles is much larger than at positive angles. Although most values of C_D for SEC4 without a DVCS are close to those for SEC2 without a DVCS, all values of C_D for SEC4 with a DVCS are smaller than those of SEC2 with a DVCS. In addition, the values of C_D for both models without a DVCS are much smaller than those with a DVCS from a = -9° to a = 9°. Figure 12b shows that the values of C_L for SEC4 without a DVCS are slightly larger than those for SEC2 without a DVCS. This is consistent with the observation that the values of C_L for SEC4 with a DVCS are larger than those for SEC2 with a DVCS. The values of C_M for SEC4, both without and with a DVCS, are also slightly larger than the corresponding values for SEC2.
The value of C_M generally becomes larger after installing the DVCS, as shown in Fig. 12c. Generally, the values of C_L and C_M gradually increase as the wind-attack angle increases from a = -12° to a = +12°, whereas they decrease as the aspect ratio increases. Most of the aerostatic force coefficients of both SEC4 and SEC2 with a DVCS are larger than those without a DVCS.
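The text does not state the normalization used for these coefficients. A common convention for bridge-deck sectional tests (an assumption here, with ρ the air density, U the wind speed, L the model length, H the section depth, and B the section width) is:

```latex
C_D(\alpha)=\frac{F_D(\alpha)}{\tfrac{1}{2}\rho U^{2} H L},\qquad
C_L(\alpha)=\frac{F_L(\alpha)}{\tfrac{1}{2}\rho U^{2} B L},\qquad
C_M(\alpha)=\frac{M(\alpha)}{\tfrac{1}{2}\rho U^{2} B^{2} L}
```

Under this convention a larger coefficient at a given angle directly implies a larger static wind load on the prototype at the same reduced conditions.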
Nonlinear aerostatic behavior of the aspect ratios and DVCS combination
Based on the measured aerostatic force coefficients, the nonlinear 3D aerostatic instability of the bridge with two aspect ratios, without and with a DVCS, is calculated by solving the nonlinear Eq. (2-3) using the optimum iteration method described in reference [31].
where K_e and K_g are the structural elastic stiffness matrix and the geometric stiffness matrix, respectively; F_H(a), F_V(a), and M_T(a) are the drag force, lift force, and lifting moment, respectively; {u} is the structural displacement vector; and D and B are the depth and width of the deck.
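The text references Eq. (2-3) without displaying it. Based on the symbols defined in the where-clause, a standard form of the nonlinear aerostatic equilibrium equation (a hedged reconstruction, which may differ in detail from the original) is:

```latex
\left([K_e]+[K_g]\right)\{u\}=\{F(\alpha)\},\qquad
\begin{aligned}
F_H(\alpha)&=\tfrac{1}{2}\rho U^{2} D\, C_D(\alpha)\\
F_V(\alpha)&=\tfrac{1}{2}\rho U^{2} B\, C_L(\alpha)\\
M_T(\alpha)&=\tfrac{1}{2}\rho U^{2} B^{2}\, C_M(\alpha)
\end{aligned}
```

The nonlinearity enters twice: K_g depends on the displacement state, and the force coefficients are evaluated at the effective wind-attack angle, which includes the torsional response.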
The torsional divergence critical wind speed (U_td) of the bridge was determined considering both geometric nonlinearity and aerodynamic force nonlinearity. Figure 13 shows the three structural displacement responses at the midpoint of the bridge with two aspect ratios (SEC2 and SEC4) under three wind-attack angles. The displacement responses of the sectional models with a DVCS installed under a wind-attack angle of +3° are also shown in Fig. 13. [Fig. 13: Displacement responses at the midpoint of the bridge with two aspect ratios without/with a DVCS. a, b lateral displacements; c, d vertical displacements; e, f torsional displacements.] It can be seen that the lateral displacements gradually increased with wind speed, with a = +3° producing the largest growth rate. The lateral displacements of SEC2 without a DVCS were slightly larger than those of SEC4 without a DVCS, while the displacements of SEC2 with a DVCS were smaller than those of SEC4 with a DVCS. Furthermore, the vertical and torsional displacements of SEC4 and SEC2 under a = +3° gradually increase with wind speed. In addition, the vertical and torsional displacements of SEC2 without a DVCS were slightly larger than those of SEC4 without a DVCS. Installing the DVCS noticeably increased the vertical displacements of SEC2, while the torsional displacements of both SEC4 and SEC2 increased after installation. After the installation of the DVCS, the U_td of SEC4 and SEC2 increased from 160 m/s to 162 m/s and from 150 m/s to 166 m/s, respectively. Therefore, a relatively small aspect ratio could improve the aerostatic performance of the closed-box girder suspension bridge.
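The iteration behind a torsional-divergence search can be illustrated with a one-degree-of-freedom sketch (not the paper's method or data): at each wind speed, the effective attack angle alpha_e = alpha_0 + theta is iterated to equilibrium, and the first speed at which the iteration stops converging approximates U_td. The stiffness, deck width, and the linear C_M fit below are all illustrative assumptions.

```python
import math

RHO = 1.225      # air density, kg/m^3
B = 38.5         # illustrative full-scale deck width, m
K_T = 5.0e7      # illustrative torsional stiffness, N*m/rad

def c_m(alpha_deg):
    # Illustrative linear fit to a measured moment-coefficient curve.
    return 0.02 + 0.9 * math.radians(alpha_deg)

def torsional_response(u, alpha0_deg, tol=1e-8, max_iter=200):
    """Fixed-point iteration theta = q*B^2*C_M(alpha0 + theta)/K_T (rad)."""
    q = 0.5 * RHO * u**2
    theta = 0.0
    for _ in range(max_iter):
        alpha_e = alpha0_deg + math.degrees(theta)
        theta_new = q * B**2 * c_m(alpha_e) / K_T
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return None  # no equilibrium found -> treated as torsional divergence

def critical_speed(alpha0_deg, u_max=400.0, du=1.0):
    """Sweep wind speed upward; the first non-convergent step approximates U_td."""
    u = du
    while u <= u_max:
        if torsional_response(u, alpha0_deg) is None:
            return u
        u += du
    return None
```

With the illustrative numbers, the response angle grows slowly at low speed and the iteration fails to converge somewhat below the analytical divergence speed of this toy model; a full 3D analysis replaces the scalar stiffness with (K_e + K_g) and the scalar moment with the nodal force vector.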
Conclusions
This study investigated the effect of the DVCS on the nonlinear flutter and aerostatic behaviors of a super long-span closed-box girder suspension bridge with five aspect ratios by conducting a series of wind-tunnel tests in conjunction with nonlinear finite element analysis. The major conclusions are as follows:
• The critical wind speed (U_cr) gradually increases with the aspect ratio owing to the smaller ratio of natural torsional to vertical frequency, and the aspect ratio also affects the limit cycle oscillation development of soft flutter. For example, the flutter performance of the bridge with an aspect ratio of 10.4 is much better than that with an aspect ratio of 7.1.
• The application of a 20% deck depth DVCS can significantly increase U_cr for different aspect ratios, and the enhancement in the flutter performance of the bridge is more pronounced for a relatively small aspect ratio. The installation of the DVCS can change the flutter divergence pattern of a bridge from soft flutter to hard flutter.
• The absolute values of C_L and C_M increase with the wind-attack angle and the aspect ratio of a closed-box girder suspension bridge. In addition, the installation of the DVCS can significantly increase the three static wind-load coefficients.
• The bridge with a larger aspect ratio has a higher torsional divergence critical wind speed. Installation of a DVCS also significantly improves the aeroelastic performance of the bridge, especially for the larger aspect ratio of 8.9.
Multilingual Communication Experiences of Foreign Migrants in China During the Covid-19 Pandemic
This study focuses on five foreign workers employed by one of the biggest nightclubs in Kunming, Yunnan. It examines their multilingual communication experiences during the pandemic, identifies the challenges confronting these foreign migrants and their efforts to overcome barriers in crisis communication, and analyzes the limitations of their strategies together with some practical suggestions.
Theoretical Framework
The concept of multilingual crisis communication (Piller, Zhang, & Li, 2020) was applied in this study. The outbreak of COVID-19 provides an opportunity to examine how multilingual communication was handled in emergency events, particularly for language minorities and foreign migrants. Many people worldwide had never engaged with such multilingual communication before COVID-19. The language barriers in multilingual communication mainly lie in two areas. First, multilingual communication is limited to only a few languages. Second, English-centric multilingualism is rooted in people's minds globally. Some response strategies are also covered in the article.
Inadequate Languages in Crisis Communication
COVID-19 has revealed gaps in both the theory and practice of multilingual communication. Multilingual communication emphasizes language variety, but in practice most of the world's 195 states operate in only one or two national languages, and linguistic minorities within those states, whether indigenous or migrant, face significant language barriers at the best of times (Piller, Zhang, & Li, 2020). The delivery of public health information and other public service information does not reflect this linguistic diversity. As the first battlefield against the virus, Wuhan in China encountered various linguistic problems. Although volunteers offered translation services, only seven languages were covered: English, French, German, Japanese, Korean, Russian, and Spanish. Moreover, English held the highest status among them. The pandemic made clear that such imbalanced language services cannot meet the need for multilingual crisis communication (Zhang & Wu, 2020). In a worldwide pandemic, every individual is deemed by the World Health Organization (WHO) to be at risk from the virus, which means everyone has the right to receive timely information and understand the constantly changing situation of the pandemic. However, the WHO only provides information in the six official languages of the United Nations (Arabic, Chinese, English, French, Russian, and Spanish) and three additional languages (German, Hindi, and Portuguese) ("Coronavirus disease (COVID-19) pandemic", 2020). This limited set of languages does not meet the health needs of speakers of other languages. Putonghua is the only national language and standard variety in China, a multilingual and multicultural community. However, in daily communication most people speak local dialects instead of Putonghua. During COVID-19, it became clear that Putonghua alone was not sufficient to deliver effective health information to every single person in the country and to stem the spread of the virus.
In short, in crises like COVID-19, many public health problems were caused by the lack of language services or by poor communication. People without sufficient knowledge of the locally dominant languages are excluded from health matters, such as containing the spread of COVID-19, getting vaccinated, and following related policies. COVID-19 causes not only physical problems but also mental ones. With insufficient and ineffective information, individuals easily become trapped in negative emotions.
English-Centric Multilingualism in Crisis Communication
English is the unquestioned lingua franca nowadays. In the past decade, besides Anglophone countries, most nations have invested significantly in increasing their English capabilities (Phillipson, 2003; Piller & Cho, 2013; Takahashi, 2013). Moreover, for China and the whole world, intercultural communication is closely related to English (Piller, 2017). English has a predominant status in education in China, making China, after the USA, India, Nigeria, and Pakistan, the fifth-largest national population of English speakers in the world (List of countries by English-speaking population, 2020).
Yet the power of English is partly imagined. The pandemic has revealed the fallacy of believing in English as the universal solution to global communication problems. Although the World Health Organization (WHO) makes information available in several languages, English holds the predominant status in providing timely information and holding press conferences globally (Coronavirus disease (COVID-19) pandemic, 2020). Linguistic minorities worldwide struggle to obtain such health information in their native languages (Avineri et al., 2018; Briggs, 2018; Flood et al., 2018; Rubin, 2014). To improve the efficiency of the fight against COVID-19, providing multilingual logistics communication is necessary (Zhang & Wu, 2020). This demonstrates that English cannot meet the demands of everyone on Earth, especially in a crisis period, and the pandemic has weakened the influence of English.
Strategies
Top-down and bottom-up efforts are the two main strategies for overcoming the linguistic challenges arising from the pandemic (Piller, Zhang, & Li, 2020).
The top-down strategy is associated with national language competence, that is, the language competence of a state in dealing with various domestic and foreign affairs (Li, 2011). In coping with a public crisis like COVID-19, mobilizing national emergency language competence (NELC) is necessary. The detailed capacities of NELC include management, mobilization, intellectual, data, and technology capacity. Constructing NELC involves three stages: the pre-emergency stage, the during-emergency stage, and the post-emergency stage. The aims of constructing NELC are achieving "barrier-free" language communication, providing emotional support to avoid mental harm after a disaster, and monitoring the crisis well. Several types of emergency languages are involved in constructing NELC: standard national languages, non-standard varieties, minority languages, major international languages, cross-border languages, sign languages, and Braille (Li et al., 2020). All in all, the construction of NELC reflects a nation's power in overcoming linguistic challenges arising from natural disasters or pandemics. As the initial epicenter of COVID-19 at the beginning of 2020, Wuhan met language barriers because most Hubei residents speak Hubei dialects, which hindered the effectiveness of rescue efforts. At that time, medical supplies were running out in China, so seeking foreign assistance was inevitable. By March 2020, 77 countries and 12 international organizations had donated emergency medical supplies to China. However, donation and assistance encountered linguistic barriers. To address this, the "战疫语言服务团" [Language services group for epidemic prevention and control] was formed by China's Ministry of Education and the State Language Commission. This group made outstanding contributions to fighting COVID-19. The experts in the group published the Handbook of Hubei Dialects for Medical Assistance Teams and the Guide to Prevention and Control of COVID-19. These
linguistic experts helped local Hubei patients receive effective rescue and facilitated international medical services.
The bottom-up strategy is organized at the grassroots level. In Wuhan, to provide better multilingual logistics communication services during the COVID-19 pandemic, 250 college students and teachers, frontline responders, medical staff, procurement agents, overseas donors, and foundation officers in Wuhan and across the world self-organized the WeChat group "疫区翻译服务义工小组" [volunteer translation services for epidemic areas] to carry out crisis translation tasks. The group provided translation services in nine languages: English, French, Japanese, Korean, Portuguese, Russian, Spanish, Thai, and Vietnamese. When interviewed, one of the volunteers in the group commented: "我觉得民间的速度好像更快一些, 比起政府一级级(审批)下来, 我自己的感觉。" [I think the grassroots efforts are faster than the top-down approach requiring level-by-level governmental (approval) formalities; that is my impression] (Zhang & Wu, 2020). In Inner Mongolia, China, the traditional Mongolian art of khuuriin ülger ('fiddle story'), through which Mongolian folk singers carried public health information, was a bottom-up strategy to restrain the spread of COVID-19 and calm local people's fears (Bai, 2020).
During the COVID-19 outbreak in Wuhan, some international students from South Asia and Southeast Asia also tried to combat the pandemic through their multilingual repertoires (Li et al., 2020).
Methodology
To gain a nuanced understanding of foreign migrants' experiences in China during the COVID-19 pandemic, we interviewed seven participants working in one of the biggest nightclubs in a southwestern region of China.
Figure 1 is a group photo of the participants and researchers. Appendix A provides an overview of the participants, including five foreign dancers and their key stakeholders: a Chinese boss and a Chinese dance director. The designed questions, most of which concern the participants' life and language experiences during COVID-19 in China, are in Appendix B. Because semi-structured interviews were used, questions beyond those designed were also covered, contributing to the findings.
When the data for this research were collected in September 2022, all the foreign dancers had lived in China for five years, and three had multilingual ability. They have had English education experience, but these foreign dancers' social circle is limited to nightclub workers and guests, whose English is poor. These workers and guests would not actively make friends with them. D has a boyfriend working as a DJ in a nightclub, so they have similar life routines. Facing COVID-19, D could ask her boyfriend for help, for example when venting bad feelings during the lockdown period. But the other four dancers do not have many Chinese friends, which made it harder for them to stay relaxed during quarantine.
Fourth, their stereotypes about learning Chinese. In informal conversation, they said that they prefer learning English to Chinese, for two reasons. First, in their mindset, English is used more widely than Chinese, and learning English is more beneficial. Second, some social media content misleads learners into believing that Chinese is too hard to learn. They therefore hold a stereotype about learning Chinese that discourages them from studying it. In China, most public health information is presented in Chinese, and understanding this information troubles them a great deal.
Strategies to Overcome Their Language-Related Challenges
Living in China, these foreign migrants have developed strategies to overcome their linguistic barriers. These strategies greatly facilitated their lives during COVID-19, but they still need improvement.
First, the use of translation applications. During our interviews and informal conversations, these foreign dancers used translation applications because of language barriers. When we asked whether they use translation applications in daily life, they responded that such applications had become part of their lifestyle in China. With the development of translation technology, translation applications make conversations across languages more convenient. However, the accuracy of machine translation is still questionable, and new expressions and words appeared during the pandemic. These foreign dancers said they sometimes could not understand the translated meaning, so they had to rephrase or use another translation application to convey the complete meaning, which caused a series of embarrassing situations and led them to misunderstand some Chinese policies related to the pandemic.
Second, Chinese agencies act as their language brokers. As mentioned before, every foreign dancer comes to China for work through international market intermediaries, and each has a Chinese agency in each area of China. The Chinese dance director told us that if these foreign dancers have trouble, such as seeing a doctor, changing jobs, or moving workplaces, they can ask the agency for help, but the service is not free. Because of the high cost, they are unwilling to ask for help when they encounter problems, especially language problems. During COVID-19, they sometimes had no salary because of the closure of nightclubs. A said that he came to China mainly to make money, so he would not pay for expensive and unnecessary services.
Third, seeking help from people with Chinese competence. D often asks her boyfriend for help, but the other four dancers are less lucky. Their foreign appearance makes Chinese people keep their distance, and making Chinese friends during the crisis is even more complicated. In dealing with language challenges, they have to depend more on themselves.
Conclusion
This study examined the multilingual communication experiences of foreign dancers of diverse linguistic backgrounds during the COVID-19 pandemic in China. The findings demonstrate that their Chinese proficiency needs to be improved to meet their crisis communication needs. Their job demands, foreign appearance, and stereotypes about learning Chinese hinder them from learning the language, particularly during the COVID-19 crisis. They therefore must turn to translation applications, agencies, and people with Chinese competence. The utility of these, however, is limited owing to insufficient language provision and language training programs.
To protect every single person's health rights, reassure foreign migrants during the pandemic, and maintain social justice, smaller languages should also be offered to foreign migrants of diverse backgrounds. Although China boasts national language competence in 101 foreign languages (Zhao, 2016), the national language remains the primary language for pandemic announcements and public health information.
After Chinese, and influenced by the English-centric mindset, pandemic-related information in English is also common in China. However, multilingualism limited to one or two languages has placed heavy pressure on speakers of other languages and produced adverse social effects. In the modern era, translation applications can solve part of the language-related problems, and access to translated information is not only a human right (Greenwood et al., 2017) but also an approach to disaster prevention and relief that can increase individual and community-level resilience (Piller, Zhang, & Li, 2020). However, depending on translation applications alone is not enough: the quality of the translation greatly influences accuracy, and errors can have side effects. Offering only a limited set of languages will, by and large, prevent people of diverse backgrounds from obtaining adequate and accurate information.
Governments and departments must recognize this difficulty and provide language training services for social integration. Apart from D, who received formal English education in college, the other dancers and the Chinese dance director in the nightclub can be considered grassroots language learners. Few grassroots learners can afford the time and tuition to pursue formal education and learn a language (Han, 2013). A significant gap exists between their desire to learn languages and the high cost in time and fees.
Finally, foreign migrants are a symbol of a country's internationalization. Yunnan, as a border province, will embrace more foreign migrants after the COVID-19 pandemic and shoulder more responsibility for facilitating internationalization. We should consider this group when constructing a new image of China worldwide and advancing China's internationalization and globalization. Although the influence of COVID-19 is proclaimed to have decreased in China in 2023, other disasters and crises will occur. These foreign dancers' multilingual experiences in China during COVID-19 can give us language-related ideas for future crisis prevention.
Table 1. Yunnan as the 2nd top destination for foreign migrants in China (Major Figures on 2010 Population Census of China, 2010; Major Figures on 2020 Population Census of China, 2020)
Study on Numerical Model and Dynamic Response of Ring Net in Flexible Rockfall Barriers
Developing reliable, sustainable and resilient infrastructure of high quality and improving the ability of countries to resist and adapt to climate-related disasters and natural disasters have been endorsed by the Inter-Agency Expert Group on Sustainable Development Goals (IAEG-SDGs) as key indicators for monitoring the SDGs. Landslides pose a serious threat to vehicle traffic and infrastructure in mountain areas all over the world, so their prevention and control is urgent and necessary. However, the traditional rigid protective structure is not conducive to the long-term prevention and control of landslide disasters because of its poor impact resistance, high material consumption and difficult later-stage maintenance. Therefore, this study focuses on flexible rockfall barriers, which offer good corrosion resistance, material savings and strong cushioning performance, and proposes a refined numerical model of a ring net. This model is used to simulate existing experiments, and the simulation results are in good agreement with the experimental data. In addition, the numerical model is used to study the influence of boundary conditions, rockfall gravity and rockfall impact angle on the energy consumption of the ring net. The results indicate that fixing the four corners increases the deformability, flexibility and energy dissipation ability of the ring net. Moreover, the influence of gravity on the energy dissipation of the overall protective structure should not be neglected in numerical simulation analysis when the rockfall diameter is large enough. As the impact angle rises, the impact energy of the rockfall on the ring net gradually declines, and the rings at the lower support ropes break. When the numerical model proposed in this study is used to simulate the dynamic response of flexible rockfall barriers, it increases the accuracy of the data and makes the research results more credible.
Meanwhile, flexible rockfall barriers are the most popular infrastructure for landslide prevention and control at present, improving the ability of countries to resist natural disasters to some extent. Therefore, the research results provide technical support for the better development and application of flexible rockfall barriers in landslide disaster prevention and control, and also provide an important optional reference for evaluating the Sustainable Development Goals (SDGs) globally and regionally according to specific application goals.
Introduction
Developing reliable, sustainable and resilient infrastructure of high quality and improving the ability of countries to resist and adapt to climate-related disasters and natural disasters have been endorsed by the Inter-Agency Expert Group on Sustainable Development Goals (IAEG-SDGs) as key indicators for monitoring the SDGs [1]. The flexible rockfall barrier is one of the most popular protective structures in slope geological disaster prevention and control engineering today. Given its advantages, such as safety, reliability, good terrain adaptability, aesthetics, environmental friendliness, quick and convenient construction, and good durability, it is widely used in numerous fields such as avalanche protection, rockfall protection, debris flow protection and safety protection. It plays a vital role in protecting important facilities and buildings, preventing avalanches, intercepting rockfall and reducing damage to traffic systems. Most importantly, the flexible rockfall barrier can withstand impact loads many times and adopts galvanized anti-corrosion measures to increase the structure's durability, which meets the requirements of the SDGs. Typical examples are as follows. In 1951, the Bruker Company of Switzerland applied the wire rope net to avalanche protection for the first time, producing the prototype of a flexible rockfall barrier. The avalanche flexible rockfall barriers installed in 1954 in St.
Gallen, Switzerland, successfully intercepted a falling rock of about 3 m³ in 1961, after which flexible rockfall barriers began to be used for rockfall protection. During the "El Niño" storm in 1998, the flexible rockfall barriers installed across the United States, especially in California, successfully intercepted a large amount of debris flow solid material and reduced damage to the traffic systems in these areas. The TECCO net is a load-bearing flexible net made of flat spiral net wires of high-strength steel, twisted one by one in the form of chains. In 2009, an isolation fence composed of TECCO net passed the certification of the Fédération Internationale de l'Automobile (FIA) and was then formally used in the safety protection of high-grade racing tracks.
Landslides, together with rockfalls, avalanches and debris flows, are common geological disasters that are sudden and destructive, threatening human life and infrastructure [2,3]. Froude presented a statistical analysis of a global dataset of fatal non-seismic landslides covering the period from January 2004 to December 2016; the data show that in total 55,997 people were killed in 4862 distinct landslide events [4]. A large landslide happened in the Tapovan area in the south of the Himalayas on 7 February 2021, which triggered a large avalanche down the valley, entrained the deposits and river water, and evolved into a catastrophic debris flood in the Dhauliganga River, causing fatalities and severe damage to the local infrastructure [5]. Therefore, it is necessary to design and use protective structures to mitigate the risk of landslide disasters. Rigid protection and flexible protection are commonly used in landslide disaster protection [6]. The former, a traditional form of slope disaster protection, is characterized by a huge foundation, small structural deformation and poor impact resistance, and can be divided into masonry protection and reinforcement protection. By contrast, the latter is one of the most widely used protection structures at present because of its environmental friendliness, long life span, terrain adaptability and good ductility. Flexible protection can be divided into active flexible protection, passive flexible protection and attenuator protection according to their different disaster reduction mechanisms. Active protection reinforces the slope with anchor rods and nets covering the slope surface to prevent disasters. Considered the most effective measure to reduce landslide disasters, passive flexible protection dissipates the energy of landslide solid material, mainly through the high ductility and deformation of the metal flexible net, to avoid damage. The attenuator attenuates
the kinetic energy of landslide material by friction between the landslide material and the net.
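The design quantity behind all of these systems is the kinetic energy the barrier must absorb. The following is an illustrative sketch (not from the paper) of the usual back-of-envelope quantities: the mass of an idealized spherical block and the kinetic energy carried by the velocity component normal to the net. The rock density and the angle convention (measured between the trajectory and the net normal) are assumptions for illustration.

```python
import math

ROCK_DENSITY = 2650.0  # kg/m^3, typical granite; assumed value

def rockfall_mass(diameter_m):
    """Mass of an idealized spherical block."""
    r = diameter_m / 2.0
    return ROCK_DENSITY * (4.0 / 3.0) * math.pi * r**3

def normal_impact_energy(diameter_m, speed_ms, angle_deg=0.0):
    """Kinetic energy carried by the velocity component normal to the net.

    angle_deg = 0 is a head-on impact; the normal energy falls off as
    cos^2(angle), consistent with the qualitative trend reported later in
    the text (larger impact angle -> lower impact energy on the net).
    """
    m = rockfall_mass(diameter_m)
    v_n = speed_ms * math.cos(math.radians(angle_deg))
    return 0.5 * m * v_n**2
```

For example, a 1 m granite sphere at 20 m/s carries roughly 280 kJ head-on, which is why gravity and self-weight effects become non-negligible for large block diameters.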
There are numerous mountains and hills in southwestern China, where frequent climate change leads to active natural disasters such as mudslides, landslides and rockfalls. Figure 1 shows a scene where a mountain road has been hit by rockfalls and traffic is interrupted. The passive flexible rockfall barrier is widely used in the prevention and control of slope geological disasters because of its safety, applicability, flexible layout, attractive appearance and environmental friendliness. As shown in Figure 2, a passive flexible protective structure is installed on a hillside. It is a flexible safety rockfall barrier composed of four parts: a metal flexible net, a fixing system (anchor rods, anchor cables, foundations, support ropes, etc.), brake rings and steel columns. The structure forms a weak tension system through the balance of tension and compression between components, which can effectively reduce or prevent the hazards caused by geological disasters [6][7][8]. The metal flexible net, the main energy dissipation component in the passive flexible rockfall barrier, usually takes one of two forms: a rhombic net or a ring net (see Figure 3). The ring net is applied more widely than the rhombic net because of its flexibility, energy dissipation ability and the interconnection of its independent rings. However, the ring net often fails in practical engineering applications. One reason is the uncertainty of collapse and the complicated structural behavior of the ring net; the lack of literature on the basic theory of ring net members is another significant cause. Therefore, the systematic study of the mechanical properties and energy dissipation of the ring net is of great importance to the design of metal flexible nets. At present, there are many studies on passive flexible rockfall barriers in China and abroad. Volkwein set up a special discrete element for ring nets; the specially
developed software application Faro simulates the dynamic behavior of a spherical rock stopped by such a protection barrier in many short time steps using the central difference method [9]. This enables a detailed view of the dynamics of the modeled barrier and also provides information on its loading and degree of utilization; the simulation results were compared to field tests carried out within the research project. Grassl conducted a static test on a single ring of a ring net, presenting the mechanical behavior of the ring net under rockfall impact, carried out a dimensional analysis of the ring net barrier's components for use in empirical design procedures, and performed full-scale tests using single- and three-span net configurations in which net deformations and cable forces were measured over time. In parallel to the experimental research, a simplified explicit finite element program was developed [10,11]. This program was coupled with a structural reliability program and used to analyze the reliability of the protection systems. The structure of rockfall barriers was simulated and analyzed with an explicit finite element program taking different constraint forms into account, and the results of numerical simulations and tests were compared. A numerical analysis tool was put forward to describe the energy dissipation performance of flexible rockfall barriers. Wendeler proposed a load model for debris flow loads on flexible ring net barriers in FAlling ROcks (FARO), a finite-element-based computer program for three-dimensional, highly nonlinear, dynamic simulation of rock interception [12][13][14]. The energy dissipation for area loads in FARO was estimated either with field tests or with numerical modeling. Liu proposed a load model for landslide pressure and provided a reference for the design of rockfall barriers to resist landslides [15]. Wang studied the
mechanical properties and energy dissipation of the ring net [16,17]. The energy dissipation formula of a single ring under tension was deduced; subsequently, the energy dissipation of two ring nets and the influence of boundary conditions on the energy dissipation performance of the ring net were presented. However, in the theoretical formula for a single ring, the plastic hinge length is determined from numerical simulation results and lacks a firm theoretical basis. In summary, most current research has focused on full-scale tests of passive flexible rockfall barriers and on the overall performance of simplified numerical simulations, whereas basic research on ring net components remains scarce. The ROCCO ring net is a load-bearing flexible net made of high-strength steel wires coiled into rings that are sleeved into each other. Against this background, the theoretical energy dissipation formula of a single ROCCO ring under load is derived, and the dynamic response, energy dissipation performance and failure mechanism of the ring net under rockfall impact are analyzed, providing a reference for the basic research and design of ring net components. Qin developed a new FBG mini tension link transducer and introduced its working principle [18]; these transducers were then applied in impact tests with a single boulder and with debris flow to study the dynamic response of a flexible barrier under impact load. Boulaud made a comparative assessment of three commonly used models of the ASM4 rockfall barrier, providing a guide for designers in choosing a model; moreover, a model of a sliding cable subjected to concentrated forces was proposed to simulate the "curtain effect" when modeling the rockfall barrier, although that model is limited to quasi-static loading [19,20]. A generic computational approach to rockfall barrier analysis was introduced by Coulibaly. Using this
method, the influence of repeated impacts on the rockfall barrier was studied [21]. Julian developed finite element models, calibrated against a case study, to simulate the interaction between debris and a flexible barrier; these models were used to study the behavior of flexible barriers under debris impact in terms of force and energy [22]. Douthe investigated an experimentally observed variability numerically using a nonlinear spring-mass equivalence, carrying out a sensitivity analysis of the global response of the flexible barrier with respect to block-related and network-related parameters [23].
In summary, most current research concentrates on comprehensive testing of rockfall barriers and on their overall performance via simplified numerical simulation, but many open questions remain regarding how to construct a correct numerical model of the ring net. This paper therefore develops an accurate method for the numerical simulation of the ring net and, based on this numerical model, studies the effects of boundary conditions, rockfall gravity and impact angle on the energy dissipation of the ring net. Finally, the theoretical energy dissipation formula for rockfall impact on the ring net is analyzed and verified.
Numerical Simulation of Ring Net under Impact of Rockfall and Comparative Analysis of Test Results
To perform a precise numerical simulation of rockfall impact on the ring net using ANSYS/LS-DYNA, a 3.9 m × 3.9 m ring net fixed on all four sides is modeled with 180 rings of type R7/3/300, i.e., a net made of 3 mm diameter steel wire coiled 7 times, with a 300 mm inscribed-circle mesh diameter. The following elements are selected in the numerical model. The Beam161 element is used for the ring net, mainly because the rings must still carry a certain bending moment during tensile deformation. The Combi165 element is used for the brake rings. The support rope adopts the Link160 element, which carries only axial force and no bending moment. The Solid164 element is used to model the falling rock. The analysis time is 0.3 s. In the reference experiment, the rockfall impacted the middle of the ring net perpendicularly; the rockfall was a sphere with a mass of 830 kg and a density of 2600 kg/m³. The model material parameters are shown in Table 1. The finite element model of the ring net under rockfall impact is established with the same net size and constraints as in the literature [12], considering the influence of rockfall gravity and ignoring relative slip between rings. The rings are fixed, and impact energies of 24 kJ and 45 kJ are simulated by assigning different initial velocities to the rockfall. The finite element model is shown in Figure 4. The impact time is defined as the interval from first contact between the rockfall and the ring net until the rockfall's speed is reduced to zero; the maximum displacement is the distance traveled by the rockfall over the same interval.
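Since the impact energies are imposed by assigning initial velocities to the 830 kg sphere, the corresponding velocities follow directly from the kinetic energy relation E = ½mv². A minimal sketch of this conversion (the function name is ours, not part of the paper's model):

```python
import math

def impact_velocity(energy_j, mass_kg):
    """Initial velocity carrying a given kinetic energy: E = 1/2 m v^2."""
    return math.sqrt(2.0 * energy_j / mass_kg)

mass = 830.0  # kg, sphere from the reference test [12]
for e_kj in (24.0, 45.0):
    v0 = impact_velocity(e_kj * 1000.0, mass)
    print(f"E = {e_kj:.0f} kJ -> v0 = {v0:.2f} m/s")
```

Under this assumption, the 24 kJ and 45 kJ cases correspond to initial velocities of roughly 7.6 m/s and 10.4 m/s, respectively.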
The numerical simulation results for rockfall impact energies of 24 kJ and 45 kJ are compared with the test results from the literature [12] in Table 2. The comparison shows that the numerical results agree closely with the experimental results under rockfall impact. The numerical analysis method proposed in this paper can therefore be used to simulate the dynamic process of the ring net under rockfall impact, providing a useful reference for the overall analysis of passive flexible protection.
Influence of Different Constraint Forms on Energy Dissipation Performance of Ring Net
According to the constraint of the boundary around the ring net, the boundary conditions can be divided into three forms: all four sides fixed, two (opposite) sides fixed, and the four corners fixed (see Figure 5). Different constraint conditions lead to different energy dissipation and failure of the ring net. Using the same numerical simulation method as in the previous section, a finite element model of a 3.3 m × 6.6 m ring net is established under the three boundary conditions. The ring type is R7/3/300, the net is surrounded by a 16 mm diameter support rope, and the rockfall, a sphere with a mass of 830 kg and a density of 2600 kg/m³, impacts the middle of the ring net perpendicularly.
Two cases are studied in the numerical simulation: in the first, the rockfall strikes the center of the ring net perpendicularly with the same impact velocity in each configuration, and the dynamic response of the ring net is analyzed; in the second, the maximum energy dissipation and the corresponding failure form are obtained by numerical simulation.
The numerical simulation of the three boundary conditions is carried out with a rockfall velocity of 7 m/s. Figure 6 shows the relationship between normal impact displacement and time for the three conditions: with all four sides fixed, the maximum normal displacement of the ring net is 1.7 m at 0.19 s; with two (opposite) sides fixed, it is 1.51 m at 0.249 s; and with the four corners fixed, it is 1.54 m at 0.25 s. Comparing the maximum normal displacements gives four corners fixed > two (opposite) sides fixed > four sides fixed, and the time to reach the maximum normal displacement follows the same order. That is, releasing the peripheral constraints increases the deformation capacity of the ring net, prolongs the interaction time between the rockfall and the ring net, and improves the energy dissipation performance of the net. The relationship between energy and time (Figure 7) likewise shows that releasing the peripheral constraints improves the flexibility of the ring net and prolongs the interaction time. To analyze the maximum energy dissipation of the ring net under the three kinds of constraints, suppose that at a rockfall impact velocity v_lim the ring net is undamaged, while increasing the impact velocity by 1 m/s, to v_lim + 1, damages the net. According to the kinetic energy formula E = ½mv², the maximum energy dissipation of the ring net can then be obtained.
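The procedure just described brackets the maximum energy dissipation between the kinetic energies at the surviving and failing velocities. A short sketch of that arithmetic, using an illustrative v_lim that is not taken from the paper (the per-constraint results appear in Table 3):

```python
def energy_bracket_kj(mass_kg, v_lim):
    """Bracket the net's maximum energy dissipation: it survives v_lim
    but fails at v_lim + 1 m/s, so the limit lies between the two
    kinetic energies E = 1/2 m v^2 (returned in kJ)."""
    e_low = 0.5 * mass_kg * v_lim ** 2 / 1000.0
    e_high = 0.5 * mass_kg * (v_lim + 1.0) ** 2 / 1000.0
    return e_low, e_high

# illustrative value only, for the 830 kg rockfall
lo, hi = energy_bracket_kj(830.0, 12.0)
print(f"maximum energy dissipation between {lo:.2f} kJ and {hi:.2f} kJ")
```

The 1 m/s increment sets the resolution of the bracket; a finer velocity step would narrow the interval at the cost of more simulation runs.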
The energy dissipation and failure forms obtained from the numerical simulation of the ring net under the three boundary conditions are shown in Table 3, and the initial state and failure of the ring net with four corners fixed are shown in Figure 8. As can be observed from Table 3, comparing the maximum energy dissipation gives: four corners fixed > two (opposite) sides fixed > four sides fixed.
Influence of Rockfall Gravity on Energy Dissipation Performance of Ring Net
When the passive flexible protective structure is applied in practical engineering, it is generally erected to follow the mountainside, with its erection direction perpendicular or nearly perpendicular to the trajectory of the rockfall; that is, the angle between the rockfall velocity and the direction of gravity is about 90°. In model tests, by contrast, the trajectory of the rockfall generally coincides with the direction of gravity, i.e., the angle between the rockfall velocity and gravity is 0°. Much of the literature has simply neglected the influence of rockfall gravity when analyzing the energy dissipation performance of passive flexible protective net structures by numerical simulation.
The study covers three conditions: in the first case, the gravity of the rockfall is perpendicular to its velocity (90°); in the second case, gravity and velocity point in the same direction (0°); and the third case neglects the effect of rockfall gravity entirely (see Figure 9). Equivalent rockfall diameters of 0.8 m, 1.0 m and 1.2 m were chosen, and the numerical simulation was carried out under the three conditions. To study the influence of rockfall gravity on the energy dissipation performance of the ring net while maximizing the net's limit energy dissipation, the four-corners-fixed constraint is selected, since the analysis in Section 3.1 shows that the ring net dissipates the most energy under this constraint. The study was conducted in two cases: in the first, rockfalls of different diameters impact the ring net at v = 5 m/s, and the time-history curves of energy, displacement and velocity are analyzed; in the second, the maximum velocity, displacement, failure mode and energy dissipation under the three conditions are obtained by numerical simulation.
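For orientation, the sphere masses implied by these diameters at the model density of 2600 kg/m³, the kinetic energy at the common 5 m/s impact velocity, and the extra work done by gravity when it acts along the velocity can be sketched as follows. The ~1.7 m deflection used for the gravity term is our assumption, loosely based on the maximum displacements reported later, not a value given in this section:

```python
import math

RHO = 2600.0  # kg/m^3, rockfall density used in the model
G = 9.81      # m/s^2

def sphere_mass(diameter_m, rho=RHO):
    """Mass of a spherical rockfall: m = rho * (4/3) * pi * r^3."""
    return rho * (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3

V0 = 5.0   # m/s, common impact velocity in the first case
SAG = 1.7  # m, assumed net deflection for the gravity work term

for d in (0.8, 1.0, 1.2):
    m = sphere_mass(d)
    e_kin = 0.5 * m * V0 ** 2 / 1000.0   # kJ at first contact
    e_grav = m * G * SAG / 1000.0        # extra kJ when gravity acts along the velocity
    print(f"d = {d} m: m = {m:.0f} kg, Ekin = {e_kin:.1f} kJ, extra mgh = {e_grav:.1f} kJ")
```

The gravity term grows with the cube of the diameter, which is consistent with the paper's observation that gravity matters more for larger rockfalls.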
(1) Rockfall diameter of 0.8 m. Figures 10-12 show the time-history curves of rockfall displacement, velocity and energy, respectively, for a rockfall diameter of 0.8 m. These three figures share a common pattern: when the rockfall velocity is aligned with gravity, the peak values of displacement, velocity and energy are larger than in the other two cases, while neglecting rockfall gravity yields the smallest peaks. (2) Rockfall diameter of 1.0 m. Figures 13-15 show the corresponding time-history curves for a rockfall diameter of 1.0 m. The pattern is essentially the same as for the 0.8 m diameter; the only difference is that the rockfall's weight increases with its diameter, so the peak values of the displacement, velocity and energy time-history curves are larger than for the 0.8 m rockfall. (3) Rockfall diameter of 1.2 m.
Figures 16-18 show the time-history curves of rockfall displacement, velocity and energy, respectively, for a rockfall diameter of 1.2 m. The pattern is again essentially the same as for the 0.8 m diameter; the only difference is that the rockfall's weight increases with its diameter, so the peak values of the corresponding time-history curves are larger than for the 0.8 m and 1.0 m rockfalls. From Figures 10-18 it can be seen that the displacement, velocity and energy time-history curves of rockfalls of different diameters are essentially the same across the three cases; only the amplitudes of the curves increase with the rockfall diameter. When the rockfall velocity (V) and gravity (G) point in the same direction, the energy and velocity time-history curves show a distinct initial rising stage, whereas in the other two cases they decline from the start; the maxima of the time-history curves are also larger in this case than in the other two. When the rockfall velocity (V) is perpendicular to gravity (G), the time-history curves envelop those of the case in which gravity is neglected. It follows that the case in which the rockfall velocity is aligned with gravity is the most unfavorable of the three.
Several conclusions can therefore be drawn. The rockfall diameter affects only the peak values, not the overall pattern (Figures 10, 13 and 16). When rockfall gravity is considered and its direction coincides with the velocity, the situation is the most unfavorable of all working conditions; if rockfall gravity is ignored, the actual rockfall displacement, velocity and energy peaks will exceed the numerical simulation results, and using such results in engineering design would leave the structure unsafe. ANSYS/LS-DYNA was used to analyze the maximum velocity, displacement, failure mode and energy dissipation of rockfalls of different diameters impacting the ring net (see Tables 4-6 and Figure 19). The results show that the maximum velocity of the rockfall decreases as the rockfall diameter increases. From Tables 4-6, the maximum velocity of the rockfall is closely related to its diameter: the peak velocity of the 0.8 m rockfall is higher than that of the other two diameters. In all three cases, the maximum displacement of the rockfall remains between 1.6 m and 1.8 m, i.e., the maximum deformation of the ring net stays within a limited range. For the same ring net, the maximum energy of the rockfall is closely related to its diameter; the kinetic energy of the 1.0 m rockfall is greater than that of the other two diameters. Figure 19 shows that, under the four-corners-fixed constraint, the failure mode of the flexible barrier depends on the rockfall diameter and on the included angle between the rockfall's gravity and velocity.
In summary, the case in which the rockfall velocity (V) and gravity (G) point in the same direction is the most unfavorable, so model tests at an actual test site are conservative (safe). When the rockfall diameter is large enough, the influence of rockfall gravity on the energy dissipation of the overall protective structure cannot be ignored in numerical simulation.
Influence of Rockfall Impact Angle on Energy Dissipation Performance of Ring Net
In the analysis of flexible rockfall barriers, it is generally assumed that the rockfall impacts the middle of the barrier perpendicularly. However, rockfalls usually roll and bounce [6] and therefore strike the ring net at some angle, with different effects on the structure. The boundary condition used in the numerical simulation is four corners fixed; the rockfall velocity is v = 7 m/s, the impact time is 0.6 s, the rockfall mass is 830 kg, and the rockfall density is 2600 kg/m³. Based on practical engineering applications, five impact angles are considered: 0°, 15°, 30°, 45° and 60°. The dynamic response of the ring net at different impact angles is analyzed by decomposing the velocity. The impact model is shown in Figure 20 and the corresponding finite element model in Figure 21. The maximum rockfall velocity and the failure mode of the ring net are analyzed for the different impact angles (see Table 7). The table shows that, for the same ring net, the critical velocity and kinetic energy of the rockfall decrease as the impact angle increases. At an impact angle of 0° the kinetic energy of the rockfall is 59.76 kJ, while at 60° it is 26.56 kJ, a ratio of 2.25. As the impact angle increases, the failure mode of the ring net becomes fracture of the central ring at its connection to the lower support rope.
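The reported angle dependence is internally consistent with E = ½mv². A quick check, back-computing the critical velocities from the stated kinetic energies for the 830 kg rockfall (the helper function is ours):

```python
import math

MASS = 830.0  # kg, rockfall mass in the impact-angle study

def critical_velocity(e_kj, mass_kg=MASS):
    """Velocity implied by a reported critical kinetic energy: E = 1/2 m v^2."""
    return math.sqrt(2.0 * e_kj * 1000.0 / mass_kg)

v_0deg = critical_velocity(59.76)   # impact angle 0 degrees
v_60deg = critical_velocity(26.56)  # impact angle 60 degrees
ratio = 59.76 / 26.56
print(f"v(0 deg) = {v_0deg:.2f} m/s, v(60 deg) = {v_60deg:.2f} m/s, ratio = {ratio:.2f}")
```

This recovers critical velocities of 12 m/s at 0° and 8 m/s at 60°, and reproduces the 2.25 energy ratio quoted above.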
Because rockfall trajectories do not always strike the ring net vertically, several additional measures should be taken in the practical application of the overall protective structure, for example: adding brake rings in the support rope to increase the deformation of the flexible rockfall barrier and prolong the interaction time; increasing the cross-sectional size of the rings connected to the support rope; and increasing the cross-section of the support rope itself.
Summary and Conclusions
In this study, an accurate numerical model of the ring net is first proposed. The model is then used to reproduce experiments from the existing literature, and its accuracy is verified by comparison with the experimental results. The numerical model can serve as a refined model for studying the dynamic response of the ring net under impact load, available to designers and researchers. It is worth noting that a certain deviation remains in the simulation results, mainly because the model represents the rings as tangent and fixedly connected, without relative slip, with the net in an initial tension state, whereas in the experiment the rings are sleeved, slip relative to each other, and the net starts in a relaxed state. Finally, the validated numerical model is used to study the influence of constraint forms, rockfall gravity and rockfall impact angle on the energy dissipation of the ring net. The main conclusions are as follows: 1. The numerical model of the ring net proposed in this study can effectively reproduce the dynamic response of the ring net under rockfall impact; the maximum displacement, maximum velocity and impact time of the rockfall agree well with the experimental data. 2. Constraint forms: as the peripheral constraints are reduced, the deformation capacity and flexibility of the ring net increase, and its energy dissipation performance improves. 3.
Rockfall gravity: compared with the case in which rockfall gravity is ignored, considering gravity increases the displacement, velocity and energy of the rockfall to varying degrees, whether gravity is perpendicular to the velocity (90°) or aligned with it (0°).
In other words, when the rockfall diameter is comparatively large, ignoring the influence of its gravity on the energy dissipation of the ring net leaves the whole passive flexible protective structure unsafe. 4. Rockfall impact angle: as the impact angle increases, the failure mode of the net changes from the rockfall breaking through the net to fracture of the central ring at its connection to the lower support rope, while the critical velocity and kinetic energy of the rockfall gradually decline.
It should be noted that this study did not consider certain effects of rockfall impact on the energy dissipation of the ring net, nor the elongation response of the flexible rockfall barrier caused by the impact. To optimize the design of flexible rockfall barriers, these two aspects need to be studied in future research and design work.
Figure 2. Application of passive flexible protective net structure.
Figure 3. The form of the passive flexible protective net. (a) Ring net; (b) Rhombic net.
Figure 5. Three forms of the ring net. (a) Four sides fixed; (b) Two (opposite) sides fixed; (c) Four corners fixed.
Figure 6. Relationship between normal impact displacement and time.
Figure 7. Relationship between rockfall energy and time.
Figure 8. Initial state and failure of the ring net with four corners fixed. (a) The initial state; (b) The ultimate damage.
Figure 9. Direction of gravity (G) and velocity (V). (a) V perpendicular to G; (b) V in the same direction as G; (c) Effects of gravity ignored.
Figure 10. Relationship between rockfall displacement and time.
Figure 11. Relationship between rockfall velocity and time.
Figure 12. Relationship between rockfall energy and time.
Figure 13. Relationship between rockfall displacement and time.
Figure 14. Relationship between rockfall velocity and time.
Figure 15. Relationship between rockfall energy and time.
Figure 16. Relationship between rockfall displacement and time.
Figure 17. Relationship between rockfall velocity and time.
Figure 18. Relationship between rockfall energy and time.
Figure 19. Failure modes of the ring net: (a) Rockfall penetrating the ring net; (b) Fracture of the central ring at the connection of the upper and lower support ropes; (c) Fracture of the central ring at the connection of the lower support rope; (d) Fracture of the central ring at the connection of the lower support rope and the middle part of the ring.
Table 1. The material parameters.
Table 2. Comparison between numerical simulation and experimental results.
Table 3. Energy dissipation and failure form of the ring net under three boundary conditions.
Table 4. Extreme values and failure mode for a 0.8 m diameter rockfall impacting the ring net.
Table 5. Extreme values and failure mode for a 1.0 m diameter rockfall impacting the ring net.
Table 6. Extreme values and failure mode for a 1.2 m diameter rockfall impacting the ring net.
Table 7. Maximum dynamic response of the ring net.
Numerical Analysis of the Light Modulation by the Frustule of Gomphonema parvulum: The Role of Integrated Optical Components
Siliceous diatom frustules present a huge variety of shapes and nanometric pore patterns. A better understanding of the light modulation by these frustules is required to determine whether or not they might have photobiological roles besides their possible utilization as building blocks in photonic applications. In this study, we propose a novel approach for analyzing the near-field light modulation by small pennate diatom frustules, utilizing the frustule of Gomphonema parvulum as a model. Numerical analysis was carried out for the wave propagation across selected 2D cross-sections in a statistically representative 3D model for the valve based on the finite element frequency domain method. The influences of light wavelength (vacuum wavelengths from 300 to 800 nm) and refractive index changes, as well as structural parameters, on the light modulation were investigated and compared to theoretical predictions when possible. The results showed complex interference patterns resulting from the overlay of different optical phenomena, which can be explained by the presence of a few integrated optical components in the valve. Moreover, studies on the complete frustule in an aqueous medium allow the discussion of its possible photobiological relevance. Furthermore, our results may enable the simple screening of unstudied pennate frustules for photonic applications.
Introduction
In recent years, our attention has been increasingly directed toward biological systems that produce micro- and nanostructured biomaterials that have been optimized, through evolution over billions of years, to find unique solutions to complex physical problems [1][2][3]. The ability of living organisms to produce these materials originates from the power of their cells to manipulate molecules, atoms, and ions through molecular mechanisms occurring at the nanoscale and involving what some define as the engines of creation, i.e., DNA and proteins [4]. Such materials are biosynthesized under ambient conditions in aqueous solution and require neither large amounts of energy nor toxic reactants. This is an advantage compared to similar man-made materials, offering many organic and inorganic micro- to nanostructured biomaterials for studies and applications (e.g., [5][6][7][8]). Specifically, inorganic structured biomaterials, obtained via elegant biomineralization processes [9], are often produced for specific purposes, mainly mechanical support. The nanostructuring of these materials markedly improves the mechanical stability of skeletal systems compared to the bulk material, for example by avoiding the characteristic brittleness of calcite crystals in the shells of coccoliths [10].
An impressive class of microorganisms that produces nanostructured silica is Bacillariophyceae (i.e., diatoms) [11]. It is considered the most diverse microalgal class globally. Since detailed experimental investigations of diatom frustules smaller than 30 µm are difficult, numerical analysis of light modulation by these frustules is particularly promising. Using numerical methods based on geometrical-optics assumptions (such as in [46]) or on strong approximations may exclude diffraction and interference or lead to inaccurate predictions, respectively. For relatively large valves in the range of a few tens to a few hundreds of µm, the beam propagation method (BPM) has been applied successfully [28,30,47-49]. Additionally, finite difference and finite element methods (FDM and FEM, respectively) are two common numerical approaches that have been used extensively to solve optical problems related to diatoms [36,41,50-53]. FD methods, such as the finite difference time domain (FDTD) method, are easy to handle but better suited to regular structures, as the discretization cannot deal efficiently with irregular shapes or edges. On the other hand, FEM, used in this study, has proven to be a powerful technique for dealing with arbitrary geometries and even inhomogeneous media [54].
In this paper, we aim to expand the knowledge of the light modulation capabilities of diatom frustules by focusing on small pennates, which form a large and diverse group often underrated in such studies. The frustule of the biraphid pennate Gomphonema parvulum (G. parvulum, GP) is used as a model, extending and explaining our preliminary results reported in [55] by numerical analysis using frequency domain FEM. To acquire the 3D structural information needed to build a realistic and statistically representative model for the simulations, we use a focused ion beam-scanning electron microscopy (FIB-SEM) workflow in addition to regular SEM analysis. Moreover, we use a novel analytical approach to enhance understanding and minimize computational cost by (i) reducing the complex 3D structure (the complete valve or frustule) to 2D cross-sections and (ii) disassembling the distinct optical components. This approach helps us to understand the overlapping optical phenomena and reveals the role of integrated optical components within the solar spectrum range. Furthermore, by investigating the influence of different structural parameters (using analytical models) as well as refractive index contrast on the observed phenomena, this study opens the door to predicting the light modulation by other pennate frustules of similar structure but different dimensions.
Materials and methods
The GP structural parameters, necessary for the construction of a representative 3D model and for understanding the variability of distinct structural parameters between valves, were extracted from SEM and FIB-SEM data. For this, GP diatoms (obtained from Goettingen, Germany) were cultivated in 50 ml culture flasks (Greiner Bio-One GmbH) in Wright's cryptophyte medium [56] in a cultivation cabinet (Percival CFL LED) at 18 °C with a day/night light cycle of 12 h/12 h. The progress of cultivation was observed with an inverted light microscope (Zeiss, Axio Vert.A1).
Characterization with SEM
SEM micrographs of six different GP valves (two viewed from the inside and four from the outside) were used to obtain structural information in the 2D plane. For this, GP frustules were cleaned of organic components using an acid cleaning procedure. In detail, 8 mL of the diatom culture was mixed with 10 mL of concentrated HNO3 in a wide beaker and stirred for 3 h at 65 °C on a hot plate. After cooling, the resulting silica was centrifuged at 8500 rpm and washed three times with deionized water. After extraction, the silica material was freeze-dried until further use. For SEM analysis, several droplets of an aqueous suspension of GP valves were placed on an alumina sample holder. SEM measurements were performed using a JSM-6060LV scanning electron microscope (JEOL) with a 5 kV acceleration voltage (EHT) and a probe current Ip of 8 nA using a secondary electron detector.
Characterization Using FIB-SEM
FIB-SEM data were used to provide 3D structural information of GP frustules. In total, ten frustules were analyzed. For FIB-SEM measurements, frustules need to be fixated. In the case of GP diatoms, fixation of cleaned frustules did not lead to desirable image quality due to an agglomeration of the individual valves and girdle bands which hampered the segmentation of the cell wall, and, thus, the fixation was carried out on concentrated (by centrifugation at 200 rpm) intact diatom cells using high-pressure freezing [57,58] and freeze substitution [59]. For this, copper planchettes (type B) were filled with 5 µL cell suspension and closed with a second planchette. The sample was then loaded into a sample holder and injected into a high-pressure freezing system (Leica EM HPM 100). From that time, the samples were handled under liquid nitrogen to prevent thawing. The samples were transferred into a cryo vial containing reagents for cryo substitution (2% osmium tetroxide + 0.1% uranyl acetate + 0.5% glutaraldehyde + 1.5% water in acetone), and an automatic freeze substitution in a Leica EM AFS 2 was started (starting at −120 °C and slowly heating to 4 °C). After freeze substitution, the planchettes were removed from the sample, and the sample was washed five times with acetone. Then, the fixed GP cells were embedded using a slow infiltration procedure over four days [60] in Epon 812 resin (4 g resin, 2.67 g DDSA (dodecenyl succinic anhydride), 1.67 g NMA (nadic methyl anhydride), and 0.168 g BDMA (N,N-dimethylbenzylamine)), which was continuously stirred after mixing. The stepwise infiltration procedure started with 1, 2, 3, and 4 droplets of Epon 812 per mL of acetone for 2 h each on the first day, followed by 20%, 30%, 40%, 50%, and 60% resin in acetone for 1.5 h each on the second day, and 80% for 4 h on the third day. In the end, 100% resin was used for infiltration for 24 h.
Afterward, the samples were filled into small plastic cups, and the resin was hardened in an oven at 65 °C for 36 h. After hardening, the sample blocks were polished, and an area for acquisition containing GP diatoms was located using SEM.
FIB-SEM was performed on a Crossbeam 540 (Zeiss) equipped with a gallium ion source and secondary electron and backscatter detectors. The sample was mounted on a sample holder and tilted to 54° for milling, polishing, and imaging. First, a trench was made underneath the sample with a beam current of 65 nA, before polishing the surface of the hole with a beam current of 7 nA. Images (EHT = 2 kV, I p = 700 pA, working distance of 5.2 mm) were acquired in serial slicing mode with a slice thickness of 31.5 nm corresponding to the distance between consecutive images in the obtained image stack. The resulting image pixel size was 24.83 nm × 24.83 nm at a magnification of 4.54 kX.
After the acquisition, the obtained image stack was aligned (registered) and denoised by home-written Python programs using OpenCV (v. 2.4.7), taking advantage of the Fourier shift theorem. The displacement vector between consecutive images was calculated using the phaseCorrelate function and then expressed as a shift with respect to the first image. In many cases, curtaining artifacts, giving rise to vertical contrast modulations (stripes), were visible in the stacks, which were corrected following the approach of Münch et al. [61]. For this, correction parameters (such as the depth of the wavelet transform, the wavelet family, and the width of the Gaussian filter) were optimized to visually obtain a corrected stack with the least contrast and information loss and the largest stripe removal. Finally, to improve the signal-to-noise ratio, the image stacks were denoised using the skimage.restoration local denoising filters of the scikit-image Python library (v. 0.14.0). Again, the filtering algorithms and parameters were chosen by visually optimizing the corrected images to obtain the largest noise removal with the lowest information loss.
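The registration step based on the Fourier shift theorem can be sketched with a minimal NumPy-only routine (an illustrative stand-in for the OpenCV `phaseCorrelate` call used in this work; the function names and the restriction to integer circular shifts are our simplifying assumptions):

```python
import numpy as np

def phase_correlate(ref, img):
    """Estimate the integer (dy, dx) circular shift of `img` relative to `ref`
    via the Fourier shift theorem: the normalized cross-power spectrum of two
    shifted images is a pure phase ramp whose inverse FFT peaks at the shift."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = np.conj(F_ref) * F_img
    cross /= np.abs(cross) + 1e-12        # keep only the phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the half-size correspond to negative shifts (circular wrap)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

def shifts_to_first(stack):
    """Displacement of every slice with respect to the first image,
    accumulated from consecutive-pair estimates as described in the text."""
    total = np.zeros(2, dtype=int)
    result = [(0, 0)]
    for prev, cur in zip(stack[:-1], stack[1:]):
        total += phase_correlate(prev, cur)
        result.append(tuple(total))
    return result
```

Each slice can then be translated back by its accumulated displacement before destriping and denoising.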
In the corrected image stack, the silica cell wall was segmented from the cell organelles using the Amira segmentation editor. The wizard wand tool could be used due to the high-contrast differences between the silica and the surrounding resin ( Figure 1c). The segmented cell wall was cropped into single valves and converted into a surface. The surface was remeshed to obtain a more regular triangulation and optimized (aspect ratio, dihedral angle, tetrahedral angle) to generate an optimized tetrahedral grid.
Statistical 3D Model of a GP Valve for Numerical Analysis
To analyze the role of structural features in GP valves in a methodical fashion, a statistical model of a GP valve was created. This 3D model was constructed using the geometry building tools in COMSOL Multiphysics®. The required structural parameters (see further Table 1) needed for the construction of the 3D model were extracted from both 2D micrographs and 3D reconstructed valves using ImageJ software [62] by averaging a number of measurements in each image X av,i and calculating the corresponding standard deviation dX av,i . To obtain not only the variation of the parameter within one valve but also across different valves, the weighted mean X w of the parameter was calculated, including its internal and external errors, dX int and dX ext , respectively. If dX int was larger than dX ext , the variation within the valves (within dX av,i ) was larger than the variation between valves and vice versa. This analysis helped in judging whether the structural precision is significant compared with the variation seen in the simulation results upon changes in the corresponding structural parameter.
Table 1. Statistical analysis of the structural parameters of GP valves (weighted mean, internal and external errors-the bold print indicates the significant error-and its precision in %). The weighted mean is used for building the 3D model (see Figure S1).

The numerical calculations were performed on 2D cross-sections to allow greater structural variability and reduce the complexity of the obtained results, as well as to reduce simulation time. For this, representative 2D longitudinal (CS long,i ) and vertical (CS ver,i ) cross-sections were extracted from the 3D model (Figure S1). At this point, the cross-sectioning across the 3D model produced sharp edges, which were smoothed to resemble the reconstructed data from FIB-SEM, as shown in Figure S1.
Numerical Analysis of the Wave Propagation across the GP Cross-Sections
Frequency domain FEM modeling of the wave propagation in the near field was performed using the wave optics module in COMSOL Multiphysics® 5.5, inspired by the procedures in a COMSOL application note [63]. Unless otherwise stated, each 2D CS was placed into a rectangular simulation box (100 µm height (y-axis) and 40 µm width (x-axis)) at a distance of 4 µm from the input boundary, which was illuminated with a transverse plane wave of 80 µm size and an electric field strength E input of 1 V/m. The scattering boundary condition was applied to the input boundary on the left, while the remaining boundaries were set as perfectly matched layers (PMLs) to avoid nonphysical reflections. This configuration was chosen after optimization. Unless otherwise stated, the calculations were performed as parametric sweeps, changing the vacuum wavelength of the input wave λ vacc from 300 nm to 800 nm in 50 nm steps to cover the main radiation of the solar spectrum. Across all wavelengths, the refractive index of the amorphous silica constituting the GP valve n v was set to 1.46 [64], while the refractive index of the surrounding medium n m was set to n a = 1.00 or n w = 1.33, representing air or water, respectively. The physics-controlled mesh size was, in all cases, much smaller than λ vacc (reaching a few nm) and automatically adapted to the complexity of the geometry of the CSs. To study the influence of the refractive index contrast, parametric sweeps of 1 < n m < 1.46 and 1 < n v < 1.9 were performed.
Moreover, the results were compared to simulation results of 2D analytical models of distinct optical components, such as thin slabs, as well as lens-like, grid-like, and fiber-like structures of silica, under identical conditions (vide infra). This allowed a more detailed study of the effect of distinct structural parameters (e.g., length, thickness, striae spacing, and areolae diameter) on the light interference patterns. Furthermore, to understand the relevance of the observed optical phenomena of the valve's CSs to the photobiology of living cells, the effect of adding four girdle bands and a hypovalve to selected CSs was studied in water.
After computation, unless otherwise stated, two-dimensional images displaying the distribution of the normalized electric field strength E Norm within the simulation domain were obtained, with an intensity scale (0 to 2 V/m) depicted with a color code ranging from blue (E Norm < E input ) to white (E Norm = E input ) to red (E Norm > E input ). Where necessary, the E Norm values were extracted by creating 2D cutlines at precise (x, y) positions in the simulation domain.
Structural Analysis of the GP Frustule
GP is a benthic asymmetric biraphid pennate species. It is widely distributed in various aquatic ecosystems, mainly freshwater ecosystems, and has several varieties that differ, to some extent, in shape and size [65,66]. The frustules of the studied GP diatom (Figures 1 and S1 and Table 1) are of elliptic to ovate shape (length L v = 7.1 ± 0.2 µm, width W v = 4.59 ± 0.07 µm), consistent with the previous structural description of some GP strains reported in [65]. The two valves, epivalve and hypovalve, have a face of thickness D v = 0.17 ± 0.01 µm, a curved mantle (i.e., an elevated edge of the valve) of height h M = 0.58 ± 0.08 µm and width W M = 0.184 ± 0.009 µm, and are connected by girdle bands (approximately four). The face of each valve is divided by a raphe slit (length L ra = 5.8 ± 0.2 µm, width W ra = 0.023 ± 0.004 µm), which lies in a thickened area along the apical axis (i.e., long axis) of the valve called the sternum with a maximum thickness D S = 0.26 ± 0.03 µm and a half-width 1/2 W S = 0.32 ± 0.02 µm. The raphe slit is interrupted at the zone of the central nodule, dividing it into two slits with a spacing d ra = 0.54 ± 0.06 µm. The nodule (L nod = 1.568 ± 0.002 µm, W nod = 0.86 ± 0.03 µm), which is not placed precisely in the center along the apical axis but shifted by about 0.18 µm towards one side, is a dome-shaped area that appears in the internal valve face and reaches a maximum thickness of D nod = 0.38 ± 0.02 µm. At the valve apexes, where the raphe slits end, the sternum slightly increases its thickness at the internal face, sometimes merging with the mantle, which might be associated with the so-called helictoglossa [11]. On both sides of the sternum (or nodule), rows of punctate areolae (i.e., pores) with a spacing of d a = 0.214 ± 0.008 µm, so-called striae, extend towards the mantle.
The striae follow the sternum or the nodule, except for a single stria at the nodule zone that is shortened by 1 µm and that, together with the shifted position of the nodule, gives the valve its asymmetry. The striae are slightly bent or tilted (see Figure 1), with an average striae spacing expanding from d str,min = 0.49 ± 0.03 µm to d str,max = 0.57 ± 0.02 µm, resulting in about 13 visible striae per valve. The areolae are visibly smaller on the external face of the valve compared to the internal face, with diameters of 2r a,ext = 0.100 ± 0.007 µm and 2r a,int = 0.15 ± 0.01 µm, respectively. The areolae are further covered with so-called flap-like pore occlusions (of a predicted thickness of D occ ≈ 0.02 µm), leaving a crescent-like slit reaching a width of W occ = 0.017 ± 0.002 µm.
Statistical analysis shows that all studied valves are of comparable dimension, with structural parameters varying by less than 10%. The only exceptions (with deviations up to 17%) are the thickness of the sternum D s , as well as the width W ra and spacing d ra of the raphe slit, the mantle height h M , and the width of the pore occlusion slit W occ . With the exception of D s and W occ , these parameters display dominant dX ext , indicating a relatively large variation between valves. This is also the case for the thickness of the nodule zone D nod , the valve length L v and width W v , and the raphe slit length L ra but with a dX ext of up to 5%. Interestingly, many of these parameters are found not to influence the obtained interference patterns significantly. In contrast, structural parameters describing the dimensions and spacing of the areolae (2r a,ext , 2r a,int , and d a ), the striae spacing (d str ), and the thickness of the valve D v display a comparably small variation between valves, evident by their dominant dX int of up to 7%. These are the parameters that can dramatically change the interference patterns in the simulations (vide infra).
The fine structure of the girdle bands is not studied in detail, as they are comparably simple and do not contain structural features relevant to the light propagation apart from their width, height, and spacing, which are estimated as W girdle = W M , H girdle = 2.84 µm, and d girdle = 10-50 nm, respectively. This is in contrast to the girdle bands of some larger species, such as Coscinodiscus spp., in which their porous structure dramatically influences the light propagation [41] and, thus, has to be considered during simulations of the whole frustule.
It should be noted that finer structural features, such as the undulations on the silica or the apical pore field, are also not considered in this study, as these do not fall into the length scales close to the studied λ vacc range and are assumed not to be a determinant of the obtained near-field interference patterns. In general, FIB-SEM can deliver images with a resolution down to 4 nm [67,68], and, if combined with AFM imaging [69], such structural information could be added in future studies.
Furthermore, the content of elements in the silica (available through energy-dispersive X-ray (EDX) mapping analysis) should be considered in the future, as additives can lead to spatial changes in n v (x, y, z) similar to those that have been reported for some pennate valves [64]. Changes in n v could significantly influence light propagation, especially in a low-contrast medium. However, the general trends and features seen here should also then be relevant.
Near-Field Simulation of the 2D Cross-Sections-The Role of Optical Components
The near-field light interaction of a number of representative vertical (five in total) and longitudinal (eight in total) CSs-displayed in Figure S1-is studied. All studied CSs show structural features in the length scales of visible light and induce an interference pattern in the near field, as is evidenced by the red and blue areas (e.g., Figure S2). It is evident that distinct structural features in the CSs induce a specific contribution to the interference pattern. Patterns of structurally complex CSs can be explained by the addition of interference patterns of "their distinct structural components" (see, e.g., Figure S3), such as slab-like, lens-like, grid-like, and fiber-like structures. The near-field interference patterns of such components can often be predicted by theory, e.g., the thin-film interference of thin slabs or guided-mode resonance of grid-like structures (vide infra). Therefore, we separately study a range of optical phenomena occurring in the CSs featuring these specific structural components. It should be noted that the presence or absence of pore occlusions in CSs with a grid-like structure does not have a significant effect on the near-field interference pattern either in the longitudinal or vertical CSs (see, e.g., Figure S2). Therefore, pore occlusions are not considered in further discussions.
The idea of "building" the CSs through the addition of optical components is sketched in Figure 2 (see also Figure S1). The simplest form of a longitudinal cross-section (CS long,5 or CS long,7 , which differ in length) is similar to a thin slab of corresponding thickness (A in Figure 2) with curved and extended edges (B in Figure 2). Slicing the valve across the areolae of consecutive striae leads to the addition of a grid-like structure with spacing d str (CS long,4 or CS long,6 , which differ in length (grid units) and striae spacing d str ). The presence of the 1 µm shortened stria on one side of the valve leads to a defect in the grid-like structure that appears in CS long,4* . It should be noted that the areas between the areolae have a plano-convex lens-like structure (C in Figure 2) corresponding to the shape of the costae (i.e., the ribs). When approaching the center of the valve, the grid is interrupted by the presence of the nodule zone, adding a larger plano-convex lens-like structure (D in Figure 2) slightly off-center to the grid (CS long,3 ). As soon as the sternum zone is approached, the grid-like structure disappears, but the overall thickness of the CS increases (CS long,2 , see Figure S1). Slicing directly along the apical axis, the slab-like structure is further cut by the raphe slits, leaving a CS featuring only a lens-like structure in the center and two curved edges (CS long,1 , Figure S1).
In the case of vertical CSs, the thin-slab elements with curved elongated edges (similar to, e.g., CS long,5 ) are divided at the center either by adding the raphe (with or without its slit, E 1 or E 2 in Figure 2, respectively) or the slightly thicker nodule zone (F in Figure 2) of a triangular-like structure (CS ver,4 /CS ver,5 or CS ver,2 , respectively). It should be noted that the raphe slit in these structures, which only leads to an interruption of around 23 nm between the two parts, does not lead to significant changes in the near-field interference pattern. In all cases, the thin-slab area in the vertical CSs on both sides of the raphe or nodule can be further divided by a grid-like structure (G in Figure 2) with spacing d a of varying length (CS ver,3 or CS ver,1 , respectively) corresponding to the slicing of the areolae within one stria.
It should be noted that the width of the curved and elongated edge varies slightly between the CSs depending on the position of slicing within the mantle (0.184 µm ≤ W M,CS ≤ 0.334 µm). Adding girdle bands to both sides of the CSs (while building 2D CSs across the complete frustule) further elongates the edge by adding these fiber-like structures (H in Figure 2).
In general, the existence of the thin-slab elements leads to the occurrence of two distinct interferences: (I) thin-film interference and (II) edge diffraction, overlaying in the transmittance region. The existence of fiber-like structures-the mantle or, additionally, the girdle bands in the complete frustule-leads to (III) waveguiding behavior, which affects the corresponding edge diffraction pattern. Moreover, the finite size of the CSs, or the presence of thickened protruding structures associated with the cutting of the nodule zone or the sternum, results in increased interference between the transmitted and the diffracted waves from two (or more) edges. This leads to additional phenomena: (IV) diffraction-driven focusing and, further, (V) photonic jet generation in the transmittance region. Furthermore, the grid-like structures lead to: (VI) a characteristic diffraction grating behavior as well as (VII) guided-mode resonance, which leads to dramatic changes in the interference pattern at a specific range of wavelengths.
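For the diffraction-grating behavior (VI), the number and direction of propagating orders follow directly from the grating equation; a small sketch (the 530 nm period, within the measured d str range, and the 500 nm wavelength are purely illustrative inputs chosen by us, not simulation outputs):

```python
import numpy as np

def grating_orders(d, lam_vac, n_med=1.0):
    """Propagating diffraction orders at normal incidence for a grating of
    period d and vacuum wavelength lam_vac in a medium of index n_med, from
    the grating equation d * n_med * sin(theta_m) = m * lam_vac."""
    orders = {0: 0.0}
    m = 1
    while m * lam_vac / (d * n_med) <= 1.0:   # only real angles propagate
        theta = float(np.degrees(np.arcsin(m * lam_vac / (d * n_med))))
        orders[m], orders[-m] = theta, -theta
        m += 1
    return orders

# e.g. a striae-like grating, d = 530 nm, illuminated at 500 nm
air = grating_orders(530.0, 500.0)              # in air: orders 0 and ±1
water = grating_orders(530.0, 500.0, n_med=1.33)
```

In water the first orders leave at a much shallower angle than in air, illustrating how the surrounding medium reshapes the grating contribution to the pattern.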
The Observed Optical Phenomena: Description and Analysis
All cross-sections can modulate light effectively, and generally, the modulation strength is strongly dependent on λ vacc and ∆n. As mentioned, a number of distinct optical phenomena are observed and correlated to the optical components in the CSs. Here, these phenomena are separately demonstrated, accompanied by the theoretical expectation, elucidating the role of the corresponding structural parameters where necessary.
Thin-Film Interference
Thin-film interference results either from the interference of the reflected light at the first and second interface between a (with respect to λ) thin slab (of thickness D sl ) embedded in a homogeneous medium or the interference of the transmitted light, with a light wave being initially reflected at both internal interfaces, which occurs before leaving the thin slab [70,71]. The light waves interfere maximally constructively or destructively if their difference in path length ∆x corresponds to a multiple (N) of the wave's wavelength in the slab N*λ sl or (1/2 + N)*λ sl , respectively. This path length corresponds to the distance traveled within the slab, i.e., 2*D sl . Furthermore, in the case of the reflected light, a phase shift of π needs to be added [71], as one of the reflections happens at a boundary going from an optically thinner to an optically thicker medium. This leads to constructive interference of the reflected light (i.e., an increase in intensity), being concurrent with destructive interference of the transmitted light (i.e., a decrease in intensity) and vice versa (see Figure 14E in [71]). The wavelength λ sl , at which either maximally constructive or destructive interference occurs, can be extracted from the consideration above and related to the vacuum wavelength of the incident light λ vacc using the refractive index of the thin-slab element n sl (using λ vacc = λ sl *n sl ). Through this relation, the positions of the interference maxima (in relation to λ vacc ) also change with n sl in addition to D sl (vide infra). Using Fresnel equations [72,73], the theoretical dependency of the intensity value on λ vacc for slabs of D sl and n sl can be estimated (see dashed line in Figure 3 as well as Figure S4).
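The relations above can be checked with a short Airy-summation sketch for a slab at normal incidence (our own minimal implementation of the textbook multiple-reflection sum, not the paper's Fresnel-equation code; lengths in nm):

```python
import numpy as np

def slab_reflectance(lam_vac, thickness, n_slab, n_med=1.0):
    """Normal-incidence reflectance of a homogeneous thin slab embedded in a
    medium, summing all internal reflections (Airy formula)."""
    r = (n_med - n_slab) / (n_med + n_slab)            # face Fresnel coefficient
    beta = 2.0 * np.pi * n_slab * thickness / lam_vac  # single-pass phase
    phase = np.exp(2j * beta)
    return np.abs(r * (1.0 - phase) / (1.0 - r**2 * phase)) ** 2

lam = np.linspace(300.0, 800.0, 2001)
R = slab_reflectance(lam, 170.0, 1.46)  # valve-face thickness D_v in air
```

With D sl = 170 nm and n sl = 1.46, the sweep places the reflectance maximum near 4 n D/3 ≈ 331 nm and the zero near 2 n D ≈ 496 nm, consistent with the 330 nm and 495 nm values quoted for the valve face; repeating it with n_med = 1.33 shows the strong attenuation of the effect in water.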
As can be seen in Figure 3c, the λ vacc associated with the maximal constructive interference shifts towards a longer wavelength by increasing D sl (shown for n sl = n v = 1.46) with only negligible changes in the maximal or minimal intensity values. This shift leads to the observation of more oscillations in the case of thicker slabs in our region of interest, as in all cases the oscillations increase in density with decreasing λ vacc . For D sl = D v = 170 nm, the intensity of the reflected light is maximal at λ vacc ≈ 330 nm (and approaching the second maximum in the NIR), while destructive interference occurs at λ vacc ≈ 495 nm (Figure 3a). This means that the amount of light that transmits through the valve is attenuated for UV wavelengths, while the green wavelengths are largely unaffected by thin-film interference effects. In contrast, the interference pattern of the thin-slab element of the sternum (D s = 260 nm) shows an attenuated transmission at λ vacc ≈ 305 nm and 505 nm, while the element at the nodule zone (D nod = 380 nm) mostly affects transmission around λ vacc ≈ 320 nm, 445 nm, and 740 nm (Figure 3a, Table 1).
Figure 3. (a) E 2 Norm,R and E 2 Norm,T of CS long,5 and CS long,2 compared to the theoretical calculation for the studied λ vacc range. The estimated error in the extracted E Norm is up to ±0.006 or ±0.01 V/m in CS long,5 or CS long,2 , respectively. (b) CS long,5 shows the maximum constructive interference at reflectance (λ vacc = 330 nm); the black arrow indicates the formation of standing waves between the reflected and the incident wavefronts. (c) The dependency of the λ vacc positions of the constructive interference maxima on D sl . The grey-shaded areas in (c) show the significant error in the thickness of different valve parts (Table 1).

As illustrated in Figure 3a, the λ vacc dependency of the reflectance and transmittance intensity of the CSs with thin-slab elements is in good agreement with the discussed trend. However, it should be noted that the extraction of the data points from the complex interference pattern is not trivial. In the case of reflectance, the formation of a seemingly standing wave in the E Norm presentation is caused by the interference of the reflected and the incoming waves (see Figure 3b). This makes the E Norm value strongly position dependent. To overcome this problem, as well as to minimize the edge diffraction effects, we follow the strategy illustrated in Figure S4. Moreover, as the theoretical calculations yield the intensity of the reflected and transmitted light, squaring the extracted E Norm values is required to obtain a match between both the shape and magnitude of the reflectance and transmission spectra (Figure S4e,f, respectively).

It should be noted that-using the method described in Figure S4-E Norm,R and E Norm,T values associated with the sternum of CS long,2 are extractable, while this is not the case for the nodule zone due to its finite length (L nod ). Despite this, the expected constructive and destructive maxima of the nodule zone are observed in the simulation results.
Neither the positions of the maxima and minima in the interference pattern nor its shape depend on the refractive index of the medium n m surrounding the thin slab when calculated relative to λ vacc (Figure 4b, left). However, the obtained magnitude of the reflected light or the total attenuation of the transmitted light at a specific λ vacc decreases quadratically when n m approaches n v (Figure 4a, left). Thus, the observed effect is strongly attenuated in water when compared to air.
In contrast to n m , the refractive index of the thin-slab element n v strongly influences the position of the spectrum (by changing λ sl and, thereby, the ratio 2D sl /λ sl ), leading to a red shift in the spectra, as seen in Figure 4 (right), similar to an increase in D sl but with a simultaneous rise in magnitude.
Edge Diffraction
Unlike macroscopic objects, where the light diffraction at the edges is negligible, edge diffraction becomes significant and dominates the light modulation behavior of microscopic objects of a size comparable to λ vacc [74]. For an optically transparent thin slab, the characteristic pattern of edge diffraction results from the interference of the secondary wavelet generated at the edge-by being a point source, as can be elucidated by the Huygens-Fresnel principle-and the incident wavefront above the edge or the transmitted wave through the thin slab [75], where, in both cases, there is a spatial phase difference ∆Φ that can be explained according to Fresnel-Kirchhoff diffraction [71,74]. This pattern includes bright fringes alternating with dark fringes, which appear in our simulations-on the y-axis-above and below the edge, as in the case of the thin slab of length L sl = 20 µm, D sl = D v , and n sl = n v (Figure 5a).
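The alternating bright and dark fringes can be reproduced with the classical straight-edge (knife-edge) Fresnel diffraction result; the following numerical sketch integrates the Fresnel integrals directly (a textbook scalar model with our own parameterization, not the FEM setup used in this work):

```python
import numpy as np

def fresnel_cs(v, n=20001):
    """Fresnel cosine and sine integrals C(v), S(v) by trapezoidal quadrature."""
    t = np.linspace(0.0, v, n)
    dt = t[1] - t[0]                     # signed step (negative for v < 0)
    fc, fs = np.cos(np.pi * t**2 / 2), np.sin(np.pi * t**2 / 2)
    c = dt * (fc.sum() - 0.5 * (fc[0] + fc[-1]))
    s = dt * (fs.sum() - 0.5 * (fs[0] + fs[-1]))
    return c, s

def knife_edge_intensity(v):
    """Relative intensity I/I0 behind a straight edge at Fresnel parameter v
    (v < 0: geometric shadow, v = 0: edge, v > 0: illuminated side)."""
    c, s = fresnel_cs(v)
    return 0.5 * ((0.5 + c) ** 2 + (0.5 + s) ** 2)
```

At the geometric shadow boundary the intensity drops to one quarter of the incident value, while the first bright fringe on the illuminated side overshoots it, mirroring the fringe pattern described above.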
By reducing L sl (as in the CSs), the secondary wavelets produced at the two opposite edges increasingly interfere with each other inside the transmittance region, as can be seen in S ana,long5 (i.e., a thin slab equivalent to CS long,5 but with straight edges (Figure 5b)), CS long,5 (Figure 5c), and CS long,7 (Figure 5d). This disturbs the characteristic interference fringes of the edge diffraction, leading to a diffraction-driven focusing and, further, a photonic jet generation that is discussed separately (vide infra).
Moreover, unlike the diffraction that occurs at straight edges as in S ana,long5 , the edge diffraction in CSs involves curved edges, i.e., the mantle, where the interference pattern includes countless secondary wavelets produced from infinitesimal points at the edge part fronting the incident wave [76]. Further, due to the tilt of the mantle with respect to the thin-slab element (θ M ≈ 70 • , Figure S5a), a portion of the diffracted wave deflects at the edge (indicated with a green arrow in Figure S5a), which is significant at shorter λ vacc . By rotating a thin-slab equivalent to the mantle dimensions (S ana,M ), a similar trend is observed in a tilted position (the same tilt as the mantle) if compared to the in-plane alignment ( Figure S5c vs. Figure S5d, respectively).
Altogether, the mantle geometry and its tilt modulate the CSs' edge diffraction fringes, which may involve waveguiding behavior (vide infra) that leads to changes in ∆Φ and can be changed by changing the edge geometrical parameters. This becomes evident when comparing the bright fringes outside and inside the transmittance region of CSs, if compared to those of S ana,long5 . This modulation includes: (i) increasing E Norm amplitude of some fringes while decreasing others ( Figure S5a vs. Figure S5b, and also Figure S5e), (ii) the change of E Norm,in /E Norm,out ratio from ≥ 1 in S ana,long5 to < 1 in the CSs ( Figure S5f), (iii) a spatial delay of the inside diffraction fringes (clear for the first and second fringes in Figure S6a vs. Figure S6b), and (iv) λ vacc dependency of the shape and intensity of the first diffraction fringe inside ( Figure S6). In all cases, the E Norm of diffraction fringes decreases with increasing λ vacc (Figures S5e and S6), as expected from the Fresnel-Kirchhoff integral [71].
Waveguiding through Fiber-like Components
The contribution of the fiber-like components (the mantle or girdle bands) to the interference pattern of the CSs is not straightforward due to their waveguiding behavior. Generally, light guiding relies on the total internal reflection principle. This means the light is confined to an optically thicker medium (the core) surrounded by an optically thinner medium (the cladding) through the total internal reflections at the core/cladding boundaries if the angle of the incident light θ inc at these boundaries ≥ the so-called critical angle θ c .
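The total-internal-reflection condition above can be made concrete with the critical angle θ c = arcsin(n clad/n core), using only the refractive indices quoted in the text (the function name below is ours, not from the paper):

```python
import math

def critical_angle_deg(n_core, n_clad):
    """Critical angle (degrees) for total internal reflection at a
    core/cladding boundary; only defined for n_core > n_clad."""
    if n_clad >= n_core:
        raise ValueError("TIR requires n_core > n_clad")
    return math.degrees(math.asin(n_clad / n_core))

# Silica-like frustule material (n_v = 1.46) as the core:
print(round(critical_angle_deg(1.46, 1.00), 1))  # ~43.2 degrees in air
print(round(critical_angle_deg(1.46, 1.33), 1))  # ~65.6 degrees in water
```

Note how much larger θ c becomes underwater: rays must hit the boundary at a more grazing angle for guiding, consistent with the smaller index contrast ∆n.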
The fiber-like components in CSs are of dimensions comparable to λ vacc (considered as the core), and, if embedded within a homogenous medium of lower refractive index (considered as the cladding), symmetric waveguide behavior is expected [77]. Although the complete analysis of the waveguiding behavior is left for future work, some related facts and observations are summarized here. Within the waveguide, the light propagates in discrete modes, and the number of supported modes depends on the ratio between its width W wg and λ sl (2*W wg /λ sl = 2*W wg *n sl /λ vacc ) [77]. For instance, in our case (W wg = W M = W girdle = 0.184 µm, n sl = n v = 1.46), the cut-off λ vacc of the first mode ≈ 537 nm. This is changed in the CSs with changing W M,CS , e.g., CS long,7 (W M,CS = 0.334 µm), where the cut-off λ vacc of the first mode ≈ 975 nm. Nevertheless, the zero mode can be supported within the waveguide regardless of this ratio [77].
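The quoted cut-off wavelengths follow directly from the mode-count ratio 2·W wg·n sl/λ vacc given above: mode m is supported while that ratio is at least m. A short sketch reproducing the two stated values (deliberately using the ratio as written in the text, not the full symmetric-slab dispersion relation):

```python
def cutoff_wavelength_nm(width_um, n_core, mode=1):
    """Vacuum cut-off wavelength (nm) of mode `mode` in a slab waveguide,
    using the paper's mode-count ratio 2*W_wg*n_sl/lambda_vacc:
    mode m is supported while 2*W_wg*n_sl/lambda_vacc >= m."""
    return 2.0 * width_um * n_core / mode * 1000.0

# Mantle/girdle width from the text: W = 0.184 um, n = 1.46
print(round(cutoff_wavelength_nm(0.184, 1.46)))  # ~537 nm
# Wider mantle of CS_long,7: W = 0.334 um
print(round(cutoff_wavelength_nm(0.334, 1.46)))  # ~975 nm
```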
In the waveguide, the supported modes have a standing wave pattern associated with an evanescent field [78]. The confinement of a given mode inside the core, as well as the penetration depth of its evanescent field into the surrounding medium, is related to W wg and ∆n [77]. In Figure S5d, a standing wave pattern appears inside S ana,M (W sl = W M = 0.184 µm), limited by its height h sl = h M = 0.58 µm, across the whole studied λ vacc range. By increasing the height h sl of S ana,M to 2 µm, the standing wave pattern becomes more evident (Figure 6a). This pattern changes when S ana,M is rotated and θ inc is thus changed, as in the case of the tilted S ana,M ( Figure S5c), similar to the mantle in the CSs, with simultaneous changes in its associated diffraction fringes.
As shown in Figure 6b, by approaching the structure of a cross-section across the complete frustule via adding four girdle bands (H girdle , W girdle )-two adjacent to the epivalve and the other two adjacent to the hypovalve with a step difference equaling W girdle on the y-axis and a spacing d girdle = 10 nm on the x-axis-further modulation in edge diffraction fringes is observed. This includes an additional spatial delay, especially the inside fringes (indicated by the blue arrows in Figure 6b,c).
In most simulations carried out here, the incident wave falls onto the CSs' external faces. Alternatively, as shown in Figure S7, by rotating the CSs 180 • , the incoming wave falls onto the CSs' internal faces, reaching the mantle first. As there are two opposite curved edges for each CS, the standing wave extends from each curved edge to the rest of the CS, interferes inside it, and seemingly causes guided-mode resonance (GMR)-like behavior, occurring across the whole studied λ vacc range in air and appearing in all CSs. This is accompanied by an evanescence with simultaneous changes to the transmittance and reflectance interference patterns ( Figure S7). The verification and analysis of this mantle-coupled GMR-like behavior-distinct from grid-coupled GMR (vide infra)-are left for future work.
Diffraction-Driven Focusing in the Near Field
As explained, a consequence of reducing the L sl of the thin-slab element is the emergence of a distinct interference pattern with bright (red) spots alternating with dark (blue) spots in the transmittance region, influenced by the curved edges of the CSs and their waveguiding behavior ( Figure 5). The bright (i.e., focusing) spots are distinctly visualized in Figure 7a and are quite similar to the pattern of light transmitted through an aperture (see Figure 1 in [79]). The intensity of these spots, as well as their size, depends on the diffraction fringes they involve. This could be why the most intense spots appear at the right-hand side of the interference pattern, associated with the more intense diffraction fringes inside the transmittance region (e.g., Figure 7a). Further, adding more edges to the CSs, for instance, in CS long,1 , where two raphe slits (representing four additional edges) are introduced, or in CS ver,1 and CS ver,2 , where an increased thickness (associated with the nodule) is introduced at the center of the CSs, leads to the presence of additional point sources (secondary wavelets) interfering with the transmitted wave. This splits the associated CS's interference pattern into two separate but smaller patches of these spots, as can be seen in CS long,1 ( Figure S8c) and CS ver,2 ( Figure S9a).
Three parameters can be considered to study these focusing spots, as illustrated in Figure 7a: the distance between the CS and a given focusing spot Z f , its length L f , and its strength E Norm,f . With increasing λ vacc , all these spots move toward the CS, decreasing Z f (Figure 7c) and L f (Figure 7d) and fading in intensity (Figure 7b). In this case, E Norm,f is affected by two factors: the reduction of the strength of the secondary wavelets produced from the edges (as expected from the Fresnel-Kirchhoff integral) and the change caused by thin-film interference affecting the transmittance. In Figure 7b, the correlation between the transmittance intensity, as calculated from thin-film interference theory (vide supra), and the E 2 Norm,f of the focusing spots is-to some extent-evident. All spots follow a similar trend (see, e.g., Figure S9), even the spots that result from the interference with the so-called photonic jet, e.g., spots 1 and 3 in Figure S9a.
Increasing n m (n v = 1.46), and, thus, decreasing ∆n, leads to a significant increase in Z f ( Figure S8a) and L f and a simultaneous fading in E Norm,f . This is not the case for changing n v (n m = 1.00), as the changes in Z f and L f are negligible.
This phenomenon is directly relevant to the distance between the edges; therefore, increasing the L Sl of S anal,long5 , and, thus, the distance between the generated secondary wavelets, dramatically increases Z f ( Figure S8b) and L f while slightly decreasing E Norm,f . For a much larger thin-slab component, as the edge diffraction becomes insignificant again, this phenomenon becomes neglectable. Changing D Sl (from 50 to 400 nm) of S anal,long5 at λ vacc 330 nm leads to negligible changes in Z f and L f , while E Norm,f increases from 1.04 V/m (D Sl = 50 nm) to 1.38 V/m (D Sl = 400 nm).
From the observed edge diffraction and waveguiding behavior, it can be concluded that changing the edge geometry can dramatically affect these focusing spots; for instance, the increased W M,CS in CS long,7 might contribute to its increased E 2 Norm,f if compared to that of CS long,5 (Figure 7b). Studying the effects associated with edge geometry is left for future work.
Photonic Jet
A photonic jet (PJ, also known as a photonic nanojet) is observed as a distinct focusing phenomenon in some CSs (all vertical CSs, CS long,1 , CS long,2 , and CS long,3 ), linked to the presence of an increased thickness in the CS either by introducing the nodule zone (e.g., Figure 8) or the sternum (e.g., Figure S10a), which significantly affects the associated interference pattern. By separating the related optical component (e.g., CS ver,2/nodule and CS long,3/nodule in Figure 8c,d, respectively), their correlation to this phenomenon becomes evident. This type of focusing occurs when a plane wave is incident on a microscopic dielectric object of comparable size to λ vacc (for example, microspheres of radius ≈ 1-30 λ vacc [80][81][82][83]). PJ has been extensively studied during the last decade, both numerically and experimentally, for a wide range of dielectric microparticles of symmetric and asymmetric geometries, especially microspheres and microcylinders [80,82,84]. The mechanism of PJ focusing can be explained via the Mie scattering theory [82] as well as near-field diffraction approximations [79]. Our simulation results indicate the correlation of edge diffraction to this phenomenon, similar to the diffraction-driven focusing spots.
It is expected from the previous work on PJ generation by artificial structures that the characteristic features of the PJ beam (i.a., the position, length, waist size, and maximum intensity) can be changed with changing parameters such as n v , n m , and incident λ vacc , as well as the structure size [79,80]. As illustrated in Figure 8e, the maximum intensity of the PJ (E Norm,PJ ) generated by CS long,3/nodule decreases exponentially with increasing λ vacc , combined with a slight decrease in its distance from CS long,3/nodule and an increase in its waist size ( Figure S10b). Furthermore, changing the refractive index contrast ∆n by either varying n v (n m = 1.00) or n m (n v = 1.46), but keeping n v > n m , leads to similar changes in the PJ parameters ( Figure S10c vs. Figure S10d).
With increasing ∆n, E Norm,PJ increases, its waist size decreases, and its distance to CS long,3/nodule slightly decreases. Similar trends were observed by Salhi et al. [79]: the PJ intensity decreases while its waist size increases with increasing λ vacc (see Figure 6 in [79]) or with decreasing ∆n (see Figure 5 in [79]).
Interestingly, an intense beam observed inside the simulation domain for a thin slab of L sl = 5 µm, D sl = D v , and n sl = n v appears as a PJ emerging after the diffraction-driven focusing spots within the transmittance region. By further reducing L sl , the generation of PJs occurs very close to the surface of the rectangular slab ( Figure S8d), leaving no space for the formation of the diffraction-driven focusing spots, which could also be the case for PJ generation by the nodule or the sternum of small dimensions. This means that even the CSs without the increased thickness at the middle can generate a pronounced PJ if their length, thickness, and edge geometry enable it. A pronounced PJ is also observed in the transmittance region beyond the 2D CSs of the intact frustule, such as in the examples shown in Figure 11 (vide infra).
Diffraction Grating Behavior in the Near Field: The Talbot Effect
The grid-like element in some CSs-being composed of optically transparent material-acts as a transmission grating [74,85], where the diffraction orders mainly appear in the transmittance. As our simulations provide high-resolution information in the near field, we thus expect to obtain a periodic interference pattern matching the so-called Talbot effect [86,87] in the transmittance associated with the grid-like element. For a one-dimensional grid, in our 2D simulation domain, this effect leads to an interference pattern consisting of linear arrays of periodically arranged bright fringes (i.e., high-intensity fringes)-on the y-axis-alternating with dark fringes, where two consecutive bright fringes have a spacing equal to the grid period d. Each linear array of bright fringes represents an image copy of the grating repeated at fixed distances-on the x-axis-equal to the "Talbot length Z T " [86], alternating with other secondary copies occurring at Z T /2 with a lateral shift-on the y-axis-equaling d/2. The Talbot length Z T can be calculated according to Equation (1), specified for the small-period diffraction gratings [86].
This phenomenon occurs due to the interference of the wavefronts of different orders of diffraction in the near field, where the occurrence of additional sub-images at fractions of Z T depends on the number of diffraction orders [87]. This can explain the intense interference pattern which dominates the transmittance region of CS long,3 , CS long,4 , and CS long,6 at a range of wavelengths (e.g., CS long,4 in Figure 9a), which interrupts the edge diffraction pattern and the associated diffraction-driven focusing. The results show that this pattern occurs only where at least the ± 1 st orders of diffraction are present in the transmittance besides the 0 th order. The possible number of diffraction orders can be calculated under normal incidence using the grating Equation (2). By solving this equation for the CSs, the presence of the ± 1 st orders of diffraction is possible only at λ vacc < d*n m (d = d str or d a in the case of longitudinal or vertical CSs, respectively), while the ± 2 nd orders of diffraction can occur only at λ vacc < (d*n m )/2, which falls mostly outside the studied λ vacc range. This explains the observation of the Talbot pattern in CS long,3 (d str = 490 nm), CS long,4 (d str = 500 nm), and CS long,6 (d str = 565 nm) at λ vacc < 490 nm, 500 nm, and 565 nm, respectively, in the air (n a = 1.00) and λ vacc < 652 nm, 665 nm, and 752 nm, respectively, in water (n w = 1.33). However, it cannot be obtained in CS ver,1 (d a = 214 nm) ( Figure S11b) within the studied λ vacc range either in air or water, which is confirmed for an equivalent analytical grid without nodule G ana,ver1 ( Figure S11c).
θ m is the diffraction angle for a given diffraction order, and m is an integer that refers to the diffraction order (0, 1, 2, etc.).
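Equations (1) and (2) are referenced but not reproduced in this excerpt. The sketch below therefore assumes the standard grating equation for normal incidence, d·n m·sin θ m = m·λ vacc, and the standard nonparaxial Talbot-length expression Z T = λ/(1 − √(1 − (λ/d)²)) with λ = λ vacc/n m; both are consistent with the order conditions (± 1 st orders only for λ vacc < d·n m) and the Z T trends stated in the text, but the exact forms of Equations (1) and (2) should be checked against the original paper.

```python
import math

def max_diffraction_order(d_nm, lam_vacc_nm, n_m=1.0):
    """Highest propagating transmitted order m under normal incidence,
    from d * n_m * sin(theta_m) = m * lambda_vacc with |sin(theta_m)| < 1."""
    return int(d_nm * n_m / lam_vacc_nm - 1e-12)  # strict inequality

def talbot_length_nm(d_nm, lam_vacc_nm, n_m=1.0):
    """Nonparaxial Talbot length for a small-period grating,
    Z_T = lam / (1 - sqrt(1 - (lam/d)^2)), with lam = lam_vacc / n_m."""
    lam = lam_vacc_nm / n_m
    if lam >= d_nm:
        raise ValueError("no +/-1 orders: wavelength exceeds the period")
    return lam / (1.0 - math.sqrt(1.0 - (lam / d_nm) ** 2))

# CS_long,4 striae period d_str = 500 nm:
print(max_diffraction_order(500, 350, n_m=1.00))  # +/-1st orders in air
print(max_diffraction_order(500, 550, n_m=1.00))  # 0th order only in air
print(max_diffraction_order(500, 550, n_m=1.33))  # +/-1st orders return in water
print(round(talbot_length_nm(500, 350)))          # Z_T in air at 350 nm
```

The trends match the text: Z T grows with decreasing λ vacc or increasing n m, and changing d alters it dramatically.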
The Z T dependency on λ vacc and ∆n is investigated utilizing CS long,4 (d str = 500 nm), while the Z T dependency on structural parameters is further studied on an equivalent analytical grid G ana,long4 of rectangular grooves.
The distinct difference between G ana,long4 ( Figure S11a) and CS long,4 (Figure 9a) is the presence of the mantle as well as the lens-like grid units in CS long,4 , where the curved surface of the lens occurs at the internal valve face and has a depth of 70 nm out of the total valve thickness D v = 170 nm (Figures 2 and S1). Despite this, no significant change is observed on Z T between the actual and analytical grid of the same d (Figure 9a vs. Figure S11a), except for the increased deformation of the fringes in the case of CS long,4 . In all cases, the extracted Z T has a slight deviation compared to theoretical predictions (Figure 9 and Figure S11) due to the Talbot fringes' deformation caused by the edge diffraction, a consequence of the limited number of grid units [88]. This deformation leads to uncertainty while defining the Talbot planes (indicated by black dashed lines in Figure 9a and Figure S11a). Thus, the extraction of Z T is based on the average, on the x-axis, accompanied by a deviation representing the uncertainty in the position of the Talbot planes. The extraction of Z T from CS long,3 or CS long,6 faces more difficulties related to the presence of the nodule zone or the stronger edge effect, respectively.
As illustrated in Figure 9, the Talbot length Z T associated with CS long,4 increases with decreasing λ vacc or increasing n m (n v = 1.46), which perfectly matches the theoretical expectation from Equation (1) and is correlated with the changes in θ m , which correspond to the ±1 diffraction orders that can be calculated from Equation (2). A dramatic change in Z T can also be induced by changing the grid period d of G ana,long4 (Figure 9g). In contrast, neither changing the fill factor ff (i.e., the grid unit width/d) (Figure 9g), the number of grid units ( Figure S11e), nor the thickness D of G ana,long4 ( Figure S11f) changes Z T . The same conclusion is obtained for changing the n v (n m = 1) of CS long,4 ( Figure S11d).
Interestingly, no significant λ vacc dependency is noticed for the maximum strength of the Talbot bright fringes E Norm,Talbot of CS long,4 , which varies within the range of 1.36-1.56 V/m (with errors up to ± 0.07 V/m). Generally, the E Norm,Talbot shows dependencies on ∆n, D, and ff but is independent of the grid unit number. Decreasing ∆n (from 0.46 to 0.06) either by changing n v or n m decreases the E Norm,Talbot of CS long,4 with the same magnitude, e.g., E Norm,Talbot reaches 1.14 ± 0.02 V/m in water (n v = 1.46) at λ vacc 350 nm. Additionally, increasing the D of G ana,long4 (from 50 nm to 400 nm) increases E Norm,Talbot (from 1.11 ± 0.01 V/m to 1.71 ± 0.05 V/m), while decreasing ff increases the transmitted light through the areolae and, thus, enhances E Norm,Talbot significantly.
Guided-Mode Resonance
For a specific range of wavelengths, the grid-like element exhibits another phenomenon, guided-mode resonance (GMR), which is widely reported for dielectric resonant gratings and photonic crystals of period d comparable to λ vacc [89][90][91]. At a specific combination of various parameters related to the incident light, including λ vacc , θ inc , and polarization, and the grid, including thickness D, pore spacing d, filling factor ff, and its material refractive index n, the grid-coupled GMR can be obtained [90,[92][93][94]. It occurs when the grid (considered as an inhomogeneous waveguide) couples the incident wavefront into guided modes, which cannot be sustained within, and leak out [92] to interfere constructively or destructively with the reflectance or transmittance, respectively, leading to the characteristic spectrum of GMR [90]. During GMR, a standing wave pattern is expected inside the grid-like element (e.g., Figure 10), with intense nodes reaching the maximum at the middle of the grid and decaying towards the edges with a simultaneous evanescence in the proximity of the grid surface.
In the CSs with a grid-like element (CS long,3 , CS long,4 , and CS long,6 ), the E Norm,T dramatically drops at specific ranges of λ vacc , reaching a minimum value at λ vacc,GMR before returning to its normal limits, with a simultaneous increase in E Norm,R that matches the expected behavior of GMR. Nevertheless, the extraction of E Norm,T or E Norm,R employing the same method described in Figure S4 is complicated due to the finite size of the grid as well as the presence of other overlayed phenomena, such as the Talbot effect in the case of the first mode. Alternatively, to study the GMR behavior and to define λ vacc,GMR , very fine sweeps-down to 1 nm steps-are applied concurrently with observing the E Norm strength inside the waveguide, which reaches its maximum E Norm,GMR at λ vacc,GMR . The first mode of GMR is observed at λ vacc,GMR 295 nm, 303 nm, and 339 nm for CS long,3 , CS long,4 , and CS long,6 , respectively, while the zero modes are obtained at λ vacc,GMR 523 nm, 553 nm, and 613 nm, respectively, in the air. The presence of a defect in CS long,4* causes a slight shift in λ vacc,GMR to occur at 559 nm and 307 nm for the zero (Figure 10c) and first modes (Figure 10d), respectively, where the maximum E Norm,GMR is associated with the defect rather than the geometrical center of the grid-like element. Further, by increasing θ inc , a splitting in the modes is observed (e.g., zero mode of CS long,4 in Figure S12a) combined with a decrease in E Norm,GMR at λ vacc,GMR . Such behavior is expected (e.g., see Figure 3 in [92]).
Moreover, increasing ∆n by increasing n v (n m = 1.00) leads to a red shift in λ vacc,GMR of the zero mode of CS long,4 ( Figure S12b) and a simultaneous increase in E Norm,GMR from 1.50 V/m (n v 1.13) to 8.04 V/m (n v 1.80). In contrast, increasing ∆n with decreasing n m (n v = 1.46) leads to a blue shift in λ vacc,GMR ( Figure S12c) associated with an increase in E Norm,GMR from 1.38 V/m (n m 1.33) to 6.31 V/m (n m 1.00).
Furthermore, the influence of the structural parameters on GMR is studied on G ana,long4 , where only slight shifts occur for λ vacc,GMR (zero and first modes at 556 nm and 303 nm, respectively) compared to λ vacc,GMR of CS long,4 . Although the further analysis is focused on the zero mode, similar trends are expected for the first mode. Increasing the grid unit number of G ana,long4 leads to slight red shifts in λ vacc,GMR , which is more significant for grid units < 20 ( Figure S12d), combined with a dramatic increase in E Norm,GMR , reaching 11.70 V/m for 42 grid units. By increasing the D of G ana,long4 , a red shift in λ vacc,GMR occurs ( Figure S12e), accompanied by less dramatic changes in E Norm,GMR , with a maximum (6.14 V/m) at D = 250 nm. Finally, increasing the d of G ana,long4 (at the same ff ) causes a dramatic red shift in λ vacc,GMR ( Figure S12f) and a simultaneous decrease in E Norm,GMR . This explains the λ vacc,GMR shift observed for CS long,3 and CS long,6 associated mainly with the change in d str . Decreasing the fill factor ff (from 0.8 to 0.4) causes a blue shift in λ vacc,GMR ( Figure S12f) combined with a decrease in E Norm,GMR .
The Case of 2D Cross-Sections of a Complete Frustule Immersed in Water
As previously illustrated, adding the girdle bands spatially delays the inner edge diffraction fringes, probably due to the delay of the interference between the secondary wavelets generated at the edges-due to waveguiding behavior-and the transmitted wavefront through the CS (Figure 6b,c). By further adding the hypovalve-to form a CS in the complete frustule-almost all inner edge diffraction fringes, and, as a consequence, the diffraction-driven focusing spots, are removed from inside the frustule to appear beyond the hypovalve, as can be clearly seen in the CS long5,frustule (Figure 11a), and, therefore, become irrelevant to photosynthesis. At the same time, the thin-film interference still affects the area inside the frustule, where the edge diffraction effect is diminished.
Moreover, the generated PJ by the nodule, or the sternum, is still observed inside the frustule (e.g., CS long3,frustule in Figure 11b). Interestingly, the direction of this PJ follows
Furthermore, the Talbot fringes-associated with the grid-like element-remain side the frustule but also appear beyond the hypovalve, as shown in Figure 9d, wh minimizing the edge effect inside the frustule weakens the lateral deformation of the side Talbot fringes. This is clear when comparing CSlong4,frustule ( Figure 9d) to CSlong,4 (Figu 9b), given that both are in the water at λvacc 350 nm.
Finally, there are no shifts in λvacc,GMR of the zero mode observed for the epivalve CSlong4,frustule either in air ( Figure S13c) or in water ( Figure S13d), while the standing wa extends from the epivalve to the hypovalve through the mantle and the girdle bands. U derwater, the transmittance beyond the frustule does not attenuate during GMR wh compared to air ( Figure S13 d vs. c).
Discussion
The comprehensive structural analysis of GP frustules utilizing FIB-SEM analysis addition to the regular SEM, offers high-resolution structural details crucial for predicti their light modulation abilities. While statistical analysis suggests that some critical stru tural parameters, such as the valve thickness Dv and striae spacing dstr, are reproduci between different valves within the studied culture, this likely means they might be bu and optimized by the living cells on purpose to contribute to potential photobiologi roles. In future work, our proposed FIB-SEM workflow could-using the cryo-fixation complete cells-provide detailed information about the exact location of, e.g., chloropla with respect to the frustule, or the existence of other materials and layers that could inf ence the refractive index contrast in the vicinity of the frustule and, thus, their interacti with light. This method enables data generation as close to the real state as possible, sho ing the cell's interior features without significant alteration.
Through extensive numerical analysis, the ability of the GP valve, as well as the co Figure 11. The interference pattern of CS long5,frustule (a) and CS long3,frustule (b), which represent a complete frustule consisting of CS long,5 or CS long,3 , respectively, immersed in water at different λ vacc . With increasing λ vacc , the diffraction-driven focusing fringes that appear outside the CSs move toward them. The hypovalve (of the same structure) is slightly smaller than its epivalve. The black arrows indicate what seems to be a PJ.
Moreover, the generated PJ by the nodule, or the sternum, is still observed inside the frustule (e.g., CS long3,frustule in Figure 11b). Interestingly, the direction of this PJ follows the θ inc of the incident wave ( Figure S13a). This feature is confirmed in CS long,3/nodule ( Figure S13b) but with more stability inside the complete frustule, which extends to a larger θ inc ( Figure S13a). This also leads to the generation of a PJ beyond the hypovalve associated with the nodule integrated into the hypovalve of CS long3,frustule (Figure 11b). Another stronger PJ appears beyond the hypovalve observed in all CSs of the complete frustule associated with the frustule edge diffraction (black arrows in Figure 11).
Furthermore, the Talbot fringes-associated with the grid-like element-remain inside the frustule but also appear beyond the hypovalve, as shown in Figure 9d, while minimizing the edge effect inside the frustule weakens the lateral deformation of the inside Talbot fringes. This is clear when comparing CS long4,frustule (Figure 9d) to CS long,4 (Figure 9b), given that both are in the water at λ vacc 350 nm.
Finally, there are no shifts in λ vacc,GMR of the zero mode observed for the epivalve in CS long4,frustule either in air ( Figure S13c) or in water ( Figure S13d), while the standing wave extends from the epivalve to the hypovalve through the mantle and the girdle bands. Underwater, the transmittance beyond the frustule does not attenuate during GMR when compared to air (Figure S13d vs. Figure S13c).
Discussion
The comprehensive structural analysis of GP frustules utilizing FIB-SEM analysis, in addition to the regular SEM, offers high-resolution structural details crucial for predicting their light modulation abilities. While statistical analysis suggests that some critical structural parameters, such as the valve thickness D v and striae spacing d str , are reproducible between different valves within the studied culture, this likely means they might be built and optimized by the living cells on purpose to contribute to potential photobiological roles.
In future work, our proposed FIB-SEM workflow could-using the cryo-fixation of complete cells-provide detailed information about the exact location of, e.g., chloroplasts with respect to the frustule, or the existence of other materials and layers that could influence the refractive index contrast in the vicinity of the frustule and, thus, their interaction with light. This method enables data generation as close to the real state as possible, showing the cell's interior features without significant alteration.
Through extensive numerical analysis, the ability of the GP valve, as well as the complete frustule, to modulate light in the near field is demonstrated and explained. Using 2D CSs and disassembling distinct optical components enable understanding of the complex interference patterns (e.g., Figure S3) and the finding of their correlation to the known optical phenomena in the near field and micro-optics. Further, using analytical models allows the determination of the significance of structural parameters to the observed phenomena but also enables future prediction of the light modulation capabilities of other unstudied pennate species.
At this point, it should be noted that, although the numerical analysis of 2D CSs gives a deeper analytical understanding of the involved optical phenomena and the general trends, there are some limitations. The actual shapes and light intensities, e.g., those of PJs and Talbot fringes, occurring in 3D cannot be obtained from 2D simulations. Moreover, in the case of grid-coupled GMR, λ vacc,GMR and E Norm,GMR are expected to shift when transferring from the 2D to the 3D situation. Such information can be obtained from 3D simulations, which are left for a follow-up study.
In the following subsections, in light of the obtained results and previous work, we try to predict near-field light modulation by an intact three-dimensional GP valve and its potential for applications (4.1), as well as the hypothetical photobiological relevance of GP frustules (4.2).
The Light Modulation by GP Valve: The Competing Phenomena and Potential for Applications
The light modulation in 3D, with the presence of all integrated optical components in such small-size valves, is expected to give a more complicated, probably more intense, interference pattern but will also show how different physical phenomena are competing.
The GP valve, like the valve of many other pennates, consists of a single optically thin, porous silica layer of a thickness D v ≤ the visible light wavelengths; thus, thin-film interference is expected. This is distinct from the multilayer structure of some centric and pennate valves associated with the presence of loculated areolae [11], which probably leads to multilayer interference behavior. To the best of our knowledge, this phenomenon has not been studied before for pennate valves. In contrast, interference fringes have been observed in the reflectance spectrum of a centric valve of Coscinodiscus wailesii with a multilayer structure (see Figure 2 in [95]). In GP, although the thin-film interference associated with the thin-slab element is disturbed by (I) the edge diffraction and (II) the presence of integrated optical components (including the 1D grid-like and lens-like components) which cover the valve area, as can be concluded from Figure 1 and Figure S1, the intensity of the reflected and transmitted light is expected to be significantly affected by this phenomenon. This, in turn, is expected to affect the strength of the final interference pattern, which is evident in the case of the diffraction-driven focusing spots' intensity E 2 Norm,f (Figure 7b). This means that, under normal incidence, the thickness D v , as well as n v , may be crucial for determining the λ vacc -dependent reflectance/transmittance ratio of the valves.
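To make the role of D v and n v concrete, the following minimal sketch computes the normal-incidence reflectance of an idealized homogeneous slab using the standard Airy summation of internal reflections. The slab thickness of 200 nm is an illustrative assumption (not a measured GP parameter); n_v = 1.46 and n_m = 1.0 follow the values used for the silica valve in air elsewhere in this work.

```python
import numpy as np

def slab_reflectance(wl_nm, n_v=1.46, n_m=1.0, d_nm=200.0):
    """Normal-incidence reflectance of a homogeneous thin slab with the same
    medium (n_m) on both sides, via the Airy sum of multiple reflections."""
    r = (n_m - n_v) / (n_m + n_v)                # Fresnel amplitude at each face
    delta = 2.0 * np.pi * n_v * d_nm / wl_nm     # one-way phase thickness
    # |r (1 - e^{2i delta})|^2 / |1 - r^2 e^{2i delta}|^2, written out explicitly
    num = 2.0 * r**2 * (1.0 - np.cos(2.0 * delta))
    den = 1.0 + r**4 - 2.0 * r**2 * np.cos(2.0 * delta)
    return num / den

wls = np.linspace(300, 700, 401)
R = slab_reflectance(wls)
print(f"max R = {R.max():.3f} at {wls[R.argmax()]:.0f} nm")
```

Under these assumed values the reflectance peaks near 390 nm, illustrating how a sub-wavelength silica thickness pushes the interference extrema toward the short-wavelength end of the visible range and hence makes the reflectance/transmittance ratio strongly λ vacc dependent.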
Moreover, unlike in large valves, such as those of Coscinodiscus spp., where the edge diffraction is less significant [96], in finite-size valves, the edge diffraction contributes significantly to the light interference pattern. A similar conclusion was obtained for the valve of the pennate diatom Ctenophora pulchella of narrow width (see Figure 2 in [29]). The contribution of thickened areas within the valve (such as the nodule in GP) to the obtained interference pattern suggests that, even in the case of large valves with complex ultrastructures, such as the valves of Arachnoidiscus spp. (see Figure 3 in [47]), the diffraction from those additional edges is expected to play a significant role in their light modulation behavior. Across all the studied λ vacc , the PJ is expected to be dominant at the apical axis associated with the presence of the nodule and the sternum, which might be the case for all biraphid pennates and can be considered a special case of diffraction-driven focusing. The PJ associated with the nodule (the maximum D) is expected to be higher in intensity and to interrupt the interference pattern, as shown in Figure 8. This PJ is similar to the focusing beam observed by De Tommasi et al. [29]. The intense light focalization might give the PJ, especially that associated with the nodule, potential in several applications, for instance, to enhance the resolution of optical microscopy, reach super-resolution imaging, enhance Raman scattering, improve fluorescence spectroscopy, enable subwavelength photolithography, and enhance the optical absorption in optoelectronic devices [80,81]. Additionally, by changing ∆n, either via changing n m or n v , the features of the PJs can be tailored, as shown in Figure S10c,d, which could be interesting for future applications.
Furthermore, inside the transmittance region of the GP valve, the Talbot interference pattern is expected to dominate at a range of wavelengths where additional diffraction orders (±1st orders) besides the 0th order can be present. For a clean GP valve immersed in air, normally incident light of λ vacc < 500 nm is expected to generate Talbot fringes, representing an image copy of the 1D grid-like structure, which consists of the valve's costae alternating with striae of increasing spacing (d str ) toward the edges (Figure 1). For λ vacc between 500 and 565 nm, only a smaller part of the grid (closer to the edges) might be able to diffract the light into the ±1 diffraction orders and, thus, might not generate a clear Talbot pattern. In general, the Talbot fringes are expected to be distorted near the valve edges, influenced by the edge diffraction, and, near its apical axis, influenced by the PJ (Figure 8b). Although the Talbot effect is well known in the near-field optics of diffraction gratings [86,97], it is often not mentioned-apart from by De Stefano et al. [28]-or analyzed in diatom-related studies. On the contrary, the analysis of far-field diffraction grating behavior is more frequent, as in [38], which, unlike near-field behavior, is not directly relevant to photobiology or most applications. Recently, the Talbot phenomenon has been utilized in several applications, for instance, in fluorescence Talbot microscopy [98,99], displacement Talbot lithography [100,101], and image sensors [102]. It should be noted that the Talbot pattern produced by GP valves may not be appropriate for such applications if compared to the valves of larger-size pennate species such as Nitzschia and Hantzschia spp. (see Round et al. [11]), where the grid-like component has an almost fixed d str and is not interrupted by a sternum or nodule at the valve apical axis.
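The order-cutoff and self-imaging arguments above follow from the elementary grating relations. The sketch below evaluates them for an idealized 1D grating in air; the d str values of 500 and 565 nm are inferred from the wavelength bounds quoted in the text, not taken from the structural measurements.

```python
import math

def max_order(d_nm, wl_vac_nm, n_m=1.0):
    """Highest propagating diffraction order m from d * n_m * sin(theta) = m * wl."""
    return int(math.floor(d_nm * n_m / wl_vac_nm))

def talbot_length(d_nm, wl_vac_nm, n_m=1.0):
    """Exact (non-paraxial) Talbot self-imaging distance; valid for wl/n_m < d."""
    wl = wl_vac_nm / n_m  # wavelength in the surrounding medium
    return wl / (1.0 - math.sqrt(1.0 - (wl / d_nm) ** 2))

# assumed striae spacings (nm), read off the 500-565 nm bound quoted above
for d in (500, 565):
    print(d, max_order(d, 350.0), round(talbot_length(d, 350.0), 1))
print(max_order(500, 520.0))  # 520 nm exceeds d = 500 nm: only the 0th order survives
```

For d = 500 nm at λ vacc = 350 nm the ±1 orders propagate (m = 1) and the exact Talbot length is on the order of a micrometer, i.e., the self-images form within the frustule interior; at 520 nm the same period supports only the 0th order, consistent with the cutoff described above.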
Additionally, at a narrower range of wavelengths, the 1D grid-like element is expected to initiate grid-coupled GMR, where the transmittance drops dramatically and, thus, affects the intensity of the Talbot fringes at the first mode or of the diffraction-driven focusing spots at the zero mode. As d str changes toward the edges, the associated λ vacc ranges shift accordingly (as can be concluded from Figure S12f), with a simultaneous decrease in the efficiency of the grid-like element to couple the incident waves into guided modes, due to the decrease in grid units from CS long,3 to CS long,6 . GMR has previously been reported in diatom valves associated with the periodic pore arrays of some species, such as Pinnularia spp. [36]. Most previous studies focused on coupling GMR and surface plasmon resonances of metallic nanoparticles or thin films to obtain efficient hybrid substrates for surface-enhanced Raman spectroscopy (SERS) [103,104]. In general, GMR can also be of interest for sensing applications and optoelectronic devices [89,94,105], associated with enhancing electromagnetic fields near the valve surface through the simultaneous evanescent field. Due to the GP valve's finite size and, thus, small grid unit number, the quality of GMR is expected to be reduced. Hence, larger pennate valves, especially those of small areolae size (large ff ), might be more appropriate for GMR-based applications.
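A crude bound on where grid-coupled GMR can occur follows from phase matching: the m-th diffracted order must carry an in-plane wavenumber between those of freely propagating waves in the medium and in the silica. The sketch below implements only this bound; it ignores waveguide-mode dispersion and the finite grid size, and the d str = 500 nm value is an assumption for illustration.

```python
import math

def gmr_band(d_nm, n_m=1.0, n_v=1.46, m=1, theta_deg=0.0):
    """Vacuum-wavelength band in which the m-th grating order can phase-match a
    guided mode: n_m * k0 < beta < n_v * k0, with beta/k0 = n_m*sin(theta) + m*wl/d."""
    s = n_m * math.sin(math.radians(theta_deg))
    lo = d_nm * (n_m - s) / m   # below this, the order still radiates into the medium
    hi = d_nm * (n_v - s) / m   # above this, no guided mode can be phase-matched
    return lo, hi

lo, hi = gmr_band(500.0)        # assumed striae spacing d_str = 500 nm, valve in air
print(round(lo), round(hi))
```

For d = 500 nm in air this gives a first-order band of roughly 500-730 nm; the zero-mode λ vacc,GMR of about 553-556 nm reported above falls inside this window, as expected for such an order-of-magnitude estimate.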
For wavelengths beyond the ability of the grid-like element to diffract the light or initiate GMR, the diffraction-driven focusing spots are expected to dominate the interference pattern in the near field, similar to that of a homogeneous thin slab. As mentioned, the nodule zone is expected to further divide the interference pattern into separated parts (e.g., Figure S9a). The observed behavior of these focusing spots is in good agreement with the focusing behavior reported in previous work for the valves of Coscinodiscus spp. and Arachnoidiscus sp. [28,30,47,106,107]. The distinct difference between the focusing spots observed in the GP valve and those of large centric valves is their complete dependency on the edges (no pores needed). In contrast, the focusing spots generated by the large centric valves are more related to the superposition of the diffracted light from all pores' edges [28,96]. This could be why the focusing spots remain inside the frustule of Coscinodiscus spp. [30], which is not the case in the GP frustule. It should be noted that the foramen pore diameters-and the period in between-in Coscinodiscus spp. valves (≈1-3 µm) are comparable to the size of the whole GP valve (≈4-7 µm).
Finally, GP valves might be utilized in photonic applications based on and within the limits of the discussed competing phenomena. In such applications, the valves are often spread over a substrate to form a monolayer [42,108], where they occur in two configurations, either showing the external or internal face (Figure 12a or Figure 12b, respectively). Recent reports showed a degree of control over the valve orientation on the substrates that could be helpful for specific applications [108].
Most of our simulations are focused on the illumination of the external face. In this case, our results suggest that the thin-film interference, curved edge diffraction, and grid-coupled GMR collectively might lead to a λ vacc -dependent shielding effect of the valves that significantly attenuates shorter λ vacc , especially at higher ∆n (e.g., in air), while adding the valve in that orientation to a substrate will lead to changes in, for instance, thin-film interference by adding a thin layer of medium trapped between the valve and substrate (Figure 12a). Despite these presumed changes, the dense monolayer film might acquire a colligative shielding effect against a specific range of λ vacc similar to the colligative UV-shielding effect reported by Su et al. [109] for dense films of different centric valves.
On the other hand, when illuminating the internal valve face (Figure 12b), the incoming light might be coupled-across all optical wavelengths-into the valve through the mantle, initiating what we call mantle-coupled GMR-like behavior, as shown in Figure S7. This is less intense and has a different distribution pattern than the grid-coupled GMR but is expected to enhance the electromagnetic fields of all optical wavelengths in the proximity of the valve surface associated with the concurrent evanescence field. Despite this, the highest enhancement can be obtained at λ vacc,GMR of grid-coupled GMR, which is slightly shifted in this configuration. Further changes observed in this configuration (Figure S7) for the other optical phenomena are indeed interesting and can be explained according to the underlying physics but are left for future work.
Hypothetical Photobiological Relevance of GP Frustules
As GP living cells live underwater, understanding the case of a complete frustule in water, illustrated in Section 3.4, is crucial for correlating the observed phenomena and the designated optical elements to hypothetical photobiological roles. Although there is a significant reduction in the light modulation strength, all phenomena still occur and might presumably have photobiological functions. It should be noted that GP is a benthic species living near the bottom of basins, where the blue-green spectral ranges dominate due to the strong absorption of red and infrared wavelengths. This effect becomes more significant as the depth of the water column that light penetrates before reaching the living cells increases [110]. The expected photobiological relevance might not only be limited to photosynthesis enhancement-by attenuating harmful radiation while maximizing absorption of PAR-but might also extend to putative signaling and sensing mechanisms [22,110].
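The depth dependence noted above is Beer-Lambert attenuation of the downwelling irradiance. The sketch below uses illustrative diffuse attenuation coefficients (order-of-magnitude values for clear water; not taken from [110]) to show why red light is nearly gone at benthic depths while blue-green light survives.

```python
import math

def surviving_fraction(Kd_per_m, depth_m):
    """Beer-Lambert law: fraction of downwelling irradiance left at a given depth."""
    return math.exp(-Kd_per_m * depth_m)

# illustrative diffuse attenuation coefficients (1/m), clear-water order of magnitude
Kd = {"blue-green (~490 nm)": 0.03, "red (~670 nm)": 0.5}
for band, k in Kd.items():
    print(band, round(surviving_fraction(k, 10.0), 3))  # fraction left at 10 m depth
```

With these assumed coefficients, about 74% of the blue-green irradiance but well under 1% of the red irradiance remains at 10 m, consistent with the blue-green-dominated light field described for benthic habitats.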
In GP living cells, the chloroplast occurs adjacent to the valve-where the grid-like element (i.e., the striae and costae) covers most of its area-and to the fiber-like elements (i.e., mantle and girdle bands) [111]. This gives significant importance to the Talbot effect, grid-coupled GMR, and waveguiding behavior. In parallel, thin-film interference generally affects the transmittance inside the cell. In contrast, diffraction-driven focusing is not relevant to photosynthesis. PJ generation by the nodule and the sternum might also not be directly relevant to photosynthesis as it occurs along the apical axis within a narrow area compared to the chloroplast area.
Moreover, a PJ is expected inside the frustule along the apical axis associated with the sternum and nodule but is of a higher strength at the nodule zone and shorter λ vacc (e.g., Figure 8). Coupling this with the observed θ inc -dependent direction of the PJs (Figure S13a), the position of the nodule relative to the nucleus, which is rich in aureochrome and cryptochrome photoreceptors capable of sensing blue light [112], and the fact that blue light, which is especially dominant in the benthos region, is crucial for many physiological processes of living cells, including the cell cycle [22], suggests a hypothetical PJ-based sensing mechanism for the light direction. This is inspired by a similar mechanism that has been suggested for the spherical cells of the cyanobacterium Synechocystis sp. [113,114]. The illumination of these cells generates PJs at the shadow side, which are assumed to be perceived by a putative, well-distributed network of photoreceptors fixed on the plasma membrane, triggering a cellular signal transduction cascade ending with flagella movement toward or away from the light (see Figure 5 in [113]). However, unlike Synechocystis sp., motility in raphid pennate diatoms, and, thus, GP, does not involve a flagellum, but rather gliding motility through the secretion of mucilage from their raphe slits [115,116].
Conclusions
Despite the 3D, complex ultrastructure of diatom frustules, the analysis workflow followed here could be promising for understanding their light modulation capabilities. Even tiny pennate frustules, such as our studied example, can modulate light effectively based on their ultrastructure, with strong dependencies on λ vacc and ∆n. This might not be surprising from a physicist's point of view, but it definitely is for diatomists and biologists. Our findings indicate that some optical phenomena are linked to the presence of integrated optical components, such as GMR and the Talbot effect associated with the grid-like structure, while other phenomena are more linked to the size parameter than the optical component itself, for instance, the generation of PJs. Studying the change of structural parameters, despite the ability of the studied GP strain to build many important parameters precisely, would help to predict the behavior of other GP strains and different species of the same genus, but also different pennate genera of similar structure but different dimensions. Moreover, the separated GP clean valves might be valuable for some photonic applications, while their complete frustules might presumably be relevant to photobiological functions. Finally, our results could inspire ongoing research on the optics of diatom frustules, as well as artificial dielectric microstructures, in addition to their influences on photobiology.
Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/nano13010113/s1, Figure S1: (a) the selected vertical & longitudinal CSs across the statistically representative 3D valve model that is shown in the center (at the external top view). The five vertical CSs and the eight longitudinal CSs schemed at the right and left, respectively, along with some 2D cross-sections from the 3D reconstructed data to show comparable places in the valve, reflecting the similarity between the final 2D CSs and the actual structure. (b) the important structural parameters mentioned in Table 1. Figure S2: The effect of pore occlusions on interference pattern of CS long,4 and CS ver,1 at λ vacc = 350 nm. Figure S3: A schematic diagram shows the complex interference pattern observed in CS long,4 at λ vacc = 350 nm, while disassembling its distinct structural components shows the contribution of each element to the interference pattern. Figure S4: An example shows the interference pattern of CS long,5 at λ vacc 330 (a) and 495 (b), besides the extraction method of the reflectance E Norm,R that shows a constructive interference maximum (c) and minimum (d). The reflectance and transmittance should produce a flat wavefront, confirmed through the simulation of an extended thin slab (of L sl 80 µm in a larger simulation box). Therefore, the oscillations that appear in the extracted reflectance in (c and d) were associated with edge diffraction. Thus, to read out the CSs' reflectance and transmittance with the presence of overlayed edge diffraction, we estimated the position of the baseline (associated with the expected flat wavefront) as an average between the maxima and minima in the observed oscillations (dashed black line in c and d). The error in the measurements was estimated by averaging the extracted E Norm from three consecutive x-lines close to the CS. 
Squaring the E Norm -after the subtraction of E input from the extracted E Norm only in the case of reflectance-gave the best match to theoretical calculations (e and f). Figure S5: The edge diffraction fringes in CS long,5 (a) and S ana,long5 (b) at λ vacc 300 nm in the air. Tilted S ana,M in (c) compared to in-plane alignment in (d) at λ vacc 300 nm in air. The plot in (e) shows the λ vacc dependency of the edge diffraction fringe strength as well as the changes in E Norm,in and E Norm,out for CS long,5 , CS long,7 , and S ana,long5 , while the plot in (f) shows the difference in E Norm,in /E Norm,out ratios. By adding the tilted mantle to the thin-slab element in CS long,5 , the strength of the outside and inside fringes seems to be enhanced, except that the 1st fringe inside is diminished. The crosses in (a) and (b) represent the x-y positions considered for E Norm extraction that are plotted in (e). The error was up to ±0.07, ±0.05, or ±0.03 V/m in the case of CS long,5 , CS long,7 , or S ana,long5 , respectively. The error in the measurements was estimated by averaging the extracted E Norm from three consecutive points at the same fringe. The ratio in (f) was calculated based on the average values. Figure S6: The λ vacc dependency of the edge diffraction fringes in CS long,5 (a) and S ana,long5 (b) in the air. The color code emphasizes the E Norm enhancement in red (E Norm > 1 V/m) while keeping both the E input and the E Norm reduction in white (E Norm ≤ 1 V/m), not emphasized. The presence of the mantle in (a) also spatially delays the 1st and 2nd diffraction fringes inside. Figure S7: The mantle-coupled GMR-like behavior appears in CS long,5 (a), CS long,4 (b), and CS ver,2 (c) in the air. Figure S8: (a) The n m dependency of Z f of CS long,5 at n v = 1.46 and λ vacc 330 nm. (b) The L sl dependency of Z f of S ana,long5 in air at λ vacc 330 nm. (c) the interference pattern of CS long,1 at λ vacc 300 nm.
(d) PJ generated by S ana,long5 of L sl = 1 µm at 330 nm in air. Figure S9: Tracking different focusing spots while increasing λ vacc in CS ver,2 (a) and CS long,7 (b) in air. Figure S10: Photonic jet generation in CS ver,4 at λ vacc 350 nm (a), the λ vacc dependency of the generated PJ from the disassembled lens-like structure (CS long,3/nodule ) in the air (b), the ∆n dependency of the generated PJ from CS long,3/nodule at λ vacc 300 nm with changing n v for n m = 1 (c) or changing n m for n v = 1.46 (d). In both (c) and (d), the images were arranged to show ∆n increasing downward. Figure S11: The analytical grid G ana,long4 equivalent to CS long,4 (a), CS ver,1 (b), and the analytical grid G ana,ver1 equivalent to CS ver,1 (c) at λ vacc = 350 nm in the air. The three graphs show the Z T independency of the changes in n v (n m = 1.00) of CS long,4 (d), the grid unit number of G ana,long4 (e), or its thickness (f) at λ vacc = 350 nm in the air compared to the theoretical expectation. The error bars in the graphs represent the uncertainty in the measured Z T due to deformation of the fringes. Figure S12: The graphs show the CS long,4 λ vacc,GMR dependency of the zero mode on θ inc (a), changing n v (n m = 1.00) (b), and changing n m (n v = 1.46) (c). Graphs in (d), (e), and (f) show the λ vacc,GMR dependency of the zero mode on changing grid units, D, or d (at different ff ), respectively, of G ana,long4 in air. Figure S13: (a) the θ inc dependency of the PJ generated by the nodule inside CS long3,frustule at λ vacc 400 nm in water. The color code starts from 1.1 V/m to emphasize the PJ over the Talbot fringes. (b) the θ inc dependency of the PJ generated by the disassembled nodule at λ vacc 350 nm in air. (c) zero-mode grid-coupled GMR in CS long4,frustule at λ vacc,GMR 553 nm in air and (d) at λ vacc,GMR 665 nm in water. The black arrow in (d) indicates what seems to be an increase in the evanescence field penetration.
Potential energy surfaces of actinide and transfermium nuclei from multi-dimensional constrained covariant density functional theories
Multi-dimensional constrained covariant density functional theories (CDFTs) were developed recently. In these theories, all shape degrees of freedom \beta_{\lambda\mu} with even \mu are allowed, e.g., \beta_{20}, \beta_{22}, \beta_{30}, \beta_{32}, \beta_{40}, \beta_{42}, \beta_{44}, and so on, and the CDFT functional can be one of the following four forms: the meson-exchange or point-coupling nucleon interactions combined with the non-linear or density-dependent couplings. In this contribution, some applications of these theories are presented. The potential energy surfaces of actinide nuclei in the (\beta_{20}, \beta_{22}, \beta_{30}) deformation space are investigated. It is found that, besides the octupole deformation, the triaxiality also plays an important role upon the second fission barriers. The non-axial reflection-asymmetric \beta_{32} shape in some transfermium nuclei with N = 150, namely 246Cm, 248Cf, 250Fm, and 252No, is studied.
Introduction
"Shape" gives an intuitive understanding of the spatial density distributions of quantum many-body systems, including atomic nuclei. For the description of the nuclear shape, it is convenient to use the following parametrization,
β λµ = 4π ⟨Q λµ ⟩ / (3AR^λ),
where Q λµ are the mass multipole operators, A is the mass number, and R is the nuclear radius. A schematic illustration of some typical shapes is given in Fig. 1 [1]. The majority of observed nuclear shapes are of spheroidal form, which is usually described by β 20 . Higher-order deformations with λ > 2, such as β 30 , also appear in certain atomic mass regions [2]. In addition, non-axial shapes in atomic nuclei, in particular the non-axial quadrupole (triaxial) deformation β 22 , have been studied both experimentally and theoretically [3,4,5]. There is no a priori reason to neglect the non-axial octupole deformations, especially the β 32 deformation [6,7,8]. Furthermore, more shape degrees of freedom play important roles in the study of potential energy surfaces of atomic nuclei. In particular, various shape degrees of freedom play important and different roles in the occurrence and in determining the heights of the inner and outer barriers in actinide nuclei (in these nuclei, double-humped fission barriers usually appear). For example, the inner fission barrier is usually lowered when the triaxial deformation is allowed, while for the outer barrier the reflection asymmetric (RA) shape is favored [9,10,11,12,13]. Nowadays, it is becoming more and more desirable to have accurate predictions of fission barriers also for superheavy nuclei [14,15,16,17,18,19]. It is customary to consider only the triaxial and reflection symmetric (RS) shapes for the inner barrier and axially symmetric and RA shapes for the outer one [15,20,21,22]. The non-axial octupole deformations are considered in both the macroscopic-microscopic (MM) models [23] and the non-relativistic Hartree-Fock theories [24].
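As a minimal numerical illustration of this parametrization, the sketch below evaluates the axially symmetric quadrupole term of the surface expansion R(θ) = R0 (1 + β20 Y20(θ)); the deformation β20 = 0.3 is an illustrative value, not one taken from the calculations in this contribution.

```python
import numpy as np

def y20(theta):
    """Real spherical harmonic Y_20(theta) (phi-independent)."""
    return np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * np.cos(theta) ** 2 - 1.0)

def surface_radius(theta, beta20, R0=1.0):
    """Nuclear surface radius for a pure quadrupole (spheroidal) shape,
    i.e. the lambda = 2, mu = 0 term of the general beta_{lambda,mu} expansion."""
    return R0 * (1.0 + beta20 * y20(theta))

# beta20 = 0.3 (illustrative prolate deformation): radius at the pole and equator
print(round(float(surface_radius(0.0, 0.3)), 3),
      round(float(surface_radius(np.pi / 2, 0.3)), 3))
```

For a prolate shape (β20 > 0) the radius is largest at the poles and smallest at the equator, matching the spheroidal forms described above; adding β 22 , β 30 , or β 32 terms with the corresponding Y λμ would break axial or reflection symmetry in the same way.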
In order to give a microscopic and self-consistent study of the potential energy surface with more shape degrees of freedom included, multi-dimensional constrained covariant density functional theories have been developed recently [25,26]. In these theories, all shape degrees of freedom β_{λμ} with even μ are allowed, e.g., β_{20}, β_{22}, β_{30}, β_{32}, β_{40}, β_{42}, β_{44}, and so on. In this contribution, we present two recent applications of these theories: the potential energy surfaces of actinide nuclei and the non-axial reflection-asymmetric β_{32} shape in some transfermium nuclei. In Section 2, the formalism of our multi-dimensional constrained covariant density functional theories is given briefly.
The results and discussions are presented in Section 3. Finally we give a summary in Section 4.
Formalism
The details of the formalism of covariant density functional theories can be found in Refs. [27,28,29,30,31,32]. The CDFT functional in our multi-dimensional constrained calculations can be one of the following four forms: the meson-exchange or point-coupling nucleon interactions combined with the non-linear or density-dependent couplings [25,26]. Here we briefly show the one corresponding to the non-linear point-coupling (NL-PC) interactions. The starting point of the relativistic NL-PC density functional is the Lagrangian

L = ψ̄(iγ_μ∂^μ − M_B)ψ − L_lin − L_nl − L_der − L_Cou,

where L_lin, L_nl, L_der, and L_Cou are the linear, non-linear, and derivative couplings and the Coulomb part, respectively. M_B is the nucleon mass; α_S, α_V, α_TS, α_TV, β_S, γ_S, γ_V, δ_S, δ_V, δ_TS, and δ_TV are the coupling constants of the different channels, and e is the electric charge. ρ_S, ρ_TS, j_V, and j_TV are the iso-scalar density, iso-vector density, iso-scalar current, and iso-vector current, respectively, defined as bilinears of the nucleon field, e.g. ρ_S = ψ̄ψ and j_V^μ = ψ̄γ^μψ. Starting from this Lagrangian, using Slater determinants as trial wave functions and neglecting the Fock term as well as the contributions to the densities and currents from the negative-energy levels, one can derive the equations of motion for the nucleons. Furthermore, for systems with time-reversal symmetry, only the time-like components of the vector currents survive. The resulting Dirac equation for the nucleons reads

ĥ ψ_i = [α · p + β(M_B + S(r)) + V(r)] ψ_i = ε_i ψ_i,

where the scalar potential S(r) and vector potential V(r) are determined by the densities and their derivatives through the coupling constants listed above. An axially deformed harmonic oscillator (ADHO) basis is adopted for solving the Dirac equation [25,26,33]. The ADHO basis states are defined as the eigensolutions of the Schrödinger equation with an ADHO potential [34,35],

[−(1/2M_B)∇² + V_B(z, ρ)] Φ_α(rσ) = E_α Φ_α(rσ),  V_B(z, ρ) = ½ M_B (ω_z² z² + ω_ρ² ρ²),    (11)

where V_B(z, ρ) is the axially deformed HO potential and ω_z and ω_ρ are the oscillator frequencies along and perpendicular to the z axis, respectively. The solution of Eq.
(11) reads

Φ_α(rσ) = C_α φ_{n_z}(z) R_{n_ρ}^{m_l}(ρ) (e^{i m_l φ}/√(2π)) χ_{m_s}(σ),

where φ_{n_z}(z) and R_{n_ρ}^{m_l}(ρ) are the HO wave functions, χ_{m_s} is a two-component spinor, and C_α is a complex number inserted for convenience. The oscillator lengths b_z and b_ρ are related to the frequencies by b_z = 1/√(M_B ω_z) and b_ρ = 1/√(M_B ω_ρ). These basis states are also eigenfunctions of the z component of the angular momentum ĵ_z with eigenvalues K = m_l + m_s. For any basis state Φ_α(rσ), the time-reversal state is defined as Φ_ᾱ(rσ) = T Φ_α(rσ), where T = iσ_y K is the time-reversal operator and K is the complex conjugation. Apparently we have K_ᾱ = −K_α and π_ᾱ = π_α. These basis states form a complete set for expanding any two-component spinor. A Dirac spinor with four components is expanded as

ψ_i(rσ) = ( Σ_α f_α^i Φ_α(rσ) ; Σ_α g_α^i Φ_α(rσ) ),

where the sum runs over all possible combinations of the quantum numbers α = {n_z, n_r, m_l, m_s}, and f_α^i and g_α^i are the expansion coefficients. In practical calculations, the basis has to be truncated in an effective way [25,26,33].
We expand the potentials V(r) and S(r) and the various densities in Fourier series,

f(ρ, φ, z) = Σ_μ f_μ(ρ, z) (1/√(2π)) e^{iμφ}.    (16)

The nucleus is assumed to be symmetric under the V_4 group, that is, all the potentials and densities are invariant under reflections with respect to the x–z and y–z planes. Thus the components f_μ satisfy f_μ = f*_μ = f_{−μ}, and all terms with odd μ vanish. The expansion Eq. (16) can then be simplified to

f(ρ, φ, z) = f_0(ρ, z)/√(2π) + Σ_{n≥1} f_{2n}(ρ, z) (1/√π) cos(2nφ),

where the f_μ's are real functions of ρ and z. The total energy of a nucleus reads E_total = E_RMF + E_c.m., where the center-of-mass correction E_c.m. can be calculated either phenomenologically or microscopically. The intrinsic multipole moments are calculated from the vector densities by

Q_{τ,λμ} = ∫ d³r ρ_V^τ(r) r^λ Y_{λμ}(Ω),

where Y_{λμ}(Ω) is the spherical harmonic and τ refers to the protons, the neutrons, or the whole nucleus. The potential energy surface (PES) is obtained by the constrained self-consistent calculation

E′ = ⟨Ĥ⟩ + Σ_{n=1}^{N_c} (C_n/2) (⟨Q̂_n⟩ − μ_n)²,

where Ĥ is the RMF Hamiltonian, the Q̂_n's are the multipole operators to be constrained, and N_c is the dimension of the constraining space. Both the BCS approach and the Bogoliubov transformation are implemented in our model to take the pairing effects into account. For the pairing force, we can use a delta force or a separable finite-range pairing force [36,37,38]. More details of the multi-dimensional constrained covariant density functional theories can be found in Refs. [25,26].
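The quadratic-constraint step above can be illustrated with a small toy minimization. This is a hedged sketch only, not the actual self-consistent CDFT solver: the function names and the one-dimensional "deformation coordinate" are purely illustrative.

```python
def constrained_minimum(e_func, q_func, mu, c=50.0, x0=0.0, lr=0.01, steps=2000):
    """Minimize E(x) + (c/2) (Q(x) - mu)^2 by gradient descent with a
    finite-difference gradient -- a toy stand-in for a constrained
    self-consistent calculation on a single deformation coordinate."""
    h = 1e-6
    f = lambda y: e_func(y) + 0.5 * c * (q_func(y) - mu) ** 2
    x = x0
    for _ in range(steps):
        # central finite-difference gradient of the penalized energy
        x -= lr * (f(x + h) - f(x - h)) / (2.0 * h)
    return x

# Toy PES E(x) = x^2 with the "moment" Q(x) = x constrained to mu = 0.5;
# the analytic minimizer of x^2 + 25 (x - 0.5)^2 is x = 25/52.
x_star = constrained_minimum(lambda x: x * x, lambda x: x, mu=0.5)
```

For a stiff enough penalty constant C_n the minimizer sits close to the constraint target, which is the trade-off the quadratic-constraint method exploits.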
Results and discussions
3.1 One-, two-, and three-dimensional potential energy surfaces of 240Pu

In Ref. [25], one- (1-d), two- (2-d), and three-dimensional (3-d) constrained calculations were performed for the actinide nucleus 240Pu with the parameter set PC-PK1 [39]. Fig. 2 shows potential energy curves from calculations with different self-consistent symmetries imposed: the axial (AS) or triaxial (TS) symmetry combined with the reflection-symmetric (RS) or reflection-asymmetric case. The importance of the triaxial deformation for the inner barrier and of the octupole deformation for the outer barrier is clearly seen: the triaxial deformation reduces the inner barrier height by more than 2 MeV and results in a better agreement with the empirical datum, while the RA shape is favored beyond the fission isomer and substantially lowers the outer fission barrier. Besides these features, it was found for the first time that the outer barrier is also considerably lowered, by about 1 MeV, when the triaxial deformation is allowed, giving a better reproduction of the empirical outer barrier height. It has been stressed that this feature can only be found when the axial and reflection symmetries are broken simultaneously [25]. In order to see how the PES of 240Pu becomes unstable against the triaxial distortion, 2-d PES's from calculations without and with the triaxial deformation were compared in Fig. 3 [25]. When the triaxial deformation is allowed, the binding energy of 240Pu assumes its lowest possible value at each (β_20, β_30) point. At some points, especially those around the two saddle points, non-axial solutions are favored over the axial ones. The inner barrier height is lowered by about 2 MeV, and about 1 MeV is gained in binding energy at the second saddle point due to the triaxiality. In the regions around the ground state and in the fission isomer valley, only axially symmetric solutions are obtained.
A full 3-d PES has been obtained for 240Pu [25]. Fig. 4 shows five typical sections of this 3-d PES in the (β_22, β_30) plane, calculated around the ground state, the first saddle point, the fission isomer, the second saddle point, and a point beyond the outer barrier, respectively. The following conclusions were drawn by examining these 3-d PES's [25]: (1) The ground state and the fission isomer are both axially and reflection symmetric; against both the β_22 and β_30 distortions, the fission isomer is much stiffer than the ground state. (2) The second saddle point, which lies close to β_20 = 1.3, has both triaxial and reflection-asymmetric shape. (3) The triaxial distortion appears only on the top of the fission barriers.
From the investigation of the one-, two-, and three-dimensional PES of 240Pu, we can learn much about the importance of the different shape degrees of freedom in different regions of the PES of actinide nuclei. This information could be useful in further systematic calculations.
Inner and outer fission barriers of even-even actinide nuclei
Guided by the features found in the 1-d, 2-d, and 3-d PES's of 240Pu, the fission barrier heights were extracted for even-even actinide nuclei [25]. The calculated values were compared with the empirical ones recommended in RIPL-3 [40].
As shown above, around the inner barrier an actinide nucleus assumes triaxial and reflection-symmetric shapes. Thus, to obtain the inner fission barrier height, one can safely make a one-dimensional constrained calculation with the triaxial deformation allowed and the reflection symmetry imposed. In Fig. 5(a) we show the calculated inner barrier heights B_f^i and compare them with the empirical values. It is seen that the triaxiality lowers the inner barrier heights of these actinide nuclei by 1 ∼ 4 MeV, as has been shown in Ref. [13]. In general, the agreement of our calculated results with the empirical ones is very good, with the exceptions of the two thorium isotopes and 238U. Possible reasons for these disagreements were discussed in Ref. [25].
To obtain the outer fission barrier height B_f^o, the situation is more complicated because more shape degrees of freedom play important roles around the outer fission barrier. In Ref. [25], 2-d constrained calculations were made carefully around the second saddle points of even-even actinide nuclei. In the lower panel of Fig. 5 we show the resulting outer barrier heights B_f^o and compare them with the empirical values. For most of the nuclei investigated here, the triaxiality lowers the outer barrier by 0.5 ∼ 1 MeV, accounting for about 10 ∼ 20% of the barrier height. One finds that the calculation with triaxiality agrees well with the empirical values, with the only exception being 248Cm: already in the calculation with the axial symmetry imposed, the outer barrier height of 248Cm is smaller than the empirical value. The reason for this discrepancy may be related to the existence of two possible fission paths beyond the first barrier [25].
In Ref. [25], the parameter-set dependence of the influence of triaxiality on the outer fission barrier was also examined; the lowering effect of the triaxiality on the outer fission barrier was likewise observed when parameter sets other than PC-PK1 were used.
Non-axial octupole shapes in N = 150 isotones
Nowadays the study of nuclei with Z ∼ 100 becomes more and more important, because it not only reveals the structure of these nuclei themselves but also provides significant information for superheavy nuclei [41,42,43,44]. One of the relevant and interesting topics is how to explain the low-lying 2− states in some N = 150 even-even nuclei. In these nuclei, the bandhead energy E(2−) of the lowest 2− bands is very low [45], and it is well accepted that octupole correlations are responsible for this. For example, a quasiparticle phonon model with octupole correlations included was used to explain the excitation energy of the 2− state of the N = 150 isotones [46]. In Ref. [47], Chen et al. proposed that the non-axial octupole Y_32 correlation results in the experimentally observed low-energy 2− bands in the N = 150 isotones, and reflection-asymmetric shell model calculations reproduce the experimental observables of these 2− bands well.
The non-axial reflection-asymmetric β_32 shape in some transfermium nuclei with N = 150, namely 246Cm, 248Cf, 250Fm, and 252No, was investigated with the multi-dimensional constrained covariant density functional theory [48], using the parameter set DD-PC1 [49]. For the ground states of 248Cf and 250Fm, the non-axial octupole deformation parameter β_32 > 0.03 and the energy gain due to the β_32 distortion is larger than 300 keV. In 246Cm and 252No, shallow β_32 minima are found.
The triaxial octupole Y_32 effects stem from the coupling between pairs of single-particle orbitals with Δj = Δl = 3 and ΔK = 2, where j and l are the total and orbital angular momenta of the single particles, respectively, and K is the projection of j on the z axis. In Fig. 6, we show the proton and neutron single-particle levels near the Fermi surface of 248Cf as functions of β_32, with β_20 fixed at 0.3. It was shown that the spherical proton orbitals π2f_7/2 and π1i_13/2 are very close to each other [48]; this near degeneracy results in octupole correlations. As seen in Fig. 6, the two proton levels originating from these orbitals and satisfying the Δj = Δl = 3 and ΔK = 2 condition are very close to each other at β_20 = 0.3. Therefore the non-axial octupole Y_32 correlation develops, and with β_32 increasing from zero an energy gap appears at Z = 98. Similarly, the spherical neutron orbitals ν2g_9/2 and ν1j_15/2 are very close to each other [48]. The neutron levels [734]9/2, originating from 1j_15/2, and [622]5/2, originating from 2g_9/2, are also close to each other, lying just above and below the Fermi surface. This leads to the development of a gap at N = 150 as β_32 increases. The Y_32 correlation in the N = 150 isotones thus comes from both the protons and the neutrons, and it is most pronounced in 248Cf [48].
Summary
In this contribution we present some applications of the multi-dimensional constrained covariant density functional theories, in which all shape degrees of freedom β_{λμ} with even μ are allowed. The potential energy surfaces of actinide nuclei in the (β_20, β_22, β_30) deformation space are investigated. It is found that, besides the octupole deformation, triaxiality also plays an important role at the second fission barriers. For most even-even actinide nuclei, the triaxiality lowers the outer barrier by 0.5 ∼ 1 MeV, accounting for about 10 ∼ 20% of the barrier height. The non-axial reflection-asymmetric β_32 shape in some transfermium nuclei with N = 150, namely 246Cm, 248Cf, 250Fm, and 252No, is also studied; for the ground states of 248Cf and 250Fm, the non-axial octupole deformation parameter β_32 > 0.03 and the energy gain due to the β_32 distortion is larger than 300 keV, while in 246Cm and 252No shallow β_32 minima are found.
Molecular Dynamics Simulation of Iron — A Review
Molecular dynamics (MD) is an atomistic simulation technique that has facilitated scientific discovery of interactions among particles since its advent in the late 1950s. Its merit lies in incorporating statistical mechanics to allow examination of varying atomic configurations at finite temperatures. Its contributions to materials science, from modeling pure metal properties to designing nanowires, are also remarkable. This review paper focuses on the progress of MD in understanding the behavior of iron — in pure metal form, in alloys, and in composite nanomaterials. It also discusses the interatomic potentials and the integration algorithms used for simulating iron in the literature. Furthermore, it reveals the current progress of MD in simulating iron by exhibiting some results from the literature. Finally, the review briefly mentions the development of the hardware and software tools for such large-scale computations.
Introduction
It has already been over 50 years since Alder and Wainwright 1,2 developed molecular dynamics (MD) simulation as a computational tool for tracing the phase-space trajectories of all simulated particles. Apart from the biochemical disciplines, which employ MD to investigate the properties of biomolecules, materials scientists often employ MD as a step toward understanding the mechanisms of physical phenomena caused by metallic atoms. This is achieved by integrating the equations of motion. The velocities of the particles then follow the Maxwell-Boltzmann distribution, which is temperature-dependent, and the pressure acting on the particles is determined by the virial theorem. 3 Periodic boundary conditions (PBC) 4 were employed already in the early formalism of MD to avoid the surface effect that is common in small simulation samples. Useful physical quantities, such as diffusion coefficients, heat capacities, and energy changes, can later be determined from the particle trajectories saved in the computers.
In the early days, the interatomic potentials were largely limited to hard-sphere approximations, in order to accommodate the relatively slow computing capability available at the time MD was developed. Driven by the demand for more complex materials, numerous interatomic potentials have since been devised for a more pertinent representation of materials as a function of interatomic separation. Initially the potentials focused on pure metals, but later they could also reflect the interactions and thermodynamics occurring in alloys. The approaches to formulating the potentials evolved from pure distance dependency to electronic density dependency, followed by bond-order dependency.
The advantage of using MD is that one obtains the physical paths of the particles in the course of attaining thermodynamic equilibrium, which is not possible with Monte Carlo simulations, as they return meaningful equilibrium values but only random transient states. 2 The core of the MD method is the solution for the particle trajectories derived from the interatomic forces. Numerical integration of the interatomic forces yields the particle velocities, and the particle positions are then obtained by further integrating the velocities. With the MD approach, the phase-space trajectories of the ensembles can be evaluated.
The success of MD simulation of iron relies on proper interatomic potentials that accurately address the particular electronic structure of iron. The accuracy of MD simulation of iron is important for the nuclear industry, because it can estimate the extent of damage in nuclear power plant materials. In practical cases, the introduction of impurities into iron potentials is crucial for investigating the effect of irradiation, which releases a number of impurities that interact with pure iron. Appropriate potentials for iron are also essential for estimating the time evolution of defects that occur in iron, such as vacancies, interstitials, dislocations, and grain boundaries. Besides, MD simulation of iron plays a key role in understanding the effect of metal catalysts on the growth of carbon nanotubes.
Because of the application of cutoff distances in atomic force computation, parallel computations of forces can be applied to different portions of a simulation box, with each portion having no effect on the others. Reconfigurable computers and graphics processing units (GPUs) can execute such parallel computations, so as to accelerate the computation tasks. Science practitioners have to design the algorithms for allocating computing resources on reconfigurable computers and GPUs. The speedup of parallel computation can be over 100 compared to the sequential counterpart.
The organization of this review paper is as follows. The basic principles of MD simulation are discussed, together with a brief introduction to the statistical mechanics that is directly relevant to the MD formalism. The MD implementation and the corresponding algorithms are then exhibited briefly, and a number of thermostats are mentioned. Some of the techniques applied to MD simulation are provided as a supplement to the conventional MD approach. The history of interatomic potentials for iron in various formats is discussed: iron without spins, magnetic iron, and iron with impurities. A number of categories of MD simulation of iron are exhibited, demonstrating a wide range of applications of MD in modeling defects and nanotubes. Then the development of the computer hardware used in MD simulation is discussed. A summary of the review is presented at the end.
Basics of MD Simulation
A number of references on the formalism of MD are available, such as Refs. 5-10. The remainder of this section is a very brief summary of these references, covering the major points of interest in the MD computation technique. Before outlining the MD technique, we state some important concepts of statistical mechanics that are helpful for the development of MD. The interested reader is referred to Refs. 11 and 12 for more detailed explanations.
Statistical mechanics
Thermodynamic states can be defined by a set of parameters, such as the number of atoms N, the pressure P, and the temperature T. These macroscopic quantities can in principle be connected to the microscopic state of the system of interest, and statistics is the required connection. The study of macroscopic properties via microscopic quantities is known as statistical mechanics.
A microstate of a system of particles is the basis of statistical mechanics. It represents a particular state determined by a set of phase-space coordinates with some probability of occurrence. Suppose there are N particles, each with n degrees of freedom. The microstate can then be represented by a point of nN dimensions in the phase space. A particle has 3 position components {r_i} and 3 momentum components {p_i}, so each particle has 6 degrees of freedom. In this case, the microstate can be represented by a point s = ({r_i}, {p_i}) of 6N dimensions. The time series of s is known as the phase-space trajectory Γ({r_i}, {p_i}).
The ensemble average of an observable A, based on its probability distribution P({r_i}, {p_i}), is expressed as

⟨A⟩ = ∫ A({r_i}, {p_i}) P({r_i}, {p_i}) d{r_i} d{p_i}.    (1)

By the ergodicity principle, the ensemble average is equal to the time average as long as every point of the phase space is accessible. The time average has the form

Ā = (1/t_obs) ∫_0^{t_obs} A(t) dt,    (2)

where t_obs is the observation time. The ergodicity principle is very useful in MD because one can obtain the thermodynamic average from the time evolution of the phase-space trajectory generated by MD. A microcanonical (NVE) ensemble is commonly used when the system of interest is isolated, so that no energy exchange occurs with the surroundings. Here, the number of particles N, the volume V, and the energy E are all kept constant. Each microstate has the same a priori probability. Therefore, the probability of a macrostate depends on the statistical weight Ω(N, V, E), which is the number of microstates of that particular macrostate. The entropy of an NVE ensemble is given by

S = k_B ln Ω(N, V, E),    (3)

where k_B is the Boltzmann constant. It is clear from Eq. (3) that the maximum entropy occurs at the maximum statistical weight. Such an equation is vital for MD because one can link the microstates of an ensemble to the thermodynamic states. Another important ensemble is the canonical (NVT) ensemble, in which the temperature T rather than the energy is conserved. In this case, energy transfer to the surroundings is permitted. The probability of occurrence of a macrostate P_i follows the Boltzmann distribution:

P_i = e^{−βE_i} / Z.    (4)

Here, E_i is the energy of the macrostate, β = 1/(k_B T) is the temperature parameter with Boltzmann constant k_B, and Z is the partition function in the form Z = Σ_i e^{−βE_i}, which normalizes the total probability of occurrence to 1. The entropy of an NVT ensemble is given by

S = −k_B Σ_i P_i ln P_i.    (5)

The average energy of an NVT ensemble, also known as the internal energy, is given by

U = ⟨E⟩ = Σ_i P_i E_i = −∂ ln Z / ∂β.    (6)
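The Boltzmann weights and partition function are easy to verify numerically. The following sketch, whose function name and three-level toy spectrum are illustrative rather than taken from the paper, computes P_i = e^{−βE_i}/Z for a discrete set of energies:

```python
import math

def boltzmann_probabilities(energies, kT):
    """Occupation probabilities P_i = exp(-E_i / kT) / Z for discrete states."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)          # partition function Z normalizes the total to 1
    return [w / z for w in weights]

# Three equally spaced levels at k_B T equal to the level spacing.
probs = boltzmann_probabilities([0.0, 1.0, 2.0], kT=1.0)
```

By construction the probabilities sum to one, and the lowest-energy state is the most probable, illustrating why Eq. (4) drives low-temperature systems toward their ground state.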
MD principles
The idea of MD simulation is deduction of the particle motion starting from the interatomic potential. According to Newtonian mechanics, once the potential U(r^N) is given, the time-varying force F_i(t) on each particle i can be evaluated as

F_i(t) = m r̈_i(t) = −∂U(r^N)/∂r_i.    (7)

Here, m is the particle mass, r^N denotes the positions of the N particles that define the potential, and r_i is the individual particle position. Equation (7) is the basic equation governing particle motion. With the atomic forces, one can integrate once to obtain the velocity, and then integrate the velocity to obtain the position. In Hamiltonian mechanics, an isolated system of particles with energy E can be expressed in terms of their positions r^N and momenta p^N:

H(r^N, p^N) = Σ_i p_i²/(2m) + U(r^N),    (8)

from which one can obtain the equations of motion as

ṙ_i = ∂H/∂p_i = p_i/m,  ṗ_i = −∂H/∂r_i = −∂U/∂r_i.    (9)

The second line of Eq. (9) is equivalent to Eq. (7). By integrating Eq. (9), one can also obtain the velocity and position of individual particles. Regardless of Newtonian or Hamiltonian formalism, the implementation of these integrations in MD involves differentiating the potential function numerically and plugging in the interatomic distances to obtain the interatomic forces. A number of algorithms are available for the numerical integration process; some of them are described below. In principle, the force on an atom must be calculated from its interaction with all other atoms, which is very time consuming. By employing a cutoff distance around each atom, one can limit the force evaluation to the neighboring atoms within the cutoff. A suitable cutoff distance should be set according to the interatomic potential, such that atomic interaction beyond the cutoff is negligible.
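As an illustration of a truncated pair force of the kind used with a cutoff distance, here is a hedged sketch based on the Lennard-Jones potential; the iron potentials discussed later in this review are far more elaborate, and the function name and reduced units are illustrative only.

```python
def lj_force(r, epsilon=1.0, sigma=1.0, r_cut=2.5):
    """Magnitude of the Lennard-Jones force -dU/dr for
    U(r) = 4 eps [(sigma/r)^12 - (sigma/r)^6], truncated at r_cut."""
    if r >= r_cut:
        return 0.0            # interactions beyond the cutoff are neglected
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r

# The force vanishes at the potential minimum r = 2^(1/6) sigma,
# is repulsive (positive) at shorter range, and is zero beyond r_cut.
r_min = 2.0 ** (1.0 / 6.0)
```

Truncating at r_cut is what makes the domain-decomposed parallel force evaluation mentioned in the introduction possible: atoms farther apart than the cutoff contribute nothing.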
Verlet algorithm
After the force equations are formulated, the velocity and position of each atom can be obtained by integrating the force equations. To allow numerical integration, the differential equations governing the motion have to be discretized in time steps Δt. Accordingly, the finite difference (FD) method is commonly used in MD calculations. One type of FD method is the Verlet algorithm, which is derived from the difference of two Taylor expansions of the position r:

r(t + Δt) = r(t) + v(t)Δt + ½a(t)Δt² + ...,
r(t − Δt) = r(t) − v(t)Δt + ½a(t)Δt² − ....    (10)

Adding them up gives

r(t + Δt) = 2r(t) − r(t − Δt) + a(t)Δt².    (11)

Therefore, one can obtain the particle position for the next time step from the acceleration a(t) = r̈(t) derived from the intermolecular forces, the current position, and the position for the previous time step. The advantage of using Eq. (11) to determine the position is that the atomic velocity v(t) = ṙ(t) is not needed. The velocity of the particles is obtained by a first-order central difference:

v(t) = [r(t + Δt) − r(t − Δt)] / (2Δt).    (12)

The velocity thus depends on both the previous and the next positions. The merits of the Verlet algorithm are its easy implementation and stability over large time steps.
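A minimal sketch of the position update in Eq. (11), applied to a harmonic oscillator with a = −x (exact solution x(t) = cos t); the function name is illustrative.

```python
import math

def verlet_step(r_prev, r, a, dt):
    """Position Verlet: r(t + dt) = 2 r(t) - r(t - dt) + a(t) dt^2."""
    return 2.0 * r - r_prev + a * dt * dt

# Integrate a = -x from t = 0 to t = 1 with 1000 steps; the previous
# position is seeded with the exact value x(-dt) = cos(-dt).
dt = 1e-3
x_prev, x = math.cos(-dt), 1.0
for _ in range(1000):
    x_prev, x = x, verlet_step(x_prev, x, -x, dt)
```

After 1000 steps the numerical position stays very close to cos(1.0), reflecting the O(Δt²) global accuracy of the scheme.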
Velocity Verlet algorithm
The velocity Verlet algorithm yields both the velocity and position at t + Δt. The position for the next time step is simply obtained from the Taylor expansion:

r(t + Δt) = r(t) + v(t)Δt + ½a(t)Δt².    (13)

The velocity at the next time step is evaluated as

v(t + Δt) = v(t) + ½[a(t) + a(t + Δt)]Δt.    (14)

It can be seen that the evaluation of the next velocity step involves the acceleration at the next time step, which is derived from the next position step.
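The two updates in Eqs. (13) and (14) can be sketched together in one step function (the name and the harmonic test case are illustrative):

```python
import math

def velocity_verlet_step(x, v, a_func, dt):
    """One velocity-Verlet step for an acceleration field a_func(x)."""
    a = a_func(x)
    x_new = x + v * dt + 0.5 * a * dt * dt        # Eq. (13)
    a_new = a_func(x_new)                         # acceleration at t + dt
    v_new = v + 0.5 * (a + a_new) * dt            # Eq. (14)
    return x_new, v_new

# Harmonic oscillator a = -x with x(0) = 1, v(0) = 0, integrated to t = 1.
x, v, dt = 1.0, 0.0, 1e-3
for _ in range(1000):
    x, v = velocity_verlet_step(x, v, lambda y: -y, dt)
```

Unlike the basic Verlet scheme, position and velocity are available at the same time step, which is convenient for computing the kinetic temperature.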
Leapfrog algorithm
In this method, the velocity at a half time step is evaluated, which is then used to obtain the position at the full time step. After the next position is evaluated, it is used to obtain the velocity for another half time step. This means that velocity "leaps" over position, and position "leaps" over velocity in turn. The formulae used are

v(t + Δt/2) = v(t − Δt/2) + a(t)Δt,
r(t + Δt) = r(t) + v(t + Δt/2)Δt.    (15)

The disadvantage of this algorithm is that the position and velocity cannot be evaluated at the same time step.
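The alternating half-step updates in Eq. (15) can be sketched as follows; the function name and the harmonic test case are illustrative.

```python
import math

def leapfrog_steps(x0, v_half, a_func, dt, n):
    """Leapfrog integration: velocities live at half steps, positions
    at full steps. v_half is v(t0 - dt/2)."""
    x, v = x0, v_half
    for _ in range(n):
        v = v + a_func(x) * dt    # advance velocity to t + dt/2
        x = x + v * dt            # advance position to t + dt
    return x, v

# Harmonic oscillator a = -x, exact solution x(t) = cos t; the initial
# half-step velocity is seeded with the exact v(-dt/2) = sin(dt/2).
dt = 1e-3
x, v = leapfrog_steps(1.0, math.sin(dt / 2.0), lambda y: -y, dt, 1000)
```

Note that the returned velocity refers to a half step before the returned position, which is exactly the offset the text describes as the method's drawback.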
Predictor-corrector method
This approach is a three-step process. In the first step, the velocity and position for the next time step are predicted; in this formulation the particle interaction is modeled as that of harmonic oscillators with angular frequency ω. In the second step, the acceleration at the next time step is evaluated from the predicted velocity and position. In the final step, the initially predicted velocity and position are corrected using the evaluated acceleration.
Gear's predictor-corrector method
This is an improved version of the original predictor-corrector method, obtained by employing a fifth-order Taylor expansion. The particle position for the next time step is therefore predicted in terms of five derivatives of the position. The interatomic forces are then evaluated using the predicted positions; the pair force takes the form F_ij(t) = f(r_ij(t)) r̂_ij(t), where r_ij(t) is the interatomic separation and r̂_ij(t) is the unit vector along it. From the evaluated forces, one finds the difference

Δr̈ = r̈ − r̈^P

between the evaluated and predicted accelerations, where quantities with superscript P are the predicted values for the next time step. Each predicted derivative is then corrected by a term proportional to Δr̈, with coefficients α_n that are fine-tuned to ensure numerical stability. The α_n are determined by the order of the differential equations and the order of the predicted Taylor expansion.
Thermostats
Simulation of NVT ensembles requires the application of a thermostat that maintains the ensemble at constant temperature. There are a number of implementations of such thermostats.
Andersen thermostat
The coupling of the Andersen thermostat to an NVT ensemble is achieved by introducing stochastic collision forces that act occasionally on randomly selected particles, so that the forces on some atoms are altered for just a short time instant. The frequency ν of stochastic collisions represents the coupling strength to the thermostat; the waiting time between collisions follows the Poisson-process distribution

P(t; ν) = ν e^{−νt},

where P(t; ν)dt is the probability that the next collision occurs in the interval [t, t + dt]. The motion integration is divided into three steps. First, we initialize the positions r^N and momenta p^N of the N particles, and perform the motion integration up to the instant of the first stochastic collision. Second, some particles are randomly chosen to undergo a collision with the thermostat. Third, the momenta of those particles after the collision are drawn from the Boltzmann distribution at the desired temperature T. All other particles are unaffected.
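The exponential waiting times between collisions can be generated by inverse-transform sampling; a small sketch with illustrative names follows.

```python
import math
import random

def next_collision_time(nu, rng):
    """Sample the waiting time to the next stochastic collision,
    exponentially distributed with rate nu: P(t; nu) = nu * exp(-nu t)."""
    u = rng.random()                     # uniform in [0, 1)
    return -math.log(1.0 - u) / nu

# Empirical check: the mean waiting time should approach 1 / nu.
rng = random.Random(0)
times = [next_collision_time(2.0, rng) for _ in range(100_000)]
mean_wait = sum(times) / len(times)
```

A larger collision frequency ν couples the system more tightly to the heat bath at the cost of disturbing the dynamical trajectories more often.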
Nosé-Hoover thermostat
It is an extension of the conventional Lagrangian obtained by introducing one additional coordinate s:

L = Σ_i (m_i/2) s² ṙ_i² − U(r^N) + (Q/2) ṡ² − g k_B T ln s.    (25)

Here, Q is the effective mass associated with s, and g is the number of degrees of freedom of the system. The momenta conjugate to r_i and s are

p_i = ∂L/∂ṙ_i = m_i s² ṙ_i,  p_s = ∂L/∂ṡ = Q ṡ.

The Hamiltonian of the system can be expressed as

H = Σ_i p_i²/(2 m_i s²) + U(r^N) + p_s²/(2Q) + g k_B T ln s.    (26)

The Hamiltonian in Eq. (26) leads to the following equations of motion:

ṙ_i = p_i/(m_i s²),
ṗ_i = −∂U/∂r_i,
ṡ = p_s/Q,
ṗ_s = Σ_i p_i²/(m_i s³) − g k_B T/s.    (27)

The extended microcanonical ensemble has 6N + 2 degrees of freedom. If we set p′ = p/s and r′ = r as the real variables, and choose g = 3N + 1, then the ensemble average of an observable A in the extended system satisfies

⟨A(p/s, r)⟩_extended = ⟨A(p′, r′)⟩_NVT.

This means that the extended system, expressed in the real variables, reduces to a canonical ensemble. Also, by letting s′ = s and t′ = t/s as further real variables, the equations of motion in Eq. (27) can be transformed to real time.
Velocity scaling
This is a very straightforward method: the particle velocities v_i are scaled by λ, where

λ = √(T_0 / T).

Here, T_0 is the target temperature and T the instantaneous temperature. The disadvantage of this approach is that the result does not correspond to a canonical ensemble: the momentum space generated by this method contains discontinuities.
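A one-dimensional sketch of the scaling (function name and the unit conventions k_B = 1 are illustrative):

```python
import math

def rescale_velocities(vels, masses, t_target, kb=1.0):
    """Scale all velocities by lambda = sqrt(T0 / T), where the kinetic
    temperature of 1-d particles is T = sum(m v^2) / (N kB)."""
    n = len(vels)
    t_current = sum(m * v * v for m, v in zip(masses, vels)) / (n * kb)
    lam = math.sqrt(t_target / t_current)
    return [lam * v for v in vels]

# Three unit-mass particles rescaled to a kinetic temperature of 1.
vels = rescale_velocities([1.0, -2.0, 0.5], [1.0, 1.0, 1.0], t_target=1.0)
```

The rescaling hits the target temperature exactly in one step while preserving each velocity's direction, which is precisely why it distorts the canonical velocity distribution.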
Berendsen thermostat
Unlike the simple velocity scaling approach, which modifies the velocity in one step, the Berendsen thermostat applies a slight scaling at each time step. The rate of temperature change is expressed by the differential equation

dT/dt = (T_0 − T)/τ_T,

where τ_T characterizes the coupling strength between the system and the thermostat. The change in temperature over one time step Δt is then

ΔT = (Δt/τ_T)(T_0 − T).

The velocity v_i of each particle is scaled by λ, where

λ = [1 + (Δt/τ_T)(T_0/T − 1)]^{1/2}.

Here, τ_T is the time constant characterizing the rate of approach to the target temperature T_0.
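The per-step scaling factor can be sketched directly from the formula above (function name illustrative):

```python
import math

def berendsen_lambda(t_current, t_target, dt, tau):
    """Berendsen per-step velocity scaling factor:
    lambda = sqrt(1 + (dt / tau) * (T0 / T - 1))."""
    return math.sqrt(1.0 + (dt / tau) * (t_target / t_current - 1.0))
```

When the system is already at the target temperature the factor is exactly one; when the system is too cold the factor exceeds one (heating), and when too hot it falls below one (cooling), with τ controlling how gently the correction is applied.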
Langevin thermostat
A stochastic approach to maintaining temperature is the Langevin thermostat, in which a time-varying random force ξ(t) with Gaussian distribution is added to the equation of damped motion:

m r̈_i = F_i − γ m ṙ_i + ξ(t).

Here, γ is the damping constant. The random force satisfies the delta-correlation

⟨ξ(t) ξ(t′)⟩ = Γ δ(t − t′),

with Γ a constant characterizing the strength of the random force. The idea of this thermostat is to choose Γ so that the target temperature is achieved; the fluctuation-dissipation relation Γ = 2γ m k_B T provides this choice, at which point the damping force balances the random force on average.
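A minimal numerical sketch of the Langevin equation for a free particle, using a simple Euler-Maruyama step; the function name and the reduced units (m = k_B = 1) are illustrative assumptions, not from the review.

```python
import math
import random

def langevin_step(v, force, m, gamma, kT, dt, rng):
    """One Euler-Maruyama step of m dv = (F - gamma m v) dt + dW, with the
    random-impulse strength fixed by fluctuation-dissipation:
    Gamma = 2 gamma m kB T."""
    sigma = math.sqrt(2.0 * gamma * m * kT * dt)   # std of the random impulse
    return v + (force - gamma * m * v) * dt / m + sigma * rng.gauss(0.0, 1.0) / m

# Free particle (F = 0): the velocity should equilibrate to <v^2> = kT/m = 1.
rng = random.Random(1)
v, acc, n = 0.0, 0.0, 0
for i in range(200_000):
    v = langevin_step(v, 0.0, 1.0, gamma=1.0, kT=1.0, dt=0.01, rng=rng)
    if i >= 50_000:            # discard the equilibration transient
        acc += v * v
        n += 1
mean_v2 = acc / n
```

The time-averaged ⟨v²⟩ settles near k_B T/m, showing the balance between the damping term and the random force that the text describes.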
Periodic boundary conditions
A simulation box confines the region where particles can be located. Yet the simulation results generated from such a box can fail to represent the bulk, because of the surface effect at the boundary planes of the box. An approach to correct this problem is the introduction of identical copies of the simulation box contiguous with the original one. Motion integration then has to incorporate the wraparound effect when a particle leaves the original simulation box. For example, the x-coordinate of a particle is bounded by −L_x/2 ≤ r_i^x ≤ L_x/2, where L_x is the length of the simulation box. If the particle position r_i^x ≥ L_x/2, the position is replaced by r_i^x − L_x; similarly, if r_i^x ≤ −L_x/2, it is replaced by r_i^x + L_x. The same treatment has to be applied to the interatomic separations when the interatomic potential is updated.
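The wraparound rule and its counterpart for interatomic separations (the minimum-image convention) can be sketched for one coordinate as follows; function names are illustrative.

```python
def wrap(x, box):
    """Wrap a coordinate back into [-box/2, box/2) when a particle
    leaves the simulation box."""
    while x >= box / 2.0:
        x -= box
    while x < -box / 2.0:
        x += box
    return x

def minimum_image(dx, box):
    """Nearest-image separation used when evaluating pair interactions
    under periodic boundary conditions."""
    return dx - box * round(dx / box)
```

With a cutoff shorter than half the box length, the minimum-image separation is the only copy of each pair that can contribute to the force, which keeps the periodic images consistent with the truncated potential.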
Techniques of Applying MD
In addition to the MD formalism, many techniques have been developed to extend its applicability to a number of refined situations.
Spin-lattice dynamics
Spin-lattice dynamics (SLD) is a modified approach of MD that incorporates both spin and lattice degrees of freedom in a single Hamiltonian. 13,14 In this formalism, the spin and lattice degrees of freedom are coupled through the exchange integral term, so that the lattice degrees of freedom influence the behavior of the spin degrees of freedom, and vice versa. The SLD formalism is thus suited to spin-carrying materials, such as iron.
The corresponding Hamiltonian can be written as

H = Σ_i p_i²/(2m_i) + U({R_i}) − Σ_{i<j} j_ij(R_ij) e_i · e_j − gμ_B S Σ_i H_ext · e_i,    (40)

which has four components: the lattice kinetic energy, the lattice potential energy, the magnetic energy resulting from spin-lattice coupling, and the magnetic energy due to an external field H_ext, respectively. In Eq. (40), m_i is the mass of atom i, {p_i} is the momentum space, {R_i} is the lattice space, {e_i} is the classical spin space of unit vectors, and S is the spin vector length. Also, j_ij(R_ij) represents the spin-lattice coupling, which is the product of the exchange integral J_ij(R_ij) between spins i and j and the norms of the spins, such that j_ij(R_ij) = S_i S_j J_ij(R_ij), and U({R_i}) is the total lattice potential. Physically, e_i · e_j signifies the spin-spin correlation. The constant g is the gyromagnetic ratio, and μ_B is the Bohr magneton. With the definition used here, the direction of the magnetic moment is opposite to that of the classical spin, such that M_i = −gμ_B S_i. Note that this form of Hamiltonian is isotropic. The equations of motion for the momentum, lattice and spin components follow from the time derivatives of Eq. (40); in the spin equation of motion (Eq. (43)), the effective magnetic field H_i^eff is obtained from the derivative of the Hamiltonian with respect to the spin e_i. The equations of motion are then implemented using the conventional MD approach, except that the spin motion has to be evaluated separately. An application of SLD is exhibited in Ref. 15, which models the behavior of iron thin films. With the SLD formalism, the thin-film magnetization decreases with temperature, with roughly the same temperature dependence as the bulk demonstrates. The temperature dependence is also found to vary with the film thickness, and the magnetic transition temperature likewise decreases with film thickness. The surface magnetization also differs from that inside the bulk. It is further observed that the introduction of spin-lattice coupling and spin-spin correlation can result in a near-surface relaxation strain that varies with temperature.
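The spin update differs from an ordinary position update because each e_i precesses about its effective field while keeping unit length. A minimal sketch of one precession step (our own illustration; the prefactor and sign convention of H_eff vary between SLD implementations) rotates a spin about the field direction using Rodrigues' formula:

```python
import math

def precess_spin(e, H, dt):
    """Rotate the unit spin e about the effective field H by the angle
    |H| * dt (Rodrigues' rotation formula).  This preserves |e| = 1
    exactly, which a naive Euler update of de/dt = H x e would not."""
    h = math.sqrt(sum(c * c for c in H))
    if h == 0.0:
        return list(e)
    axis = [c / h for c in H]
    theta = h * dt
    cosT, sinT = math.cos(theta), math.sin(theta)
    dot = sum(a * b for a, b in zip(axis, e))
    cross = [axis[1] * e[2] - axis[2] * e[1],
             axis[2] * e[0] - axis[0] * e[2],
             axis[0] * e[1] - axis[1] * e[0]]
    return [e[i] * cosT + cross[i] * sinT + axis[i] * dot * (1.0 - cosT)
            for i in range(3)]
```

Norm conservation of each spin is the property that makes this rotation-based update preferable to a direct finite-difference step.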
Thermodynamic integration
Thermodynamic integration (TI) is a computational approach to evaluating the free energy difference between two states. The description below mainly follows Ref. 8. The Helmholtz free energy is given by F = −k_B T ln Q, where Q is the partition function, Q = (1/(Λ^{3N} N!)) ∫ dr^N exp(−U(r^N)/k_B T). Here, Λ = √(h²/(2πmk_B T)) is the thermal de Broglie wavelength, with h being the Planck constant and m being the particle mass. Unlike mechanical quantities, the free energy is not related to a canonical average over phase space but to the phase-space volume accessible to the system. Consequently, the Helmholtz free energy change cannot be measured directly from real or computer experiments, because it depends on the partition function, which cannot be evaluated numerically.
The idea of TI is to couple two thermodynamic states with reference Hamiltonian H_I and target Hamiltonian H_II by a switching parameter λ. An intermediate state between H_I and H_II is given by the thermodynamic path H(λ) = (1 − λ)H_I + λH_II (Eq. (47)). The free energy difference ΔF between the two thermodynamic states, characterized by λ = 0 and λ = 1, is ΔF = ∫₀¹ ⟨∂H(λ)/∂λ⟩_λ dλ. The angle brackets ⟨· · ·⟩_λ represent the ensemble average taken at fixed λ. The linear path in Eq. (47) is a convenient choice because ∂²F/∂λ² ≤ 0 along it, so ∂F/∂λ decreases monotonically with increasing λ. The Frenkel-Ladd method 16 of TI is often applied to a solid phase governed by a hard-sphere potential. The idea is the construction of a thermodynamic path from the system of interest to a noninteracting Einstein solid having the same structure as the required system; a noninteracting Einstein solid consists of noninteracting atoms coupled to their lattice sites by harmonic springs. Since the springs cannot be switched on while the hard-sphere potential is simultaneously switched off, the thermodynamic path is modified to the form H(λ) = H_0 + λ Σ_{i=1}^{N} (r_i − r_{i0})². Here, H_0 is the unperturbed Hamiltonian, N is the number of atoms, r_{i0} is the lattice position of atom i and r_i is the instantaneous position of atom i. The free energy difference between the Einstein solid and the system of interest then follows from the integral above, and the system reduces to an Einstein solid for large values of the maximum spring constant λ_max. The Frenkel-Ladd method was later modified by using an "ideal Einstein molecule", which is a noninteracting Einstein solid except that one atom is not coupled to a harmonic spring. 17 Let the Helmholtz free energy of the ideal Einstein molecule be A_ideal, and fix the position of the atom that is not coupled to a harmonic spring (called atom 1). In this situation, the thermodynamic path connecting the hard-sphere potential and the set of harmonic springs is H(λ) = H_0 + λ Σ_{i≠1} (r_i − r_{i0})², where H_0 is the hard-sphere potential, r_1 is the position of the fixed atom, r_i is the position of atom i and r_{i0} is the corresponding lattice position.
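Once the ensemble averages ⟨∂H/∂λ⟩ have been sampled at a grid of λ values (each from a separate equilibrium simulation), the integral for ΔF is evaluated numerically. A minimal sketch (our own illustration) using the trapezoidal rule:

```python
def ti_free_energy(lambdas, dH_dlambda):
    """Trapezoidal estimate of  Delta F = int_0^1 <dH/dlambda>_lambda dlambda,
    given ensemble averages <dH/dlambda> sampled at increasing lambda values."""
    dF = 0.0
    for k in range(len(lambdas) - 1):
        dF += 0.5 * (dH_dlambda[k] + dH_dlambda[k + 1]) \
              * (lambdas[k + 1] - lambdas[k])
    return dF
```

For the linear path, ∂H/∂λ = H_II − H_I, so each grid point requires only the average energy difference between the target and reference Hamiltonians at that λ; denser grids or Gaussian quadrature are used where the integrand varies strongly.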
The Helmholtz free energy due to the harmonic springs can be evaluated analytically, whereas the Helmholtz free energy due to the hard-sphere potential is obtained from an ensemble average involving u_0(r_ij), the hard-sphere potential depending on the interatomic separation r_ij. The free energy of the solid is then found by combining these contributions. By fixing one atom when finding the free energy difference, the whole lattice need not be fixed, as is necessary in the Frenkel-Ladd method; the implementation of the TI approach is thus more straightforward.
The method of TI has been applied to study vacancy formation and migration in BCC iron, 18,19 whose Hamiltonian is modeled by the SLD formalism. 13,14 These studies use the SLD formalism together with TI to determine the free energy change in the course of vacancy formation and migration. By using both magnetic and nonmagnetic potentials, it is found that a vacancy leads to the scattering of magnons, which increases the total free energy. It is also noted that the magnon state determines the exchange interaction and hence the interatomic force, and that the temperature dependence of the magnon distribution is crucial for the lattice properties as well. The phonon-magnon interaction lowers the energy barrier and increases the entropy of vacancy migration and formation.
Interatomic Potentials
Iron is commonly found in pure metal form or in alloys, giving rise to various industrial applications. Accordingly, computational tools have to be devised that can return reliable information about iron and its compounds. Ideally, metallic properties would be determined by ab initio computations, which evaluate the interactions among atoms at the electronic level; however, such calculations are highly computationally demanding and time consuming. Instead of ab initio computations, MD is an effective simulation tool in materials science and engineering, given a potential that describes the atomic interactions pertinently. Interatomic potentials thus play an important role in determining the time evolution of defects. In order to represent iron atoms in various states, a number of interatomic potentials have been devised, and they have subsequently been modified to suit practical situations.
In fitting a potential empirically, many physical quantities are derived from it as a verification against experimental values. The remainder of this section reviews several potentials commonly used in modeling iron or iron alloys. Their adapted forms, if any, are also discussed to illustrate the improvements in describing more complicated physical phenomena. The verifications against experimental results made by the developers of the potentials are also mentioned.
Pure iron potentials
The embedded-atom method (EAM) 20,21 is an approach based on density functional theory (DFT), used to model the ground state properties of FCC metals with impurities. This method is an improvement over the previously developed pair potentials, 22 which require an accurate volume-dependent energy to describe the elastic properties and can be ambiguous in situations involving surface defects and cracks. The initial use of EAM was to model the role of hydrogen atoms in steel, which leads to brittle fracture and cracks. It considers the pair potential plus the energy required to "embed" an atom in the electronic density constituted by the host lattice. The energy functional of a system of atoms is then expressed as

E_tot = Σ_i F_i(ρ_i) + (1/2) Σ_{i≠j} V(R_ij).    (57)

In Eq. (57), F_i is the embedding energy of an atom, determined by the local electronic density ρ_i(R_i) at position R_i but without atom i, and V(R_ij) is the short-range electrostatic pair potential, repulsive in nature, due to the neighbors of atom i. The forms of F_i and V(R_ij) are based on experimental results for the atoms concerned, such as lattice constants, elastic constants and the migration energy of impurities. Usually, a monotonically decreasing form is used for ρ_i(R_i). Since the electronic density is clearly defined without reference to the volume, the ambiguity arising from conventional pair potentials is resolved.
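A direct transcription of this energy functional can be sketched as follows; the functional forms passed in (pair_V, rho_a, embed_F) are arbitrary placeholders for illustration, not a fitted iron potential, and cutoffs and neighbor lists are omitted for brevity:

```python
import math

def eam_energy(positions, pair_V, rho_a, embed_F):
    """EAM total energy, Eq. (57):
    E = sum_i F(rho_i) + 1/2 sum_{i != j} V(R_ij),
    with the host density rho_i = sum_{j != i} rho_a(R_ij)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    E = 0.0
    for i, ri in enumerate(positions):
        rho_i = 0.0
        for j, rj in enumerate(positions):
            if i == j:
                continue
            R = dist(ri, rj)
            rho_i += rho_a(R)     # density at atom i from neighbor j
            E += 0.5 * pair_V(R)  # half to avoid double counting pairs
        E += embed_F(rho_i)
    return E
```

In a production code the double loop is replaced by a cutoff-limited neighbor list, and the separations are taken through the minimum-image convention when periodic boundaries are used.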
At the time the EAM was developed, the method was not guaranteed to be a universal form for modeling transition metals, especially BCC metals. Accordingly, many adaptations have been made to extend the use of EAM, and many implementations for various transition metals can be found in the literature. This review highlights those relevant to iron. For example, Johnson and Oh 23,24 have extended the application of EAM to the determination of short-range potentials for alloys and BCC metals including iron, respectively. The potentials are fitted to the first and second nearest neighbors for both alloys and pure BCC metals. The fitting proceeds by transforming an EAM potential to a normalized form, such that the first derivative of the embedding function with respect to the electronic density is zero, i.e., dF_i/dρ_i = 0. With this normalization, the potential becomes an effective pair potential that is only slightly dependent on the embedding function. The determination of the potential thus becomes conceptually easier, while the characteristics of an EAM are maintained. Wang and Broecker 25 have extended the interatomic distance of the pair potential to the fifth nearest neighbors by incorporating a Gaussian function as a weighting factor and by using an oscillatory model in terms of sinusoidal functions for the local electronic density. These modifications suggest that the anomaly of the phonon spectra of BCC transition metals can be better reproduced.
Based on the EAM formalism, Mendelev and coworkers 26 have fitted an EAM potential for crystalline and liquid iron. In fact, this potential forms the basis of some later potentials for Fe alloys, to be discussed below. They established three potentials, fitted respectively to asymmetric crystal defect data by considering atomic forces of liquid iron obtained by ab initio calculations, to the experimental structure factor of liquid iron at the melting point, and to symmetric perfect crystal data by the EAM approach. In this way, states with small interatomic separation can also be accommodated, and the solid-liquid phase transition of iron can be described more accurately.
Later, many interatomic potentials of iron evolved from the work by Mendelev et al. 26 Chamati et al. 27 followed the EAM approach to develop a potential which is more suitable for BCC iron (α-Fe) as well as FCC iron (γ-Fe). The strength of this potential is that it can reproduce various BCC and FCC parameters, such as the thermal expansion coefficient, phonon dispersion relations, mean-square displacements and surface relaxations, without fitting to the corresponding experimental results. Other examples of EAM potentials for iron in various states can be found in Refs. 28-30. The modified embedded-atom method (MEAM) [31][32][33] has been developed for computational simplicity and hence wide applicability in MD simulations. The development responded to the high demand for potentials suitable for modeling semiconductor physics, in which simple pair potentials cannot reproduce the elastic constants of covalent structures accurately. In fact, MEAM has also found value in modeling metallic structures.
The merit of MEAM lies in the addition of the angular dependence of bond-bending forces to the background electronic density, as opposed to the linear superposition of radially dependent atomic densities in the original EAM approach. The main modification concerns the host electronic density, which includes angular correction terms in addition to the simple pair potential form. The general expression for the total energy is similar to that of EAM (Eq. (58)), with the definitions of the symbols following those of EAM; the change lies in the evaluation of F and V(R_ij). The embedding energy F takes the form

F(ρ) = E_0 (ρ/ρ̄) ln(ρ/ρ̄),    (59)

where E_0 is the sublimation energy, and ρ̄ = n_1 ρ_a(r_1) is the background electronic density due to the n_1 first-nearest-neighbor atoms of the reference structure, a monatomic homogeneous solid, with each neighbor at the equilibrium first-neighbor distance r_1. The form of Eq. (59) corresponds to the logarithmic relationship between bond length and number of bonds. The atomic electron density ρ_a(r_1) decays exponentially with r_1, with a decay constant that depends on the atomic density and has units of distance⁻¹. In Eq. (58), the local electronic density ρ_i experienced by atom i acquires its angular dependence through correction terms involving cos θ_jik (Eq. (61)), where θ_jik is the included angle formed by atoms j, i and k, and a is a fitting constant determined by shear modulus data. The pair potential V(R_ij) in Eq. (58) is given by

Φ(r) = Σ_s (n_s/n_1) V(a_s r).    (62)

The notations in Eq. (62) are defined as follows: r is the first-neighbor distance, n_s is the number of s-nearest-neighbor atoms, a_s is the ratio of the s-neighbor distance to the first-neighbor distance, and r_c is the cutoff distance.
In the early formalism, only the first nearest neighbors were effective in calculating the potential, on the basis of DFT computations on BCC tungsten (see the Appendix of Ref. 33). The MEAM has been further refined to incorporate atoms in the second nearest neighbors, 34,35 known as 2NN-MEAM.
The MEAM approach has also been applied by many workers to derive further alloy potentials fitted to more abundant experimental or ab initio findings. Here we introduce some applications of MEAM to various situations. Yuan et al. 36 set up a new scheme for determining the embedding energy of BCC transition metals by slightly modifying the embedding energy in Eq. (59) with a fitting parameter δ, such that F(ρ) = E_0 (ρ/ρ̄) ln(ρ/ρ̄ − δ). With δ, the MEAM potential can be fitted even to nonbulk systems such as surfaces. The potential returns the crystal elastic stiffness, the vacancy formation energy and the low-index surface energies, all close to experimental findings. Jelinek et al. 37 have performed a large-scale formulation of MEAM potentials for Al, Si, Mg, Cu and Fe alloys, which results in improved values of the generalized stacking fault energy curves. The resulting potentials have been validated by comparison with corresponding DFT results, together with a number of properties such as the equilibrium volume, elastic constants and defect formation energies. More examples of MEAM potentials can be found in Ref. 38, and a detailed review of MEAM potentials can be found in Ref. 39.
An approach that evolved from EAM to model the potentials of transition metals is the Finnis-Sinclair (FS) potential. 40 It aims at rectifying the drawbacks of pure pair potentials in modeling metallic defects such as dislocations and grain boundaries, which stem from the unsatisfactory treatment of elastic constants, Cauchy pressure and vacancy formation energy. The FS potential is an empirical one that solves the problem of pair potentials by considering the metallic cohesion of an atom due to its neighbors according to the tight-binding theory of metals, 41 in the second-moment approximation, together with a repulsive pair potential. Its initial form focuses on the BCC structure. The general form of the FS potential is

U = U_P + U_N = (1/2) Σ_{i≠j} V_FS(R_ij) − A Σ_i √ρ_i,  with  ρ_i = Σ_j φ(R_ij).    (63)

In Eq. (63), the total potential energy U is the sum of the repulsive pair energy U_P and the N-body cohesive energy U_N. U_P is the sum of the pair potential V_FS(R_ij), dependent on the interatomic separation R_ij. The cohesive energy is the sum of the square roots of the electronic densities ρ_i, multiplied by a positive proportionality constant A, and the electronic density of each atom is the sum of the cohesion functions φ(R_ij) due to the neighboring atoms. It can be observed that the FS potential simply uses a predetermined square-root form of the embedding energy, whereas the EAM form requires the embedding energy to be ascertained by fitting.
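Because the FS potential is the EAM form with the embedding function fixed to F(ρ) = −A√ρ, its energy evaluation can be sketched directly (placeholder functional forms for illustration, not the fitted iron parameterization; cutoffs omitted):

```python
import math

def fs_energy(positions, pair_V, phi, A):
    """Finnis-Sinclair total energy, Eq. (63):
    U = 1/2 sum_{i != j} V(R_ij) - A * sum_i sqrt(rho_i),
    with rho_i = sum_{j != i} phi(R_ij)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    U = 0.0
    for i, ri in enumerate(positions):
        rho_i = 0.0
        for j, rj in enumerate(positions):
            if i == j:
                continue
            R = dist(ri, rj)
            rho_i += phi(R)       # cohesion contribution of neighbor j
            U += 0.5 * pair_V(R)  # half to avoid double counting
        U -= A * math.sqrt(rho_i)
    return U
```

The only difference from the general EAM sketch is that the embedding function is hard-wired to the square root, which is exactly the point made in the text.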
The parameterized pair potential and cohesive function of the FS potential for iron take polynomial forms with cutoffs, where c and d are the cutoff distances of V and φ, respectively, lying between the second and third nearest neighbors of a BCC structure, and B is a value chosen such that φ attains its maximum within the first-nearest-neighbor distance. Soon after the FS potential appeared, Ackland and Thetford 42 improved it by introducing a core term into the pair potential V(R_ij), which adds the short-range repulsion lacking in the original FS potential; with this correction, atoms at short separation do not fall together. In the recast V(R_ij), V_FS(R_ij) is the pair potential of the original FS formalism, b_0 is the nearest-neighbor distance, and B and α are fitting parameters. After this modification of the pair potential term, the pressure-volume relationship of many BCC transition metals becomes more physical. In addition, the altered potential has been verified by checking the formation energy of an interstitial against those of other theoretical studies. The original FS potential considers up to the second nearest neighbors, and hence can be regarded as short-ranged. The expressions of the potential are fitted against material parameters such as the lattice constant, elastic constants, bulk modulus and Cauchy pressure of various BCC structures. The initial development of the FS potential was verified to give a stable BCC structure, yet its applicability to FCC and HCP structures is questionable due to its short-ranged fitting strategy.
The FS potential has been adapted since its release to extend its range of applications. For example, Dai et al. 43 extended the FS potential to FCC transition metals, and corrected the BCC potential to follow the Rose equation of state of metals 44 with increased accuracy. This is achieved by replacing the original quartic repulsive pair potential with a sextic expression, and the original quadratic cohesion function with a quartic one, where c and d are the cutoff distances of V and φ, respectively, lying between the second and third nearest neighbors of the BCC or FCC structure under investigation. The extended format reduces to the original FS potential when c_3, c_4 and B are set to zero. This format shows good agreement with experimental lattice constants, cohesive energies, elastic constants and vacancy formation energies. With this extension, the lattice beyond the equilibrium state can be better represented. Apart from the successful extension to FCC structures, the pressure-volume relationship of BCC structures derived from the extended FS potential becomes more satisfactory when checked against the equation of state. The extended FS potential even finds good agreement with various FCC-BCC cross potentials determined by ab initio calculations.
Based on MEAM, which considers the angular dependence of bonds, Müller et al. 45 formulated a potential that considers the analytic bond order explicitly, such that the inability of conventional MD to describe electronic degrees of freedom can be remedied. This potential has been successful in modeling the α-γ phase transition and δ-iron below the melting point.
Magnetic iron potential
Though the EAM and MEAM formalisms have been successful in producing a number of potentials of practical value in modeling the irradiation damage of steels, the magnetic effects of iron are not considered explicitly in these potentials. In order to fill this research gap, potentials specific to magnetic iron have been developed, with further modifications after the initial version. Dudarev and Derlet 46 made a first attempt at an α-iron potential that can describe both the magnetic and nonmagnetic states of iron. This potential (named the DD potential) applies the EAM formalism with an in-depth evaluation of the local electronic density based on the Ginzburg-Landau (GL) model and the Stoner model, up to the second-moment description of the electronic density of states. The GL model describes the second-order phase transition of ferromagnetic iron, whereas the Stoner model describes the correlation effect leading to the band magnetism that characterizes the ground state of ferromagnetic 3d transition metals. The symmetry-broken solution of the GL model is found to link magnetism and interatomic forces. Combining these models yields the embedding function for both the magnetic and nonmagnetic states of Fe (Eq. (67)). The first term on the right-hand side of Eq. (67) is the FS embedding potential corresponding to the cohesive energy, with A being a constant to be determined. The second term is the magnetic potential term due to the Stoner and GL models; in this term, B and a further coefficient are constants, Θ is the Heaviside unit step function, and ρ_c is the critical electronic density beyond which the magnetic effect vanishes. However, there is a problem with this embedding potential: its derivatives are discontinuous at ρ = ρ_c.
In view of this shortcoming, Eq. (67) was further modified. Parameterization takes place on both F(ρ_i) and V(R_ij) of Eq. (57), with ρ_i written in terms of pairwise functions f(R_ij); both f(R_ij) and V(R_ij) are then expressed as cubic knot (spline) functions. In Eqs. (69) and (70), f_n, V_n, r_n^f and r_n^V are determined by fitting, given N_f and N_V terms in the respective knot functions. The initial two versions of the DD potential were fitted to bulk BCC magnetic properties, the vacancy formation energy, isotropic BCC bulk properties of nonmagnetic iron and interstitial energies. The interested reader is referred to Ref. 47 for the use of the DD potential in large-scale MD simulations containing around one million atoms, which helps validate the applicability of the potential in evaluating the magnetic moments around an interstitial defect and in describing the migration of self-interstitial and multiple-interstitial configurations in iron. Since this version, modifications of the DD potential have been achieved by fitting it to further ab initio data, such as the third-order elastic constants and the conditions around the 1/2⟨111⟩ screw dislocation. 48,49
Potentials for iron with impurities
Practical interatomic potentials should be able to reflect interactions with impurities, a typical consideration when modeling the materials used for nuclear fusion reactors. For example, appropriate potentials for the Fe-C alloy are necessary to model steel, whose defects have to be described by a suitable interatomic potential.
Ackland et al. 50 developed a potential (known as the AMS potential) for phosphorus in α-iron, through comprehensive analysis of ab initio and experimental results on iron defects, as a tool for investigating the irradiation damage caused by phosphorus, which shifts the ductile-to-brittle transition temperature of steels. For such a potential to be successful, the P-P, Fe-Fe and Fe-P interactions have to be considered collectively in the evaluation of the pair potentials and the pair electronic densities. The formulation relies on the pure Fe potential of Ref. 26 in view of its applicability to point defect interactions. In addition, large-scale ab initio computations were performed as the basis of the fitting process, covering Fe monolayers and surfaces, substitutional impurities, vacancies and the liquid state. Only those values matching the experimental results were included in the fitting process. With this potential, vacancy and interstitial mechanisms of P atoms in an Fe matrix can be realized in MD simulations.
As another example, if the effect of helium gas on fusion reactor materials is considered, a potential should be formulated for the Fe-He system to better model the irradiation damage that causes void swelling, helium bubbles and blistering. These mechanical defects are often investigated by MD simulations, so a proper interatomic potential is required to model Fe-He materials, and many attempts have been made to develop one. For instance, Seletskaia et al. 51 formulated a Fe-He potential based on electronic structure calculations. It consists of a repulsive pair potential and a three-body embedding term, and was fitted to ab initio computations of the formation and relaxation energies of He defects and He clusters, together with the AMS potential. 50 The detailed implementation of the three-body term is elaborated in Ref. 52; it also paid close attention to the interstitial properties and applied a three-body potential term to improve the fitting. Juslin and Nordlund 53 later developed a Fe-He pair potential based on the AMS potential 50 to model helium atoms in iron matrices, which was found to be sufficient to reproduce simple defects of iron due to helium irradiation. Since experimental data for Fe-He clusters were lacking at the time this potential was developed, its effectiveness in modeling the migration barriers of helium in iron was established by verifying it against the DD potential, the FS potential and DFT computations. Later, another Fe-He potential, based on the multiple lattice inversion technique, was proposed to solve the problem of fitting a potential that requires multiple parameters. 54 Its applicability was established after it was used to reproduce the elastic constants, binding energies and migration barriers of Fe-He crystals obtained from other similar potentials. Gao et al.
55 have developed a Fe-He potential based on the s-band model to describe the many-body interaction, together with the embedding form and a repulsive pair potential. To verify it, the binding energy required for an additional He atom to approach a He cluster, together with the migration energy of a He cluster in α-iron, were reproduced in fairly good agreement with the ab initio results. Another pair potential for Fe-He materials has been formulated not only by adjusting the method of Ref. 56, but also by fitting the magnetic potential formalism based on that used in Refs. 46-49. The resulting values of the formation and migration energies of He atoms in Fe agree with the ab initio results.
With an abundant choice of Fe-He potentials, the applicability of each potential to a specific physical situation has to be examined carefully. In order to ease this examination process, an interatomic potential design map has been developed for Fe-He potentials, 57 from which one can assess the uncertainties of using a given potential to model a particular defect. It is expected that design maps of this type can be extended to other types of interatomic potentials, thereby benefiting the scientific community.
A suitable potential for the Fe-Cu binary alloy is important in modeling Cu precipitates, which can lead to the embrittlement of reactor pressure vessels. 58 A number of Fe-Cu potentials are available to investigate this irradiation damage. For example, a Fe-Cu alloy potential has been developed 59 on the basis of 2NN-MEAM, by combining the MEAM potentials for pure Fe and Cu. The fitting of this potential reproduces the lattice constants in the Fe-rich BCC and Cu-rich FCC phases, the enthalpy of the liquid mixture, and the binding energy of a Cu atom in the BCC Fe matrix. As another example, Pasianot and Malerba 60 later developed a binary-alloy potential for Fe-Cu based on EAM, by incorporating the phase diagram data of Fe-Cu systems, such that the thermodynamic functions of the systems are reflected in the potential and the radiation damage can be modeled with higher accuracy. Other attempts at Fe-Cu potentials can be found in Refs. 61-63. The interaction of hydrogen atoms with iron is another concern of materials scientists, because it is related to the irradiation damage of steel in nuclear plants and to the physical condition of containers used to store or transport hydrogen as a source of clean energy. Accordingly, several Fe-H potentials have been designed. For example, Ruda et al. 64 performed a detailed exposition of the EAM potentials of hydrogen in various metals including iron; potentials specific to the pure metals were adapted to the determination of the metal-hydrogen pair potentials, with the thermodynamic heat of solution of H and the lattice expansion in the course of H dissolution forming the basis of parameter fitting. As another example, Lee and Jang 65 formulated a potential for the Fe-H system by means of 2NN-MEAM, with fitting parameters coming from experimental quantities such as the dilute heat of solution of H in Fe and the binding energy of H in Fe.
With this potential, the role of H atoms in vacancies, dislocations and grain boundaries can be predicted. Ramasubramaniam et al. 66 developed an EAM potential for the Fe-H system by adapting the pure Fe potential formulated by Mendelev and coworkers. 26 This potential inherits the ability of the Mendelev potential 26 to model screw dislocations with accuracy comparable to the corresponding DFT computations. The physical quantities derived from this potential can model the diffusion of H in α-Fe, the binding of H to free Fe surfaces, and the trapping of H atoms in defects well.
Carbon is an important impurity in iron, because its introduction increases the tensile strength of iron, and dislocation movement in iron can be impeded by carbon impurities. A number of interatomic potentials for Fe-C alloys have been developed.
Here we illustrate a few of them. An EAM potential for the Fe-C alloy has been formulated by fitting experimental and ab initio data to an effective pair interaction. 67 The equilibrium lattice constant, the bulk modulus and the cohesive energies in stable and metastable states were adopted as the fitting targets. The potential has been tested against the martensite transformation, C interstitials in Fe grain boundaries and C interstitials at a free surface of Fe-C alloys. In order to model the point defects of Fe-C alloys, which is not a strength of the aforementioned potentials, a further potential has been constructed 68 as a remedy. This potential is based on the FS formalism and incorporates a C-C potential used to describe defects containing more than one C atom; it can cater for arbitrary point defect concentrations. A number of formation energies were used as fitting targets: carbon interstitials in a perfect BCC Fe lattice, 1C-1V clusters, 2C interstitials in a perfect BCC Fe lattice, 2C-1V clusters and Fe3C. Another Fe-C potential, used for designing carbon nanotubes grown from carbon-saturated metal clusters, has been developed in the bond-order formalism, with the quantities used for fitting derived from DFT. 69 For example, the energies of symmetrical Fe-C clusters for varying bond lengths were obtained from DFT, as were the energies of isolated C and Fe atoms at varying bond lengths.
Some other Fe potentials with other impurities are briefly mentioned. Besson and Morillo 70 have developed a potential for B2 and D03 Fe-Al alloys by the EAM and pair potential formalism, verified by checking it against the elastic constants, in order to better study the interfacial properties during segregation at grain boundaries. After that, Lee and Lee 71 have developed a more practical potential for Fe-Al alloys by 2NN-MEAM, such that the Fe-Mn-Al-C system commonly found in steel is considered as a whole. The structural, thermodynamic and elastic properties of the Fe-Al binary alloy could be modeled successfully by this approach. Another MEAM potential for the Fe-C system has been formulated by using MEAM Fe and C potentials, 72 with the intricacy lying in the comprehensive consideration of a carbon atom in various interstitial configurations inside an Fe matrix. After this consideration, the dilute heat of solution of carbon, the vacancy-carbon binding energy and the migration energy of a carbon atom in the Fe matrix can be reproduced with a high degree of experimental agreement. A potential used for modeling high-nitrogen steel has been developed 73 by means of 2NN-MEAM potentials of pure Fe and N, respectively. It is found that nitrogen in iron can result in an improved ordering tendency in BCC and FCC iron, compared to carbon in iron, due to the
Simulation Results in the Literature
MD simulation is often regarded as a replacement for real experiments that are technically formidable and less controllable. Some assumptions may be adopted to improve the computational speed; nevertheless, the generated results can sufficiently reveal the physical behavior of atomic interactions. This section exhibits some of the important MD results on iron properties, demonstrating the practical value of this widely used technique.
Phase transition
Studies of the phase transitions or transformations occurring in iron are a hot topic at the time of writing this review, in the sense that the martensitic transformation in nuclear power plants and shape-memory materials can be better studied. Determining the temperature at which a phase transition occurs is also a major target of study. Many MD simulations have indicated that the Bain transformation path 80 is followed in the course of the FCC-BCC transformation.
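As a concrete illustration of the Bain path, the FCC lattice can be described as a body-centred tetragonal (BCT) cell with axial ratio c/a = √2, which is carried continuously to the BCC value c/a = 1. The following minimal sketch assumes a constant atomic volume and an illustrative α-Fe lattice constant of 2.866 Å; neither assumption is taken from the papers discussed here.

```python
import numpy as np

def bain_cell(t, a_bcc=2.866):
    """Cell parameters along the Bain path.

    t = 0 gives FCC, viewed as a BCT cell with c/a = sqrt(2);
    t = 1 gives BCC with c/a = 1.  The volume is held fixed at
    the 2-atom BCC cell volume (a simplifying assumption), and
    a_bcc = 2.866 A is an illustrative alpha-Fe lattice constant.
    """
    c_over_a = np.sqrt(2.0) * (1.0 - t) + 1.0 * t  # linear interpolation
    volume = a_bcc ** 3                            # 2-atom BCC cell volume
    a = (volume / c_over_a) ** (1.0 / 3.0)         # basal lattice parameter
    c = c_over_a * a
    return a, c, c_over_a
```

Intermediate values of t then describe the tetragonally distorted states that interfacial atoms pass through during the transformation.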
For example, the transformation from FCC to BCC has been demonstrated by MD simulation, 62 showing that FCC-Fe can be transformed perfectly at temperatures of 100-1800 K, under pressures of 0-40 GPa. Such a transformation largely follows the Bain transformation path. Another extensive MD simulation of the FCC-BCC transformation has been undertaken on pure iron, in which both the Nishiyama-Wasserman (N-W) and Kurdjumov-Sachs (K-S) orientations of FCC-BCC interfaces have been attempted. 81 Figure 1(a) shows the gradual propagation of the FCC structure to the BCC structure at 1200 K, near the phase transition temperature of 1185 K between the FCC and BCC phases. The growth of the FCC structure is mainly planar. A ledge structure develops at the cross section of the FCC-BCC interface, as can be seen from Fig. 1(b). If the K-S interface is adopted, the time evolution of the transformation is the one shown in Fig. 1(c). The arrows in the subfigure indicate the needle-like growth of the interface. It is found that the interfacial atoms rearrange themselves, following the Bain transformation path, to reduce the mismatch in the course of the phase transformation. In view of the experimental difficulty in capturing the interfacial motion during a phase transition, MD simulation has been adopted to explain the corresponding behavior at the atomic-layer level. 82 It is found that FCC interfaces require a temperature-dependent incubation time, affecting a few layers, before they undergo a very fast transformation to the BCC structure. A certain structure similar to screw dislocations has to be established during the incubation time, after which the interface can move quickly. It is realized that the volume-to-surface-area ratio is decisive for the incubation time but not for the transformation rate. The FCC-BCC transformation in iron thin films through the direct and inverse Bain path has been simulated, 83 whose findings are supported by the corresponding variation of the elastic moduli.
The correlation between the film thickness and the elastic moduli has been identified. It is found that the change in biaxial strains is responsible for the transformation mechanism. The atomic configuration during the FCC-BCC transformation has also been investigated by Engin and Urbassek, 84 using the FS potential. 40 The transition between the BCC and HCP phases has also been investigated in the literature. For example, it is found that, by MD simulations in Lagrangian form, 86 a uniaxial tensile stress can induce such a transition. 87 With this approach, the volume can vary with the internal and external stress. The induction of a structural change from BCC to HCP due to a uniaxial stress can be realized, together with an intermediate change of structure due to asymmetric shear deformation. The shear deformation becomes more uniform at the end of the transformation. The reverse transformation from HCP to BCC can be undertaken by a uniaxial compression, at the expense of a hysteresis loop. The reverse transformation proceeds by a pure homogeneous shear mechanism, which is different from that of the direct transformation. A symmetry-breaking mechanism might be responsible for such a difference. Morris and Ho 88 have gone a step further by analyzing the structure factor in the course of the BCC-HCP transformation, suggesting that the Brillouin-zone dependence of the scattering is greatest in the course of the transformation, as indicated by the formation of the Bragg peaks that are responsible for the HCP structure. The effect of directional loading on the transformation between BCC and HCP/FCC is discussed in Refs. 89 and 90, with the corresponding time evolution of the atomic configurations. For a better representation of both the forward and backward phase transitions between the FCC/HCP and BCC iron states, a specific potential has been formulated. 91 The temperature ranges in which the FCC and HCP phases of iron are unstable have been determined by MD simulations using this potential.
The FCC-BCC transformation is found to follow the Bain path, 80 while the BCC-FCC transformation is found to follow a Burgers mechanism. 92 The transformation between the FCC/HCP and BCC states in iron nanowires has been simulated, 93 which indicates that a tensile axial stress can vary the phase transition temperature of iron nanowires and that the transition temperature has an inverse relation with the wire diameter. However, the application of stress beyond a critical value can inhibit the transformation from the FCC/HCP to the BCC phase. A hysteresis effect has also been observed in the temperature dependence of the nanowire length.
MD simulations for characterizing Fe alloys have been attempted, so that their thermodynamic properties can be realized. For example, Yang et al. 94 have performed MD simulations on an undercooled Fe-Ni alloy in the liquid state to find its heat capacity, because the time required for measuring the heat capacity experimentally already allows the alloy to crystallize, leaving the phase of interest. They have employed the EAM implemented with analytic nearest-neighbor interactions 23,24 to characterize the potentials of both Fe and Ni. They have then determined the heat capacity of the Fe-Ni alloy by differentiating the energy with respect to temperature, and have concluded that the composition of an alloy determines the heat capacity in the undercooled state. Kadau et al. 95 have investigated the phase transition occurring in Fe-Ni nanoparticles, from which a scaling behavior of the transition temperature with the inverse of the particle diameter is observed. Besides, the Néel temperature of FCC Fe is found to decrease with cluster size.
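The heat-capacity procedure described above, differentiating the energy with respect to temperature, can be sketched numerically as follows. The E(T) values below are hypothetical placeholders, not data from Ref. 94.

```python
import numpy as np

# Hypothetical total energies E(T) per atom (eV/atom) sampled along
# an MD run of an undercooled melt, at temperatures T (K).
T = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])
E = np.array([-4.05, -4.02, -3.99, -3.96, -3.93])

# Heat capacity per atom as the temperature derivative of the energy,
# C = dE/dT, evaluated by central finite differences.
C = np.gradient(E, T)          # eV / (atom K)
kB = 8.617333e-5               # Boltzmann constant, eV/K
print(C / kB)                  # ~3.5 kB per atom for these made-up numbers
```

With real MD data one would average the energy at each temperature over many configurations before differentiating, since instantaneous energies fluctuate.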
Interstitials, dumbbells and crowdions
When the atoms in a crystal structure outnumber the lattice sites, the extra atoms may need to occupy a space that is not reserved for atoms. These extra atoms can be the same as the lattice atoms (known as self-interstitials) or different from the lattice atoms (known as impurity interstitials). Many configurations pertaining to interstitials are possible. For example, an atom originally fixed at a lattice site may be substituted by another atom that has no fixed site (known as a substitutional atom).
The study of interstitials usually involves the time evolution of one self-interstitial atom (SIA) or of a multiple-SIA (n-SIA) cluster. A dumbbell or dimer is another type of interstitial, which has two atoms resting on one lattice site, oriented along a crystallographic direction. A crowdion is the addition of an interstitial atom along a crystallographic direction, such that the atoms arrange more compactly. The mechanisms of these two interstitials involve an atom displacing another atom to gain some space, so as to stabilize both atoms. Figure 2 shows the configurations of a self-interstitial, a dumbbell and a crowdion.
Regardless of the form of interstitial, the major concern is mainly the interstitial formation energy required for an extra atom to become stable in its occupied space. Another major concern is the mechanism of interstitial movement, i.e., the directions involved in cluster migration. The diffusion coefficients of the interstitials, usually expressed using an Arrhenius plot, are also sometimes investigated.
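The Arrhenius analysis mentioned above amounts to a straight-line fit of ln D against 1/T, with the slope giving the migration energy and the intercept the prefactor. A small self-check in Python; the D0 and migration-energy values below are hypothetical, not taken from any of the cited studies.

```python
import numpy as np

def arrhenius_fit(T, D):
    """Fit D = D0 * exp(-Em / (kB * T)) by linear regression of
    ln D against 1/T.  Returns (D0, Em), with Em in eV."""
    kB = 8.617333e-5                                  # eV/K
    slope, intercept = np.polyfit(1.0 / np.asarray(T),
                                  np.log(np.asarray(D)), 1)
    return np.exp(intercept), -slope * kB

# Synthetic check: generate data with known (hypothetical) parameters
# and recover them with the fit.
kB = 8.617333e-5
T = np.array([600.0, 800.0, 1000.0, 1200.0])          # K
D = 1.5e-7 * np.exp(-0.34 / (kB * T))                 # m^2/s, made up
D0, Em = arrhenius_fit(T, D)
```

In practice the D values come from the slope of the mean-squared displacement versus time at each temperature.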
Many studies focus on the above-mentioned issues and employ MD as the tool of investigation. Here we examine some important findings of interstitial simulations, together with the potentials used in each study. For example, Osetsky and coworkers 96 have studied the evolution of Fe clusters by means of MD. By using two potentials suitable for modeling Fe and Cu, 61,62 stable 1/2⟨111⟩ interstitial loops and glissile ⟨100⟩ loops would be formed in BCC iron. It is found that all SIA clusters would become glissile, even those clusters that are initially sessile. This implies that the accumulated defect density in BCC iron could be much lower than that in Cu. As another example, Marian et al. 97,98 have used MD to study SIA migration in α-Fe and in Fe-Cu alloys by using a potential for Fe-Cu systems developed by Ackland and coworkers. 61 It is found that the introduction of Cu impurities of 1.0 at.% in Fe results in a lower prefactor of the Arrhenius behavior of SIA diffusion. Also, the migration energy for small clusters, which involve three-dimensional motion, is found to be larger than that for large clusters, which involve linear motion alone. The oversized substitutional Cu solute causes a dilational strain in the Fe lattice, which leads to a drop of the effective migration energy. These findings are attributed to the atomic displacement-field interaction, which triggers the change in the diffusional behavior of atoms with atomic configuration. Based on the variation of the migration energy with the interstitial cluster size, power-law expressions for the prefactor and the migration energy of the Arrhenius plots have been extrapolated to larger cluster sizes. A more detailed exposition of the mechanisms of self-interstitials in α-Fe has been given by Wirth and coworkers, 99 who have calculated the stable configurations of α-iron by using the FS potential 40 modified by Calder and Bacon. 100 They have found that the most stable self-interstitial in α-iron is the ⟨110⟩ dumbbell, followed by the ⟨111⟩ dumbbell and the ⟨111⟩ crowdion. They have also observed that the migration mechanism of self-interstitials in α-iron is composed of two steps. The first step is rotation from the ground-state ⟨110⟩ dumbbell to the ⟨111⟩ dumbbell, and the second step is translation along the ⟨111⟩ direction through the crowdion saddle point. However, it is noted that the effect of the angular dependence of bonds in α-iron cannot be demonstrated in this work, in view of the adopted FS potential, which does not consider this effect. Tapasa et al. 101 have investigated carbon interstitials in α-iron by means of two potentials for Fe-Fe interactions 50 and Fe-C interactions, 102 respectively, for use in MD. The study shows that all SIA clusters of size larger than seven would transform into the 1/2⟨111⟩ configuration and migrate along their crowdion axis direction, consistent with the findings in Ref. 96. Furthermore, the study notices that the introduction of carbon impurities inhibits cluster mobility. Two C atoms can delay the transformation of ⟨111⟩ dislocation loops, and can even make a cluster sessile. Terentyev and coworkers 103 have studied the three-dimensional cluster motion in BCC iron by MD, with the help of the AMS potential that has incorporated DFT computations of SIAs in α-Fe. 50 Diffusion coefficients and jump frequencies derived from this potential have been checked against those derived from other empirical potentials too. The study verifies the ⟨110⟩ dumbbell as the most stable self-interstitial configuration, followed by the ⟨111⟩ crowdion. Their respective formation energies obtained in this study are closer to the DFT results than those from other empirical potentials. The study also suggests that the jump mechanism determined by Johnson 104 is the one that agrees with the DFT results. This observation is different from the established results in Ref. 99.
Three major mechanisms have been obtained for α-iron: single interstitials and di-interstitials in fully three-dimensional motion, 3- to 5-SIA clusters in mixed one-dimensional and three-dimensional motion, and SIA clusters of larger size in preferentially one-dimensional motion along ⟨111⟩ directions. Figure 3 compares the major findings on the jump mechanisms of α-iron clusters.
Studies of Fe-C alloys continue to progress. A more recent study of interstitials in Fe-C alloys can be found in Ref. 105, which involves more recent potentials that consider the covalent bonding of carbon explicitly. 77
Vacancies
A vacancy is a point defect which occurs when the lattice sites in a crystal structure outnumber the atoms, such that the atoms have some space to switch their sites. Figure 4 shows the vacancy created by the lack of an atom at a lattice site. In some cases, the atoms in the vicinity of the vacancy come close together, making a larger vacancy. A displaced atom becomes an interstitial atom and leaves a vacancy behind; the resulting atom-vacancy pair is known as a Frenkel pair. When a displaced atom recombines with a vacancy, the Frenkel pair disappears. The major concerns about vacancies are similar to those about interstitials and dumbbells: the vacancy formation energy, the vacancy migration energy and the jump and diffusion mechanisms of vacancies. Here we introduce some representative papers that focus on Fe. The vacancy binding energy and the time evolution of copper precipitates containing vacancies, as found in α-Fe, have been studied by large-scale MD simulations. 63,106 From this illuminating study, the vacancy binding and migration energies of α-Fe have been calculated. The interaction between precipitate and vacancy is found to be anisotropic, preferentially along the ⟨011⟩ and ⟨111⟩ directions of the precipitates. The anisotropic interaction suggests a tendency toward precipitate phase transformation. One can realize that the diffusion behavior of vacancies within Cu precipitates depends on the vacancy concentration. Besides, the study identifies three stages in the time evolution of vacancies within Cu precipitates. The first stage is the free migration of vacancies. The second stage is the clustering of the vacancies that are initially free to move. The third stage is the diffusion of vacancy clusters. It is also found that the diffusion of vacancy clusters (the third stage) has a larger correlation factor than the free migration of vacancies, which is of random-walk nature.
On the other hand, monovacancy migration within a precipitate results in a smaller correlation factor than a random walk in bulk Fe. The growth of larger Cu precipitates in α-Fe has been studied as well. At high vacancy concentration, the time evolution of the precipitates results in partial transformation of their atomic planes from BCC to FCC. The notion of three stages of cluster formation with vacancies has later been challenged by a similar study of Cu diffusion in α-Fe, 107 using an Ackland potential for Fe and Cu, 61 from which only stages 1 and 3 can be identified. Arokiam et al. 108 have simulated Cu diffusion in Fe by the vacancy mechanism, showing that the diffusion coefficients of an Fe atom are similar in pure Fe and in the Fe-Cu alloy. The similarity is attributed to the weak interaction between the vacancy and the copper atom, and to the short-range behavior of the vacancy-Cu binding energy. The study also indicates that single 1/2⟨111⟩ vacancy jumps dominate the simulation. On the other hand, as the temperature increases to 1500 K, vacancy double jumps in the ⟨111⟩ direction occur. Irradiation damage studies of nuclear reactors rely heavily on the interatomic potentials of α-Fe. Helium is an element that can be generated in fusion reactors, so the Fe-He compound is a major topic for materials scientists. The effect of helium clusters on α-Fe has been investigated by large-scale MD simulations, by changing the number of He atoms and the number of vacancies in a cluster. 109 It is found that the binding energies involved in the clusters and the Fe matrix depend on the He-vacancy ratio of the clusters, i.e., the ratio of the number of He atoms to that of vacancies. The binding energies are not dependent on the cluster size. The thermal stability of the clusters is also dependent on the He-vacancy ratio, which controls the thermal emission of defects.
Another extensive MD simulation of He clusters in α-Fe has been performed, 110,111 in which the mechanisms involved in He-vacancy formation and recombination are investigated by using several Fe-He potentials and Fe matrix potentials. The study indicates that the Fe-He potential is a more important factor than the Fe matrix potential in determining the diffusion coefficient of single He atoms. The additional binding energy required for a He atom to join an interstitial He cluster or a He-vacancy cluster has also been determined. The results show that the speed of Frenkel pair formation and He clustering in α-Fe varies quite largely with the potential. The study also shows that He bubbles expand their radii as more He atoms join the bubbles. The dilation of He bubbles developed after He-vacancy clustering is dependent on the He/V ratio, i.e., the ratio of the number of He atoms to that of vacancies within a bubble.
The vacancy mechanisms of carbon in iron have also been studied. By MD simulation implemented with the AMS potential, 50 Tapasa et al. 112 have determined the jump mechanism of a C atom toward a vacancy and of a vacancy toward a C atom. The corresponding activation energies obtained from MD are similar to those from molecular statics (MS) calculations. Another study of irradiation defects in Fe 113 indicates that vacancy diffusion in Fe-C alloys is faster than soluble carbon diffusion. This means that carbon has the effect of retarding microstructure evolution.
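The vacancy formation energy that such studies quote is conventionally obtained from two relaxed supercell total energies, E_f = E(N−1) − (N−1)/N · E(N). A minimal sketch; the cell size, cohesive energy and resulting E_f below are illustrative numbers, not values from the cited papers.

```python
def vacancy_formation_energy(E_defect, E_perfect, n_atoms):
    """Vacancy formation energy from two relaxed supercell energies.

    E_perfect : total energy of the perfect N-atom supercell (eV)
    E_defect  : total energy of the (N-1)-atom supercell with one vacancy
    The (N-1)/N factor references the defect cell to the same number
    of atoms as the perfect crystal.
    """
    return E_defect - (n_atoms - 1) / n_atoms * E_perfect

# Illustrative numbers only: a 128-atom cell with an assumed cohesive
# energy of -4.28 eV/atom, and a defect-cell energy chosen so that the
# formation energy comes out near a typical ~1.7 eV.
E_perfect = 128 * -4.28
E_defect = 127 * -4.28 + 1.7
print(vacancy_formation_energy(E_defect, E_perfect, 128))  # ~1.7 eV
```

The migration energy is extracted analogously, from the saddle-point energy of the jump path rather than from two equilibrium cells.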
Displacement cascade
A neutron colliding with solid or liquid metal can result in a displacement cascade in nuclear reactors, in which atoms with energy greater than a threshold may experience a permanent displacement. The first displaced atom after irradiation is known as the primary knock-on atom (PKA). If the energy of a PKA is sufficient, further displacement of other atoms can occur. The displaced atoms will form point defects and clusters as they migrate to other parts of a reactor. The resulting point defects and clusters may then continue to migrate and interact, forming even larger clusters and changing the microstructure of the reactor more severely.
The study of displacement cascades typically involves realizing the formation of cascades after bombardment by a PKA with a certain recoil energy, and measuring the production efficiency of the subsequent defects. Calder and Bacon 100 have performed the first MD simulation of displacement cascades in α-Fe by using the FS potential, modified to cater for the pressure-volume relation of real metals. It is found from this study that the morphology of the cascades changes to the collisional phase when the PKA energy is about 1-2 keV. Generally, the number of vacancies and interstitials is greatest in the collisional phase. After a longer time, recombination is prevalent. The relaxation time for vacancy-interstitial recombination is shorter than that of the thermal spike phase. After the collisional phase the material has mainly become a hot solid, instead of a liquid state. Vacancy clustering is not found to occur in the thermal spike phase. Another early attempt has been made by Stoller 114 on displacement cascades in α-Fe. The number of surviving defects is found to follow a power law with the cascade energy. The MD simulation can also suggest the presence of three-dimensional clusters, which opposes the idea that only planar clusters can be formed. Even a longer simulation time of 100 ps cannot return the three-dimensional morphology to the planar one. After that, Soneda and Diaz de la Rubia 115 have performed a large-scale MD simulation of displacement cascades in α-iron at 600 K using the Johnson-Oh EAM potential, 23 with the recoil energy of the PKA ranging from 100 eV to 20 keV. They have successfully demonstrated the relations between the recoil energy, the cluster size and the number of clusters. A larger number of clusters can be formed if the recoil energy is beyond 5 keV. A cluster size of over 10 can be formed when the recoil energy is beyond 10 keV. Figure 5 shows the distribution of these relations.
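A power-law relation between surviving defects and cascade energy, such as the one Stoller reports, is typically extracted by linear regression on a log-log scale. A sketch with synthetic data; the prefactor 4.0 and exponent 0.8 below are placeholders, not fitted values from Ref. 114.

```python
import numpy as np

def power_law_fit(E, N):
    """Fit N = A * E**b on a log-log scale, as used to relate the
    number of surviving Frenkel pairs to the cascade energy.
    Returns (A, b)."""
    b, logA = np.polyfit(np.log(E), np.log(N), 1)
    return np.exp(logA), b

# Hypothetical cascade statistics (PKA energy in keV, surviving
# defects per cascade), generated from a known power law so the
# fit can be checked.
E = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
N = 4.0 * E ** 0.8
A, b = power_law_fit(E, N)
```

With real cascade data each N would be an average over many independent cascades at the same PKA energy.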
The MD study of displacement cascades also suggests that a cascade is likely to split into smaller cascades of lower recoil energy. These smaller cascades may also combine back into larger clusters. Similar work has been done on α-Fe with carbon in solution 116 by employing an adapted potential used to account for the short-range interactions occurring in the solution. It shows that, at 600 K, the carbon concentration in the Fe solution generally has an indiscernible effect on the number of vacancies formed per cascade. The overlap of cascades has also been studied, 117 in which a cascade produced by a larger recoil energy can mask the defects due to one of a smaller recoil energy. Displacement cascades in α-Fe simulated by a combination of MD and the binary collision approximation (BCA) have been attempted, 118 which is found to be complementary to conventional MD in modeling primary damage. Application of the BCA to MD provides qualitatively similar results compared to the full MD approach, as reflected in the pair correlation of the vacancy-vacancy separation. For a formalism of the BCA, which is commonly used for computing atomic trajectories due to elastic and inelastic collisions in a lattice by considering interatomic potentials as well, readers may refer to Refs. 119-121. The BCA is applicable to modeling displacement cascades because it brings about a fairly low statistical variance. 118 The number of cascades to simulate can thus be further reduced to save computation time.
Apart from the conventional MD approach, modeling heat propagation in the continuum by a thermal block surrounding the link cells has been proposed. 122 In this way, one can model the heat dissipated to the surrounding materials, which can affect the mobility of the SIAs formed. By this approach, the number of point defects during a cascade decreases with the irradiation temperature. On the other hand, the fraction of interstitials in clusters increases with the irradiation temperature. The effect of the PKA mass on displacement cascades in α-Fe has been studied 123 using the AMS potential. 50 C, Fe or Bi atoms of varying recoil energy are made to knock on α-Fe, from which we see that a heavier PKA produces fewer point defects, and that such an effect is more pronounced for lower PKA energy. It is the PKA mass, instead of the PKA-Fe potential, that is crucial for the damage in individual cascades.
A number of studies have been undertaken to examine the choice of an interatomic potential that better models displacement cascades in α-Fe. Becquart et al. 124 have performed displacement cascade simulations with some EAM potentials, concluding that the short-range interaction is crucial for studying irradiation damage. Suitable repulsion mechanisms have to be formulated for a better description of the cascade morphology. Equilibrium properties of the potentials, such as the vacancy migration energy and the vacancy binding energy, are also important for modeling cascades. A comparative study of the potentials used for displacement cascades can be found in Ref. 125, which shows that MD simulations using the DD, 46 AMS 50 and MEA 45 potentials can produce comparable results for Frenkel pair production. Their major difference lies in the clustered defect fraction. Malerba 126 has performed an in-depth review of cascades in α-Fe simulated by a number of interatomic potentials and their adapted forms. This review finds that the defect production energies calculated by the various potentials exhibit essentially consistent values. It also tells us that, for the potentials attempted, the minimum displacement energy has little effect on the final number of Frenkel pairs developed by a given recoil energy. The approach taken to model the potential at mid-range interatomic distances is found to be more decisive for the cascade behavior. For other critical reviews of the effect of interatomic potentials on displacement cascades, readers may refer to Ref. 127 for pure α-Fe and Ref. 128 for He-vacancy clusters within α-Fe.
Dislocations
Three types of dislocations exist: the edge dislocation, the screw dislocation and the mixed dislocation, which combines the previous two. An edge dislocation has a Burgers vector perpendicular to the dislocation line, while a screw dislocation has a Burgers vector parallel to the dislocation line. A mixed dislocation is the intermediate case, where the Burgers vector and the dislocation line intersect at an oblique angle. In many cases, a pure dislocation plane does not exist. Instead, the dislocation plane has to form kinks in order to become stable in a crystal structure. The kink structure aims to stay at the minimum points of the Peierls potential, created by the bulk atoms, which tends to prevent the dislocation plane from gliding. Figure 6 shows an edge dislocation, a screw dislocation and a double-kink (DK) dislocation plane that tries to reside at the Peierls potential minima.
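The edge/screw/mixed classification above reduces to the angle between the Burgers vector b and the line direction ξ. A small helper; the example vectors are illustrative, not taken from a specific study.

```python
import numpy as np

def dislocation_character(b, xi):
    """Angle (degrees) between Burgers vector b and line direction xi:
    0 deg = screw, 90 deg = edge, anything in between = mixed."""
    b, xi = np.asarray(b, float), np.asarray(xi, float)
    cosang = abs(np.dot(b, xi)) / (np.linalg.norm(b) * np.linalg.norm(xi))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# A [111] Burgers vector with a [111] line direction: screw (0 deg).
print(dislocation_character([1, 1, 1], [1, 1, 1]))
# The same Burgers vector with a line along [-110]: edge (90 deg).
print(dislocation_character([1, 1, 1], [-1, 1, 0]))
```

Any angle strictly between these two limits corresponds to a mixed segment, which is the general case along a curved or kinked dislocation line.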
By applying MD, the trajectories of dislocation glide can be traced out. For example, an MD simulation of the a/2⟨111⟩ DK screw dislocation in iron has been performed to evaluate the critical resolved shear stress (CRSS) required to slip a dislocation plane. 129 The resulting glide is found to occur along a (110) plane. The MD critical stress is shown to exhibit temperature dependence as well. The migration of a DK dislocation under stress has been simulated by the MD approach, 130 showing that the dislocation travels from one Peierls valley to another under the application of a moderate stress. As the stress further increases, the travel of the dislocation becomes rough and the dislocation begins to rupture into interstitials and vacancies. Pinning points of the dislocation can even develop, further hindering dislocation migration.
Many MD studies of dislocations consider the hardening effect of impurities in the course of dislocation glide and climb. For example, the hardening effect of Cu precipitates on an edge dislocation in BCC iron crystals has been investigated by MD simulation. 131 The corresponding interatomic potential is based on the EAM, which superposes all the Fe-Fe, Cu-Cu and Fe-Cu interactions. The MD result confirms the notion that the hardening effect increases with the diameter of the precipitates. In other words, a larger energy is required for an edge dislocation to penetrate through a Cu precipitate of larger radius. This hardening effect is caused by the introduction of a dislocation, which leads to phase instability in the particles. Similar studies have been undertaken by another group. 132 The stress-induced interaction between an edge dislocation and voids or Cu precipitates has been studied at finite temperature. In the course of the dislocation-void interaction, the critical stress decreases with temperature. The dislocation velocity also determines the possibility of passing through the void. In passing through Cu precipitates, the critical stress also decreases with temperature. The dislocation line shape is also found to be dependent on the critical stress. Besides, the way of passing through Cu precipitates is size dependent. Simple shear displacement occurs at small precipitates, while dislocation climb occurs at large precipitates, accompanied by a phase transition of Cu from BCC to FCC. A large-scale MD study, using the AMS potential, 50 has been undertaken 133 to investigate the effect of voids on the hardening of Fe. The MD results are compared to those calculated with an Fe-Cu potential. 61 The simulation result corroborates the fact that voids are strong obstacles to edge dislocation motion, because they deform the dislocation into a screw one at low temperature.
Other than this, the dislocation behaviors from the two potentials are basically the same for temperatures beyond 100 K. Interested readers may also refer to other studies aimed at investigating the hardening effect in Fe due to Cu precipitates. [134][135][136] The effect of other impurities around dislocations in iron has been investigated. One MD approach involves understanding the trajectories of impurity atoms in the vicinity of dislocation cores. For example, the trajectories of He atoms placed close to an edge dislocation core in bulk α-Fe have been simulated. 137 It is found that, at 100 K, He atoms on the tension side migrate to the layer closest to the slip plane as crowdion atoms. The atomic motion is driven by the interaction between He and Fe atoms. On the other hand, a He atom initially on the compression side travels a much shorter distance parallel to the dislocation core, and becomes stable at an octahedral site. The much more restrained motion on the compression side is due to the higher activation energy required to leave the slip plane. Figure 7 shows the simulated trajectories. The diffusion of hydrogen atoms in the vicinity of dislocations on a {112} slip plane has also been studied by MD simulation. 138 It is found that H atoms are strongly trapped in the vicinity of the edge dislocation core, so that the corresponding atomic diffusion is very limited. An H atom becomes more mobile when its initial location is 1 or 2 nm beyond the dislocation core.
Grain boundaries
A grain boundary (GB) is the interface between two grains of different orientations. Such a misorientation can be quantified by the misorientation axis and angle. One type of GB is the tilt boundary, in which the misorientation axis (also called the tilt axis) lies in the boundary plane. Another type is the twist boundary, in which the misorientation axis (also called the twist axis) is perpendicular to the boundary plane. Many real situations exist as a combination of both types of GB. Figure 8 shows a simple situation of both tilt and twist boundaries.
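The misorientation axis and angle can be recovered from the rotation matrix R relating the two grain orientations, via θ = arccos((tr R − 1)/2) and the antisymmetric part of R. A sketch; the 10° tilt about z used below is an arbitrary example, not a boundary from the cited studies.

```python
import numpy as np

def misorientation(R):
    """Misorientation angle (degrees) and unit axis of a rotation
    matrix R relating two grains."""
    R = np.asarray(R, float)
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    axis = np.array([R[2, 1] - R[1, 2],    # antisymmetric part of R
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]])
    n = np.linalg.norm(axis)
    if n > 0:
        axis = axis / n
    return np.degrees(theta), axis

# A 10-degree rotation about the z-axis; for a tilt boundary this
# axis would lie in the boundary plane.
t = np.radians(10.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
theta, axis = misorientation(R)
```

Whether the boundary is tilt or twist is then decided by comparing this axis with the boundary-plane normal.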
An early study of vacancy diffusion in the tilt boundary of α-Fe at various temperatures has been undertaken. 139,140 Vacancy jumps have been identified, with the probability of multiple jumps increasing with temperature. The vacancy jumps are found to be more frequent than in the bulk configuration, with a higher tendency of the jump direction to lie along the tilt axis. Short-lived Frenkel pairs are also established at elevated temperatures. The atomic vibration near the GB has a higher frequency than its bulk counterpart. By tracing the trajectories, it is noticed that vacancy jumps across adjacent layers are preferred over those within the same layer. Also, at 1300 K, the atomic vibration in the GB region is more vigorous than in the bulk region. The GB structure at this temperature is found to be stable, such that the vacancies near the GB can be readily identified. The simulation can also reflect the increase in vacancy jumps as the temperature goes to a higher value of 1500 K. Fewer vacancy jumps occur at places far away from the tilt boundary.
Impurity diffusion in GBs of α-Fe is also a topic of interest. For example, the motion of He interstitials in GBs of iron has been investigated by Gao and coworkers. 141 A series of MD simulations shows that the maximum binding energies of substitutional and interstitial He atoms to the GB are highly correlated. From the activation energies of He atoms during diffusion, He interstitials are found to be mobile in the GB. He interstitials primarily diffuse along one-dimensional paths at low temperature (600 K), and along two- and three-dimensional paths at higher temperatures (800 K and 1200 K). Also, a He atom in a GB tends to diffuse along the GB direction in a one-dimensional path. Another MD study focuses on the migration of C and H interstitials along GBs in α-Fe. 142 According to the MD results, presented by means of Arrhenius plots, the GB decreases the mobility of H and C atoms in its vicinity, because the activation migration enthalpy across the GB is larger than that in bulk α-Fe. In other words, GBs in α-Fe can trap H and C atoms. Experimental work in that study verifies the trapping of C atoms in GBs. The penetration of H atoms into a GB is found to be more difficult than into the bulk crystal. Figure 9 shows the collection of H atoms in the GB, together with the decreased penetration distance for H atoms.
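The Arrhenius analysis mentioned above extracts a migration enthalpy from diffusivities measured at several temperatures via D = D0 exp(−ΔH/kB T): a straight-line fit of ln D against 1/T gives ΔH from the slope. A minimal sketch with synthetic data (the D0 and ΔH values are illustrative, not taken from Ref. 142):

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant in eV/K

def arrhenius_fit(T, D):
    """Fit ln D = ln D0 - dH/(kB*T); return (D0, dH) with dH in eV."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(D), 1)
    return np.exp(intercept), -slope * KB

# Synthetic diffusivities generated with D0 = 1e-7 m^2/s and dH = 0.5 eV
T = np.array([600.0, 800.0, 1000.0, 1200.0])
D = 1e-7 * np.exp(-0.5 / (KB * T))

D0, dH = arrhenius_fit(T, D)
print(D0, dH)  # recovers ~1e-7 m^2/s and ~0.5 eV
```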
Physical effects due to the movement of GBs have also been considered in the literature. For instance, sliding of GBs in α-Fe has been studied using MD. 143 The effect of applying shear stress on a tilt GB has been investigated by MD at 1 K, as opposed to the MS technique at 0 K, which indicates the creation of dislocation pairs acting oppositely to each other. The net effect of these opposing dislocation pairs results in migration of the GB. Moreover, the critical shear required to nucleate a dislocation decreases with temperature. In another study of symmetrical tilt GBs, minimization of the GB energy can be achieved by appropriate translation of the grains in the GB plane and by specific adjustment of the tilt angle. 144 As another example, the interaction between a dislocation and a tilt GB has been simulated. 145 In this MD study, a tilt GB is built between a dislocation and a free surface. The result shows that the dislocation glide is determined by the competition between the GB and the free surface. The attraction is strongest when the glide plane is perpendicular to the GB.
The alloy composition ratio is also crucial to the GB energy. Such a study has been performed on a symmetric tilt GB inside an Fe-Cr alloy. 146 The structure of the GB is found to remain stable in the course of thermalization, regardless of the increase in at.% of Cr. It is also noted that the GB energy decreases with higher at.% of Cr. The heterointerface established between an Fe-Cr alloy and pure BCC iron has also been simulated. Such a heterointerface structure can be maintained during thermalization, and the GB transition energy is not correlated with the at.% of Cr.
The role of GBs in the fracture of α-Fe has been studied by MD simulation. 147 It is noted that a GB stops crack propagation across iron. Moreover, intergranular crack propagation is determined by the angle between a GB and the crack plane. From this study, it is found that the fracture behavior of nanocrystalline materials should be linked to GB accommodation, GB triple-junction activity, grain nucleation and grain rotation. The high volume fraction of GBs inside nanocrystalline materials controls crack propagation.
The effect of displacement cascades in the vicinity of GBs has been investigated extensively. [148][149][150] A PKA of 1 keV, initiated from various directions, hits α-Fe bulks at varying distances from tilt and twist GBs, and the subsequent time evolution of the GB is recorded. The study shows that the tilt angle has little correlation with the GB energy, such that the GB energy can be regarded as stable. The GB acts as a partial barrier to collision cascades: the PKAs cannot penetrate the GB, and defects (mainly dumbbells) accumulate near the GB instead. In fact, during a collision cascade, the GB suffers more damage than a bulk lattice does, as reflected by the number of defects formed in the two cases. Most interstitials are formed at the core portion of a GB. In addition, some preferential sites in the (5 3 0) symmetric tilt boundary are discovered for some interstitials after the collision cascade. The largest energy change of a collision cascade near a GB occurs within the first half picosecond, after which the energy remains stable in general.
Nanotubes
MD simulation of iron has played its role in understanding the growth of single-walled carbon nanotubes (SWNT), which relies on transition metals such as iron to form metal carbides that act as catalysts. Carbon atoms are then supplied to the metal carbide, leading to the growth of a nanotube. The direction of nanotube growth is also dependent on the interaction between C atoms and the metal carbides. Such a process is known as the vapor-liquid-solid (VLS) model.
A major concern about nanotubes is their high surface-to-volume ratio, so some adjustments of the MD approach are necessary. The MD approach used in simulating nanotubes is basically the same as that used in understanding defects, except that more refined methods are used to find the interatomic and electronic forces. Understanding the contribution due to the bonding between C atoms and transition metal atoms is crucial for the subsequent time evolution of the nanotube. In view of this criterion, special forms of MD, such as reactive empirical bond order (REBO-type) MD, 151 quantum mechanical (QM-type) MD and density-functional tight-binding (DFTB-type) MD, are adopted to calculate the interatomic potential and electronic forces by quantum mechanics. By these advanced approaches, the interaction between the hybridized orbitals of C atoms and the orbitals of transition metal atoms can be better evaluated. Once the electronic forces are evaluated, the atomic trajectories are obtained by conventional force integration. Interested readers may refer to a critical review of the SWNT 152 for an elementary understanding of the various forms of MD approaches to simulating nanotubes.
A number of MD studies involving the nucleation and growth of carbon nanotubes with iron as a catalyst can be found in the literature. For example, the thermodynamics of iron carbide clusters occurring in carbon nanotubes has been investigated by MD. 153 The carbon-content dependence of the cluster melting points has been calculated by cooling from temperatures well above the melting point. The results show that, over the range of carbon content adopted, the melting point first decreases and then increases due to the formation of the stable Fe3C phase. The variation of melting point with Fe cluster size has also been obtained, showing that the surface effect in simulation results in a lower melting point than in the experimental bulk condition. Introduction of carbon also lowers the melting point of Fe clusters, because carbon atoms break the symmetry of Fe clusters, thereby destabilizing the structure. The study deduces that below 1200 K, nanotubes might grow from a solid particle or from the molten surface. In another study, the time evolution of SWNTs growing on iron nanoparticles has been followed by means of ab initio MD. 154 It is found that fast carbon diffusion occurs on the metal surface, with carbon dimers followed by carbon sp2 pentagonal and hexagonal bonded caps rooting on top of the Fe catalyst, without carbon penetration into Fe. The results also indicate that stabilizing a C atom at the Fe cluster surface is more favorable than at the cluster core. The binding energy calculation of Fe on C shows that SWNT growth is possible on iron, where a stronger covalent bond and higher adhesion energy can be achieved. Surface melting of Fe586 Wulff polyhedral clusters occurring on SWNT has been studied to clarify its mechanism. 155 Calculations indicate that the molten surface state can occur at a temperature below the melting point of the clusters.
The temperature dependence of the Lindemann index (LI) 156 (a measure of atomic disorder) shows that the melting process of an Fe586 cluster can be split into three stages. The first stage refers to the slow increase in LI with temperature, while the second stage is the abrupt and nonlinear increase in LI, corresponding to surface melting. This means that the atomic kinetic energy can overcome the binding energy at the cluster surface. The final stage refers to complete melting at high temperature, where the LI reaches its maximum. The graph indicating the temperature dependence is shown in Fig. 10. The radial distribution of LI indicates that surface melting is more prominent at higher temperature. The growth of SWNT on iron clusters has been simulated by an MD formalism, 157 showing that the growth favors the perpendicular direction, which has a weaker interaction between the SWNT and the supporting substrate of aluminum oxide. The growth angle also increases with simulation time due to the carbon-substrate interaction, which favors the presence of precipitated carbon atoms. In order to reduce formation along the perpendicular direction, the SWNT-substrate interaction should be increased. MD simulation of coating iron onto SWNT has also been studied. 158 By simulating continuous metal evaporation coating, one can see that iron clusters combine firmly with carbon and provide an outward pull on the carbon atoms, leading to structural deformation of the nanotube. The iron atoms tend to cluster together and form second layers.
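The Lindemann index discussed above can be computed directly from an MD trajectory: for each atom pair, the root-mean-square fluctuation of the pair distance is divided by its time-averaged value, and the result is averaged over all pairs. The sketch below is a minimal illustrative implementation (not the code of Ref. 156), demonstrated on a rigid cluster, for which the LI is essentially zero.

```python
import numpy as np

def lindemann_index(frames):
    """Lindemann index of a cluster from an MD trajectory.

    frames: array of shape (n_frames, n_atoms, 3).
    LI = mean over pairs of sqrt(<r_ij^2> - <r_ij>^2) / <r_ij>,
    where <.> denotes the time average over frames.
    """
    frames = np.asarray(frames)
    n_frames, n_atoms, _ = frames.shape
    # Pairwise distances in every frame: shape (n_frames, n_atoms, n_atoms)
    diff = frames[:, :, None, :] - frames[:, None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(n_atoms, k=1)
    r_pairs = r[:, iu[0], iu[1]]            # (n_frames, n_pairs)
    mean_r = r_pairs.mean(axis=0)
    mean_r2 = (r_pairs ** 2).mean(axis=0)
    return np.mean(np.sqrt(np.maximum(mean_r2 - mean_r ** 2, 0.0)) / mean_r)

# A rigid (frozen) cluster has LI ~ 0; thermal motion raises it.
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 3))
rigid = np.repeat(base[None, :, :], 10, axis=0)
print(lindemann_index(rigid))  # ~0 for a rigid cluster
```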
6. Hardware and Software Development for MD
Reconfigurable computing
With the advance of field programmable gate arrays (FPGA) as external devices used in cooperation with conventional CPU machines, one can easily configure a hardware board dedicated to a certain processing stage, so that it can accelerate computations that are undertaken inefficiently on a CPU. The approach of using hardware components such as FPGAs in operation with appropriate hardware programming is known as reconfigurable computing. As the phase space trajectory of a particle is practically irrelevant to that of another distant particle, FPGAs can come into play for an accelerated performance that a CPU focusing on sequential processing cannot achieve. Accordingly, the FPGA is a candidate for high-speed MD computation.
The working principle of reconfigurable computing differs from CPU computation. In fact, the design principle of reconfigurable computing shifts to formulating the connections among various memory blocks and logic blocks, as well as to devising the force pipeline that produces the interatomic forces used for motion integration. 159 Therefore, developers of applications on FPGAs need to program the hardware connections every time a new algorithm is adopted. Despite the hardware-level approach, the idea behind the workflow remains the same as that applied on CPUs and GPUs. For example, computational scientists have to organize the data flow between the host (CPU) and device (FPGA in this case).
Currently there are two popular brands in the FPGA market. Xilinx is the market leader, providing the Spartan series for basic computation capability, the Artix and Kintex series for more demanding tasks, and the Virtex series for advanced tasks. Altera, as the major competitor of Xilinx, provides a similar range of products. The Cyclone series targets the Spartan series, whereas the Arria series targets the Artix and Kintex series. The Stratix series is the high-end series offering the same level of computational power as the Virtex series.
While reconfigurable computing, which has evolved for over a decade, has found its value in MD, its application is still fairly limited to modeling biological and chemical molecules. Modelers in the biochemical disciplines are concerned with two major types of forces: bonded and nonbonded. The bonded force has a lower computational complexity of O(N), which is affordable on CPUs, while the calculation of nonbonded forces has a higher complexity of O(N^2) and is hence suitable for hardware acceleration. The Lennard-Jones (LJ) force 160 is the short-range interaction that dominates the resulting interatomic force. It is derived from the potential, which has the form

U(r_ij) = 4ε[(σ/r_ij)^12 − (σ/r_ij)^6], (71)

where r_ij is the interatomic separation, ε is an energy parameter and σ is a distance parameter. The short-range force is then obtained by differentiating this potential with respect to the interatomic separation. The velocity and position of individual particles can then be calculated by motion integration techniques. Some of the worldwide computation systems that are adaptable to FPGA boards, such as GROMACS, 161 MD-GRAPE, 162 MDGRAPE-2, 163 NAMD 164 and MODEL, 165 have succeeded in processing the computationally complex LJ force and Coulombic force. In order to accelerate the force computation, a lookup table storing the LJ potential as a function of r_ij may be used instead of direct computation of the LJ potential by inserting interatomic distances into Eq. (71). A cutoff distance is often used in short-range computations to increase the processing speed, such that the force or interaction is neglected for interatomic distances beyond a cutoff distance r_C. The Coulombic force is a long-range nonbonded force that is often incorporated in the atomic interaction, and is expressed as 166

F_i = (q_i / 4πε_0) Σ_{j≠i} q_j r_ij / |r_ij|^3, (72)

where q_i is the charge of particle i, and r_ij is the atomic separation vector between particles i and j. Unlike the LJ force, the Coulombic force is slowly decaying.
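The cutoff and lookup-table ideas for the short-range LJ force can be sketched in a few lines. The parameter values and table resolution below are illustrative choices in reduced LJ units, not taken from the cited FPGA implementations: the force is precomputed on a grid of separations and interpolated, instead of evaluating the powers of Eq. (71) at every pair.

```python
import numpy as np

EPS, SIGMA, R_CUT = 1.0, 1.0, 2.5  # reduced LJ units (illustrative)

def lj_force(r):
    """Magnitude of the LJ pair force: F = -dU/dr = 24*eps*(2*(sigma/r)**12 - (sigma/r)**6)/r."""
    sr6 = (SIGMA / r) ** 6
    return 24.0 * EPS * (2.0 * sr6 ** 2 - sr6) / r

# Lookup table, in the spirit of the FPGA pipelines: tabulate F(r) once
# and interpolate, avoiding power evaluations in the inner loop.
r_grid = np.linspace(0.8, R_CUT, 4096)
f_table = lj_force(r_grid)

def lj_force_lookup(r):
    """Apply the cutoff, then linearly interpolate into the precomputed table."""
    if r >= R_CUT:
        return 0.0
    return np.interp(r, r_grid, f_table)

# The table reproduces the direct evaluation to interpolation accuracy:
print(lj_force(1.5), lj_force_lookup(1.5))
```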
The Coulombic force can still have a finite magnitude even at rather large atomic separations, so the cutoff approach is inapplicable for determining it. 167 A notable method to solve this problem is the particle mesh Ewald (PME) method, 168 which relies on computing the force in the reciprocal domain using the three-dimensional fast Fourier transform (FFT), yet its implementation on FPGAs is relatively inefficient. 169 The multi-grid approach is also possible; its sequential processing can attain roughly the same computational speed as PME. 170 A number of attempts have been made to implement the computation of the Coulombic force; see for example Ref. 171.
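The slow 1/r decay that rules out a simple cutoff for the Coulombic interaction, and thereby motivates reciprocal-space methods such as PME, can be seen numerically. The comparison below uses reduced units and unit charges (an illustrative toy calculation, not from the cited references): well beyond a typical LJ cutoff, the Coulomb term is still orders of magnitude larger than the LJ term.

```python
import numpy as np

# Decay comparison of pair energies in reduced units (unit charges, sigma = eps = 1).
r = np.array([2.5, 5.0, 10.0, 20.0])
lj = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)  # LJ pair energy
coul = 1.0 / r                                  # Coulomb pair energy

for ri, u_lj, u_c in zip(r, lj, coul):
    print(f"r = {ri:5.1f}   U_LJ = {u_lj: .3e}   U_Coulomb = {u_c: .3e}")
# At r = 20 the Coulomb term still exceeds the LJ term by several orders of magnitude.
```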
As large-scale high performance computing is required for many MD tasks, the capacity of FPGAs for scaling and parallelism can be utilized. An advantage of FPGAs over conventional computing clusters is the customization of hardware components, which leads to decreased cost and electric power consumption. 172 The general process of MD calculation on an FPGA can be summarized in a few steps, 173 similar to the steps implemented on a CPU. First, the cell list is loaded into the FPGA memory, whereas the particle position data are stored in a memory location external to the FPGA. Second, the FPGA uses the cell list and position data to generate the pairs to be used in the interatomic force computation. Newton's third law is often applied, so that the number of force pairs is reduced by half. After the force pairs are determined, the LJ force can be computed, followed by the update of the accelerations of the particles, which are later stored in external memory. Figure 11 illustrates this idea by means of a schematic diagram representing a typical FPGA board.
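The first two steps above (building the cell list and generating force pairs once per pair, exploiting Newton's third law) can be prototyped in software. This is an illustrative host-side sketch, not FPGA code; the periodic box size, cutoff and particle count are arbitrary choices.

```python
import numpy as np

def build_cell_list(pos, box, r_cut):
    """Assign each particle to a cubic cell of side >= r_cut in a periodic box."""
    n_cells = max(1, int(box // r_cut))
    size = box / n_cells
    cells = {}
    for i, p in enumerate(pos):
        key = tuple((p // size).astype(int) % n_cells)
        cells.setdefault(key, []).append(i)
    return cells, n_cells

def half_pairs(cells, n_cells):
    """Generate each i < j pair once: Newton's third law halves the pair count."""
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    key = ((cx + dx) % n_cells, (cy + dy) % n_cells, (cz + dz) % n_cells)
                    for i in members:
                        for j in cells.get(key, []):
                            if i < j:
                                pairs.add((i, j))
    return pairs

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(20, 3))
cells, n = build_cell_list(pos, 10.0, 2.5)
pairs = half_pairs(cells, n)
print(len(pairs))  # at most N*(N-1)/2 = 190 once-only pairs
```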
FPGA designers have to formulate the pipelines dedicated to processing MD tasks. Readers may refer to some of the implementations in Refs. 166, 169 and 174-176; this review discusses two of them. Kasap and Benkrid 166 decomposed the whole MD process into four independent pipelines, where each pipeline handles the nonbonded potentials and the resulting forces and virials due to all other particles in the simulation system. Figure 12 shows the schematic diagram of one pipeline, and the design links four of them to establish the FPGA implementation. Each of these four processors has a dedicated SDRAM allocation for holding the input data. The input buffer of each processor receives the data from the input SDRAM portion and transfers the data to the processor for calculation, with the help of the function coefficients used to interpolate the potentials. The calculation results go to the output buffer, after which they pass to the SDRAM portion of the FPGA responsible for storing data. The processors rely on finite state machines (FSM) to coordinate the data transfer.
In view of the advancement of computer networking, Scrofano et al. 175 developed another pipeline for MD processes performed on a cluster of FPGA computation nodes. The hardware design for each FPGA node is similar to that in Fig. 11, but the nodes are now connected to each other to establish a cluster. The schematic diagram of this idea is illustrated in Fig. 13, with parallelization of the nonbonded force evaluation relying on a spatial decomposition technique. 177 A number of general-purpose processors (GPP) group themselves to form a GPP element, whereas a number of reconfigurable hardware (RH) devices group themselves to form an RH element. These two elements are linked together for mutual data transfer. A simulation box is partitioned into a number of simulation cells containing a number of atoms, each of which is assigned an FPGA node. Each node handles the computations independently, except that cross-cell communication occurs during atomic movement across the edges of the simulation box and across the cells. The authors found that a cluster of N accelerated FPGA nodes performs faster than a cluster of 2N computing nodes without FPGA acceleration, so that investment in hardware infrastructure can be reduced while maintaining high computation performance.
While reconfigurable computing packages for MD have matured in the biochemical disciplines, the popularity of reconfigurable computing in modeling physical phenomena is far lower. Collaborative studies between computer scientists and physicists are therefore anticipated to further extend the implementation of reconfigurable computing to simulating metals. This can be achieved by formulating a general approach to force computation and integration algorithms tailored for metals. It is expected that metal simulations can gain advantage by using FPGA accelerators together with appropriate MD acceleration algorithms. The pipelines reviewed in this paper might be treated as possible guidelines for such a design for MD in metals. Although FPGAs have to sacrifice floating-point precision for a larger number of function units, 176 some approaches implemented with fixed-point calculations might still be helpful in exploring the use of FPGAs in materials science. After all, the requirement of double precision in MD simulations is questionable. 178
Computing based on GPU
In spite of the progress made by reconfigurable computing in MD simulation, its development is hindered by the complexity of designing custom firmware and hardware dedicated to parallel computation tasks. 179 Expertise in electronic hardware and the related programming skills are therefore required for a research team to employ reconfigurable computing. Furthermore, the programming languages for electronic design are not well suited to coding scientific tasks, 174 adding difficulties in applying them to MD computations.
With the advance of GPUs, large-scale simulation tasks can be performed with more readily accessible hardware components. The speedup factor of a GPU compared to a CPU can often reach 100. Besides, the skills requirement is lowered from understanding the hardware architecture of an FPGA to mastering parallel programming techniques. Previously, GPUs were not easy for materials modelers to use, because the MD codes had to be transformed into graphics operations manually by means of appropriate mapping. 180 Fortunately, two streams of programming architecture are now on the market that help users perform this task. In June 2007, NVIDIA developed the Compute Unified Device Architecture (CUDA), a proprietary application programming interface (API), which facilitates multicore and parallel computing coordinated between the CPU and GPU. In 2009, the Open Computing Language (OpenCL) framework was established as an open-source and cross-platform counterpart of CUDA, so that parallel processing can also be performed on Intel CPUs, AMD CPUs and AMD GPUs. Nowadays OpenCL is incorporated in the AMD APP SDK as a tool for computing using ATI GPU cards. Multicore computation units controlled by hardware programming languages are applied to complex calculations, with proper code optimization for better parallel computing performance. Supercomputing can hence be performed at the software level, without touching the implementation of an FPGA. Moreover, CUDA and the AMD APP SDK can be executed on GPU cards initially aimed at video gaming, so that more users can experience high-throughput computation with ease at a more affordable cost.
Another advantage of GPUs over reconfigurable computing is the abundance of open-source code libraries for a fairly accessible GPU architecture, as compared to the lack of such programming resources for FPGAs. 179 Both GPU card manufacturers release their own code libraries, and some third-party libraries are available for download free of charge. Apart from a series of NVIDIA libraries such as cuBLAS and cuFFT, notable third-party libraries include CULA, which acts as an alternative linear algebra library; JCuda, which connects CUDA operations to Java libraries; and PyCUDA, which allows the use of CUDA code in a Python environment. For AMD, the AMD Core Math Library (ACML) supports the use of the AMD APP SDK.
A GPU performs parallel computations using the single-instruction-multiple-data (SIMD) architecture. The same instructions are performed on each thread of execution, which processes different data values. Since the time to execute each thread can differ slightly, before performing another set of computations, processed threads have to wait until all other threads have finished their computations; that is, threads of execution have to be synchronized. Data transfers between the GPU and CPU interleave with the computation process, so that data are uploaded to the threads and the results are downloaded to the host machine. MD simulation fits the SIMD architecture well and is therefore suitable for GPU processing.
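A software analogue of the SIMD execution model can be shown with array programming: the vectorized expression below applies a single instruction stream across all data elements at once, performing the same per-element work as the explicit loop, much as a GPU applies one kernel across many threads. The pair-energy formula is just an illustrative workload in reduced LJ units.

```python
import numpy as np

def pairwise_energy_loop(r):
    """Scalar loop: one element processed per iteration (CPU-style)."""
    out = np.empty_like(r)
    for k in range(len(r)):
        sr6 = (1.0 / r[k]) ** 6
        out[k] = 4.0 * (sr6 ** 2 - sr6)
    return out

def pairwise_energy_simd(r):
    """Vectorized form: the same instruction applied to every element (SIMD-style)."""
    sr6 = (1.0 / r) ** 6
    return 4.0 * (sr6 ** 2 - sr6)

r = np.linspace(0.9, 2.5, 1000)
print(np.allclose(pairwise_energy_loop(r), pairwise_energy_simd(r)))  # True
```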
The general mechanism of CUDA in transferring data and instructions (known as kernels) between the host and the GPU device is depicted in Fig. 14. The host memory containing the variables to be evaluated is copied to the device memory, from which the required data are transferred to the CUDA cores for computation to take place. The computed results are stored in the device memory and are then copied back to the host memory. For computationally intensive processes, one may further raise the speedup by using the shared memory, a fast memory of the order of kilobytes per multiprocessor. 181 Because of its limited capacity, delicate and skillful organization of data transfers to the shared memory is necessary to ensure that only the most demanding part of the computation is performed in it. At the end of the computation, developers have to free the memory on the GPU that was used to store variables.
Legacy CUDA versions require memory transfers to be performed explicitly, which discourages developers accustomed to pure software programming from employing this API. This shortcoming has been overcome since the release of CUDA 6.0, which allows developers to use unified memory that is shared between the CPU and GPU. 182 Explicit transfer actions are no longer necessary, such that CUDA programming is more amenable to software developers whose focus is not on GPU hardware architecture.
AMD APP SDK, on the other hand, operates on another scheme by supporting OpenCL as the language of parallel computations performed on AMD graphics cards. Unlike CUDA which links both CPU and GPU chips, AMD APP SDK is merely the runtime for CPU, so users have to install the Catalyst driver for AMD cards, which includes the runtime for the GPU component. OpenCL employs roughly the same terminology as CUDA to construct the programming architecture. 183 For example, parallel algorithms are performed by kernels. The smallest working unit of OpenCL is known as a work item, which is conceptually equivalent to a thread of execution in CUDA architecture. Memory allocation has to be done on the GPU, and the results generated by the GPU have to be copied back to the host. Also, users have to free the GPU memory after use.
Different collections of NVIDIA GPU cards can help perform computations of varying complexity. At the supercomputing level, the latest Tesla GPUs to date have been installed in many famous computing clusters such as Titan, the top cluster as of November 2012, 184 which is used for materials science, nuclear engineering and climate research. The relatively cost-friendly Titan series of NVIDIA GPUs can reach a Tesla-class computation capability of about 1.3 TFLOPS for double-precision tasks, 185 at the expense of using non-ECC RAM, i.e., memory without error-correction-code functionality. For single-precision computation, the GTX series, primarily designed for the video gaming market, can already provide a modest speedup compared to single-core CPU computation.
AMD, as the competitor of NVIDIA, has its series providing comparable computational power. The R series (formerly known as the Radeon series), typically used for video gaming, targets the NVIDIA GTX series. The FirePro series is the collection for professional computation tasks, which can provide over five TFLOPS (W9100 model) for single-precision calculations.
With the rapid growth of the number of transistors assembled in a GPU, which outpaces that of a CPU, the bottleneck of GPU computation speed lies instead in the bandwidth provided by the PCI Express slots used to transfer data between the host and the device. 186 Researchers of MD simulations worldwide have suggested algorithms to utilize the GPU architecture and its parallel computation capability, which is not the strength of the CPU (see the literature for some of these examples). Besides, open-source GPU software packages such as OpenMM 193 and its later version 194 are also available for public download. Some of the aforementioned examples are discussed in this review paper.
Using CUDA, Anderson et al. 188 performed a comprehensive study of MD simulation on a single GPU, not by adapting the CPU code but by completely rewriting a set of code optimized for GPU cards. Particle data are stored in the global memory of the device, accessible by all CUDA cores, from which the data are loaded to the texture memory where the summing of forces is undertaken. In order to utilize the strength of the GPU in matrix computation, the neighbor list used to calculate the short-range force is organized in a matrix form instead of the conventional linked-list format. Each thread sums the pairwise forces due to the atoms in the neighbor list concurrently. However, the authors did not apply Newton's third law to find the interatomic force, out of concern that it would incur additional memory latency during the read-modify-write process. To reduce the time required to transfer data between the CPU and GPU, integration of the equations of motion is performed on the GPU instead of the CPU, soon after the interatomic forces are determined on the GPU. In simulating a large-scale system of over 1 million particles, the authors demonstrated a speedup factor of about 60 in finding the LJ force, and about 30 in generating the neighbor list.
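The matrix-form neighbor list described here can be sketched as a data layout: row i holds the neighbor indices of atom i, padded to a fixed width so that every thread executes the same loop length. The code below is a plain-Python illustration of the layout only, not Anderson et al.'s GPU code; note that both (i, j) and (j, i) are stored, mirroring the choice not to exploit Newton's third law.

```python
import numpy as np

def neighbor_matrix(pos, r_cut, max_neigh):
    """Neighbor list in the fixed-width matrix layout favored on GPUs.

    Row i lists the neighbors of atom i, padded with -1 so each row
    (and hence each GPU thread) has the same loop length."""
    n = len(pos)
    mat = -np.ones((n, max_neigh), dtype=int)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(pos[i] - pos[j]) < r_cut:
                mat[i, counts[i]] = j   # (i, j) and (j, i) both stored
                counts[i] += 1
    return mat, counts

pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [5.0, 0.0, 0.0]])
mat, counts = neighbor_matrix(pos, 2.0, 4)
print(mat[0], counts.tolist())  # atom 0 sees only atom 1; atom 2 is isolated
```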
Here we briefly review another implementation of GPU computing using OpenCL, executable on AMD cards as well as NVIDIA cards. Brown et al. 192 established an implementation scheme for LAMMPS, 177 a famous object-oriented MD simulation package. In essence, the authors added the OpenCL code simply as a derived class of the original code used without GPU acceleration.
The authors performed a parallel decomposition of the MD processes, including finding the neighbor list, calculating the LJ force and the Gay-Berne (GB) force, 195 and integrating the equations of motion. The idea of the acceleration mainly lies in operating the neighbor lists on the GPU rather than on the CPU. The neighbor list is an improved version that takes advantage of the linked cell list, and hence reduces the number of particles to check in each time step. The authors also balanced the load by overlapping the short-range computations on the GPU with the long-range counterpart on the CPU, such that calculations are undertaken on both host and device concurrently. The LJ force, requiring low arithmetic intensity, and the Gay-Berne force, a modified LJ force requiring high arithmetic intensity, were employed to demonstrate the system speedup of the OpenCL implementation. It was found that the speedup in finding the LJ force could reach between 2.9 and 7.8, with a longer cutoff distance returning a higher speedup. The speedup in finding the Gay-Berne force could even reach 11.2, because more arithmetically intensive operations performed in parallel can hide the memory latency of host-device transfers during the complex operations involved.
In order to promote the application of OpenCL in the scientific computing community, some source code mapping tools have been formulated to facilitate code translation from CUDA to OpenCL. [196][197][198] Though such a translation may appear trivial in view of the similar architectures of CUDA and OpenCL, challenges exist for a robust porting between them. 199 For example, separate compilation of CUDA source files is possible, yet it is quite difficult to link the code translated into OpenCL format due to the required reorganization of the initialization code throughout all the source files. Besides, source code calling the CUDA libraries, which are not included explicitly in the source files, is hard to translate to OpenCL directly.
It has been found that the FPGA can still be a highly competitive choice for MD acceleration, given that the hardware configuration and the pipelines are carefully designed. 200 However, we can see from this section of the review that the GPU is currently more favorable than the FPGA for large-scale computations, in view of its accessibility and the skills involved. Also, at the time of preparing this paper, CUDA is still more popular than OpenCL in MD simulations. It is expected that, with increasingly advanced GPU computation capability, developers can further contribute to both programming streams in MD simulations, thereby providing more choices of programming tools to the scientific community.
Summary
This review paper starts with a brief discussion of statistical mechanics, which forms the basis of the viability of the MD formalism. A number of practical implementations of motion integration then follow. Some common thermostats have been mentioned for maintaining the ensemble temperature. SLD for ferromagnetic materials and TI have been investigated, which act as supplements to the conventional MD approach. The interatomic potentials for iron have evolved from the FS formalism to the embedded atom method, followed by the magnetic iron potential. With this development, a number of iron potentials, in pure form and with impurities, have been formulated. Examples of applying appropriate interatomic potentials for iron to simulate the time evolution of atoms have been discussed, which are mainly related to the safety of nuclear power plants. They demonstrate that the considerations are further refined when new potentials are adopted, so as to reflect increasingly complicated defect conditions. Reconfigurable computing and GPUs are common hardware components for MD simulation, yet the former is less employed in the simulation of metallic materials. CUDA is currently more developed than OpenCL in terms of MD simulation, in view of their current trends of application in the scientific community.
Nature of unconventional pairing in the kagome superconductors AV$_3$Sb$_5$
The recent discovery of AV$_3$Sb$_5$ (A=K,Rb,Cs) has uncovered an intriguing arena for exotic Fermi surface instabilities in a kagome metal. Among them, superconductivity is found in the vicinity of multiple van Hove singularities, exhibiting indications of unconventional pairing. We show that the sublattice interference mechanism is central to understanding the formation of superconductivity in a kagome metal. Starting from an appropriately chosen minimal tight-binding model with multiple van Hove singularities close to the Fermi level for AV$_3$Sb$_5$, we provide a random phase approximation analysis of superconducting instabilities. Non-local Coulomb repulsion, the sublattice profile of the van Hove bands, and the bare interaction strength turn out to be the crucial parameters that determine the preferred pairing symmetry. Implications for potentially topological surface states are discussed, along with a proposal for additional measurements to pin down the nature of superconductivity in AV$_3$Sb$_5$.
Introduction.
The kagome lattice has become a paradigmatic setting for exotic quantum phenomena of electronic matter. This particularly applies to quantum magnetism, where the large geometric spin frustration inherent to the corner-sharing triangles promotes the emergence of extraordinary quantum phases [1]. From an itinerant limit, electronic kagome bands are likewise particular, as they feature a flat band, Dirac cones, and van Hove singularities at different fillings. The kagome flat band suggests itself as a natural host for the realization of ferromagnetism [2,3] or possibly non-trivial topology [4][5][6][7], while the kagome Dirac cones have been proposed as a promising way to accomplish strongly correlated Dirac fermions [8] and turbulent hydrodynamic electronic flow [9]. The kagome lattice at van Hove filling has been shown to be preeminently suited for the emergence of exotic Fermi surface instabilities [10][11][12][13]. Among others, this involves charge and spin density-wave orders with finite relative angular momentum [14]. Moreover, the kagome Hubbard model was first predicted to yield degenerate nematic instabilities which can break point-group and time-reversal symmetry simultaneously [13], a prediction which has recently regained attention in the context of twisted bilayer graphene [15].
The recent discovery of AV_3Sb_5 [16] provides an instance of kagome metals tuned to the vicinity of multiple van Hove singularities. What further makes them unique is the combination of metallicity, strong two-dimensional electronic character, and significant electronic correlations derived from the d-orbital structure of the Vanadium kagome net. KV_3Sb_5 was discovered to be a kagome superconductor with T_c = 0.93 K [17], along with RbV_3Sb_5 (T_c = 0.92 K) [18] and CsV_3Sb_5 (T_c = 2.5 K) [19,20], where the latter was shown to rise up to T_c = 8 K under 2 GPa of hydrostatic pressure [21][22][23]. While experimental exploration is still in full swing, certain tendencies regarding the superconducting phase are starting to crystallize. The observed charge density wave (CDW) order [24], interpreted as a potential parent state for unconventional superconducting order [13,[25][26][27], exhibits indications of an electronically driven formation [28]. Specific-heat measurements suggest an at least strongly anisotropic gap [17]. While a significant residual term in the thermal conductivity suggests a nodal gap [29], penetration depth measurements point to a nodeless gap [30]. The dome shape suggests unconventional superconductivity, and a large value of 2Δ/k_B T_c hints at a strong-coupling superconductor [31].
In this Letter, we formulate a theory of unconventional superconductivity in AV_3Sb_5. In a first step, we develop an effective tight-binding model suitable for the analysis of pairing instabilities. In order to retain the necessary complexity of multiple van Hove singularities in the vicinity of the Fermi level in AV_3Sb_5, we distill a six-band minimal model. In a second step, we specify the interaction Hamiltonian. Due to matrix elements implied by the sublattice interference mechanism [11], which we review below, it is essential to take non-local Coulomb repulsion into consideration. Over a large range of coupling strengths, we find dominant f-wave triplet superconducting order, succeeded by d-wave singlet pairing for stronger coupling. Throughout the phase diagram, the p-wave order stays subdominant but competitive. Aside from this general trend, the detailed competition between the different orders is crucially influenced by the location of the Fermi level with respect to the multiple van Hove singularities and the nearest-neighbor (NN) Coulomb repulsion.
Sublattice decoration of kagome van Hove points. As opposed to related hexagonal van Hove singularities, such as for the bipartite honeycomb lattice, the kagome bands can host two different types of van Hove singularities, which we label as sublattice mixing (m-type) and sublattice pure (p-type), characterized by odd and even parity at the M point, respectively. This is illustrated in Fig. 1 for the minimal kagome tight-binding model with three distinct sublattice sites located on the 3f Wyckoff positions of the P6/mmm space group. The upper van Hove singularity (E = 0) is of p-type, since the Fermi-level eigenstates in the vicinity of the three M points are localized on mutually different sublattices (left inset). By contrast, the lower van Hove filling (E = −2t) has mixed sublattice character and thus is of m-type, with the eigenstates equally distributed over mutually different sets of two sublattices for each M point (right inset). These distinct sublattice decorations have a strong impact on the nesting properties (see Sec. II of the supplementary materials (SM) [32]). Since p-type van Hove points do not couple to each other via local interactions, the inclusion of at least NN Coulomb repulsion is essential to adequately model interacting kagome metals close to p-type van Hove filling [11,33].
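The quoted van Hove energies can be checked numerically. The following is a minimal sketch of the nearest-neighbor kagome tight-binding model (our illustration, not code from the paper), reproducing the flat band at E = +2t and the two van Hove energies E = 0 and E = −2t at the M point:

```python
# Minimal kagome NN tight-binding sketch (t > 0): eigenvalues at high-symmetry
# points illustrate the flat band (+2t) and the two van Hove energies (0, -2t).
import numpy as np

t = 1.0
# sublattice-connecting vectors, as in the text
a1 = np.array([np.sqrt(3) / 2, 0.5])
a2 = np.array([np.sqrt(3) / 2, -0.5])
a3 = np.array([0.0, -1.0])

def H(k):
    """3x3 Bloch Hamiltonian with structure factors Phi_ij(k) = 1 + exp(-2i k.a)."""
    phi_ab = 1 + np.exp(-2j * k @ a1)
    phi_ac = 1 + np.exp(-2j * k @ a2)
    phi_bc = 1 + np.exp(-2j * k @ a3)
    return -t * np.array([[0, phi_ab, phi_ac],
                          [np.conj(phi_ab), 0, phi_bc],
                          [np.conj(phi_ac), np.conj(phi_bc), 0]])

gamma = np.array([0.0, 0.0])
m = np.array([np.pi / np.sqrt(3), 0.0])  # one of the three M points

print(np.linalg.eigvalsh(H(gamma)))  # [-4., 2., 2.]: band bottom + flat band
print(np.linalg.eigvalsh(H(m)))      # [-2., 0., 2.]: m-type, p-type, flat band
```

At M, one structure factor survives while the other two vanish, which is the momentum-space face of the sublattice decoration discussed above.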
Effective model. The ab-initio band structure of AV_3Sb_5 matches well with ARPES measurements below the CDW transition temperature, even though the corresponding density functional theory (DFT) calculations are performed neglecting the star-of-David-type structural distortion [19,34]. Due to the multiple sublattices and the large number of contributing orbitals from both V and Sb in the vicinity of the Fermi level, a reduction to an effective model is a prerequisite to any analysis of many-body instabilities.
The layered structure of AV_3Sb_5, together with the large transport anisotropy ρ_c/ρ_ab ≈ 600 [19] (for CsV_3Sb_5), allows us to restrict ourselves to the two-dimensional V-Sb kagome plane. Analyzing the Fermi level at k_z = 0 by means of density functional theory, we find three distinct Fermi surfaces in AV_3Sb_5: (i) a pocket composed of Vanadium d_{xy}, d_{x^2−y^2}, d_{z^2} orbitals in proximity to a p-type van Hove singularity, (ii) two additional pockets composed of Vanadium d_{xz}, d_{yz} orbitals in proximity to another p-type and an m-type van Hove singularity above and below the Fermi level, respectively (Fig. 2), and (iii) a circular pocket around Γ formed by Antimony p_z orbitals. Note that (i) and (ii) do not hybridize due to opposite M_z eigenvalues, and the symmetry-allowed hybridization of (ii) and (iii) is parametrically weak. These features are not particularly sensitive to spin-orbit coupling, which is hence not considered further in the following.
For the effective model, we restrict ourselves to the Fermi pockets (ii) for three reasons. First, the pockets in (ii) carry the dominant density of states at the Fermi level. Second, we preserve the complexity of multiple van Hove singularities of p-type and m-type in our minimal model. Third, upon comparison to the ab-initio band structure, our minimal model manages to correctly capture all irreducible band representations at the high-symmetry points in the Brillouin zone. The constituting d_{xz/yz} orbitals belong to the B_{2g/3g} irreducible representations of the site symmetry group D_{2h} for the 3f Wyckoff positions [Fig. 2(b)], forming a set of bands with opposite mirror eigenvalues along the Γ-M line. These bands give rise to a mirror-symmetry-protected Dirac cone on the Γ-M line and hence an upper and a lower van Hove filling with opposite sublattice parity (Fig. 2). Employing the D_{6h} point group symmetry, our corresponding effective six-band Hamiltonian can then be derived, with sublattice index i = A, B, C and orbital index α = xz, yz. The crystal field splitting is denoted by ε_α, and the operator c†_{kiα} (c_{kiα}) creates (annihilates) an electron with momentum k on sublattice i in orbital α. The lattice structure factors read Φ_AB(k) = 1 + e^{−2ik·a_1}, Φ_BC(k) = 1 + e^{−2ik·a_3}, and Φ_AC(k) = 1 + e^{−2ik·a_2}, obeying the hermiticity condition Φ_ji(k) = Φ*_ij(k), where the sublattice-connecting vectors are denoted by a_{1,2} = (√3/2, ±1/2)^T and a_3 = (0, −1)^T. The second term represents the intraorbital NN hoppings on the kagome lattice with two distinct amplitudes t_α, while the third term describes the NN inter-orbital hopping amplitude t′. The non-trivial transformation properties of the d_{xz} and d_{yz} orbitals under the site-symmetry group result in a non-trivial sign structure for the third term, described by s_AC = s_CB = −s_AB and s_ij = −s_ji. We approximately fit our model to the ab-initio band structure (see Sec. IV of SM) and obtain the parameters t_xz = 1 eV, t_yz = 0.5 eV, t′ = 0.002 eV, ε_xz = 2.182 eV, and ε_yz = −0.055 eV. The corresponding band structure and Fermi surfaces are shown in Fig. 2.
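For concreteness, the prose description above can be assembled into code. This is our reading of the model, not the paper's equation (which is not reproduced in this excerpt); in particular, the placement of the structure factors and signs in the inter-orbital term is our assumption:

```python
# Sketch of a six-band Bloch Hamiltonian assembled from the description above
# (our assumption for the exact term placement): two d_xz/d_yz kagome copies
# with crystal fields eps_alpha, intraorbital hoppings t_alpha, and a
# sign-structured interorbital hopping t'.
import numpy as np

SUB = ["A", "B", "C"]
VEC = {("A", "B"): np.array([np.sqrt(3) / 2, 0.5]),    # a1
       ("A", "C"): np.array([np.sqrt(3) / 2, -0.5]),   # a2
       ("B", "C"): np.array([0.0, -1.0])}              # a3
SGN = {("A", "B"): -1.0, ("A", "C"): 1.0, ("C", "B"): 1.0}  # s_AC = s_CB = -s_AB

T_ORB = {"xz": 1.0, "yz": 0.5}       # t_alpha (eV), fit quoted in the text
T_PRIME = 0.002                      # t' (eV)
EPS = {"xz": 2.182, "yz": -0.055}    # crystal fields (eV)

def phi(k, i, j):
    """Structure factor Phi_ij(k); hermiticity gives Phi_ji = conj(Phi_ij)."""
    if (i, j) in VEC:
        return 1 + np.exp(-2j * k @ VEC[(i, j)])
    return np.conj(1 + np.exp(-2j * k @ VEC[(j, i)]))

def sgn(i, j):
    return SGN[(i, j)] if (i, j) in SGN else -SGN[(j, i)]  # s_ij = -s_ji

def H6(k):
    h = np.zeros((6, 6), dtype=complex)
    for o, orb in enumerate(["xz", "yz"]):
        for i in range(3):
            h[3 * o + i, 3 * o + i] = EPS[orb]
            for j in range(i + 1, 3):
                h[3 * o + i, 3 * o + j] = -T_ORB[orb] * phi(k, SUB[i], SUB[j])
    for i in range(3):  # interorbital xz-yz block with sign structure
        for j in range(3):
            if i != j:
                h[i, 3 + j] = T_PRIME * sgn(SUB[i], SUB[j]) * phi(k, SUB[i], SUB[j])
    return h + h.conj().T - np.diag(h.diagonal())  # hermitize
```

By construction the Hamiltonian is Hermitian and yields six real bands; diagonalizing along Γ-M would locate the mirror-protected Dirac crossing discussed in the text.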
RPA analysis. For the electronic interactions, we consider multi-orbital density-density-type interactions up to NNs, where n_{liα} = n_{liα↑} + n_{liα↓} and l, l′ are unit-cell indices. U, U′, J, and J′ denote the onsite intraorbital repulsion, inter-orbital repulsion, Hund's coupling, and pair-hopping terms, respectively. V_{αβ} denotes the repulsion between NN sites. In the following we adopt the parameterization U = U′ + 2J and J′ = J with J = 0.1 U and V_{αβ} = 0.3 U for all α, β, consistent with our ab-initio cRPA estimates for a target manifold comprising V-3d and Sb-5p orbitals [35]. An extensive ab-initio study of interactions and their dependence on the effective low-energy model will be presented elsewhere. The inset of Fig. 3 displays the leading eigenvalue of the bare susceptibility χ_0(q) along high-symmetry lines. It is mainly attributed to the d_{yz} orbital and features three prominent peaks. The largest two are located proximate to the Γ point, while the peak close to M is suppressed through sublattice interference. Including onsite and NN interactions at the RPA level, these peaks are significantly enhanced in the spin as well as the charge channel. Note that, indeed, we find the charge susceptibility on the verge of diverging around the M point for strong NN repulsion, hinting at an incipient CDW instability.
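The RPA enhancement logic can be illustrated with a single-band scalar caricature (our sketch, not the multi-orbital matrix computation of the paper): the spin channel is enhanced as χ_0/(1 − Uχ_0) and diverges as Uχ_0 → 1, signalling an instability, while the charge channel is suppressed by an onsite U.

```python
# Toy scalar RPA (illustrative only): spin and charge channels of a
# single-band susceptibility, showing the Stoner-like enhancement of the
# spin channel as U*chi0 approaches 1.
def rpa(chi0, U):
    chi_spin = chi0 / (1 - U * chi0)    # enhanced, diverges at U*chi0 = 1
    chi_charge = chi0 / (1 + U * chi0)  # suppressed by onsite repulsion
    return chi_spin, chi_charge

chi0 = 0.8
for U in (0.5, 1.0, 1.2):
    s, c = rpa(chi0, U)
    print(f"U={U}: chi_spin={s:.3f}, chi_charge={c:.3f}")
```

In the multi-orbital problem, χ_0 and the interactions become matrices in orbital and sublattice space, but the divergence criterion plays the same role.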
Below the critical interaction, superconductivity emerges, triggered by charge and spin fluctuations [36][37][38]. The obtained pairing eigenvalues as a function of U are displayed in Fig. 3. For U < 0.54 eV, pairing on the p-type Fermi sheet from the p-type van Hove band with B_1u (f_{x^3−3xy^2}-wave) symmetry is favored, while the E_1u (p-wave) and E_2g (d-wave) channels remain subdominant. We further analyze the harmonic fingerprint of the obtained pairings. The f_{x^3−3xy^2}-wave pairing is dominated by the sublattice-pure d_{yz} Fermi surface, and the corresponding gap function in k-space is shown in Fig. 4(a): there are line nodes along Γ-K, and the superconducting gap changes sign under 60° rotation. The corresponding real-space pairing is displayed in Fig. 4(b); it represents a spin-triplet, sublattice-triplet pairing between d_{yz} orbitals on next-nearest-neighbor (NNN) sites. This pairing is promoted by the effective interaction between the NNN sites arising at second order from the NN repulsion, an effect which is robust to including diagrams beyond the RPA approximation, as one would do in a functional renormalization group (fRG) study [40,41]. Counter-intuitively, we find that both the B_2u and E_2g states, favored at larger V, are dominated by the sublattice-mixing d_{xz} Fermi surface and attributed to pairing between d_{xz} orbitals on NN sites. The B_2u pairing, in contrast to the B_1u pairing, possesses line nodes along the Γ-M direction. We expect the twofold degenerate E_2g pairing instability to form a d + id state below T_c in order to maximize condensation energy, which spontaneously breaks time-reversal symmetry. Notably, as a particular feature of the kagome lattice, pairings between NN sites are promoted by the interorbital NN repulsion: although there is a direct repulsion between NN sublattices, the second-order contribution via the other sublattices can be attractive.
Once it overcomes the direct repulsion term, the effective interaction between NN sites becomes attractive and can promote NN pairing. Furthermore, note that the superconducting gaps in the obtained states for our minimal model are dominant on either the d_{yz} or the d_{xz} Fermi surface, which can be attributed to the assumed weak interorbital hopping.
Topological properties of the pairing states. Our minimal-model analysis is dominated by an f-wave state at weak coupling. Combined with the observation that the band renormalization in ARPES appears moderate, f-wave order could be a favored candidate for the nature of pairing in AV_3Sb_5. For time-reversal-invariant superconductors, the topological criterion for zero-energy Andreev bound states on edges is determined by winding numbers [42,43]. For both f-wave pairing states emerging in our analysis, each node carries a winding number of +1 or −1.
If we impose open boundary conditions where the projections of nodes with opposite winding number do not overlap, a zero-energy flat band connecting the projections of the nodes is created. For illustration, we present the surface spectrum of the f_{x^3−3xy^2}-wave state with open boundary conditions along the x direction in Fig. 4(c). The corresponding local density of states features a sharp zero-bias peak, shown in Fig. 4(d), which could be observed at corresponding step edges in STM measurements. A similar analysis can likewise be performed for the d-wave and p-wave states. Chiral superconductors, which are likely to result from either d-wave or p-wave instabilities on hexagonal lattices, are potential hosts of Majorana zero modes in their vortex cores [44,45].
Experimental signatures. The observation of a finite κ/T for T → 0 in thermal conductance measurements [29], as well as the typical V-shaped gap in STM measurements [31], have provided supporting experimental evidence for a nodal gap in AV_3Sb_5, which would be in line with the f-wave pairing we obtain over a large parameter regime of our minimal model. An f-wave state would have additional, clear experimental signatures. First, since the f-wave state pairs electrons in the spin-triplet channel, we expect the spin susceptibility in the superconducting phase to stay constant upon lowering the temperature, which should be seen in Knight-shift measurements. A further signature of spin-triplet pairing is often a high critical field. However, recent critical field measurements for both in-plane [46] and out-of-plane [29] fields indicate orbital limiting at rather low fields, such that the critical fields cannot distinguish the Cooper-pair spin structure. Finally, many thermodynamic quantities make it possible to identify the nodal structure through the temperature scaling of their low-temperature behavior. With the f-wave order parameter being the only one with symmetry-imposed nodes, the low-energy excitations due to these nodes can directly probe this order without requiring phase information. The strongest evidence to date for the f-wave state comes indeed from the thermal conductance measurements of Ref. [29]. In order to strengthen this conclusion, other thermodynamic probes, such as the electronic specific heat, penetration depth, or 1/T_1 in NMR, should show a square, linear, and cubic temperature dependence [47], respectively. Note, however, that disorder washes out these low-energy signatures. Chiral p-wave and d-wave superconductivity would be in line with a possibly highly anisotropic, but nodeless, gap.
Furthermore, concomitant signatures of time-reversal symmetry breaking could be revealed through Kerr measurements, µSR below T_c, or even new experimental approaches such as the detection of clapping modes [48]. Most importantly, for the specific scenario of multiple van Hove singularities, the double-dome feature of the superconducting phase can tentatively be understood from the evolution of the individual van Hove bands as a function of pressure [21,23].
Our work shows the unique principles for unconventional pairing in kagome metals, which promises to unlock a whole new paradigm of electronically mediated superconductivity.
No immediate attentional bias towards or choice bias for male secondary sexual characteristics in Bornean orang-utans (Pongo pygmaeus)
Primate faces provide information about a range of variant and invariant traits, including some that are relevant for mate choice. For example, faces of males may convey information about their health or genetic quality through symmetry or facial masculinity. Because perceiving and processing such information may have bearing on the reproductive success of an individual, cognitive systems are expected to be sensitive to facial cues of mate quality. However, few studies have investigated this topic in non-human primate species. Orang-utans are an interesting species to test mate-relevant cognitive biases, because they are characterised by male bimaturism: some adult males are fully developed and bear conspicuous flanges on the side of their face, while other males look relatively similar to females. Here, we describe two non-invasive computerised experiments with Bornean orang-utans (Pongo pygmaeus), testing (i) immediate attention towards large flanges and symmetrical faces using a dot-probe task (N = 3 individuals; 2F) and (ii) choice bias for pictures of flanged males over unflanged males using a preference test (N = 6 individuals; 4F). In contrast with our expectations, we found no immediate attentional bias towards either large flanges or symmetrical faces. In addition, individuals did not show a choice bias for stimuli of flanged males. We did find exploratory evidence for a colour bias and energy efficiency trade-offs in the preference task. We discuss our null results and exploratory results in the context of the evolutionary history of Bornean orang-utans, and provide suggestions for a more biocentric approach to the study of orang-utan cognition.
respectively. Given that the presence of flanges or facial symmetry may be a signal of good genes, we predicted for the dot-probe task that individuals should respond faster on trials where the dot replaced stimuli that depicted males with large flanges or males with symmetrical faces than when the dot replaced stimuli that depicted males with small or no flanges or asymmetrical faces. For the choice task, we expected individuals to more often choose the coloured dot that was associated with pictures of flanged males over the coloured dot that was associated with unflanged males.
Furthermore, for the preference task, we retrospectively decided to explore (i) whether individuals had a colour bias, (ii) whether individuals made choices that might reflect conservation of energy, and (iii) whether individuals showed temporal clustering in their choices, i.e., whether individuals switch between selecting flanged and unflanged stimuli every other trial or whether their choices are more clustered (e.g., multiple choices for one type of stimulus in a row). We investigated colour bias because evolutionary theories of colour vision have suggested that the ability to see red co-evolved with frugivory 45. With regard to energy conservation, Bornean orang-utans are characterised by extremely low rates of energy use 46, potentially an adaptation to habitats with long periods of fruit scarcity resulting in negative energy balance 47,48. Potentially, such energy conservation mechanisms could also influence their responses during the task. Lastly, we also investigated temporal clustering, because flanged males are not only preferred mating partners 27, but might also pose a threat (e.g., risk of infanticide 23) or be perceived as threatening 49. Consequently, individuals may show temporal clustering in their choices during our task, by either opting for a less arousing picture of an unflanged male after seeing a flanged male stimulus (i.e., more switching, temporal dispersion) or by mostly sampling flanged male stimuli until arousal reaches a certain threshold and individuals switch to unflanged stimuli instead (i.e., fewer switches, temporal clustering). Thus, because it could provide an opportunity to learn more about the relationship between orang-utans' socioecology and their cognitive biases, we decided to explore these three topics in addition to our main questions.
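To give a concrete sense of how a choice bias could be quantified against chance, the sketch below runs an exact two-sided binomial test. This is our illustration, not necessarily the authors' analysis; the hypothetical trial count (6 sessions × 16 choice trials) follows the design described in the Methods.

```python
# Illustrative sketch (not necessarily the authors' analysis): an exact
# two-sided binomial test of whether an individual selects the dot associated
# with flanged stimuli more often than chance (p = 0.5) across choice trials.
from math import comb

def binom_pmf(k, n, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_binom_p(k, n, p=0.5):
    """Sum the probabilities of all outcomes no more likely than observing k."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk * (1 + 1e-9))

# hypothetical example: 60 flanged choices out of 96 choice trials
print(round(two_sided_binom_p(60, 96), 4))
```

A mixed-model analysis over sessions and subjects would be the more rigorous route; the binomial test simply makes the null hypothesis of "no choice bias" explicit.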
Subjects and housing
The animals that participated in this study were part of a population of 9 Bornean orang-utans (Pongo pygmaeus) at Apenheul Primate Park, The Netherlands (Table 1). They were kept in a fission-fusion housing system consisting of 4 enclosures, meaning that they were in small subgroups with changing composition over time, in order to mimic the natural social system of the species. Some individuals never shared enclosures to avoid conflict (e.g., the two adult males). Each enclosure consisted of an inside part and an outside part. The orang-utans were fed multiple times a day, and had ad libitum access to water. Most of the orang-utans had previously been exposed to touchscreens for a previous dot-probe study 38, but only two of those orang-utans (Sandy & Samboja) eventually participated in this dot-probe study.
With regard to participation in the experiments described here, three individuals participated in the dot-probe experiments (both the flange size and the symmetry version), while six individuals participated in the preference test. Table 1 indicates which individuals participated in which experiments.
Apparatus
Touchscreen experiments were conducted via E-Prime 2.0 on a TFT-19-OF1 Infrared touchscreen (19″, 1280 × 1024 pixels). The touchscreen setup was encased in a custom-made housing which was incorporated in one of the orang-utans' night enclosures. This night enclosure could be made accessible from two of the main enclosures by the animal caretakers. The researchers controlled the sessions on a desktop computer connected to the touchscreen setup and could keep track of the orang-utans' responses on the touchscreen through a monitor that duplicated the touchscreen view. Additionally, the researchers had access to a livestream from a camera that was built into the enclosure, allowing them to observe the participant. Correct responses were rewarded with a sunflower seed on a 100% fixed reinforcement ratio. For most individuals, the rewards were delivered by a custom-built autofeeder linked to the desktop computer, which dropped a reward into a PVC chute. However, Kawan and Baju did not habituate properly to the presence of the feeder, and kept trying to push it over with sticks. Therefore, we decided to reward them manually. The researcher was positioned behind the setup, which prevented visual contact between the orang-utans and the researchers.
www.nature.com/scientificreports/
Dot-probe task
For the dot-probe task with flange size manipulation, we collected 72 images depicting front-facing Bornean or Sumatran orang-utan males with flanges. The images were collected through image hosting websites and social media groups. Due to the origin of the pictures, and the often-lacking information about the depicted individuals, we cannot be entirely certain that some stimulus combinations depict the same individuals. Furthermore, we often could not find a clear mention of the species depicted, which is why we consider the stimulus set a combination of Bornean and Sumatran orang-utan males. We expected the species depicted to have little to no influence on our results because (1) facial features of Sumatran and Bornean flanged orang-utans are relatively similar 50, (2) orang-utans are known to hybridize in captivity 51 and (3) each stimulus would serve as its own control (i.e., we would present two modified stimuli based on the same face), meaning that no combinations of Bornean and Sumatran orang-utans would be shown. We edited the stimuli in GIMP (v2.10.32). First, we cropped the faces. Second, we consecutively selected the flanges on the left and right side of the face, respectively. We defined the width of the flange as the distance between the horizontally most peripheral point of the face and the most peripheral point of either the eye region or the beard. Hereafter, we increased the width of the flanges (measured in pixels) by 15 percent to obtain the stimulus with enlarged flanges, and we decreased the width by 15 percent to obtain the stimulus with reduced flanges. We chose 15 percent to make sure that the stimuli would not become abnormal in terms of flange size. In total, this resulted in 72 combinations of enlarged and reduced stimuli.
Using the same 72 images, we created the stimulus set for the dot-probe with symmetry manipulation. Here, we could only include the images where the faces of the orang-utans appeared to be nearly exactly frontally facing. To determine this, we visually inspected whether the eyes and nostrils were at a similar distance from the vertical midline of the face and whether they were of approximately similar size. This was the case for 49 of the images. Next, we created symmetrical versions of the face by mirroring either the left or the right hemisphere at the vertical midline of the face. Thus, from every stimulus, we obtained two symmetrized versions: one based on the left hemisphere and one based on the right hemisphere. Importantly, in some stimuli we employed an extra step to remove cross-eyedness that resulted from the mirroring. To this effect, we selected one of the eyes and mirrored it, resulting in a more congruent gaze direction of the eyes. Furthermore, some of the mirrored stimuli were characterised by abnormal facial shape, which is a well-known issue in symmetrized stimuli 10. If this was the case, we excluded the stimulus. In total, we obtained 80 stimulus pairs consisting of one symmetrized face and the original face showing natural variation in symmetry.
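The hemisphere-mirroring step can be expressed programmatically. The plain-Python sketch below is our illustration (the authors did this manually in GIMP); it treats an image as a 2D list of pixel values and mirrors the chosen half about the vertical midline:

```python
# Our illustration of the mirroring step (the authors used GIMP manually).
# An "image" here is a list of rows of pixel values; width is assumed even.
def symmetrize(pixels, hemisphere="left"):
    """Return a left-right symmetric image built from one hemisphere."""
    out = []
    for row in pixels:
        if hemisphere == "left":
            half = row[: len(row) // 2]          # keep left half as-is
        else:
            half = row[len(row) // 2 :][::-1]    # right half, mirrored to the left
        out.append(half + half[::-1])            # half plus its mirror image
    return out

face = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
print(symmetrize(face, "left"))   # [[1, 2, 2, 1], [5, 6, 6, 5]]
print(symmetrize(face, "right"))  # [[4, 3, 3, 4], [8, 7, 7, 8]]
```

Every output row equals its own reverse, which is exactly the perfect-symmetry property the symmetrized stimuli are meant to have.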
Preference task
For the preference task, we used 104 stimuli (52 flanged, 52 unflanged) of Bornean orang-utans. The stimuli were collected from the Internet, mainly from release reports published by Bornean orang-utan reintroduction programs. These were supplemented with portrait pictures taken of semi-wild orang-utans and pictures of zoo-housed orang-utans within the orang-utan EEP. All of the stimuli depict front-facing Bornean orang-utan males. We cropped their faces using GIMP (v2.10.32) and pasted the cropped faces on a light-grey background (#808080), resulting in stimuli with an 18:13 aspect ratio. From both the flanged and the unflanged stimuli, we randomly selected four stimuli (eight in total) to use as stimuli in the forced-trial phase of the experiment. The remaining 48 stimuli of each category were randomly distributed across three sessions.
Dot-probe task
The procedure for the dot-probe task was almost identical to the one described in Laméris et al. 38. In the five months prior to the experiment, all individuals were allowed to participate in training sessions. For training, we followed the protocol previously used to train bonobos (Pan paniscus) and Bornean orang-utans on the dot-probe task 38,52. We elaborate on the different steps and the individual trajectories of the training period in the Supplementary Materials (Supplementary Methods & Supplementary Table 1). Eventually, three individuals fulfilled the training criteria. They participated in the full task.
Regarding the task design, a trial consisted of five phases (Fig. 1). First, a 200 × 200-pixel black dot appeared at a random position on the screen and had to be clicked. We added this step to avoid anticipatory responses. Second, the dot appeared in the lower, middle part of the screen. Touching this dot activated the presentation of two stimuli (500 × 375 px) that were vertically positioned in the middle of the screen and horizontally equidistant from the center of the screen (20% vs. 80%). After 300 ms, the stimuli disappeared and only one of the stimuli was replaced by a dot (the probe) that remained on the screen until touched by the subject. Touching the dot resulted in a reward (sunflower seed). After an inter-trial interval of 3 s, a new trial started. The background of the screen was white during all steps of the trial.
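For orientation, the dot-probe logic boils down to a simple reaction-time contrast (a generic sketch of the paradigm, not the authors' statistical model): attention toward a stimulus class is inferred when probes replacing that class are answered faster than probes replacing the comparison class.

```python
# Generic dot-probe bias score (illustrative; not the authors' model):
# mean RT when the probe replaces the control image minus mean RT when it
# replaces the target image. Positive values mean faster responses to the
# target, i.e., an attentional bias toward it.
def bias_score(rts_probe_at_control_ms, rts_probe_at_target_ms):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_probe_at_control_ms) - mean(rts_probe_at_target_ms)

# hypothetical RTs (ms): probes at reduced-flange vs enlarged-flange stimuli
print(bias_score([712, 698, 730], [688, 702, 694]))
```

A positive score here would indicate a bias toward enlarged flanges; in the actual study, trial-level reaction times were modelled rather than simple means.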
Trials were presented in randomized order. For the flange size dot-probe, each individual participated in 6 sessions consisting of 24 trials. For the symmetry dot-probe, each individual participated in 8 sessions consisting of 20 trials. All stimuli were presented twice across all sessions: once as the probed stimulus (replaced by the dot), once as the distractor stimulus (not replaced by the dot). At the end of the test sessions, we created extra sessions per subject to repeat outlier trials (see Statistical analysis). All data were collected between February and December 2020, with a test stop between March and July 2020 due to COVID-19.
Two of the three participating individuals had already been trained on the task for a previous study 38. They received a few training sessions to check whether they still executed the task correctly, which was the case. For the other individuals, we employed a similar training procedure. Only one of these individuals, Kawan, managed to pass all phases of the training (between July and December 2019). Thus, this resulted in a total sample of three participants.
Preference task
The procedure of the preference task was adapted from Watson et al. 22. Each session consisted of two parts (Fig. 2): a forced-trial procedure (8 trials) and a choice-trial procedure (16 trials). During all parts of the experiment, the background was silver gray (#c0c0c0). Trials in the forced-trial procedure started with a 300 × 300-pixel black dot that appeared at a random position. This randomly located dot was added at the start of each trial to avoid anticipatory responses. After clicking the dot, a similar dot appeared exactly in the center of the screen. By clicking this dot, individuals would advance to a screen that depicted either a red dot or a green dot. The shades of green (#339900) and red (#990000) were almost equal in saturation. Each dot colour was associated with one specific stimulus category within the session (either flanged or unflanged stimuli). Because there was only one dot on the screen (either green or red), they were "forced" to select this one. After their response, they would be presented with a stimulus from the corresponding category for 4 s (820 × 1134 px) and receive a reward, followed by a 2 s inter-trial interval. In total, subjects had to pass 8 forced trials (4 green, 4 red) at the start of each session, in order to establish the association between dot colour and stimulus category within the session. Hereafter, they were presented with 16 choice trials. The start and end of each choice trial were essentially the same as for the forced trials. However, instead of being presented with one coloured dot, subjects could now choose between the red dot and the green dot, thereby controlling the stimulus category on the screen. The dots were presented in a circular way, equidistant from the center of the screen and always located exactly opposite each other. Note that this differs from the method that Watson et al.
22 describe, who presented the choice dots always at the same location on the screen.However, we noticed during the familiarisation sessions that the orang-utans would show anticipatory responses because they would know the exact location where the dots would appear.Therefore, we chose to randomize the location of the choice dots in a circular way.Importantly, the coloured dots were always located at the same distance from the center of the screen, where subjects needed to tap to advance to the choice dots.With regard to training, all individuals were already familiar with clicking dots for a reward.Therefore, we mainly had to familiarise them with the specific task (between July and October 2021).To this effect, all participating subjects fulfilled eight sessions.The first six sessions presented them with pictures of animals and flowers.Importantly, in these sessions we had not yet implemented the randomized location of the choice dots.They were presented on fixed locations, as in the original method 22 .Because we noticed that individuals would sometimes anticipate the appearance of the choice dots by clicking their location repeatedly before onset, we decided to run two final training sessions in which we implemented the randomised circular presentation described above.Subjects could only participate in the experimental sessions after participating in all eight of the familiarisation sessions.In total, six subjects fulfilled this criterion: all individuals except for the two flanged males.
In total, each subject participated in six experimental sessions between September and December 2021, depending on when the subject finished the familiarisation phase. In three of the sessions, flanged stimuli were associated with red dots, and in the three other sessions, flanged stimuli were associated with green dots. Subjects were presented with the sessions in blocks based on the colour–stimulus category association, so that they did not have to re-learn the association each session. Thus, three individuals started with the three sessions in which green was associated with flanged stimuli, while the other three individuals started with the sessions in which red was associated with flanged stimuli. Within the 3-session colour blocks, the order of the sessions was randomized between subjects.
Statistical analysis
We performed all of the analyses in R statistics Version 4.2.2. To analyse the data, we used Bayesian mixed models. Bayesian analyses have gained popularity over the past few years because they have a number of benefits compared to frequentist analyses 53,54. While frequentist methods (e.g., p-value null-hypothesis testing 55) inform us about the credibility of the data given a hypothesis, Bayesian methods inform us about the credibility of our parameter values given the data that we observed. This is reflected in the different interpretations of frequentist and Bayesian confidence intervals: the former is a range of values that contains the estimate in the long run, while the latter tells us which parameter values are most credible based on the data 53,56. Furthermore, Bayesian methods allow for the inclusion of prior expectations in the model, are less prone to Type I errors, and are more robust in small and noisy samples 54. Altogether, these reasons make Bayesian methods a useful tool for data analysis.
All models were created in the Stan computational framework and accessed using the brms package 57,58, version 2.18.5. All models were run with 4 chains and 6000 iterations, of which 1000 were warmup iterations. We checked model convergence by inspecting the trace plots, histograms of the posteriors, Gelman–Rubin diagnostics, and autocorrelation between iterations 59. We found no divergences or excessive autocorrelation in any model. Furthermore, we used the package emmeans 60 to obtain posterior draws for contrasts. Below, we discuss the specific statistical models for each experiment.
Dot-probe task
In line with previous studies 5,7,38,42, we filtered the reaction times (RTs). First, we excluded slow reaction times, because they might reflect low motivation or distraction. Instead of opting for a fixed outlier criterion, we determined the upper limit per subject based on the median absolute deviation (MAD) of the RTs (i.e., median + 2.5 × MAD; Leys et al., 2013). Second, we excluded reaction times < 200 ms, because they very likely represent anticipatory responses 61. These unsuccessful trials were afterwards repeated in subject-specific repetition sessions. After the repetition of these unsuccessful trials, we applied the same filtering criteria.
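The two-sided filtering rule described above can be sketched as follows. This is a Python illustration of the stated criteria (the original analysis was run in R), and the function and variable names are our own:

```python
import statistics

def filter_rts(rts, mad_factor=2.5, min_rt=200):
    """Filter reaction times (ms) using the criteria described above:
    drop RTs above median + 2.5 * MAD (slow, possibly unmotivated responses)
    and below 200 ms (likely anticipatory responses)."""
    median = statistics.median(rts)
    # Median absolute deviation: median of absolute deviations from the median
    mad = statistics.median(abs(rt - median) for rt in rts)
    upper = median + mad_factor * mad
    return [rt for rt in rts if min_rt <= rt <= upper]

rts = [250, 300, 320, 310, 280, 150, 900, 305]
print(filter_rts(rts))  # drops 150 (anticipatory) and 900 (slow outlier)
```

Note that, as in the paper, the upper limit is subject-specific: it is recomputed from each subject's own RT distribution rather than fixed across individuals.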
For the flange size dot-probe, we collected 423 trials, of which 96 were excluded based on the outlier criteria (22.69%). In the subject-specific repetition sessions that consisted of the unsuccessful trials based on our outlier criterion, we collected 105 trials, 28 of which were excluded based on the outlier criteria (26.67%). Thus, our final dataset for the flange size dot-probe contained 404 trials (Kawan: 133; Samboja: 131; Sandy: 140). For the symmetry dot-probe, we followed the same procedure. In total, we collected 474 trials, 102 of which were excluded based on the outlier criteria (21.61%). In the subject-specific repetition sessions that consisted of the unsuccessful trials based on our outlier criterion, we collected 108 trials, 32 of which were excluded (29.63%). Thus, our final dataset for the symmetry dot-probe contained 448 trials (Kawan: 152; Samboja: 142; Sandy: 154).
For both experiments, we created separate statistical models per subject. We chose to analyze our data at the individual level because of the low number of subjects that participated in this experiment. Given the fact that we had a relatively high number of trials per subject, it was possible to test for the presence of a within-subject effect separately for each subject. Previous work has suggested that this is a suitable approach in case of low subject numbers 62,63.
To test whether the orang-utans had an attentional bias for large flanges, we fitted three Bayesian mixed models (one per subject) with a Student-t family. The Student-t family is ideal for robust linear models, as the model is influenced less strongly by outliers. We specified mean-centered RT (in ms) as the dependent variable, and Congruence (Congruent: probe behind large flange stimulus; Incongruent: probe behind small flange stimulus) as a categorical independent variable. We added Probe location (Left/Right) as a categorical independent variable to control for possible side biases in RT. Furthermore, we allowed the intercept to vary by Session, so that the statistical model accounted for variation in RT between sessions. We specified a Gaussian prior with M = 0 and SD = 5 for the Intercept of the model. For the independent variables, we specified regularizing Gaussian priors with M = 0 and SD = 10. For the nu parameter of the Student-t distribution, we specified a Gamma prior with k = 2 and θ = 0.1. For all variance parameters, we kept the default half Student-t priors with 3 degrees of freedom. To test whether orang-utans had an attentional bias for symmetrical faces, we followed the exact same procedure. However, the predictor Congruence now refers to the symmetry of the depicted face (Congruent: probe behind symmetrical stimulus; Incongruent: probe behind original stimulus). We used sum-to-zero coding for all of our categorical independent variables.
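In our notation (the paper specifies this model via brms in R; symbol names are ours), the per-subject dot-probe model described above can be written compactly as:

```latex
\begin{aligned}
\mathrm{RT}_{ij} &\sim \mathrm{Student\text{-}t}\!\left(\nu,\ \mu_{ij},\ \sigma\right) \\
\mu_{ij} &= \alpha + \alpha_{j[\mathrm{session}]}
  + \beta_{\mathrm{congr}}\, x^{\mathrm{congr}}_{ij}
  + \beta_{\mathrm{side}}\, x^{\mathrm{side}}_{ij} \\
\alpha &\sim \mathcal{N}(0,\, 5), \qquad
\beta_{\mathrm{congr}},\, \beta_{\mathrm{side}} \sim \mathcal{N}(0,\, 10) \\
\nu &\sim \mathrm{Gamma}(2,\, 0.1), \qquad
\sigma,\, \sigma_{\mathrm{session}} \sim \mathrm{Student\text{-}t}^{+}(3)
\end{aligned}
```

where RT is mean-centered, the session intercepts \(\alpha_{j[\mathrm{session}]}\) are the varying intercepts by Session, and the sum-to-zero coded predictors \(x^{\mathrm{congr}}\) (Congruent/Incongruent) and \(x^{\mathrm{side}}\) (Left/Right) take values \(\pm 1\).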
Preference task
For 5 of the 6 subjects we had a complete dataset of 96 choice trials. Only for Kawan did we miss 4 trials, because he left twice at the end of an experimental session. Thus, our final dataset consisted of 572 datapoints. Because we had a larger number of subjects in this experiment, we chose to analyze the data in one statistical model. To examine whether the orang-utans preferred seeing a picture of flanged males over unflanged males, we fitted a Bayesian logistic mixed model (Bernoulli family). We specified the binary choice (1 = flanged, 0 = unflanged) as the dependent variable. The within-subject categorical variable Colour Flanged, which represented whether the flanged stimuli were associated with the red or the green dot, was added as an independent variable, together with the between-subject variable Order, which represented whether the individual first received the sessions in which the red dot was associated with the flanged stimuli or those in which the green dot was associated with the flanged stimuli. To explore the effect of dot location on the screen on the probability of selection, we extended the model by adding a continuous, zero-centered predictor that reflected the location of the dot representing flanged stimuli relative to the vertical middle of the screen (range −0.35 to 0.35, with negative values representing the higher portion of the screen).
With regard to the random effects, we allowed the intercept to vary by Subject and allowed the intercept of Session to vary within Subject. Furthermore, we allowed the slope for Colour Flanged to vary by Subject, to take into account potential treatment effects between subjects. We specified a Gaussian prior with M = 0 and SD = 0.5 for the Intercept and independent variables of the model. Note that these priors are specified on the logit scale. For all variance parameters, we kept the default half Student-t priors with 3 degrees of freedom.
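Putting the fixed and random effects together, the extended preference model can be sketched as follows (our notation; the paper specifies this model via brms in R):

```latex
\begin{aligned}
y_{ijk} &\sim \mathrm{Bernoulli}(p_{ijk}), \qquad y = 1 \text{ if the flanged stimulus was chosen} \\
\mathrm{logit}(p_{ijk}) &= \alpha + \alpha_{j[\mathrm{subject}]} + \alpha_{k[\mathrm{session}:\mathrm{subject}]}
  + \left(\beta_{\mathrm{col}} + b_{\mathrm{col},\,j}\right) x^{\mathrm{col}}_{i}
  + \beta_{\mathrm{ord}}\, x^{\mathrm{ord}}_{j}
  + \beta_{\mathrm{pos}}\, x^{\mathrm{pos}}_{i} \\
\alpha,\ \beta_{\mathrm{col}},\ \beta_{\mathrm{ord}},\ \beta_{\mathrm{pos}} &\sim \mathcal{N}(0,\, 0.5)
  \quad \text{(on the logit scale)}
\end{aligned}
```

where \(x^{\mathrm{col}}\) is Colour Flanged (within-subject), \(x^{\mathrm{ord}}\) is Order (between-subject), \(x^{\mathrm{pos}} \in [-0.35,\, 0.35]\) is the zero-centered vertical dot position, \(b_{\mathrm{col},\,j}\) is the by-subject varying slope for Colour Flanged, and the variance parameters keep the default half Student-t(3) priors.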
To explore temporal clustering and dispersion in the choices of the orang-utans, we developed an R script, based on ref. 64, that is essentially a Beta-Binomial model that can be used to assess the independence of binary observations. We applied it to each of the sessions independently. The script first counts the number of switches between selected categories within the session (variable T). Second, we specified a Beta(10, 10) prior on θ, the probability of selecting a flanged male stimulus, emphasizing a relatively strong expectation of 50/50 selection of flanged and unflanged stimuli. Third, we obtained a posterior for θ by updating the Beta(10, 10) prior based on the choices from the session. Fourth, we simulated 10,000 binary series of the same length as the session, based on sampling from the posterior distribution of θ. Note that these binary series consisted of independent samples. Fifth, based on these simulations, we counted the number of switches T in each independent series, and obtained a distribution of T under the assumption of independence. This allowed us to compare the observed T within the sessions with the expected T under the assumption of independence. Subsequently, we checked whether the observed T fell outside of the 95% Highest Density Interval of the expected T, and we calculated the proportion of expected T samples that was either equal to or higher, or equal to or lower, than the observed T. With regard to the interpretation, an observed T that is low compared to the distribution of expected T reflects fewer switches in a session than expected under the assumption of independence, hence temporal clustering of choices. An observed T that is high compared to the distribution of expected T reflects more switches in a session than expected under the assumption of independence, hence temporal dispersion of choices.
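The five simulation steps above can be sketched as follows. This is a minimal Python illustration of the described procedure (the original script was written in R); the function names are our own, and the conjugate Beta update relies on the fact that a Beta prior combined with Bernoulli data yields a Beta posterior:

```python
import random

def count_switches(choices):
    """Number of switches between consecutive binary choices (variable T)."""
    return sum(a != b for a, b in zip(choices, choices[1:]))

def expected_switch_distribution(choices, n_sims=10_000, a=10, b=10, seed=1):
    """Distribution of T under independence.
    Posterior for theta is Beta(a + successes, b + failures); each simulated
    series draws theta from the posterior, then samples independent choices."""
    rng = random.Random(seed)
    n, k = len(choices), sum(choices)
    sims = []
    for _ in range(n_sims):
        theta = rng.betavariate(a + k, b + n - k)          # posterior draw
        series = [rng.random() < theta for _ in range(n)]  # independent series
        sims.append(count_switches(series))
    return sims

# Hypothetical 16-trial session (1 = flanged chosen, 0 = unflanged chosen)
session = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
t_obs = count_switches(session)
sims = expected_switch_distribution(session)
# Proportion of simulated T <= observed T (low values suggest clustering)
p_low = sum(t <= t_obs for t in sims) / len(sims)
print(t_obs, round(p_low, 2))
```

An observed T in the lower tail of the simulated distribution indicates temporal clustering; one in the upper tail indicates temporal dispersion.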
Effect size indices
The effect size indices that we report are based on the posterior distributions of each statistical model. We report multiple quantitative measures to describe the effects. First, we report the median estimate (b or OR) and the median absolute deviation of the estimate between square brackets. Second, we report an 89% highest density interval of the estimate (89% CrI). We have chosen 89% instead of the conventional 95% to reduce the likelihood that the credible intervals are interpreted as strict hypothesis tests 56. Instead, the main goal of the credible intervals is to communicate the shape of the posterior distributions. Third, we report the probability of direction (pd), i.e., the probability of a parameter being strictly positive or negative, which varies between 50 and 100% 54.
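Both summary indices can be computed directly from posterior draws. The following is a pure-Python sketch under our own naming (the paper computed these in R); the HDI is found as the narrowest window of sorted draws containing the requested mass:

```python
def probability_of_direction(draws):
    """pd: share of posterior draws on the side of zero where most mass lies
    (ranges from 0.5 to 1.0)."""
    pos = sum(d > 0 for d in draws) / len(draws)
    return max(pos, 1 - pos)

def hdi(draws, mass=0.89):
    """Highest density interval: the narrowest window of sorted draws that
    contains `mass` of the posterior samples."""
    xs = sorted(draws)
    n = len(xs)
    window = int(mass * n)
    # Slide a window of fixed size and pick the narrowest one
    widths = [(xs[i + window] - xs[i], i) for i in range(n - window)]
    _, i = min(widths)
    return xs[i], xs[i + window]

draws = [-0.2, 0.1, 0.3, 0.5, 0.4, 0.2, 0.6, 0.1, 0.0, 0.35]
print(probability_of_direction(draws))  # → 0.8
```

With real posteriors one would use thousands of draws; the short list here is only for illustration.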
Ethics
This study employed only non-invasive methods, and animals were never harmed or punished in any way during the study. Participation was completely voluntary, animals were tested in a social setting, and animals were never deprived of food or water. The care and housing of the orang-utans adhered to the guidelines of the EAZA Ex-situ Program (EEP). Furthermore, our research complied with the ASAB guidelines 65 and the ARRIVE guidelines 66, was carried out in accordance with the national regulations, and was approved by the zoological management of Apenheul Primate Park (Apeldoorn, The Netherlands).
Flange size
In the flange size dot-probe, we found no attentional bias for larger flanges in any of the three participating orang-utans (Fig. 3A; Supplementary Table). Because we applied a proportional transformation to our stimuli, the absolute width difference between the stimuli was not the same for all stimulus combinations. Therefore, we ran additional sensitivity analyses that explored whether the difference in RT between congruent and incongruent trials varied over the absolute width difference of the stimuli. These analyses are reported in the Supplementary Materials (Supplementary Table 3; Supplementary Fig. 1). We found no indication that the orang-utans showed a faster response to congruent trials at specific width differences. This suggests that our null results are at least not driven by differential responses to stimuli on the extremes of the width spectrum.
Symmetry
In the symmetry dot-probe, we found no attentional bias for symmetrical faces in any of the three participating orang-utans (Fig. 3B; Supplementary Table 4); whether the probe replaced the symmetrized or the original picture had no robust effect on the RT of Kawan (b congruent = −3.28 [8.50]).
Preference task
In the preference test (Supplementary Table 5), we found that the orang-utans chose stimuli of flanged and unflanged males exactly at chance level (OR Intercept = 1.00 [0.13], 89% CrI [0.78; 1.25], pd = 0.52). Thus, they did not seem to prefer looking at stimuli of flanged males. This was the case for all individuals (Fig. 4). The between-subject effect of Order did not have a robust effect on the preference of the individuals (OR FlangedRedFirst = 0.88 [0.11], 89% CrI [0.69; 1.07], pd = 0.84). However, the colour of the dot that was associated with flanged males did have an influence on the preference: the orang-utans were more likely to select the flanged male stimulus if it was associated with the red dot (OR Green = 0.67 [0.08], 89% CrI [0.54; 0.83], pd = 0.99), indicating a preference for the colour red (Fig. 5). Furthermore, we found very strong evidence for the notion that orang-utans made energy-efficient choices (Supplementary Table 6; Fig. 6): they were more likely to select the dot located in the lower portion of the screen. In addition, we explored whether individuals showed temporal clustering in their choices by selecting the same category multiple times in a row. To this effect, we compared the number of switches between categories for every session to a dataset consisting of the number of switches that one would expect under the assumption of independence. We found no evidence for temporal clustering (fewer switches than expected) or temporal dispersal (more switches than expected) in any of the sessions, indicating that previous choices did not influence choices in the next trial.
Discussion
Even though face perception in primates has been studied extensively, the interplay between facial traits relevant to mate choice and cognition has received relatively little attention, especially in great apes. Therefore, the aim of this study was to investigate whether zoo-housed Bornean orang-utans (Pongo pygmaeus) have cognitive biases for males with fully developed secondary sexual traits (flanged males) or males with more symmetrical faces. Across two experiments, measuring either immediate attention bias or choice bias, we found no evidence of cognitive biases towards facial traits that might be relevant for mate choice. This lack of biases was consistent across all participating individuals. Furthermore, we did not find evidence for either temporal clustering or dispersion in the preference test: orang-utans did not seem to alter their choices based on their responses in previous trials. However, we did find evidence of (i) a robust colour bias and (ii) an energy conservation strategy in the preference test. Below, we discuss our results in the context of the primate literature and orang-utan ecology, and consider methodological limitations.
Contrary to our hypotheses, we found no evidence for immediate attentional biases towards either large flanges or symmetrical faces in the dot-probe paradigm, while we expected a bias towards larger flanges and more symmetrical faces. With regard to flanges, previous research has shown that orang-utans spend a substantial amount of time looking at flanges while scanning male faces 3, and orang-utans also showed an attentional bias towards flanged males in an eye-tracking study 67. Regarding symmetry, we recently reported a similar null result in humans in the exact same task 15: human participants had no attentional bias towards symmetrical faces. While previous literature has often emphasised the importance of symmetry for mate choice 68,69, recent literature has criticised this notion in humans on the basis that the link between symmetry and attractiveness seems overstated 32 and the link between symmetry and health remains equivocal 33. Thus, the results for facial symmetry are in accordance with recent null findings and theoretical debates in humans.
While a null result could indicate that orang-utans do not have an immediate attention bias towards larger flanges or symmetrical faces, there are relevant methodological limitations in our dot-probe study that warrant some reflection. First, specifically regarding the symmetry experiment, we presented artificial stimuli (symmetrized versions) paired with the original faces. Therefore, there was a risk that we investigated attention bias to manipulated versus unmanipulated images instead of symmetrical versus asymmetrical faces. It is difficult, however, to envision how this could have led to null results. If the orang-utans had shown a clear bias towards either category, this would be a convincing alternative explanation. Unfortunately, no studies have yet investigated whether orang-utans have an attentional bias towards unmanipulated or manipulated stimuli. However, recent studies in rhesus macaques have not found evidence that natural images are attended to in a different way than "uncanny" manipulated stimuli 70,71. Nevertheless, future studies could consider employing morphing techniques 72 to create manipulated versions of both symmetrical and asymmetrical faces. Such methods allow for symmetrizing the shape of the face without changing any other textural or structural parameters.
Moreover, it is possible that the manipulation we used, which involved presenting faces with slightly larger or smaller flanges, did not generate salient enough differences between the stimuli to produce robust variations in reaction times in an immediate attention task. Instead of presenting the orang-utans with pictures of different flanged and unflanged males, we wanted to present the same faces while varying only the size of the flanges. This is a common approach in such studies (e.g. in macaques 19 and humans 73) to keep the stimuli as controlled as possible. A more skeptical interpretation would be to question whether the orang-utans could even distinguish between the smaller and larger stimuli, or between symmetrical and asymmetrical faces. Previous size discrimination studies have shown that primates can distinguish objects that differ approximately 10% in volume 74 and that chimpanzees are able to discriminate between dots that differ < 10% in size 75. Given that our stimuli differed on average 15% in width, with no difference being < 10%, we think it is unlikely that the orang-utans would not have been able to distinguish between the larger and smaller stimuli. The same applies to facial symmetry: previous studies have shown that different primate species are sensitive to variation in facial symmetry (rhesus macaques 24 and capuchin monkeys, Sapajus apella 72). To our knowledge, there are no studies investigating explicit categorization of symmetrical and asymmetrical faces in primates. However, even if primates were not able to explicitly do so, this would not mean that their attention cannot be implicitly biased differentially by symmetrical and asymmetrical faces. Such contradictions between implicit and explicit cognition can also be found in attentional tasks with humans. For example, people may implicitly avoid attending to specific locations that often contain distractor images while at the same time not being able to explicitly indicate those locations 76. Altogether, we
deem it unlikely that the orang-utans were not able to discriminate between larger and smaller flanges or symmetrized and asymmetrical faces, while at the same time acknowledging that more extreme manipulations of the stimuli might have resulted in an attentional bias.However, this would mean that we would present the orang-utans with extremely unnatural stimuli, which would affect the ecological validity of our results.
Another important limitation is that the experimental paradigm that we used to study immediate attention, the dot-probe paradigm, has been subject to debate in humans due to its relatively poor reliability 77,78. Similarly, some inconsistent results have been observed when applying this paradigm to primates. While the paradigm has successfully shed light on the influence of emotional information on cognition in various primate species 7,41,42,52,79, inconsistencies persist. For example, we have recently shown that Bornean orang-utans do not seem to show the expected attentional bias towards emotions in the dot-probe task 38. This raises the question of whether such a widely reported bias is genuinely absent in Bornean orang-utans or whether the current paradigm fails to capture it adequately. One potential methodological reason for these inconsistencies is that the dot-probe paradigm relies on reaction times, which are inherently noisy 80. Especially for species with relatively low levels of manual dexterity compared to humans, such as orang-utans 81, reaction time might not be the most suitable dependent measure in cognitive tasks. Instead, more fine-grained methods such as non-invasive eye-tracking could be considered to study attentional preferences in primates. These methods are relatively easy to implement in primates 82 and provide a more direct measure of attention 83. Correspondingly, we did find an immediate attention bias towards flanged males in an eye-tracking task (Roth et al., in prep.). This suggests that eye-tracking allows us to probe cognitive biases that are potentially too subtle to identify using reaction time tasks, at least in orang-utans.
In the preference task, we used a previously developed paradigm 22 to test whether Bornean orang-utans would choose to be presented with flanged or unflanged stimuli. However, all individuals selected flanged and unflanged stimuli equally often. Our results contrast with those of a previous study in rhesus macaques 22, in which the macaques specifically selected stimuli depicting faces of high-ranking individuals or stimuli showing coloured perinea. While we made some minor adaptations to the original paradigm (longer stimulus presentation, no fixed dot locations to avoid anticipatory responses, no indirect comparison of stimulus categories), we do not consider it likely that these changes explain the null results. One potential explanation relies on the fact that both choices were rewarded equally, meaning that there was in principle no incentive to choose one category over the other. Because Bornean orang-utans are often confronted with long periods of fruit scarcity 48, they might be especially sensitive to food rewards. Potentially, the anticipation of reward during the trial was so salient for them that the means to get to the reward became relatively unimportant. This raises the question of whether extrinsically rewarded touchscreen experiments like the one we used here are suitable for studying Bornean orang-utan cognition.
We also found that individuals had a higher tendency to choose the flanged male stimulus when it was associated with a red-coloured dot instead of the green-coloured dot, despite the fact that the dots were similar in saturation. This preference for red may indicate a general sensory bias towards the colour red, which could be attributed to the evolutionary pressure on primates to select ripe fruits or young leaves 84. This bias for red objects might extend beyond fruits, possibly explaining why the individuals in the study were more likely to select the red dot. However, previous reports present conflicting evidence regarding colour biases in food preferences among orang-utans. While one report suggested a preference for red food in a juvenile orang-utan 85, a more recent report did not find any colour bias 86. It is important to note that both reports concern single-subject observations. A more comprehensive study in rhesus macaques demonstrated a bias towards red food items, but this bias did not extend to non-food objects 87. In conclusion, we found evidence for the notion that orang-utans have a sensory bias towards red objects, although this seems to conflict somewhat with the existing literature on colour biases in primates.
In addition, orang-utans were more likely to select the dot associated with flanged male stimuli if it was in the lower portion of the screen, potentially reflecting an energy conservation mechanism. Bornean orang-utans are extremely well-adapted to low fruit availability. This is reflected in their extremely low levels of energy expenditure 46 and their energy-efficient locomotion style 88,89. This inclination to conserve energy may also manifest in their behaviour during our experiment. In the preference task, the locations of the dots were randomized in a circular way between trials, with both dots appearing in exactly opposite positions equidistant from the center of the screen. While this approach helped to avoid anticipatory clicking by the orang-utans, it did result in differential energy costs associated with the dots. Clicking the dot in the upper portion of the screen required them to lift their arm further compared to clicking the dot in the lower portion of the screen. Consequently, the orang-utans were more inclined to select the dot in the lower portion of the screen. It is important to acknowledge this limitation in our experimental design. Nevertheless, even after accounting for the vertical location of the dots, we found no bias for flanged or unflanged stimuli (Supplementary Table 6). Thus, the strong tendency of orang-utans to conserve as much energy as possible may influence their performance during cognitive tasks.
Future studies on orang-utan cognition should consider the aforementioned effects of colour and dot location on choices. These biases underscore the need for a biocentric approach to animal cognition, which takes into account a species' uniquely adapted perceptual system 90. Interestingly, however, the notion that orang-utans try to conserve energy during cognitive tasks opens up intriguing avenues for further research. If orang-utans are so prone to conserve energy, it might be possible to exploit this tendency by presenting them with an effort task. Previous studies with primates have developed effort paradigms that are relatively easy to use. These paradigms allow individuals to control the presentation of stimuli by holding a button (i.e., exerting effort). For example, previous studies have used this approach to study preferences for different stimulus categories in Japanese macaques (Macaca fuscata), finding that they exerted more effort to see stimuli of monkeys 91 or humans 92. A similar design could be considered for orang-utans: given that energy conservation is such a core strategy for them, using an effort task may be an especially relevant method to assess their preferences for specific stimulus categories.
In conclusion, our findings from two experimental paradigms indicate no immediate attentional bias towards large flanges or symmetrical faces, nor a choice bias for flanged males. However, we did find a preference for the colour red in the preference task. Furthermore, individuals seemed to conserve energy during the preference task by picking the vertically lowest option on the touchscreen. Our results highlight the importance of taking species-typical characteristics into account when designing cognitive experiments. Future studies could leverage the energy-conserving nature of Bornean orang-utans by presenting them with effort tasks, where they need to exert effort to view stimuli. Such an approach may be fruitful for studying social cognition, including its interplay with mate choice, in Bornean orang-utans.
https://doi.org/10.1038/s41598-024-62187-9
Figure 1. Schematic depiction of a dot-probe task trial with large and small flanges as competing stimuli. The arrow indicates the temporal progression of the trial.
Figure 2. Schematic depiction of two preference task trials with flanged and unflanged stimuli. The left box shows the design of a forced trial, while the right box shows the design of a choice trial. The arrows indicate the temporal progression of the trials.
Figure 3. Posterior predictions of the difference in RT between trials where the probe replaced (A) the stimulus with larger flanges (Congruent) and trials where the probe replaced the stimulus with smaller flanges (Incongruent), or (B) the stimulus with the symmetrized face (Congruent) and trials where the probe replaced the stimulus with the original face (Incongruent). Values under the horizontal null line mean that the subject was predicted to respond faster to congruent than incongruent trials.
Figure 4. Posterior predictions of the probability of selecting the flanged male stimulus per subject. The horizontal line indicates chance level.
Figure 5. Posterior predictions of the probability of selecting the flanged male stimulus as a function of the colour associated with flanged male stimuli, per subject. The horizontal line indicates chance level.
Figure 6. Posterior predictions of the probability of selecting the flanged male stimulus as a function of the vertical position of the dot representing the flanged male on the screen. Negative values indicate that the dot associated with the flanged male stimulus was positioned in the higher portion of the screen, while positive values indicate the lower portion of the screen.
Table 1. Orang-utans housed in Apenheul at the time of the study.
Tuning halide perovskite energy levels
Perovskite solar cells are attracting great attention in the field of renewable energies. The possibility of combining different materials and compositions in these devices brings significant advantages, but it also requires a careful optimization of the interfaces between the materials, including their energy level alignment. In this work, we show how to tune the energy levels of halide perovskites by controlling the deposition of dipolar self-assembled monolayers, providing a toolbox to simplify the application of halide perovskites in optoelectronic devices. See Antonio Abate et al., Energy Environ. Sci., 2021, 14, 1429. Energy & Environmental Science
Introduction
The position of the energy levels of a material is of crucial importance in optoelectronics, from detectors 1 to LEDs 2 to solar cells, 3,4 mainly because of the energy level alignment at interfaces between different semiconductors comprised in these devices.
Among optoelectronic devices, solar cells are of particular interest nowadays because of the climate change crisis, and with their record power conversion efficiency of over 25%, 5 perovskite solar cells (PSCs) are considered a rising star in the field of new materials for photovoltaics. The flexibility of the halide perovskite composition 6,7 allows researchers to quickly develop new materials to improve the power conversion efficiency and stability of PSCs. 8,9 The energy alignment of the perovskite at the interface with the other materials composing the devices is essential for the efficiency of charge separation and thus for PSC performance. The importance of controlling the interface energetics is further stressed by the large number of related reports in the literature, mainly applied to electrode materials or charge selective layers, as shown for example in a helpful review by Kim et al. 10 Probably the most common method for tuning the energy alignment between two materials is the introduction of a dipolar interlayer, for example by functionalizing the surface with specific molecules. [11][12][13][14][15] In particular, in the photovoltaic field, this concept has been intensively applied to change the work function (WF) of transparent conductive oxides and with that the position of the oxide Fermi level with respect to the charge transporting levels of the semiconductor on top. For instance, Zhang et al. 16 functionalized ZnO with dipolar molecules for organic solar cells. Yang et al. 17 inserted an interlayer between TiO 2 and perovskite, improving the charge extraction, similarly to Liu et al., 18 who instead modified the interface between SnO 2 and perovskite. A few cases can also be found related to WF tuning of the perovskite layer. Agresti et al. 19 engineered the perovskite/hole selective layer interface with titanium-carbide MXenes. At the same time, Wu et al.
20 developed a PSC with a moisture-resistant carbon electrode and functionalized the perovskite surface with PEO (poly(ethylene oxide)) to reduce the mismatch of energy levels at the perovskite/ carbon interface. Similarly, Dong et al. 21 used conjugated aniline PPEA (3-phenyl-2-propen-1-amine) at the interface between perovskite and PCBM, achieving a better energy level alignment and, therefore, performance.
Among the different kinds of interlayers, self-assembled monolayers (SAMs) are of particular interest because of their stability and ordered distribution. Since the work of Campbell et al., 22,23 it has been known that interfacial SAMs with a dipole can tune the WF of a material by changing the vacuum level position, and thus its energetics. This technique has been widely used on flat surfaces to, for example, manipulate the Schottky energy barrier between a metal electrode and an organic material. [24][25][26] Lange et al. 27 demonstrated that it is possible to use SAM-modified ZnO as either an electron or a hole selective layer, depending on the dipole of the molecules used for the functionalization, and recently the same group also showed how SAM surface treatments and UV light soaking can influence the hole injection properties of thin ZnO films. 28 Kong et al., 29 instead, proposed a hole selective layer-free perovskite solar cell where the energy level mismatch between ITO and perovskite was adjusted by introducing a monolayer to increase the ITO WF. At the same time, Zhang et al. 30 functionalized the TiO 2 surface, improving the charge extraction in PSCs.
Functionalizing surfaces with dipolar SAMs has proven to be a reliable and effective method to optimize the energy level alignment. Nevertheless, SAMs have rarely been used directly on the perovskite surface. Controlling the formation of SAMs on halide perovskite films is difficult due to the relatively high surface roughness and uncontrolled surface chemistry. Sadhu et al. 31 indeed elucidated how molecules can self-assemble on the surface in different ways depending on the substrate and how the interaction influences the surface potential, highlighting the challenges of SAM deposition on perovskite. Moreover, there are so far no standard guidelines which would allow researchers to use this method systematically.
In this work, we show how to shift the perovskite energy levels (and thereby the WF) and control the magnitude of the shift, without necessarily using different molecules. We can change the perovskite WF by several hundred meV by controlling the surface coverage through the concentration of the solution used for the deposition, avoiding in this way the risk of damaging the perovskite surface by employing molecules with a strong dipole. This finding distinguishes our work from others regarding perovskite WF tuning. Indeed, once a molecule with the desired binding mode and dipole direction is selected, our method allows the WF to be shifted by changing only the deposition parameters. Additionally, density functional theory (DFT) calculations demonstrate a correlation between surface coverage and WF shift, supporting our strategy. We also investigate the impact of the WF shift on the PSC performance and energy level alignment by combining experimental data and drift-diffusion simulations.
We aim to provide a tool which allows one to directly select which molecule and deposition conditions are needed to obtain a specific WF shift. Such a tool would provide more flexibility in the choice of the combination of materials, i.e. perovskite compositions and contact layers, to be used for any perovskite-based optoelectronic device. For example, in PSC research it would be possible to ease the requirements for new electron or hole selective layers, allowing researchers to focus on finding or synthesizing materials with good conductivity and excellent stability rather than on achieving a proper energy level alignment.
Results and discussion
We made use of specific molecule-to-substrate interactions to self-assemble small molecules on the perovskite surface. 32,33 The combination of the intrinsic dipole of these molecules and the dipole created by the interaction between the molecules and the surface causes a shift in the perovskite energetics. For the sake of simplicity, from now on we will look only at the combined effect of these dipoles and call "positive dipole" the case where the total dipole points towards the perovskite surface and "negative dipole" the case in which the total dipole points away from the perovskite surface. Fig. 1a shows the effect of a dipole on the perovskite vacuum level (and thus the WF). A negative dipole will shift the local vacuum level downwards and therefore decrease the WF of the material. Specifically, we will discuss the results of the perovskite functionalized with amyl sulfide (csc5 - Fig. 1b), which is a Lewis base and can bind to the Pb 2+ ions on the perovskite surface by donating its lone pair. 34 On the other hand, a positive dipole will shift the vacuum level upwards, causing an increase in the WF of the functionalized material. In this case, we chose perfluorodecyl iodide (IPFC10 - Fig. 1c) as an example of the described behaviour. This molecule is already known in the literature for being able to form SAMs and enhance the efficiency and stability of perovskite solar cells. 33
In this molecule, the iodine at the head remains slightly positive thanks to the electron-withdrawing effect of the fluorinated chain, which allows the molecule to behave like a Lewis acid and form halogen bonds [36][37][38][39] with the halide ions on the perovskite surface. The selective adsorption of Lewis bases and Lewis acids makes them perfect candidates for generating a negative or positive dipole and, in our case, the main component of the total dipole seems to come from their interaction with the surface (see charge transfer analysis later in the text).
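The electrostatics behind such dipole-induced vacuum-level shifts can be estimated with the textbook Helmholtz relation, which links the areal density and normal component of the interfacial dipoles to the work-function shift. The sketch below is not taken from this paper; the dipole moment, surface density and effective dielectric constant are illustrative assumptions chosen only to show that the reported order of magnitude is plausible.

```python
# Textbook Helmholtz relation for the potential step across a dipole layer:
# dV = mu_perp * N / (eps0 * eps_r). For a unit charge the WF shift in eV
# equals dV in volts. All numeric inputs below are illustrative assumptions.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
DEBYE = 3.336e-30  # 1 debye in C*m

def wf_shift_eV(dipole_debye, density_per_m2, eps_r=2.0):
    """WF shift (eV) from dipoles with normal component dipole_debye (D)
    at areal density density_per_m2 (1/m^2), screened by eps_r."""
    return dipole_debye * DEBYE * density_per_m2 / (EPS0 * eps_r)

# An assumed 1 D normal dipole at 1e18 molecules per m^2 yields a shift of
# roughly 0.19 eV, i.e. the "several hundred meV" scale discussed here:
print(round(wf_shift_eV(1.0, 1e18), 3))
```

Within this simple picture, the shift scales linearly with both the dipole's normal component and its surface density, which is why controlling the coverage (discussed below) directly controls the magnitude of the shift.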
To demonstrate the scenario mentioned above, we carried out various experimental and theoretical analyses. Using several techniques, we measured the WF and band edge positions for bare perovskite and for perovskite functionalized with 10 mM solutions of the two molecules. We used a perovskite precursor solution with composition (Cs 0.05 [MA 0.15 FA 0.85 PbI 0.85 Br 0.15 ] 0.95 ).
We first performed measurements with different UPS setups, but the results were repeatedly unreliable. We concluded that the molecules tend to detach under ultra-high vacuum, making it challenging to draw firm conclusions from measurements in such conditions. For this reason, we decided to use other techniques working at atmospheric pressure, such as Kelvin probe (KP), Kelvin probe force microscopy (KPFM), Ambient Pressure Photoemission Spectroscopy (APS) and cyclic voltammetry (CV). We then compared the obtained results to ensure their reliability.
To evaluate the WF shift, we performed KP 40-43 measurements under different conditions (Fig. 2a): in one case the experiment was done entirely in air (dashed lines); in the other, the samples were quickly mounted in air but the investigation was carried out in an N 2 atmosphere (solid lines). The measurement conditions evidently influence the magnitude of the shift. In both cases, samples treated with positive dipoles present an increase of the WF compared to the bare perovskite reference, while samples treated with negative dipoles show a WF decrease.
To get a better idea of the real magnitude of the WF shift induced by the functionalization, we also performed KPFM 43-45 measurements on non-air-exposed samples. In Fig. 2b, we show the voltage maps of the WF shift for bare and functionalized perovskite together with the respective distributions. The results confirm the direction of the change already shown by KP and reveal a WF shift of about 150 meV for the negative dipole case and 300 meV for the positive dipole case. The distribution for the negative dipole functionalization is broader and less symmetric than the others; this might indicate that at 10 mM concentration the surface is already saturated and that in some areas the molecules are packing into multilayers. This topic will be explored further in the following sections.
The maps also suggest that the shift is not homogeneous on the surface and the result is, therefore, an average of local WF variations. This behaviour is most likely due to the perovskite roughness and the presence of grains. In every map, the presence of small regions with a higher WF is indeed evident. Considering that our molecules functionalize the surface by binding to lead or halide ions, these regions probably indicate areas with a higher concentration of defects and thus higher concentration of molecules. According to the work of Gallet et al., 46 these regions might correspond to different facets of the crystalline structure. The local variation offers interesting data for further investigation. Nevertheless, it does not affect the results of this work, since the average gives the relevant quantity on a larger scale.
The KP setup operated in air is combined with an APS tool (KP-APS) 42 in order to determine, at the same position on the sample, both the WF and the ionization energy (IE), i.e. the valence band maximum (VBM) relative to the local vacuum level. With the absolute WF and IE data measured, the KP-APS combination makes it possible to determine the VBM shift and compare it with the WF shift under the same conditions. Fig. 2c shows the IE as determined from photoelectron yield spectroscopy plots in APS measurements and, as shown in Fig. 2e, the shifts of VBM and WF match in direction and magnitude for the different cases. This result is significant because it shows that both WF and VBM shift equally, which allows us to exclude the possibility that the detected WF shifts are due to doping of the perovskite layer. Therefore, this provides evidence that the surface dipole affects all the surface energetics of the perovskite. Complementarily, we monitored the VB shift by performing CV on functionalized and bare perovskite on FTO (Fig. 2d). From the first distinguishable oxidation peaks of the different CV scans, it is possible to calculate the VB position by adding the redox potential of the Ag/AgCl electrode (4.65 eV) 47 for the different cases. We can observe that the negative(positive) dipoles trigger a VB shift downwards(upwards) in relation to the VB of the reference perovskite on FTO. Both cases are comparable in direction and magnitude of VB shift to those measured with the other techniques. The position of the reduction peaks in the negative bias range is instead not reliable, because some partial perovskite degradation was noted during the reverse scan. Therefore, a calculation of the conduction band position is not possible in this case. We do not have any direct measurement of the conduction band and its shift, but we measured the PL spectra of the different cases and, combining them with UV-Vis and EQE measurements (Fig.
S9, ESI †), we calculated the bandgap. We obtained a value of approximately 1.64 eV and observed that it is independent of the functionalization. It follows that the conduction band shifts in parallel with the valence band and WF.
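The conversion from a CV oxidation peak to an absolute valence-band position is a one-line calculation; a minimal sketch follows. Only the 4.65 eV Ag/AgCl reference comes from the text, while the oxidation onset value used in the example is hypothetical.

```python
AG_AGCL_VS_VACUUM = 4.65  # eV below vacuum; Ag/AgCl reference quoted in the text

def vb_below_vacuum_eV(e_ox_vs_agagcl_V):
    """Valence-band position below the vacuum level, computed from the first
    distinguishable oxidation peak measured vs the Ag/AgCl electrode."""
    return e_ox_vs_agagcl_V + AG_AGCL_VS_VACUUM

# A hypothetical oxidation onset at +0.90 V vs Ag/AgCl would place the VB
# 5.55 eV below vacuum:
print(round(vb_below_vacuum_eV(0.90), 2))
```

The conduction band then follows by adding the optically measured bandgap (about 1.64 eV here) to the valence-band energy, which is why no separate reduction-peak analysis is strictly required.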
In Fig. 2e we summarize the results. For all measurement conditions, we observed an increase (decrease) in WF and VB in the case of positive (negative) dipole functionalization. Differences in absolute numbers are a consequence of the varied measurement conditions inherent to the respective setups.
Having established that the dipolar molecules can control the positive and negative shift of the perovskite WF, we focused on achieving control over the magnitude of the change. We varied the concentration of the solution used for the deposition to control the distribution of the molecules on the perovskite surface and, consequently, the magnitude of the WF shift.
We measured the change in WF for the different solution concentrations through KP and KPFM, obtaining similar trends with both methods (Fig. S10, ESI †).
In Fig. 3a and b we show how the perovskite WF changes with the solution concentration in the case of positive (3a) and negative (3b) dipoles. In the positive dipole case (Fig. 3a), the work function increases with the solution concentration until reaching saturation at about 40 mM. The two molecules (perfluorodecyl and perfluorododecyl iodide - IPFC10 and IPFC12) differ only in chain length. Therefore, the difference in the magnitude of the shift is due to the difference in the strength of the dipole, with the latter having a smaller dipole due to its greater molecular length (see ESI † for an overview of the dipoles of the used molecules and the charge transfer analysis).
A similar trend is obtained in the case of a negative dipole (Fig. 3b), but with the WF decreasing compared to that of the bare perovskite and reaching saturation already at a concentration of about 10 mM. In this case, the two molecules differ both in structure and binding group (amyl sulfide, csc5, and trioctylphosphine oxide, TOPO); therefore, we can state that it is the dipole type, not the specific molecule, that defines the direction of the shift and the characteristics of the trend. In other words, the behaviour is independent of the molecule.
It is worth noting that TOPO-functionalized perovskite shows an extreme WF shift. Moreover, it was not possible to collect data above 10 mM concentration because at higher molarities the solution damaged the perovskite film, presumably by dissolving the lead cations 48 (Fig. S11, ESI †). This suggests that TOPO might be a tricky molecule to use in devices, despite its excellent passivation properties reported in several papers. [49][50][51] The above results show that by changing the solution concentration, it is possible to tune the magnitude of the WF shift up to several hundred meV. It is known that the properties of polar monolayers are determined not only by the type of molecules and their bonding configuration to the substrate but also by size, (dis-)order and adsorption patterns within the monolayer. 52,53 Therefore, in our opinion, the trends shown in Fig. 3 can be explained by the deposition kinetics and are related to the coverage. At first, all defect sites (i.e. Pb 2+ and halide ions/vacancies) are free; consequently, as soon as the samples are immersed in the solution, the molecules start to bind to them. At low solution concentration, however, not all the defect sites can be occupied by a molecule, so the surface is not fully functionalized. As the concentration increases, the functionalization level increases as well, until all the surface defects are occupied by molecules; the curves then reach a plateau, and the condition for maximum WF shift is achieved. Note that a similar process would occur if the solution concentration were kept constant and the dipping time varied (see Fig. S14, ESI †); for this reason, it is crucial to keep one of these parameters fixed.
In our case, we chose to fix the dipping time and vary the concentration, but we would like to stress that changing the dipping time would give the same results, since the process is related to the deposition kinetics of the molecules.
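The saturation behaviour described above can be captured by a minimal Langmuir-type adsorption model, in which coverage grows with solution concentration and plateaus once the available binding sites are filled. The binding constant and maximum shift below are assumed illustrative values, not fitted to the measured data.

```python
# Minimal Langmuir-type picture of the concentration trends in Fig. 3:
# coverage rises with solution concentration and plateaus once the
# available binding sites are filled. K and max_shift are assumptions.
def coverage(c_mM, K=0.3):
    """Equilibrium fractional coverage at concentration c (mM); K in 1/mM."""
    return K * c_mM / (1.0 + K * c_mM)

def wf_shift_meV(c_mM, max_shift_meV=300.0, K=0.3):
    """Assume the WF shift scales linearly with coverage up to saturation."""
    return max_shift_meV * coverage(c_mM, K)

# The shift rises steeply at low concentration and then flattens out:
for c in (0.5, 2.0, 10.0, 40.0):
    print(c, round(wf_shift_meV(c), 1))
```

In this picture, molecules with fewer available binding sites (or larger steric footprints) simply have a smaller effective site density, which moves the plateau to lower concentrations, consistent with the earlier saturation seen for the negative-dipole molecules.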
Another exciting feature is visible by comparing the trends for the positive and negative dipoles (Fig. 3c): the concentration at which the WF starts to saturate is different for the two cases. This behaviour may indicate a specific ratio of defects at the perovskite surface, in particular the ratio between halide and Pb vacancies. Indeed, in our case, the positive dipole is given by a Lewis acid, which binds to the halides on the surface, while the negative dipole is provided by a Lewis base, which binds to the Pb ions. Therefore, the fact that the negative dipole functionalization curve saturates at lower concentrations than the positive dipole curve suggests that there are fewer Pb ions on the surface than halide ions, i.e. there are more Pb vacancies. This result is in agreement with the work of Philippe et al., 54 where the I/Pb ratio at the surface of CsMAFA perovskite is reported to be around 3. Moreover, it is also essential to consider the geometry and the steric hindrance of the two molecules. Indeed, while IPFC10 consists of a straight, comparably rigid chain which allows the molecules to form a compact vertical layer, csc5 has two aliphatic chains that most likely limit the number of molecules which can bind to the surface, thus leading to saturation earlier.
The difference in behaviour is also highlighted by the WF distribution curves in Fig. 3e (respective contact potential difference (CPD) maps in Fig. S12 and S13, ESI †). The negative dipole curve stops shifting, and therefore reaches saturation, at lower concentrations than the positive dipole one. Moreover, it is interesting to see how both curves start to broaden and lose symmetry while approaching saturation. This suggests that initially (i.e. at low concentrations) the molecules deposit uniformly and form an ordered monolayer, whereas with increasing concentration the deposition becomes less uniform, probably forming some agglomerates, especially when saturation is reached.
To support the experiments, we performed DFT simulations investigating the interaction of csc5 and IPFC10 with the MAPbI 3 surface, here considered as the prototype lead halide perovskite. The adsorption energies and the associated WF shifts were monitored as a function of the molecular coverage and highlight a strong interaction between molecules and uncoordinated ions at the surface (see Table S3, ESI †). The adsorption of csc5 mainly occurs through the formation of an S-Pb bond at the surface with a bond length of ∼3.0 Å, and it is strongly predominant on the PbI 2 -terminated surface, while only slightly favored on the MAI-terminated surface.
On the other hand, IPFC10 mainly interacts by forming halogen-halogen bonds with surface iodines at bond distances of ∼3.5 Å. Interestingly, on the PbI 2 -terminated surface IPFC10 adsorption results in the formation of both a Pb-I and an I-I bond, suggesting the simultaneous creation of metal-halide and halogen bonds at distances of 3.3 and 3.5 Å, respectively (Fig. S4, ESI †). We then investigated how csc5 and IPFC10 adsorption influence the WF values for the different surface terminations (see Table S3 and Fig. S3, ESI †). Under all modelled terminations, the adsorption of csc5 leads to a WF reduction, while IPFC10 leads to a WF increase or, in one case, to a negligible decrease. In the PbI 2 -terminated case, the WF variations are particularly significant for csc5 adsorption, with a WF reduction of ca. 0.6 eV per single-molecule adsorption, while they are marginally influenced by IPFC10 adsorption. This is likely the result of the two different bonds (Pb-I and I-I) compensating each other. In the half-terminated case csc5 still preferably interacts with Pb ions at the surface. At the same time, IPFC10 adsorption takes place exclusively through halogen bonding, leading overall to a WF decrease (increase) of ca. 0.3 eV for csc5 (IPFC10) adsorption. A similar trend is reported on the fully passivated MAI-terminated surface and on the same surface where an MAI vacancy was created for modelling the adsorption of csc5.
These results indicate that while csc5 preferentially interacts with undercoordinated Pb atoms, leading to a WF decrease, IPFC10 can interact with both Pb and I surface atoms, forming different bonds whose contributions to the WF partially cancel out. However, a net WF increase is observed in the majority of cases, including the one corresponding to our experimental data.
Overall, under all considered adsorption situations, csc5 provides a WF reduction while IPFC10 leads mainly to a WF increase.
To understand the origin of the WF shift, we investigated the charge transfer between the adsorbed molecule and the surface in the different cases. The charge accumulation (depletion) upon the molecule adsorption has been evaluated by calculating the charge density difference between the molecule + slab system and the single components at fixed geometries (see Discussion and Fig. S5, S6 of ESI †). In the case of csc5, an accumulation of electronic charge is reported around the coordinated Pb at the surface and a charge depletion in the molecular region. The opposite is observed for the IPFC10 molecule, where depletion and accumulation of charge on Pb and the molecule, respectively, is reported on the half PbI 2 -MAI and MAI terminated surfaces. This is consistent with the formation of negative and positive dipoles at the interface for csc5 and IPFC10 molecules, respectively. An excellent correlation between the transferred charge at the molecule-surface interface and the computed WF shifts has also been found (Fig. S7, ESI †).
To investigate the effects of molecular coverage, we extended the simulations for the half-terminated surface to high percentages of coverage, i.e. up to θ = 100%. In Fig. 3d the WF variation vs. θ is reported for csc5 and IPFC10 molecules. In both cases, the WF initially shifts almost linearly with the coverage, and saturation of the WF shift is observed for coverages higher than 40-50%, in excellent agreement with the experimentally measured trends. Considering that the measurements were performed on CsMAFA perovskite, our experimental results would be comparable with a situation in between the half-terminated surface and the MAI-terminated surface (see shaded stripes in Fig. 3d). Naturally, variations between simulations and experiments also arise due to the different thermodynamics on the other surfaces of the grains and the presence of defects.
By comparing the trends in Fig. 3c and d for our conditions, we can notice that, in both cases, the WF shift has a higher magnitude for IPFC10. The numbers suggest that experimentally saturation is reached at a coverage of about 20% in the case of IPFC10 and 10% for csc5. As previously pointed out, this difference is most likely related to the smaller number of Pb ions on the surface combined with the larger steric hindrance of csc5.
Overall, the simulations support the correlation between surface coverage and WF shift and evidence the validity of this method to tune the perovskite WF in a controlled way.
So far, we have shown how to obtain and control the WF shift induced by the functionalization. The next step is to investigate the effect on PSC performance.
We prepared standard n-i-p devices, where the electron selective contact is given by compact and mesoporous TiO 2 , and the hole selective contact is the widely used spiro-OMeTAD. We functionalized the perovskite with different concentrations of csc5 to create a negative dipole at the interface and thereby modify the energy level alignment. The expected effect of the dipolar functionalization is to reduce the approximately 200 meV offset between the perovskite valence band and the spiro-OMeTAD HOMO level.
Specifically, we explored the effect of functionalization with a low concentration (0.5 mM), corresponding to a WF shift of about 40 meV, and with a high concentration (10 mM), which already represents a situation of surface saturation and should give a WF shift of approximately 160 meV.
In Fig. 4 we can see how the photovoltaic (PV) parameters, namely open-circuit voltage (V oc ), short-circuit current (J sc ), fill factor (FF) and photovoltaic conversion efficiency (PCE), change. We observe that the V oc increases with increasing concentration, while the FF remains almost constant after an initial improvement and the J sc does not change significantly.
When the perovskite surface is functionalized with dipoles, there are different factors which can affect the devices' PV parameters. The dipoles induce a change in energy level alignment, but at the same time there is evidence that they can have a passivating effect. 55 Besides, the molecules bind to defects on the perovskite surface, thereby reducing the trap states. It is therefore difficult to disentangle the effects of the different factors; thus, we performed drift-diffusion simulations to gain better insight.
In the simulations, one of the current density-voltage (J-V) curves from our control devices was set as a reference. Then only the perovskite energetics and the recombination time were varied to simulate the presence of a negative dipole at the interface between perovskite and spiro-OMeTAD. The results are represented by the red circles in Fig. 4.
The simulations show that the experimental trend is reproducible considering smaller WF shifts than those measured with KPFM and increasing the recombination time constant at the interface from 1.6 × 10⁻¹⁰ s to 2.84 × 10⁻¹⁰ s. In particular, the simulations predict a WF shift of about 25 meV for devices functionalized with a 0.5 mM solution and 100 meV for the 10 mM case. This seems to indicate that part of the deposited molecules are lost in the device fabrication process, leading to a smaller WF shift and therefore also to only a slight reduction of the perovskite - spiro-OMeTAD offset.
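As a rough consistency check on the two lifetimes above, in a simple diode picture the open-circuit voltage grows logarithmically with the recombination lifetime, ΔVoc ≈ (kT/q) ln(τ2/τ1). The sketch below applies this standard relation to the time constants quoted in the text; it is only an order-of-magnitude estimate and not the drift-diffusion model actually used in the paper.

```python
import math

KT_EV = 0.0257  # thermal energy at room temperature, in eV (assumed)

def delta_voc_mV(tau_before_s, tau_after_s):
    """Logarithmic Voc gain with recombination lifetime in a simple diode
    picture: dVoc ~ (kT/q) * ln(tau_after / tau_before)."""
    return 1e3 * KT_EV * math.log(tau_after_s / tau_before_s)

# Lifetimes quoted in the text: 1.6e-10 s -> 2.84e-10 s
print(round(delta_voc_mV(1.6e-10, 2.84e-10), 1))  # ~14.7 mV
```

The resulting few-tens-of-mV scale indicates that the suppressed recombination alone accounts for only part of the Voc gain, with the improved energy level alignment contributing the rest.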
Both experimental data and simulations show an initial increase in FF, which can be ascribed to better charge extraction due to a reduced perovskite - spiro-OMeTAD offset. It is also possible to observe a small loss in FF when passing from 0.5 mM to 10 mM, while the V oc keeps increasing. As already discussed, it is possible that at the saturation concentration (i.e. 10 mM in this case) the molecular layer is not uniform; agglomerates might act as insulators and partially hinder charge transport, leading to a reduction of the FF. The V oc increase is instead consistent with an improvement in energy level alignment and with suppression of recombination. There is evidence that the latter could also be related to the presence of an electric field generated by the dipoles, which can push electrons away from the interface. 55 It is also worth highlighting the consistency of this behaviour with the results of a previous work employing IPFC10 at the interface between perovskite and C60 in n-i-p PSCs, 35 where a similar V oc trend was observed. Moreover, it has been demonstrated that V oc in PSCs tends to increase with the strength of the chemical bonding between perovskite and the deposited molecules, 56 which is similar to our case, in which a higher molecule concentration leads to a larger WF shift.
We can also see that the simulations do not predict a change in J sc , while experimentally we observe a small variation. Nevertheless, it is quite common in perovskite solar cells to have J sc fluctuations of about ±0.5 mA cm⁻² even for identical devices. Therefore, we do not consider this change significant.
Overall, the simulations show the effect of interfacial dipoles and WF shifts on PSCs and highlight that the behaviour observed experimentally can be justified by considering only the influence of the dipoles. It is also worth noting that the actual WF shift in the device seems to be about half of the nominal value, a factor to take into consideration for future experiments.
These results suggest that the presence of a dipolar layer at the interface between perovskite and transport layers can influence and, if the conditions are appropriately chosen, improve the energy level alignment in PSCs. There is, therefore, great potential in the application of this technique for fine adjustment of the energy level alignment at interfaces or for simply tuning the perovskite energetics.
Conclusions
In conclusion, we managed to control the deposition of dipolar self-assembled monolayers (SAMs) on halide perovskite and in this way obtain a controlled shift of the perovskite work function, valence and conduction bands. We can control the direction of the shift through the direction of the surface dipoles, and the magnitude of the shift by tuning the density of the surface dipoles. We were able to shift the perovskite WF by up to several hundred meV, and we explored the effect of this approach on PSCs with the support of experimental and simulated data. Our work showed that tuning the work function of semiconductors through surface functionalization with dipolar molecules, which is a well-known method in organic and inorganic semiconductor physics, can be extended to hybrid organic-inorganic perovskite semiconductors. These findings provide a new toolbox to further enhance the flexibility of applying halide perovskites in optoelectronics.
Conflicts of interest
There are no conflicts to declare.
A Chain, a Bath, a Sink and a Wall
We investigate out-of-equilibrium stationary processes emerging in a Discrete Nonlinear Schrödinger chain in contact with a heat reservoir (a bath) at temperature $T_L$ and a pure dissipator (a sink) acting on opposite edges. We observe two different regimes. For small heat-bath temperatures $T_L$ and chemical potentials, temperature profiles across the chain display a non-monotonic shape, remain remarkably smooth and even enter the region of negative absolute temperatures. For larger temperatures $T_L$, the transport of energy is strongly inhibited by the spontaneous emergence of discrete breathers, which act as a thermal wall. A strongly intermittent energy flux is also observed, due to the irregular birth and death events of the breathers. The corresponding statistics exhibits the typical signature of rare events of processes with large deviations. In particular, the breather lifetime is found to be ruled by a stretched-exponential law.
Introduction
The study of the nonequilibrium thermodynamics of systems composed of a relatively small number of particles is motivated by the need for a deeper theoretical understanding of the statistical laws that open the possibility of manipulating small-scale systems like biomolecules, colloids or nano-devices. In this framework, statistical fluctuations and size effects play a major role and cannot be ignored, as is customary for their macroscopic counterparts.
Arrays of coupled classical oscillators are representative models of such systems and have been studied intensively in this context [1][2][3]. In particular, the Discrete Nonlinear Schrödinger (DNLS) equation has been widely investigated in various domains of physics as a prototype model for the propagation of nonlinear excitations [4][5][6]. In fact, it provides an effective description of electronic transport in biomolecules [7] as well as of nonlinear wave propagation in layered photonic or phononic systems [8,9]. More recently, a renewed interest in this multipurpose equation has emerged in the physics of gases of ultra-cold atoms trapped in optical lattices (e.g., see Ref. [10] and references therein for a recent survey). Since the seminal paper by Rasmussen et al. [11] it has been realized that the presence in the DNLS of intrinsically localized solutions, named discrete breathers (DBs) (see e.g. [12][13][14]), could be associated with negative-absolute-temperature states. In a series of important papers, Rumpf provided entropy-based arguments to describe asymptotic states above a modulational instability line in the DNLS [15][16][17][18]. It was later found that, above this line, negative-temperature states can form spontaneously via the dynamics of the DNLS. They persist over extremely long time scales, which may grow exponentially with the system size as a result of an effective mechanism of ergodicity breaking [19]. It has been recognized that most of these peculiar features of the DNLS can be traced back to the properties of its Hamiltonian, which admits two first integrals of motion: the total energy H and the total number of particles A (also termed the total norm).
A related question is how the structure of the Hamiltonian influences non-equilibrium properties when the system can exchange energy and/or mass with the environment. In a series of papers [19][20][21] it has been found that, when pure dissipators act at both edges of a DNLS chain (a case sometimes called boundary cooling [22][23][24][25]), the typical final state consists of an isolated static breather embedded in an almost empty background. The breather collects a sizeable fraction of the initial energy and is essentially decoupled from the rest of the chain. The spontaneous creation of localized energy spots out of fluctuations has further consequences for the relaxation to equipartition, since the interaction with the remaining part of the chain can become exponentially weak [26,27]. A similar phenomenology occurs after a quench from high to low temperatures in oscillator lattices [28]. Also, boundary driving by external forces may induce nonlinear localization [29][30][31].
When, instead, the chain is put in contact with thermal baths at its edges, stationary states characterized by a gradient of temperature and chemical potential emerge [32]. The transport of mass and energy is typically normal (diffusive) and can be described in terms of the Onsager formalism. However, peculiar features such as non-monotonous temperature profiles [33], or persistent currents [34] are found, as well as a signature of anomalous transport in the low-temperature regime [35,36].
In this paper we consider a setup where one edge is in contact with a heat reservoir at temperature T_L and chemical potential µ_L, while the other interacts with a pure dissipator, i.e. a mass sink. The original motivation for studying this configuration was to better understand the role of DBs under thermodynamic conditions. At variance with standard setups [1,2], this is conceptually closer to a semi-infinite array in contact with a single reservoir: on the pure-dissipator side, energy can only flow out of the system.
A non-equilibrium steady state can be conveniently represented as a path in the (a − h)-parameter space, where a(x) is the mass density, h(x) the energy density, and x ∈ [0, 1] the rescaled position along the chain (see Fig. 1 for a few sampled paths). Making use of suitable microcanonical definitions [32], these paths can be converted into temperature (T) and chemical-potential (µ) profiles. The presence of a pure dissipator at x = 1 forces the corresponding path to terminate close to the point (a = 0, h = 0), which is singular both in T and in µ. Therefore, slight deviations may easily lead to crossing the β = 0 line, where β is the inverse temperature. This is indeed the typical scenario observed for small T_L, when β smoothly changes sign twice before approaching the dissipator. The size of the negative-temperature region increases with T_L. For high temperatures, a different stationary regime is found, characterized by strong fluctuations of the mass and energy fluxes. In fact, upon increasing T_L, the negative-temperature region first extends up to the dissipator edge and then progressively shrinks in favour of a positive-temperature region (on the other side of the chain). In this regime, the dynamics is controlled by the spontaneous formation (birth) and disappearance (death) of discrete breathers.
In Section 2 we introduce the model and briefly recall the main observables. Section 3 is devoted to a detailed characterization of the low-temperature phase, while the strongly non-equilibrium phase observed for large T_L is discussed in Section 4. This is followed by the analysis of the statistical properties of the birth/death process of large-amplitude DBs, illustrated in Section 5. Finally, in Section 6 we summarize the main results and comment on possible relationships with similar phenomena previously reported in the literature.
Model and observables
We consider a DNLS chain of size N with open boundary conditions, whose bulk dynamics is ruled by Eq. (1), where z_n = (p_n + iq_n)/√2 (n = 1, . . . , N) are complex variables, with q_n and p_n being standard conjugate canonical variables. The quantity a_n = |z_n(t)|^2 can be interpreted as the number of particles or, equivalently, the mass at lattice site n at time t. Upon identifying the set of canonical variables z_n and iz_n*, Eq. (1) can be read as the equation of motion generated by the Hamiltonian functional (2) through the Hamilton equations ż_n = −∂H/∂(iz_n*). We are dealing with a dimensionless version of the DNLS equation: the nonlinear coupling constant and the hopping parameter, which are usually indicated explicitly in the Hamiltonian (2), have been set equal to unity. Accordingly, the time variable t is also expressed in adimensional units. Without loss of generality, this formulation has the advantage of simplifying numerical simulations, in which we can easily check that H and the total mass A = Σ_n |z_n(t)|^2 are conserved quantities.
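For the reader's convenience, the dimensionless DNLS equation of motion and Hamiltonian referenced as Eqs. (1) and (2) take the standard form used, e.g., in Ref. [11] (sign conventions vary across the literature):

```latex
i\,\dot{z}_n = -|z_n|^2 z_n - z_{n+1} - z_{n-1}, \qquad
H = \sum_{n=1}^{N}\Bigl(\tfrac{1}{2}|z_n|^4 + z_n^{*}\,z_{n+1} + z_n\,z_{n+1}^{*}\Bigr),
```

with open boundary conditions z_0 = z_{N+1} = 0. One can check term by term that the first equation follows from the Hamiltonian via ż_n = −∂H/∂(iz_n*), since ∂H/∂z_n* = |z_n|^2 z_n + z_{n+1} + z_{n−1}.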
The first site of the chain (n = 1) is in contact with a reservoir at temperature T_L and chemical potential µ_L. This is ensured by implementing the non-conservative Monte Carlo dynamics described in Ref. [32]. The opposite site (n = N) interacts with a pure stochastic dissipator: the variable z_N is set equal to zero at random times, whose separations are independent, identically distributed random variables, uniform in the interval [t_min, t_max]. On average, this corresponds to simulating a dissipation process with decay rate γ ∼ 1/t̄, where t̄ = (t_max + t_min)/2. Notice that different prescriptions, such as a Poissonian distribution of times with average t̄, or a constant pace equal to t̄, do not introduce any relevant modification of the dynamical and statistical properties of the model (1). Finally, the Hamiltonian dynamics between successive interactions with the thermostats has been generated by a symplectic, 4th-order Yoshida algorithm [37]. We have verified that a time step ∆t = 2 × 10^-2 suffices to ensure suitable accuracy.
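The random-reset dissipator just described is easy to sketch in code. The following Python fragment is an illustration, not the authors' implementation; the values t_min = 2 × 10^-2 and t_max = 4 × 10^-2 are assumptions chosen to be consistent with the paper's t̄ = 3 × 10^-2.

```python
import random

def dissipator_reset_times(t_min, t_max, t_total, seed=0):
    """Generate the reset times of the boundary dissipator: successive gaps
    are i.i.d. uniform in [t_min, t_max], with mean gap t_bar = (t_min + t_max)/2."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while t < t_total:
        t += rng.uniform(t_min, t_max)
        times.append(t)
    return times

# Assumed values, consistent with the t_bar = 3e-2 quoted in the text:
t_min, t_max = 2e-2, 4e-2
t_bar = (t_min + t_max) / 2      # = 3e-2
gamma = 1.0 / t_bar              # effective decay rate gamma ~ 1/t_bar
```

At each of these times the simulation would set z_N = 0; the text notes that Poissonian or even strictly periodic resets with the same average give statistically equivalent results.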
Throughout the paper we deal with measurements of local temperature and chemical potential. Since the Hamiltonian is non separable, it is necessary to make use of the microcanonical definition provided in [38]. The general expressions are nonlocal and rather involved; we refer to [32] for details and the related bibliography.
In what follows, we consider a situation where all parameters other than T_L are kept fixed. In particular, we have chosen µ_L = 0 and t̄ = 3 × 10^-2, with t_max and t_min of order 10^-2. We have verified that the results obtained for this choice of parameter values are general. A more detailed account of the dependence of the results on the thermostat properties will be reported elsewhere.
Finally, we recall the observables that are typically used to characterize a steady state out of equilibrium: the mass flux j_a = 2 Im(z_n* z_{n−1}) and the energy flux j_h.
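The local mass flux can be evaluated directly from the chain state. A minimal sketch (illustrative code, using the sign convention implied by the formula above):

```python
import cmath
import math

def mass_flux(z):
    """Local mass flux j_a(n) = 2 Im(z_n^* z_{n-1}) along a DNLS chain state z."""
    return [2.0 * (z[n].conjugate() * z[n - 1]).imag for n in range(1, len(z))]

# Sanity check: a plane-wave-like state z_n = exp(i k n) carries a uniform
# flux of magnitude 2|sin k| (a standard DNLS result).
k = 0.3
z = [cmath.exp(1j * k * n) for n in range(6)]
flux = mass_flux(z)
```

In a stationary state this quantity, averaged over time, is the same on every bond, which is the property exploited throughout the paper.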
Low-temperature regime: coupled transport and negative temperatures
In the left panel of Fig. 2 we report the average profile of the inverse temperature β(x) as a function of the rescaled site position x = n/N, for different values of the temperature of the thermostat. A first "anomaly" is already noticeable for relatively small T_L: the profile is non-monotonous (see, for example, the curve for T_L = 1). This feature is frequently encountered when a second quantity, besides energy, is transported [32,39,40]. In the present setup, this second thermodynamic observable is the chemical potential µ(x), set equal to zero at the left edge. Rather than plotting µ(x) in the right panel of Fig. 2, we have preferred to plot the more intuitive mass density a(x). There we see that the profile for T_L = 1 deviates substantially from a straight line, suggesting that the lattice might not be long enough to ensure asymptotic behavior. To clarify this point, we have performed simulations for different values of N. The results for T_L = 1 are reported in Fig. 3, where we plot the local temperature T as a function of x. All profiles start from T = 1, the value imposed by the thermostat, and, after an intermediate bump, eventually attain very small values. Since neither the temperature nor the chemical potential is directly imposed by the purely dissipating "thermostat", it is not obvious how to predict the asymptotic value of the temperature (and of the chemical potential). The data reported in the inset suggest a sort of logarithmic growth with N, but this is not entirely consistent with the results obtained for T_L = 3 (see below). If transport were normal and N were large enough, the various profiles should collapse onto each other, but this is far from being the case in Fig. 3. The main reason for the lack of convergence is the growth of the temperature bump: upon further increasing N, the system spontaneously crosses the infinite-temperature line and enters the negative-temperature region.
For T_L = 3, this "transition" has already occurred for N = 4095, as can be seen in Fig. 2. The crossings of the infinite-temperature points (β = 0) at the boundaries of the negative-temperature region correspond to infinite (negative) values of the chemical potential µ, in such a way that the product βµ remains finite at these turning points (data not reported).
To our knowledge, this is the first example of negative-temperature states robustly obtained and maintained under nonequilibrium conditions in a chain coupled with a single reservoir at positive temperature. In order to shed light on the thermodynamic limit, we have performed further simulations for different system sizes. In Fig. 4 we report the results obtained for T_L = 3 and N ranging from 511 to 4095. In panel (a) we see that the negative-temperature region is already entered for N = 1023. Furthermore, its extension grows with N, suggesting that in the thermodynamic limit it would cover the entire profile except the edges.
Since non-extensive stationary profiles have been previously observed both in a DNLS chain and in a rotor chain (i.e., the XY-model in d = 1) at zero temperature and in the presence of chemical-potential gradients [41], it is tempting to test to what extent an anomalous scaling (say, n/√N) can account for the observations. In the inset of Fig. 4(a), we have rescaled the position along the lattice by √N. For relatively small but increasing values of n/√N we do see a convergence towards an asymptotic curve, which smoothly crosses the β = 0 line. This suggests that, close to the left edge, positive temperatures extend over a range of order O(√N), thereby covering a non-extensive fraction of the chain length.
The scaling behavior in the rest of the chain is less clear, but it is possibly a standard extensive dynamics characterized by a finite temperature at the right edge. A confirmation of the anomalous scaling in the left part of the chain is obtained by plotting the profiles of h and a, again as a function of n/√N (see panels (b) and (c) in Fig. 4, respectively). Further information can be extracted from the scaling behavior of the stationary mass flux j_a. In Fig. 5 we report the average value of j_a as a function of the lattice length. There we see that j_a decreases roughly as N^{-1/2}. At first glance this might be interpreted as a signature of energy super-diffusion, but it is more likely due to the presence of pure dissipation at the right edge (in analogy with what is seen in the XY-model [42]).
In stationary conditions, mass and energy fluxes are constant along the chain. This is not necessarily true for the heat flux, which refers only to the incoherent component of the energy transported across the chain. More precisely, the heat flux is defined as j_q(x) = j_h − j_a µ(x) [42]. Since j_a and j_h are constant, the profile of the heat flux j_q is essentially the same as the µ profile (up to a linear transformation). In Fig. 6(a) we report the heat flux for T_L = 1. It is similar to the temperature profile displayed in Fig. 3, and it is no surprise to find that j_q is larger where the temperature is higher. Panel (b) of the same figure refers to T_L = 3. A very strange shape is found: the flux not only changes sign twice, but also exhibits a singular behavior at the changes of sign, as if a sink and a source of heat were present at these two points, where the chemical potential and the local temperature diverge (see, e.g., the red dashed line in panel (b), representing the β profile). The scenario looks less awkward if the entropy flux j_s = j_q/T is monitored. For T_L = 1, the bump disappears and we are in the presence of a more "natural" shape (see Fig. 6(d)). More importantly, the singularities displayed by j_q for T_L = 3 are almost removed, since they occur where T → ∞ (we believe the residual peaks are due to an imperfect identification of the singularities). If the singular points are removed, the entropy flux j_s = j_q/T has a similar shape in the cases T_L = 1 and T_L = 3. A more detailed analysis of this scenario is, however, necessary in order to provide a solid physical interpretation.
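The relations j_q(x) = j_h − j_a µ(x) and j_s(x) = j_q(x)/T(x) can be applied pointwise once the profiles are known. The following sketch uses made-up smooth profiles and flux values purely for illustration; they are not the simulated ones.

```python
def heat_and_entropy_flux(j_h, j_a, mu, T):
    """Heat flux j_q(x) = j_h - j_a * mu(x) and entropy flux j_s(x) = j_q(x)/T(x),
    given the (constant) energy and mass fluxes and the local profiles mu(x), T(x)."""
    j_q = [j_h - j_a * m for m in mu]
    j_s = [q / t for q, t in zip(j_q, T)]
    return j_q, j_s

# Illustrative (made-up) profiles, mu = 0 at the left edge as in the paper:
xs = [i / 10 for i in range(11)]
mu = [-x for x in xs]              # chemical potential decreasing along the chain
T = [1.0 - 0.9 * x for x in xs]    # temperature decaying toward the dissipator
j_q, j_s = heat_and_entropy_flux(j_h=0.02, j_a=0.01, mu=mu, T=T)
```

Since j_a and j_h are constants, j_q inherits its spatial shape entirely from µ(x), which is the point made in the text.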
High temperature regime: DB dominated transport
Let us now turn our attention to the high-temperature case. As shown in Fig. 2(a), for sufficiently large T_L values, the positive-temperature region close to the dissipator disappears (this is already true for T_L = 5) and, at the same time, the positive-temperature region on the left grows. In other words, negative temperatures are eventually restricted to a tiny region close to the dissipator side. This stationary state is induced by the spontaneous formation and destruction of large DBs close to the dissipator. On average, this process gives rise to locally steep amplitude profiles reminiscent of barriers raised close to the right edge of the chain; see Fig. 2(b). As is well known, DBs are localized nonlinear excitations typical of the DNLS chain. Their phenomenology has been widely described in a series of papers showing that they emerge when energy is dissipated from the boundaries of a DNLS chain [19][20][21]. In fact, when pure dissipators act at both chain boundaries, the final state turns out to be an isolated DB embedded in an almost empty background. In view of its localized structure and fast rotation, the DB is essentially uncoupled from the rest of the chain and, a fortiori, from the dissipators. One cannot exclude that a large fluctuation might eventually destroy the DB, but this would be an extremely rare event.
In the setup considered in this paper, DB formation is observed even though one of the two dissipators is replaced by a reservoir at finite temperature. DBs are spontaneously produced close to the dissipator edge only for sufficiently high values of T_L. Due to its intrinsic nonlinear character, this phenomenon cannot be described in terms of standard linear-response arguments. In particular, the temperature reported in the various figures cannot be interpreted as the temperature of a specific local-equilibrium state: it is at best an average over the many different macrostates visited during the simulation. Actually, it is even possible for some of these macrostates to deviate substantially from equilibrium. Therefore we limit ourselves to some phenomenological remarks. The spontaneous formation of small breathers close to the right edge drastically reduces the dissipation and contributes to a further concentration of energy through the merging of the DBs into fewer, larger ones and, eventually, into a single DB. Mechanisms of DB merging have already been encountered under different conditions in the DNLS model [19]. The onset of a DB essentially decouples the left from the right region of the chain. In particular, it strongly reduces the energy and mass fluxes. One can spot DBs simply by looking at the average mass profiles: in Fig. 2, the presence of a DB is signaled by the sharp peak close to the right edge for both T_L = 9 and T_L = 11.
The region between the reservoir and the DB should, in principle, evolve towards an equilibrium state at temperature T_L. However, a close look at the β-profile in Fig. 2 reveals a moderate temperature gradient, typical of a stationary non-equilibrium state. In fact, DBs are not only born out of fluctuations, but can also collapse due to local energy fluctuations. As shown in Fig. 7(a), once a DB is formed, it tends to propagate towards the heat reservoir located at the opposite edge. The DB position is tagged by black dots drawn at fixed time intervals over a very long time lapse of O(10^6), in natural time units of the model. This backward drift comes to a "sudden" end when a suitable energy fluctuation destroys the DB (see Fig. 7(a)). Afterwards, mass and energy start flowing again towards the dissipator, until a new DB is spontaneously formed by a sufficiently large fluctuation close to the dissipator edge (the formation of the DB is signaled by the rightmost black dots in Fig. 7(a)) and the conduction of mass and energy is inhibited again. The DB lifetime fluctuates strongly, thus yielding a highly irregular evolution. The statistical properties of this birth/death process are discussed in the following section. The statistical process describing the appearance/disappearance of the DB is a complex one. On the one hand, we are in the presence of a stationary regime: the mass and energy currents flowing through the dissipator are found to be constant when averaged over time intervals much longer than the typical DB lifetime. On the other hand, the strong fluctuations in the DB lifetime mean that this regime is not steady but rather corresponds to a sequence of many different macrostates, some of which are likely to be far from equilibrium. Altogether, in this phase, the presence of long-lasting DBs induces a substantial decrease of heat and mass conduction. This is clearly seen in Fig. 5, where j_a is plotted for different chain lengths.
The two sets of data corresponding to T_L = 9 and 11 lie at least one order of magnitude below those obtained in the low-temperature phase. The sharp crossover separating the two conduction regimes is neatly highlighted in the inset of Fig. 5, where the stationary mass flux is reported as a function of the reservoir temperature for size N = 4095.
The effect of the appearance and disappearance of the DB on the transport of heat and mass along the chain in the high-reservoir-temperature regime is twofold. In the fast dynamics, it produces bursts in the output fluxes of these quantities, as demonstrated in Fig. 7(b): when the DB is present, the boundary flux towards the dissipator decreases, while when the DB disappears, avalanches of heat and mass reach the dissipator. In the slow dynamics, obtained by averaging over many bursts, the conduction of heat and mass from the heat reservoir to the dissipator is strongly reduced with respect to the low-temperature regime. We can conclude that the most important effect of the intermittent DB in the high-temperature regime is to act as a thermal wall.
Finally, in Fig. 6 (panels (c) and (f)) we plot the heat- and entropy-flux profiles observed in the high-temperature phase, respectively. The strong fluctuations in the profiles are a consequence of the large fluctuations in the DB birth/death events and in its motion; it is now necessary to average over much longer time scales to obtain sufficiently smooth profiles. It is interesting, however, to observe that the profile of j_s in Fig. 6(f) exhibits an overall shape similar to that observed in the low-temperature regimes (see Fig. 6, panels (d) and (e)). This notwithstanding, there are two main differences with respect to the low-temperature behaviour: first, close to the right (dissipator) edge we are now in the presence of wild fluctuations of j_s; second, the overall scale of the entropy-flux profile is heavily reduced.
Statistical analysis
In order to gain information on the high-temperature regime, it is convenient to look at the fluctuations of the boundary mass flux j_a* and the boundary energy flux j_h* flowing through the dissipator edge. In Fig. 8 we plot the distribution of both j_a* (panel (a)) and j_h* (panel (b)) for T_L = 11, for different chain lengths. In both cases, power-law tails almost independent of N are clearly visible. This scenario is highly reminiscent of the avalanches occurring in sandpile models. In fact, such an analogy has previously been invoked in the context of DNLS dynamics to characterize the atom leakage from dissipative optical lattices [43].
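Power-law tails like those in Fig. 8 are often quantified by estimating a tail index. The sketch below is a generic illustration using the classical Hill estimator on synthetic Pareto-distributed data; it is not the analysis performed in the paper.

```python
import math
import random

def hill_estimator(samples, k):
    """Hill estimator of the tail index alpha, based on the k largest samples:
    alpha_hat = k / sum_{i<k} log(x_(i) / x_(k)), with x_(0) >= x_(1) >= ..."""
    s = sorted(samples, reverse=True)
    return k / sum(math.log(s[i] / s[k]) for i in range(k))

# Synthetic data with a known power-law tail P(x > t) ~ t^(-alpha), alpha = 1.5
rng = random.Random(42)
samples = [rng.paretovariate(1.5) for _ in range(50000)]
alpha_hat = hill_estimator(samples, k=2000)  # should be close to 1.5
```

The choice of k (how far into the tail the fit extends) is the usual judgment call with this estimator: too small gives large variance, too large mixes in non-tail data.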
We processed time series of the type reported in Fig. 7(b) to determine the duration τ_b of the bursts (avalanches) and the duration τ_l of the "laminar" periods in between consecutive bursts (i.e. the DB lifetimes). In practice, we first fixed a flux threshold (s = 4.25 × 10^-3) to distinguish between burst and laminar periods. Furthermore, a series of bursts separated by times shorter than dt_0 = 10^3 has been treated as a single burst. This algorithm has been applied to 20 independent realizations of the DNLS dynamics in the high-temperature regime. Each realization has been obtained by simulating a lattice with N = 511 sites, T_L = 10 and a total integration time t = 5 × 10^6. In these conditions we have recorded nearly 7000 avalanches. The probability distribution of the burst duration is plotted in Fig. 9(a). It follows a Poissonian distribution, typical of random uncorrelated events. We have also calculated the amount of mass A and energy E associated with each burst, by integrating the mass and energy fluxes during the burst. The results are shown as a scatter plot in the inset of Fig. 9(a); they display a clear (and unsurprising) correlation between these two quantities. The time interval between two consecutive bursts is characterized by a small mass flux: typically, during this period the chain develops a stable DB that inhibits the transfer of mass towards the dissipator. Fig. 9(b) shows the probability distribution of the duration of these laminar periods. The distribution displays a stretched-exponential decay with characteristic exponent σ = 0.5. Such a scenario is consistent with the results obtained in [24] for the FPU chain and in [20] for the DNLS lattice; the values of the exponent σ found in these papers are not far from the one obtained here (see also [44] for similar results on rotor models).
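The burst/laminar segmentation described above can be sketched as follows. This is a generic reimplementation of the stated recipe, not the authors' code; the threshold s and merging time dt_0 are the values quoted in the text.

```python
def segment_bursts(t, flux, s=4.25e-3, dt0=1e3):
    """Split a flux time series (t, flux) into bursts (flux > s) and laminar
    phases, merging bursts whose separation is shorter than dt0, as in the paper.
    Returns the burst durations tau_b and the laminar durations tau_l."""
    # Raw burst intervals where the flux exceeds the threshold
    raw, start = [], None
    for ti, fi in zip(t, flux):
        if fi > s and start is None:
            start = ti
        elif fi <= s and start is not None:
            raw.append((start, ti))
            start = None
    if start is not None:
        raw.append((start, t[-1]))
    # Merge bursts separated by less than dt0
    merged = []
    for a, b in raw:
        if merged and a - merged[-1][1] < dt0:
            merged[-1] = (merged[-1][0], b)
        else:
            merged.append((a, b))
    tau_b = [b - a for a, b in merged]
    tau_l = [nxt[0] - prev[1] for prev, nxt in zip(merged[:-1], merged[1:])]
    return tau_b, tau_l
```

Applied to the simulated flux series, the τ_l values obtained this way are the DB lifetimes whose distribution is reported in Fig. 9(b).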
Altogether, there is a clear indication that the statistics of the durations of avalanches and walls are controlled by substantially different mechanisms. The death of a wall/DB seems to be ruled by rare-event statistics [20,24,44], while its birth appears to be a standard activation process, emerging when an energy barrier is eventually overcome. In fact, when mass starts flowing through the dissipator edge after the last death of a wall/DB, we have to wait for the spontaneous activation of a new wall/DB before the mass flux vanishes again. Conversely, the wall/DB is typically found to persist over much longer time scales, and its eventual destruction is determined by a very rare fluctuation, whose amplitude must be large enough to compete with the energy that the wall/DB has meanwhile collected during its motion towards the reservoir.
Conclusions
We have investigated the behavior of a discrete nonlinear Schrödinger equation sandwiched between a heat reservoir and a mass/energy dissipator. Two different regimes have been identified upon changing the temperature T_L of the heat reservoir, while keeping the properties of the dissipator fixed. For low T_L and low chemical potential, a smooth β-profile is observed, which extends (in the central part) to negative temperatures, without, however, being accompanied by the formation of discrete breathers. In the light of the theoretical achievements of Rumpf [15][16][17][18], we can say that such negative-temperature regions are certainly incompatible with the assumption of local thermodynamic equilibrium usually invoked in the linear-response theory of transport processes. In this sense, despite the smoothness of the profiles and the stationarity of the mass and energy fluxes, such negative-temperature configurations should rather be considered metastable states. Lacking any numerical evidence of their final evolution, even for moderate chain lengths, we can only speculate that they could last over astronomically long times. Further studies are necessary to clarify whether and how this regime can persist in the thermodynamic limit. It is nevertheless remarkable that negative temperatures are steadily sustained for moderately long chains. As a second anomaly, we report the slow decrease of the mass flux with the chain length: the hallmark of an unconventional type of transport. This feature is, however, not entirely new; a similar scenario has been previously observed in setups with dissipative boundary conditions and no fluctuations [41].
For larger temperatures T_L, we observe an intermittent regime characterized by the alternation of insulating and conducting states, triggered by the appearance/disappearance of discrete breathers. Note that this regime is rather unusual, since it is generated by increasing the amount of energy provided by the heat bath, rather than by decreasing the chemical potential, as observed for example in the superfluid/Mott-insulator transition of Bose-Einstein condensates in optical lattices. The intermittent presence of a DB/wall makes the chain behave as a rarely leaking pipe, which releases mass droplets at random times, when the DB disappears according to a stretched-exponential distribution. The resulting fluctuations of the fluxes suggest that the regime is stationary but not steady, i.e., locally the chain irregularly oscillates among different macroscopic states characterized, at best, by different values of the thermodynamic variables. A similar scenario is encountered in the XY chain when both reservoirs consist of pure dissipation accompanied by a deterministic forcing [41]. In such a setup, as discussed in Ref. [41], the temperature in the middle of the chain fluctuates over macroscopic scales. Here, however, given the rapidity of the changes induced by the DB dynamics, there may be no well-defined values of the thermodynamic observables. For example, during an avalanche it is unlikely that temperature and chemical potential are well-defined quantities, as there may not even be local equilibrium. This extremely anomalous behavior is likely to smear out in the thermodynamic limit, since the breather lifetime probably does not increase with the system size; however, it is definitely clear that the associated fluctuations strongly affect moderately long DNLS chains.
The employer’s perspective: employment of people with disabilities in wage subsidized employments
The aim of this article is to examine employers' perspectives on the conditions of employment of people with disabilities within a context of wage subsidies. Employers in different workplaces were interviewed, and the interviews were analysed according to qualitative content analysis (Graneheim and Lundman 2004). The results show that four factors (attitude, matching, economic incentives and accommodations) are important for the employment of people with disabilities within a context of wage subsidies. Positive earlier experiences of people with disabilities serve as one of the reasons employers are willing to consider people with disabilities for jobs, but for hiring to take place, there must also be a match between the right person and the right job. Wage subsidies are seen, within this context, as an incentive to hire people who have reduced work capacity; accommodations are seen as necessary for the successful implementation of such hiring decisions. This knowledge can be applied in the design of support measures for unemployed people with disabilities.
Introduction
In Sweden, at least one in ten persons of working age reports some kind of disability as defined in the United Nations' standard rules (Statistics Sweden 2009). About half of these people also report a reduced ability to work due to disability as defined by the respondents themselves (Statistics Sweden 2009). Swedish law prohibits discrimination against people with disabilities in the workplace (Swedish Government 1999); however, the law has been criticized for individualizing the problem and not addressing the behaviour of the market (Hännestrand et al. 2000). People with disabilities are still discriminated against in the labour market (Jones 2008; Statistics Sweden 2009), and despite active labour market measures to counteract this problem, higher unemployment rates for people with disabilities prevail in Sweden (Statistics Sweden 2009) as well as in other countries in the Organisation for Economic Co-operation and Development (OECD 2009). To address this exclusion from the labour market, Sweden has adopted policies, often in the form of employment programmes, to facilitate the transition from unemployment to employment. The most utilized programme in Sweden is wage subsidies (Swedish National Audit Office 2007), with approximately 90,000 people currently employed in jobs with wage subsidies (Swedish Public Employment Service 2011). In the European context, programmes with wage subsidies have shown superior results to other active labour market programmes, but there have been few evaluations of active labour market programmes for people with disabilities (Kluve 2010), and the system of wage subsidies needs to be further researched. Active labour market programmes, in this context, are defined as 'all social expenditure (other than education) which is aimed at the improvement of the beneficiaries' prospect of finding gainful employment or to otherwise increase their earnings capacity' (OECD 2012).
Swedish evaluations (Swedish Ministry of Employment 2003;Swedish National Audit Office 2007) of wage subsidies mainly focus on the effects of wage subsidies and tend to lack employer perspective. Employers' perceptions of employing persons with disabilities in a context of wage subsidies are highly relevant to an understanding of the system of wage subsidies. In the present study, we investigate Swedish employers' perspectives of employing people with disabilities within the context of wage subsidies.
Wage subsidies
For people with disabilities in Sweden, there has been disproportionately high unemployment for decades (Swedish Ministry of Employment 2012). This may be due to Swedish employers' reluctance to employ people with disabilities (Knutsson and Persson 2001) and also to discrimination against people with disabilities in the workplace (Statistics Sweden 2009). To address this, Sweden has adopted facilitation policies that include wage subsidy programmes aimed at increasing labour market participation for people with disabilities. Wage subsidies are a form of financial assistance given to employers who employ people with reduced workability. The concept of workability is often poorly defined (Tengland 2010), and the use of the concept within the Public Employment Service is somewhat arbitrary (Swedish National Audit Office 2007). In Swedish labour market policies, 'people with reduced workability' is used both as a term describing a category of unemployed people eligible for support from the Public Employment Service (i.e. someone with a disability who may be in need of a workplace accommodation upon employment) and as a term describing employees who are eligible for accommodations and wage subsidies in their employment. To be eligible for wage subsidies, an individual must have a medical certificate of work disability. The level of subsidy is based on the level of reduction in ability to carry out the actual work. In most cases, the reduction in ability is difficult to measure, so the decision is made in consensus with the parties concerned. The wage subsidies can last for up to four years but are supposed to be renegotiated annually. Participants in wage subsidy programmes receive regular salaries according to collective agreements.
The research on wage subsidies in Sweden consists primarily of evaluations of efficiency and outcomes. An economic evaluation shows that wage subsidies have a slight positive effect on labour market participation but a negative effect on regular employment because subsidized employment causes displacement (Calmfors, Forslund, and Hemström 2002). An investigation of the wage subsidies system (Swedish Ministry of Employment 2003) found problems with both the Public Employment Service's way of handling the system and the circumstances under which people with subsidized wages work. Major criticism was directed at the lack of job and career development, as well as at the effective 'pinning down' of candidates to low paid jobs (referred to hereafter as the pin-down effect). The Swedish National Audit Office (2007) examination of the Swedish Public Employment Service's manner of handling wage subsidies revealed inadequacies in almost every part of the wage subsidy process. The audit pointed to deficiencies in documentation, in definitions of disability, in choices of measures, in matches between workability and actual workplace demands, in follow-up of job development and in the negotiation of prolonged subsidies. One of the problems with wage subsidies, when evaluated in relation to ability to work, is that ability to work is not a definitive concept. As for all humans, ability to work is ever-fluctuating, depending on situation as well as context, and therefore difficult to measure. The same can be said about disability. This suggests that the wage subsidies system, as compensation for employing people with disabilities with reduced ability to work, is a system of uncertainty, as noted in these evaluations (Swedish Ministry of Employment 2003; Swedish National Audit Office 2007).
Supported employment
Supported employment (SE) is a method that focuses on individualized support for persons with various disabilities to assist them to enter the job market and obtain employment. In order to maintain employment, these individuals receive on-the-job training through their employers with the support of a job coach until they have acquired the necessary skills for the job. The job and the support are adapted to the individual's needs in relation to the employer's needs (Wehman et al. 2006). There has been increasing use of methods based on SE in many countries, including Sweden. The criterion for participation in SE programmes in Sweden is reduced workability. SE is often used in combination with wage subsidies in Sweden, and this can be seen as problematic in light of the above-mentioned criticism concerning the uncertain handling of wage subsidies (Swedish Ministry of Employment 2003;Swedish National Audit Office 2007). Criticism could also be raised against the use of SE together with wage subsidies: because SE aims to reduce barriers by matching individuals with jobs so that few or no hindrances arise in the given workplace, it stands to reason that, if this is done properly, there should be no need for wage subsidies because there should be no reduced workability in a given job.
Aim of the article
An understanding of the perceptions and experiences of employers can be one of the keys to understanding reasons behind decisions to employ or not to employ a person with disability. Despite this, only a few studies have focused on understanding employers' views of recruitment and employment of people with disabilities (Waterhouse et al. 2010). The aim of this article, therefore, is to examine employers' perspectives regarding the conditions for employment of people with disabilities. As the study takes place within a context in which wage subsidies are used, the question it investigates is as follows: what are the main factors behind decisions to employ people with disabilities within a context of wage subsidies?
Method
In 2007, a research project called Sustainable Work began in cooperation with three organizations working with SE. The aim of the project was to identify key components of a sustainable work situation for people with disabilities. The research was carried out from 2007 to 2010. Register data and open-ended interviews were the primary sources of data. The study reported in this article is a part of the project, and the primary source of data in this article consists of interviews with employers and supervisors who have employed a person with disability.
Selection of informants
The inclusion criteria for the informants in this study were as follows: collaborating with an organization working according to the principles of SE and having in their employ persons with disabilities for at least six months. In total, 317 employers who had current experience employing people with disabilities were listed as potential informants. Of these, 21 were selected. The sampling method was to choose participants with various experiences. One set of sampling criteria referred to the characteristics (age, gender, type of disability and length of employment) of the employee(s) with disabilities. Another set of sampling criteria referred to workplace characteristics (company size, sector and branch), as such could play a part in employment decisions and job characteristics. The rationale for sampling employers with various experiences was to see whether these employers differed in their perceptions based on the characteristics of their experiences.
The job coaches from each of the three SE organizations contacted the informants chosen, introduced the study, and asked if they were willing to participate. Those who agreed to participate were contacted by a researcher who informed them of the purpose of the study, their rights, the procedures to be undergone, and the potential risks and benefits of participation. Of the 21 employers contacted, 19 agreed verbally to participate in the study. Two employers declined to participate because they no longer worked at the company or workplace in question and did not consider themselves up to date with the employment in focus. As they no longer worked at the workplace of interest, they were not considered eligible participants, and one of them was replaced by another informant at another company who did fulfil the eligibility criteria. All participants were guaranteed confidentiality and were informed that they could discontinue their participation at any time. Five of the informants discontinued the study before the interviews took place, citing lack of time. Two of these informants were replaced by supervisors in their places of work. Another three supervisors were also interviewed following recommendations from the employers. The supervisors were chosen because they had all been responsible for the recruitment processes and employments in focus. Since the supervisors in these cases replaced the employers and had a demand-side perspective in the interviews, they are referred to as employers in the results and discussion sections.
Characteristics of the participants
In all, 20 informants participated in the study: 15 employers and five supervisors. There were 13 males and seven females. Sixteen were from the private sector (12 employers and four supervisors), while the remaining four were employed in the public sector. The participants represented different company sizes: five of the participants were from small companies with fewer than six employees; five were from companies with more than 50 employees and the remaining 10 were from medium-sized companies. The supervisors were mainly from big or medium-sized companies; they had acted as contact persons for the SE organization and as supervisors of the relevant employees, often replacing the employers in day-to-day issues concerning the employments in focus. The supervisors had been part of the employment decision, as they had been asked previously whether or not they wanted to supervise an employee with reduced workability. All the informants had current experience with employing or supervising employees with disabilities. The most common disabilities of which they had experience were neuropsychiatric diagnoses and learning disabilities, followed by psychiatric and somatic diagnoses. Other diagnoses represented among the employees in the study were hearing loss/deafness, brain injury, physical injury and loss of sight.
Data collection
Semi-structured face-to-face interviews were carried out by one researcher in six cases and by two researchers in 14 cases. The interviews took place in 2008. The researchers used an interview protocol that addressed various aspects of being an employer: why they chose to hire the person in question, introduction in the workplace and training, job development, accommodations, career development and workability versus productivity requirements. Questions also addressed disability-related issues such as the employer's former experiences of people with disabilities. Each interview lasted 30 to 90 minutes and was conducted at the informant's place of work. The interviews were recorded and transcribed verbatim.
Data analysis
A qualitative content analysis based on guidelines from Graneheim and Lundman (2004) was carried out. The selected focus of analysis in the study was the experience of being an employer of persons with disabilities. The data analysis was carried out in several steps. (1) All interviews were transcribed, read and reread in order to gain an overall sense of the content. (2) The next step was to identify items in the text as meaningful units. The criteria for deciding that units were meaningful were that the units addressed the questions of why and how people with disabilities get and keep employment. (3) The units were sorted into different categories and subcategories. One category, for example, was job development, with subcategories including formal training/education and informal training/education. The categories were coded in an inductive manner, going from text to categories. (4) The meaning units were condensed into shorter descriptions close to the original text. (5) An interpretation of the underlying meaning of the condensed units was made with the aim of understanding the significance of these units in terms of the employment of people with disabilities (see Table 1). (6) The meaning units and their alternative interpretations were discussed by three researchers, resulting in consensus about how to interpret the meaning units. (7) The meaning units that described characteristic features of the employers' experiences and their interpretations were sorted into sub-themes with the aim of finding themes that expressed the concealed content of the complete data. The following are some examples of sub-themes: being the right person to hire, expressed by the informants in terms of the skills or characteristics they appreciated in their employees, and suitable jobs, expressed in terms of the kinds of working tasks informants thought suitable for the employees.
These sub-themes were seen as expressions of 'matching the right person with a suitable job', so matching then became the theme. (8) The sub-themes and themes were analysed and reflected upon in light of existing literature on the functioning of the labour market and the concept of disability. The analysis looked for contextual factors that could yield a more thorough understanding of the different descriptions (described in the earlier steps of the analysis) of employers' perceptions concerning the employment of persons with disabilities. All researchers in the research group were involved in the first five steps of the analysis to strengthen the trustworthiness of the interpretations. The data analysis was written in Swedish and later translated into English by a professional translator.
Central findings
Three themes emerged from the experience of being an employer of a person with a disability: (I) the employer's attitude, formed by previous experience of people with disabilities, and how this experience influenced social responsibility; (II) the match between the jobs that are considered suitable for people with disabilities and the personal characteristics desirable for the jobs offered and (III) the significance of wage subsidies as an incentive for the employment of people with disabilities and in making possible the soft accommodations needed for such employees in the workplace.
Although severity and type of disability are likely to affect employment (Jones 2006b), the severity of the disability and the ways in which the disability affected each employee's ability to work were not explicitly expressed by the informants and did not appear as a theme in the analysis.
Theme I: the attitude of the employers
The vast majority of employers in this study had previous experiences of people with disabilities that influenced their decision to employ a person (or persons) with reduced work capacity. Such experiences, described in the interviews, mainly involved situations in which a person with disability had done something that the participant considered extraordinary: something unexpected which was at odds with their expectations. Many of the experiences related by the participants were of family, school or leisure activities. Employers described themselves as impressed by what those individuals had accomplished despite their disabilities, as the following quote illustrates: In high school . . . it was an eye-opener . . . a lot of youth with disabilities . . . cerebral palsy . . . wheelchairs . . . some couldn't talk . . . couldn't control their bodies . . . but they were as smart as, if not smarter than, me and completely aware of their surroundings, and that was a real eye-opener. Maybe that's why I like to make it easier for people with disabilities. (Employer, manufacturer of metal products, seven employees) Many respondents described concrete situations in which people with disabilities had proved themselves extraordinary, as competent and complete persons in the respondents' views.
Another reason mentioned for hiring people with disabilities was the desire to make it easier for those who have difficulty finding a job. It was considered a matter of social responsibility, as this employer describes: I think that as an employer you have a social responsibility. (Employer, book shop, seven employees) For two of the employers from larger companies, the employment of people with disabilities was part of a stated policy of corporate social responsibility (CSR). In smaller companies, the hiring of a person with disabilities was mainly attributable to the employer's own views concerning the importance of diversity in the workplace and of helping others in life. Some employers also thought that the employment of people with disabilities had a surplus value for other employees in the workplace: Employment of people with disabilities can result in other employees developing and changing their attitudes. (Employer, restaurant, 230 employees) These employers were eager to integrate such views into the workplace culture. The fact that they chose to concentrate on people with disabilities in their practice of social responsibility, however, seems to be connected to their previous positive experiences of people with disabilities.
Theme II: matching
Matching can be seen as a process containing two elements: the jobs that are considered suitable for people with certain characteristics (suitable jobs) and the personal characteristics necessary for the jobs offered. The types of jobs employers thought of as suitable or available to people with disabilities differed to some extent among the employers interviewed, but there were some common features. First and foremost, the employment of people with disabilities was seen as something unusual, as somewhat different from the employment of other people. This is illustrative of the participating employers' views concerning people with disabilities and working life: work was not seen as a disabled person's natural environment, and employing people with disabilities was often seen, at least initially, as something out of the ordinary. If the employment of a person with disabilities went well, however, respondents were happy to consider employing more people with disabilities, as several of the respondents had in fact done. Positive experiences of the initial employment of a person with a disability can thereby result in the employment of people with disabilities being seen as an ordinary, rather than unusual, occurrence.
In this study, most of the disabled employees were employed within industry or trade or in different services. Common occupational groups were factory workers, assistants and warehouse staff. Often the jobs involved assisting other workers in the workplace. The employees with disabilities were seen, in some cases, as complements to ordinary staff, as helpers. The jobs filled were mainly for low- or unskilled labour. Some of the tasks performed were monotonous and repetitive, and employers described these tasks as especially suitable for persons with disability. Petty jobs were also mentioned as especially suitable for employees with disabilities. This may be because it is expensive to produce certain products in Sweden, so it makes sense to make use of subsidized labour for production tasks when possible. Employees with disability were seen as subsidized labour in that their employment entitled the company to wage subsidies.
We have very many petty jobs... sitting and cleaning small plugs... it's a lot of working with your hands, and such a thing is, of course, extremely expensive to pay for... we have jobs for virtually everyone, regardless of disability... there is always something to do. (Employer, plastics industry, 16 employees) Only a few of the workers in this study worked in self-governed or independent work situations, and they were often well- or even over-qualified for their jobs. These workers had a somewhat different situation than those who worked with low-skilled tasks. They were less often seen as 'disabled', and the respondents attached less importance to disability and more to the employee's knowledge and experience. However, the word 'disability', as used by the employers studied, often seemed to suggest inability or reduced opportunity for development to the employers, and this is reflected in the characteristics of the jobs made available to most of the workers in the study. Employment of persons with disabilities was seen as a means to obtain employees who could do monotonous work and who would have no desire for advancement: It's a way to get people who can do certain jobs, simple operations, and who may not have the same desire to do a lot of things, but are quite happy to have jobs to go to. (Employer, packing industry, 15 employees) Employment of people with disabilities in monotonous jobs was described as having several advantages. It was a convenient way of getting the labour-intensive work done (due to subsidized salaries) without demands for professional/career development, as the employees were assumed to be quite happy having jobs at all.
To be seen as the right person for the job is another prerequisite for obtaining employment. Employers described many skills that made the employees attractive for employment. The fact that almost all employees in this study were employed in positions that did not require more than some basic skills may have affected what kinds of skills the employers in question regarded as necessary for employment. Skills mentioned as attractive were personal characteristics such as loyalty, not necessarily skills related to the work tasks. Motivation, willingness to work, commitment and having a strong work ethic were described as attractive skills, as were readiness and devotion: He is a pleasant and polite young man, a valued colleague, helpful. He never says no to an assignment; he does it happily. (Supervisor, packing industry, 15 employees) To be a model employee in the workplace, social skills as well as experience are needed. The most attractive employees were those who had education, social skills and willingness to work: He is dutiful and loyal and he is trained as a mechanic... he has tremendous knowledge. (Employer, car rental, 10 employees) Formal education was a factor mentioned by all employers as important but not decisive. Some employers wanted a well-educated or trained employee, whereas others could see advantages with a 'blank slate', a person who could be formed to suit the workplace. Two of the employers offering unskilled jobs did not see lack of education or experience as a barrier to employment as long as they did not have to pay for an extensive learning period.
Although the matching process was successful in most cases, all but two of the employers saw no opportunities for career development for their employees with disabilities. In some cases, this was due to the disability and the employer's expectations of what the employee could cope with in work situations. In other cases, the nature of the tasks available in the workplace also played an important role in career development: there simply were no opportunities for advancement in some workplaces, especially not in unskilled jobs and in small companies.
Theme III: wage subsidies and accommodations as important conditions
There were two factors that were seen as important by the respondents when employing a person with disability: wage subsidies and accommodations. Wage subsidy was seen by all employers as compensation for reduced productivity. In the absence of wage subsidy, the employers emphasized, they would probably not have hired the person. On the other hand, employers reported that if a person with disabilities had the right education and/or former experience, they might consider employment. The wage subsidies created an important competitive edge, especially in small companies for which overall wage reduction was extremely important. Some of the employees were employed with wage subsidies that compensated up to 80% of their total salaries, lowering overall employment costs for the employer. One employer described why he saw advantages to employing at such low cost: You may only pay 20% (of the salary) the first and second years... if I can make money, then it becomes interesting. (Employer, gas station and restaurant, 40 employees) All employers saw the advantages of cheaper labour. A few employers considered that their employees' progress could eventually lead to employment without subsidy as long as their productivity reached a certain level. Where disability affected productivity, however, the employers could not justify, from a production perspective, employment in the absence of subsidy: It goes without saying that if we were to be in a position in which I would have to pay her full pay, then I could not justify it. I can't do it, not from a productivity perspective. But with the approach that exists today, that is, with wage subsidy, she is an asset. She may do small things all the time, but it doesn't go very fast. (Employer, furniture company, 500 employees) Thus, in these cases, wage subsidies play a decisive role and serve as one of the reasons for the respondents to employ a person with disabilities.
Whereas wage subsidies can be seen as important for positive outcomes with regard to people with disabilities in the decision-making processes involved in hiring, accommodations can be seen as necessary for successful implementation of these hiring decisions, that is, for successful continued employment of an employee with disabilities. All employees in the study were in some kind of accommodated work situation due to their disabilities. The accommodations were exclusively 'soft', such as adjustments in work hours, pace or type of tasks, as opposed to 'hard', which would involve various types of technical or physical adjustments in the workplace. If an accommodated work situation is seen as a necessary condition for successful employment, then accommodations clearly have consequences for job development and career.
The accommodated work situation was seen by some employers as problematic in that accommodation often ruled out flexibility. The need for accommodation could be a barrier to employment from a productivity perspective. If a production process needs to be changed quickly, employers may lack the time to provide accommodation in the new work situation. However, none of the employers thought that the productivity of the employees with reduced workability was entirely related to adjustments in the pace of work or in tasks, and the wage subsidy was seen as a reasonable trade-off for accommodations.
He gets very, very stressed out when he doesn't handle the job... then he wants a lot more time than the others need... and we let him have it, because it's a wage subsidized job, so we can actually do so . . . (Employer, personal assistance, 5000 employees) Depending on the accommodations, the employers said it was difficult to see how the wage subsidy could be reduced in the long run because of the accommodations made. The tasks excluded in the adjustments made still needed to be performed by someone else who needed to be paid. Should the wage subsidy disappear and the employers have to pay an unsubsidized salary, employers described their choice as easy: they would have to employ someone who could perform all tasks without accommodation.
Discussion
The aim of this article is to describe employers' perspectives on the employment of people with disabilities. The employers were strategically chosen and had experience with employees with disability. The results show that four types of factors (attitude, matching, wage subsidies and accommodations) are important for the employment of people with disabilities within a context of wage subsidies.
It should be noted that all the employers in this study had had the opportunity to see the potential employee in action before making the hiring decision. The study took place in a context of SE in which the employers received support from an SE organization in the hiring/employment of a person with disability. Moreover, the results should be understood in light of the fact that economic incentive in the form of wage subsidy was used to encourage the hiring of people with disabilities. One can assume that the use of incentives rather than prescriptive legislation makes a difference with regard to which mechanisms are central to understanding employers' hiring decisions and accommodations in the workplace.
The size of the company may also influence employment decisions. Since employers' previous experience is an important factor for recruitment, it may be easier for a person with disability to obtain employment in smaller companies where the employer who has the experience of people with disabilities also has control over hiring decisions. In bigger companies, there is often more than one decision-making level, including human resource departments and supervisors, so a single person's experience might not have much weight. The employers in this study were mainly from medium-sized companies in the private sector, and this may have affected the employment decisions and recruitment processes. In Sweden, most people with reduced workability are employed in the private sector (Statistics Sweden 2009).
Limitations of the study
The method chosen for the interviews and analysis has implications. Because the employment of people with disabilities is a socially and politically sensitive topic, it is possible that the respondents expressed socially acceptable views rather than disclosing their own personal views. The analysis may mirror a socially accepted view instead of illuminating what is actually the case. However, as shown, the respondents sometimes reported opinions that were not 'politically correct'. Furthermore, there is a risk of bias in content analysis: the author's understanding of a phenomenon can influence the interpretation. To counteract this risk, all the authors of this article were involved in the analysis and carried on a running discussion of the interpretations.
The employer's attitude
A number of the respondents described previous experiences with people with disabilities, although these experiences did not directly relate to work capacity but generally had to do with areas other than work life. These experiences seem to be the basis for employers' decisions to open up the workplace to people with disabilities, functioning in this respect as a door opener to the labour market. This knowledge may serve as a reason for society to build integrated rather than segregated school and leisure areas, as the experiences reported took place in these kinds of contexts. But the kind of contact and the kind of disability can also influence the decision process. Research has shown that greater quality of contact with people with intellectual disabilities is associated with more positive attitudes (McManus, Feyes, and Saucier 2011). In this study, employers described a positive previous experience with emphasis on the positive aspects of the contact. Previous experience seems to affect the acceptance of people with disabilities in the workplace (Yuker 1994), and positive attitudes are related to experiences of working with people with disabilities. Earlier positive experiences from areas of life other than the workplace and concrete experiences from the workplace can thus play a role in the decision process. This research shows the importance of positive experiences taking place in inclusive, as opposed to segregated, meeting points in working and social life for further encouraging the hiring of people with disabilities.
The previous positive experiences, however, did not affect the respondents' ideas about which people with disabilities can contribute in the workplace in a decisive way. This may seem paradoxical, but an explanation might be that the experiences of people with disabilities in different (non-work) environments are difficult to apply to the workplace. The employers describe experiences of a person with disability having done something extraordinary, something that was not expected of them in the actual situation. This view of extraordinary capability expresses a rather pathological view of people with disabilities as abnormal. This view of abnormality may partly explain the low expectations that employers express concerning which people with disability can contribute in working life in terms of future productivity and work capacity.
Other studies of employers' perceptions of people with disability in the labour market also reveal lower expectations of productivity as a factor affecting employers' assessments of the potential work capacity of people with disabilities (Unger 2002;Louvet, Rohmer, and Dubois 2009;Domzal, Houtenville, and Sharma 2008;Fraser et al. 2010). This suggests that such low expectation may be a fairly widespread phenomenon. When assessing a person's employment potential, an employer's notion of people with disabilities comes up against his or her perceptions of the demands of the labour market. Disability then seems to function as an indicator of low productivity and reduced work capacity. In her study carried out at a Belgian car plant, Zanoni (2011) finds that the notion of disability tends to include a notion of lack of capability and flexibility, considered two of the most important requirements in the labour market today (Jessop 1994;Magnusson 1999, 2006). In this study, the respondents' expectations and their evaluation of their employees' workabilities may be influenced by the environment in which the employment takes place. There may be a disproportionate focus on productivity in a wage subsidy context due to the fact that wage subsidies are based on reductions in productivity. In this context, this leads to people with disabilities being evaluated on different premises compared to other employees. For other employees, there is no need to discuss and evaluate productivity in this way, and this may lead to a positive bias towards people without disabilities and to a negative bias towards people with disabilities. There often seems to be a general belief in working life that being able-bodied is the same as having the capacity to work, and from this notion there arises a kind of ideal, a norm, from which ideas of a person with disability differ in several ways (Garland Thomson 1997).
The result indicates that the general approach among the respondents to hiring a person with disability is not based on a person's right to have a job, but rather on the person's proving him or herself worthy (in terms of productivity) of having a job.
Matching
The structure of the work and, especially, the perceived requirements of flexibility and adaptability in the labour market (Townsend, Waterhouse, and Malloch 2005;Jessop 1994;Magnusson 1999, 2006) affected the employers' images of suitable occupational roles. People who do not meet the norm are disproportionately represented in low-skilled jobs. Occupations for people with disabilities are often entry level, with fewer requirements for information and communication skills (Kaye 2009), and it is less likely for people with disabilities to be found in supervisory jobs (Schur et al. 2009). They are often employed in part-time or temporary jobs (Schur 2003). It is not unusual for people with disabilities to be seen as a workforce reserve.
The occupational roles of the employees in this study were predominantly entry-level positions devoid of requirements for specific skills. In these kinds of jobs, the tasks are often monotonous, with low control and few opportunities for learning new skills. Workers who perform tasks that are associated with low value are sometimes also seen as having low value due to the tasks they perform. Holmqvist (2009) identifies this feature of low value associated with certain kinds of tasks, referring to tasks with this feature as 'dirty work'. Some of the employees in this study were valuable to the employers precisely because they were performing low-paid 'dirty work'. This was their greatest asset in competing for the job, and it explains why they got hired. One of the problems that the Swedish Ministry of Employment's (2003) evaluation of the system with wage subsidies addresses is its pin-down effect on low-paid jobs. This pin-down effect is clearly illustrated in the present study. Matching people with disabilities to low-paid, wage-subsidized, low-value jobs is problematic because it may promote the detrimental, unfair and simply erroneous idea that people with disabilities are unsuited to more challenging jobs or tasks. These drawbacks should be seriously considered when addressing the wage subsidies system. The personal factors cited as important by the employers in this study were almost solely soft skills such as a good work ethic, readiness and positive manners. One reason that people with disabilities are hired is that employers seek to obtain workers with soft skills such as a positive attitude towards work (Gilbride et al. 2003). Such behaviour can be seen as compensation for lack of formal education and experience, but it is also typically expected in the occupational role of assistant or helper.
People with disabilities are viewed as likable but not as competent, which might explain why they are placed in occupational roles in which soft skills are more important than efficiency. It could be the case that loyalty and compliance are behaviours triggered by harsh labour market conditions.
Important conditions: wage subsidies and accommodations
Two factors were considered to be important conditions for employment: wage subsidies and accommodations. The wage subsidy given to an employer can be viewed as a counter-weight to the labour market's demand for productivity, as the subsidy is meant to compensate for reduced productivity stemming from the disability. The wage subsidy thus functions as a bridge between the expectations of, on the one hand, productivity along with the ideal of 'able-bodiedness' and, on the other hand, the concessions employers feel they have to make in employing a person believed to be less productive. It is probable that considerations of productivity account for low employment rates for people with disabilities (Jones 2006a). This study indicates that such is the case: none of the employees would have retained their employment if they had not been able to satisfy the productivity requirements set by the respondents. In this line of reasoning among employers, wage subsidy is, primarily, compensation for individual inability to match the structure of working life and to meet the ever-increasing demands for maximum production and efficiency; that is, employers clearly see employability as an individual problem and locate solutions to this problem at the individual level.
There are some risks associated with using the wage subsidies system as compensation for what is seen as individual inability. One major risk is that wage subsidy is regarded as compensation enough, causing little effort to be made to come up with other solutions, such as better job matches or accommodations. Another risk is that employees in subsidized employment are seen as 'second class employees' due to their being regarded as 'cheap labour' and because of the kinds of tasks they work with. To be seen as a 'second class employee' has serious consequences for the individual in both hiring and career opportunities. This risk could be counteracted by better job matches in order to avoid wage subsidies. The employees in this study who worked in independent jobs and who were well-qualified for their jobs were seen as 'first class' employees due to their knowledge and experience, and the respondents did not acknowledge them as 'disabled'. Experience and education were also factors that could lead to employers considering employment without subsidies. In a job for which the employee meets formal qualification requirements, there seems to be less need to discuss disability from a production perspective because the employee is valued according to his or her ability.
The way in which the disability affects the performance of duties in the workplace may in turn affect acceptance in the workplace (McLaughlin, Bell, and Stringer 2004). This may suggest that it is productivity in the workplace that is important for colleagues' acceptance, rather than the fact of having a disability or not. The accommodated work situation, in these cases, has made it easier for the employees to conform to commonly accepted ideas concerning work capacity. In these cases, the accommodations seem to have functioned as they are meant to: they eliminated barriers in the context so that disability did not emerge in the situation at hand. The accommodated work situations have also helped to change the employers' perceptions of their employees' abilities: the abilities are no longer seen as internal, stable factors, but rather as external, unstable factors, in accordance with attribution theory (Weiner 2010). This is an important insight, in line with the notion of handicap as relative to environment. Positive beliefs about the reasonableness of accommodations in the workplace are associated with positive attitudes towards people with disabilities, and this may be part of the reason that employers with previous experience of employees with disabilities, which is also associated with positive attitudes, are more positive than other employers towards hiring people with disabilities (Unger 2002;Knutsson and Persson 2001;Copeland et al. 2010).
Research about demand-side factors related to people with disabilities in the USA shows that knowledge of job accommodations was significantly associated with a company's commitment to hire people with disabilities and that an accommodated work situation has a positive effect on a person's ability to retain his/her employment (Johansson, Lundberg, and Lundberg 2006). However, because the tasks available at a place of work depend on the possibilities for accommodation, the need for accommodations in hours and pace of work can affect which sorts of jobs and tasks are seen as suitable. Respondents sometimes saw the need for an accommodated work situation as an obstacle to professional development and career advancement in the workplace. The lack of opportunities for professional development can create many disadvantages for people with disabilities (Rigg 2005), such as lower wages and less on-the-job training. This can also have profound implications for career opportunities in the workplace (Colella and Varma 1999). A fruitful way of dealing with this issue might be to educate and support employers in ways to promote future workability during an employment, given that employers see lack of resources to retain people with disabilities as a barrier to employment and because they express the need for assistance in identifying appropriate workplace accommodations (Stensrud 2007).
The majority of respondents in this study did not view the absence of professional development opportunities in the jobs offered as problematic. It seems as though the respondents attributed to the employees a lack of desire and opportunity for professional development. This reveals a pathological view of people with disabilities, which may be one of the explanations for the overrepresentation of people with disabilities in jobs without opportunities for development and career advancement (Rigg 2005). To counteract this inequity, labour market institutions and rehabilitation organizations should pay greater attention to this problem and create structures to support employers of people with disabilities to maximize their employees' full potentials throughout employment, and not only during the hiring process. However, respondents saw the presumed lack of need for professional development as valuable, because then the employee was not expected to make any demands with regard to having challenging work or career opportunities. Wage subsidies, in their current form, might constitute an obstacle to professional development because the subsidies are provided to compensate for loss of productivity and/or an accommodated work situation. There is a need to further study how wage subsidies and working conditions may contribute to reproducing prevailing notions about people with disabilities and the effects this has on individuals and society.
Conclusion
Several conditions important to employment in a context of wage subsidy emerge in the present study. Wage subsidies serve in this context as an incentive for hiring people with reduced work capacity, and soft accommodations are seen as necessary for the successful implementation of such hiring decisions (i.e. for the continued employment of those hired). Positive experiences and productivity seem to be two important factors for employers when hiring persons with disabilities.
Issues concerning work for people with disabilities are often discussed from a human rights perspective, whereas employers in this study mainly give voice to an individual perspective in which work for people with disabilities is discussed on the grounds of utility. The value of the human seems to be assessed in terms of the interests of productivity and, hence, in relation to economic profit. The human rights perspective, as outlined in the disability rights movement (Hurst 2003), was not brought to the fore in the employers' descriptions of their practical experiences, and the employers' perceptions of people with disabilities in the workplace were mainly based on perceptions of disability as limiting rather than enriching. People with disabilities were, in these cases, often seen as 'second class' employees because they were regarded as 'cheap labour' in subsidized employments and because they performed low-status tasks. Despite this, it appears that people with disability can also be highly valued in some positions in a wage subsidy context.
Novelty in News Search: A Longitudinal Study of the 2020 US Elections
The 2020 US elections news coverage was extensive, with new pieces of information generated rapidly. This evolving scenario presented an opportunity to study the performance of search engines in a context in which they had to quickly process information as it was published. We analyze novelty, a measurement of new items that emerge in the top news search results, to compare the coverage and visibility of different topics. Using virtual agents that simulate human web browsing behavior to collect search engine result pages, we conduct a longitudinal study of news results of five search engines collected in short bursts (every 21 minutes) from two regions (Oregon, US and Frankfurt, Germany), starting on election day and lasting until one day after the announcement of Biden as the winner. We find more new items emerging for election related queries (“joe biden,” “donald trump,” and “us elections”) compared to topical (e.g., “coronavirus”) or stable (e.g., “holocaust”) queries. We demonstrate that our method captures sudden changes in highly covered news topics as well as multiple differences across search engines and regions over time. We highlight novelty imbalances between candidate queries which affect their visibility during electoral periods, and conclude that, when it comes to news, search engines are responsible for such imbalances, either due to their algorithms or the set of news sources that they rely on.
Introduction
The 2020 US elections were one of the most viewed events of 2020, attracting 56.9M viewers on cable and broadcast TV at prime time alone (Nielsen, 2020). As shown by the record turn-out (Schaul et al., 2020), the stakes were high in a polarized nation (Boxell et al., 2020) whose citizens were deciding the direction of a major international power. Media outlets were ready to cover every detail that would keep their visitors engaged, reporting novel pieces of information every few minutes, if not seconds (e.g., Astor, 2020;Kommenda et al., 2020). At a proportional pace, digital intermediaries, such as search engines, frantically processed the material to show the latest and most relevant updates to their audience. This scenario presented an opportunity to explore the performance of search engines under an intensively mediated political campaign in which political actors competed for the spotlight (Kaid & Strömbäck, 2008). This paper reports how search engines covered the elections in terms of novelty, i.e., inclusion of novel items among their top news results, which, we argue, is essential for analyzing the coverage that a topic receives by the search engine.
Earlier research has shown that success in elections depends on the attention that the media spend on candidates (Hopmann et al., 2010;Maddens et al., 2006;Reuning & Dietrich, 2019;van Erkel et al., 2020). Coverage (and visibility) has not directly been addressed in news search scholarship because of the reactivity of search engines; namely, search engines do not feature a selection of materials per se (e.g., as in a news website), but retrieve them in response to user queries. For any query, search engines return a long list of news articles, albeit in the majority of cases individuals will interact with only those at the top (Pan et al., 2007). Because the relevance of news items changes over time, more relevant items can appear at the top when the individual searches again. This has consequences for the visibility of the topic, as the individual would be exposed to a more diverse set of news when the novelty is higher. In this paper, we analyze news search by investigating the novelty of results that emerges for 9 queries: 3 related to the US elections ("joe biden", "donald trump" and "us elections"), 3 topical ("coronavirus", "poland abortion", "nagorno-karabakh conflict") and 3 stable ones ("first world war", "holocaust", "virtual reality"). Data were obtained during the 2020 US presidential elections from 5 search engines (Google, Bing, DuckDuckGo, Yahoo! and Baidu); snapshots for each query were captured every ~21 minutes between Nov 3rd, 07:31 a.m. and Nov 9th, 06:40 a.m. Eastern Time (ET) using 240 virtual agents located in two geographical areas: Oregon (United States) and a non-US location, Frankfurt (Germany).
Our focus is to investigate the evolution of news results across four periods, defined by three key events: (a) the close of all polls, (b) the call of Michigan's result, the 45th state to be called, followed by 3 days without any calls, and (c) the call of Pennsylvania's result, the state that sealed the victory for Biden.
Following Kulshrestha et al. (2019), we include weights corresponding to the search ranking in our novelty metric to capture the tendency of individuals to click on top results more often (Pan et al., 2007), including news articles (Ulloa & Kacperski, 2022). To analyze the data, we use linear mixed models with repeated measures that stem from our longitudinal observations. First, we present evidence that novelty is indeed higher for election related queries, as well as for the COVID-19 pandemic, but neither for localized happenings outside of the US (e.g., the Poland abortion protests) nor for stable queries. Then, we demonstrate that the novelty for the two candidates differs across search engines and regions, and that the novelty is disproportionately high, in particular for the query "donald trump" on Bing and in Oregon.
Media coverage and elections
Neither voters' positions on political issues nor the candidates' personal traits matter if the candidate is not visible to the voter (Hopmann et al., 2010). Previous research shows that electoral success depends on the attention that the media pay to candidates (e.g., Hopmann et al., 2010;Maddens et al., 2006;Reuning & Dietrich, 2019;van Erkel et al., 2020). For example, observers have attributed Donald Trump's victory in 2016 to the amount of news coverage he received compared with his rivals (Shafer, 2016). Although news reports are guided by journalistic norms (Hackett, 1984;Muñoz-Torres, 2012), research indicates that there are market forces that influence the gatekeeping aspect of the media (Hamilton, 2011;Patterson, 2013), and that these factors were exploited by Trump during the 2016 elections (Borchers, 2016;Confessore & Yourish, 2016).
Last-minute broadcasts which inform viewers about elections are of particular interest for the discussion of factors affecting electoral choices (Hofstetter & Buss, 1980). Such information is relevant for late deciding voters, the numbers of which have been rising in Western democracies, including the US (see Yarchi et al. (2021) for a list of countries). For example, on election day, 12.5% of 2016 US voters were either undecided or said they planned to vote for third-party candidates (Silver, 2017). Not surprisingly, late deciding voters sometimes determine the final outcome of elections (Box-Steffensmeier et al., 2015;Schill & Kirk, 2017;Schmitt-Beck & Partheymüller, 2012). Voters that remain undecided are considered very unpredictable (Box-Steffensmeier et al., 2015;Gopoian & Hadjiharalambous, 1994); they appear more reactive to campaign coverage (Fournier et al., 2004) and less critical about the information they consume (Samuel-Azran et al., 2022).
The period that follows the elections is also a sensitive one, as the legitimacy of the process is called into question by some elites that spread rumors of fraud (Minnite, 2011). Such rumors characterized the electoral campaign of the 2020 US elections (Benkler et al., 2020;Berlinski et al., 2021;Enders et al., 2021), which was also accompanied by Trump's threats of not committing to a peaceful transfer of power (Crowley, 2020). Such claims continued after the election, including the period covered by our data collection¹, leading to Trump supporters storming the Capitol on January 6th (CNN, 2021). Hence, the post-election period is critical because the rumors are more likely to affect populations that are dissatisfied with the outcome.
Search engines as digital intermediaries
News organizations are becoming more dependent on digital intermediaries, such as search engines and social media platforms. These intermediaries represent short-term opportunities to engage audiences, even if these opportunities might result in the loss of control over the organization's professional identity (Nielsen & Ganter, 2018). The technological companies behind these intermediaries are also leveraging their role to shape political communication (Kreiss & Mcgregor, 2018), while parties and candidates try to adapt their campaigns to the new media logic (Klinger & Svensson, 2015).
We focus on search engines, as they play a gatekeeper role in current high-choice information environments (Van Aelst et al., 2017). Individuals frequently use them to seek information and learn from the results obtained (Fisher et al., 2015;Ward, 2021). Moreover, individuals rely on search engine ranking algorithms as a measure of content relevance (Edelman, 2021;Keane et al., 2008;Schultheiß et al., 2018). Consequently, search engines have become one of the most widely used technologies for finding political information (Dutton et al., 2017), which is crucial as there is evidence of their potential to shift the voting preferences of undecided voters (Epstein et al., 2017;Zweig, 2017).
Specifically, we are interested in the coverage of topics in search engines. Instead of looking at a single result page in which virtually all items presented are pertinent to the query, we look at novelty, i.e., the number of novel items that emerge in the top results. We argue that higher novelty increases the visibility of the topic. First, an individual is more likely to encounter more information if they search more than once for the same topic at different points in time. Second, it increases the potential amount of information that can be circulated via the searcher's personal network due to the effects of interpersonal communication (Katz & Lazarsfeld, 2017;Schmitt-Beck, 2003). Third, given that recency plays an important role in the ranking of results (Dong et al., 2010), there could be spillover effects to other elements of search engine interfaces (e.g. news featured in the main search results).
Search engine auditing
Search engines have attracted a lot of attention in the algorithm auditing field, which investigates the performance of algorithmic systems and their potential biases (Mittelstadt, 2016). First, researchers have reported a concentration of results among a few news sources for different Google interface components such as the main search results (Jiang, 2014), Google Top Stories (Kawakami et al., 2020;Trielli & Diakopoulos, 2019), news search (Nechushtai & Lewis, 2019) and video search (Urman et al., 2021a). These findings extend to the Dutch (Courtois et al., 2018) and German contexts (Haim et al., 2018;Unkel & Haim, 2019).
Second, Pariser (2011) argued that search personalization, i.e., content selected according to an individual's previous consumption and preferences, could lead to filter bubbles, i.e., feedback loops of information which hinder exposure to different views. Current empirical evidence indicates that such concerns are overstated and that, instead, search engines can lead to an increase in the diversity of news sources that are consumed (e.g., Stier et al., 2022;Ulloa & Kacperski, 2022).
Third, several aspects of political representation have been investigated. Puschmann (2019) finds that some political parties and candidates can exert greater influence over how they are represented in search media (in terms of source type) than others. There is also evidence suggesting a (modest) left partisan leaning in Google search results (Robertson et al., 2018;Trielli & Diakopoulos, 2019), although the leaning is usually measured on the source and not necessarily the content level (Ganguly et al., 2020).
Only a few studies conduct longitudinal investigations. Metaxas and Pruksachatkun (2017) reported that Google (but not Yahoo! and Bing) restricted the variation of sources across time, favoring those that were considered "reliable" to prevent the surfacing of "fake news". Kawakami et al. (2020) found that, a year before the 2020 US elections, the number of unique news items in Google's Top Stories differed for different candidates, and it was higher for Donald Trump, which was attributed to him being the incumbent president. Pradel (2021) found gender and party differences in the amount of personal information related to politicians that appears in search suggestions before and after elections. Closest to our work, Metaxa et al. (2019) systematically analyzed daily search results, finding search outputs to be relatively stable, though some shifts suggested the existence of internal algorithmic factors, e.g., monthly synchronization of Google servers.
Research questions and hypothesis
Our aim is to analyze the rate at which new information is incorporated in the search results of different queries, search engines, and regions. To our knowledge, this is the first time that novelty of news search results is analyzed, i.e., we give first insight into the pace at which information is integrated into the search engines. The fine granularity of our data collection (every 21 minutes per query) allows us to capture sudden changes.
We first contrast US-related queries with other topical queries: we chose COVID-19 ("coronavirus"), the Poland abortion protests following the Constitutional Tribunal ruling of October 22, 2020 ("poland abortion"), and the 2020 Nagorno-Karabakh conflict of 27 September to 10 November 2020 ("nagorno-karabakh conflict"), for which we also expected relatively high coverage and novelty of news articles. Additionally, we included stable queries ("first world war", "holocaust", "virtual reality"), for which we expected a low number of novel news items. These categories serve as benchmarks to demonstrate the coverage given to novel items related to the US elections; see RQ1 in Table 1.
We further examine the evolution of the novelty for the US elections related queries. First, we divide our collection into four periods (denoted with Roman numerals: I, II, III, and IV) defined by three key events: (a) the close of all polls (Nov 4th, 1:00 a.m. ET), (b) the call of Michigan's result (Nov 4th, 5:58 p.m. ET), the 45th state to be called, followed by 3 days without any calls, and (c) the call of Pennsylvania's result (Nov 7th, 11:25 a.m. ET), the state that gave the final victory to Biden. Then, we examine differences between periods, regions, and search engines. We pay special attention to differences between the queries of the two candidates to find imbalances in novelty. See RQ2 in Table 1.
Table 1. Research questions and hypotheses; the outcome of each hypothesis test is given in brackets.

RQ1: Is the novelty of queries related to the US elections higher than that of other queries during election day and the hours following it (*)?

H1a: The novelty for queries related to the US elections is higher than for queries related to other topics, especially those not news-worthy during the same period (i.e., stable queries such as the First World War). [Consistently]

H1b: More novelty is displayed for topical queries during the collection (e.g., "poland abortion") than for those not news-worthy (e.g., "first world war").

RQ2: Are there differences in novelty for the different US elections related queries, regions, periods and search engines?

H2a: More novelty is displayed in Oregon (United States) than in Frankfurt (Germany). Given the role that localization plays in search results (Kliman-Silver et al., 2015), we assume that more attention is drawn to the topic in the US. [Consistently]

H2b: There are differences in the novelty of results shown by different search engines. [Consistently]

H2c: Specifically, Google will display less novelty than Bing and Yahoo!, as previous research indicates that their organic results show less variation over time, presumably as a consequence of potential mechanisms to control web spammers (Metaxas & Pruksachatkun, 2017). We assume a similar trend for news results. [Partially]

H2d: Novelty of the US queries diminishes as results are more distant from election day and stories become less abundant in news. [Consistently]

H2e: There are no differences between the novelty of the candidates in different search engines before the announced election result (Periods I, II, III). [Rejected]

H2f: Novelty of results for "joe biden" will be higher than for "donald trump" after the declaration of Biden as winner (Period IV). [Consistently, but not for all of Period IV]
Materials and Methods
For our data collection, we used virtual agents, i.e., software that simulates human behavior (Ulloa et al., 2021). The implementation of such an agent took the form of a browser extension (for Firefox and Chrome) that simulates the navigation of search result pages on a search engine, and that collects the HTML of the pages by sending it to a server. The agent collects at least 50 news search results (if available), and it iterates over the list of terms until terminated. Before starting the search for a new query, the browser data (e.g., history, cache) is cleaned, thus avoiding personalization effects based on previous browsing history.
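The collection loop just described can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the real agents were browser extensions, so `clear_browser_data` and `fetch_serp_html` stand in for the browser automation and the upload of HTML to the collection server.

```python
import time

# Hypothetical stand-ins: a real agent drives a browser and POSTs the
# captured HTML to a collection server.
def clear_browser_data():
    """Reset history/cache so earlier searches cannot personalize results."""
    pass

def fetch_serp_html(engine, query, min_results=50):
    """Navigate result pages until at least `min_results` news items load."""
    return f"<html><!-- SERP for {query!r} on {engine} --></html>"

def collection_round(engine, queries, sink):
    """One round: iterate over all query terms, storing the raw HTML."""
    for query in queries:
        clear_browser_data()  # avoid personalization carry-over between terms
        html = fetch_serp_html(engine, query)
        sink.append((engine, query, time.time(), html))

pages = []
collection_round("google", ["joe biden", "donald trump", "us elections"], pages)
```

In the study itself, such rounds were repeated roughly every 21 minutes per agent until termination.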
We parsed the HTML pages to extract the top organic news results of each search routine.
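A sketch of such a parsing step using only the standard library is shown below; the `result` class selector and the sample markup are invented for illustration, since in practice each search engine requires its own engine-specific selectors.

```python
from html.parser import HTMLParser

class NewsResultParser(HTMLParser):
    """Collect (title, url) items from anchors marked as news results."""

    def __init__(self):
        super().__init__()
        self.items = []      # extracted (title, url) pairs, in rank order
        self._href = None    # href of the anchor currently being read

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "result" in (a.get("class") or ""):
            self._href = a.get("href")

    def handle_data(self, data):
        if self._href and data.strip():
            self.items.append((data.strip(), self._href))
            self._href = None

serp = ('<div><a class="result" href="https://example.com/a">Polls close</a>'
        '<a class="result" href="https://example.com/b">Michigan called</a></div>')
parser = NewsResultParser()
parser.feed(serp)
# parser.items now holds the two (title, url) pairs in rank order
```

Treating each result as a (title, url) pair matches the item definition used in the next section.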
Data collection
We used the news search engine results collected in two consecutive experiments, which included 9 terms divided equally into three categories (see Table 2).

Table 2. Terms of each query category. The first column displays the name of the query category, the second column the terms included in the category, the third column the topic they are related to, and the fourth column the experiment(s) in which they were included.

For collection A, a total of 240 virtual agents were deployed simultaneously in the Amazon Elastic Compute Cloud (using 120 CentOS virtual machines, each hosting two virtual agents: one in Chrome and one in Firefox), and the agents were distributed equally across the experimental conditions given by the combination of variables in Table 3. In total, each experimental condition was assigned to 4 different agents, so that we could account for the effects of result randomization by the search engines (Makhortykh et al., 2020).
Additionally, all machines in a given region were allocated in the same range of Internet Protocol (IP) addresses. In Appendix S1, we include a detailed analysis of the data collection coverage. In general, very good coverage can be reported for our analyses, and although some systematic issues are noted, we make sure that our analyses are not directly affected by them. Additionally, the weighting of the ranking, presented in the next section, mitigates potential distortions.
Definitions and metrics
Item. An item is the combination of a URL and a title in a news search result. The item is the main unit of analysis in this paper because some URLs are used as live streams (e.g., https://www.nytimes.com/live/2020/11/07/us/biden-trump) to dynamically publish different pieces of information. Thus, the URL alone does not uniquely identify a news search result.
New items. We define that an item in round j is new if it is the first time that it appears for a given query term and virtual agent; conversely, an item is not new if it appears in any previous round i (i.e., i < j) for that term and agent. The following items are discarded as we cannot ascertain if they are new or not: (1) items that appear on the first (successful) collection round, (2) items of a round j that follows a missing or incomplete round j-1.
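As an illustration, the bookkeeping for new items described above can be sketched in Python. The function name and data layout are ours, not the paper's implementation; an item is modelled as a (url, title) pair, and newness is tracked for one term/agent pair.

```python
def classify_rounds(rounds):
    """rounds: dict mapping round index -> list of (url, title) items for one
    term/agent pair; a missing index means a missing or incomplete round.
    Returns dict round index -> list of (item, is_new) pairs, discarding
    rounds whose newness cannot be ascertained (rules 1 and 2 above)."""
    seen = set()
    labelled = {}
    indices = sorted(rounds)
    first = indices[0]
    for j in indices:
        items = rounds[j]
        if j == first or (j - 1) not in rounds:
            # Rule (1)/(2): first successful round, or previous round missing.
            # Discard these labels, but still record the items as seen.
            seen.update(items)
            continue
        labelled[j] = [(it, it not in seen) for it in items]
        seen.update(items)
    return labelled
```

For example, if round 3 is missing, the items of round 4 are discarded even when they were never seen before, exactly as rule (2) prescribes.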
Weighted rank. All our metrics (except diversity) consider the position (rank) of the search results. For this, we generalize the weights used to estimate (political) biases in search results (Kulshrestha et al., 2019). In their work, each rank in the list is assigned a weight such that higher-ranked results (i.e., top results) receive higher weights; the weight is then multiplied by the (political) bias score of the corresponding item. Let $S = (s_1, \ldots, s_N)$ be the sequence of items of size $N$ corresponding to the top results of a query in a given round; the weight $w_i$ for rank $i$ is a monotonically decreasing function of $i$, normalized so that $\sum_{i=1}^{N} w_i = 1$ (the exact form follows Kulshrestha et al., 2019).

Novelty. We define a parameter $n_i$ that takes the values 1 or 0 depending on whether item $s_i$ is new or not, and use the weighted rank measure to calculate the novelty of the sequence $S$:

$$\nu(S) = \sum_{i=1}^{N} w'_i \, n_i,$$

where $w'_i$ is a re-scaled weight that accounts for missing items, which would otherwise be implicitly counted as zeros, i.e., as not-new news items. We assume that the missing items of an incomplete round occur independently; counting them as 0 would therefore bias the calculation (decreasing the novelty). Let $S'$ be the set of collected items; then the weights are re-scaled as follows:

$$w'_i = \frac{w_i}{\sum_{j \in S'} w_j}.$$
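A minimal Python sketch of the novelty metric follows. The specific decreasing weight form (reciprocal rank, normalized to sum to one) is our assumption for illustration; the paper takes its weights from Kulshrestha et al. (2019). Missing items are represented as `None` and handled by re-scaling the weights over collected ranks only.

```python
def rank_weights(n):
    # Assumed decreasing weight form: 1/rank, normalised to sum to 1.
    raw = [1.0 / i for i in range(1, n + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def novelty(new_flags, n_top=50):
    """new_flags: list aligned with ranks 1..len(new_flags); entries are
    True (new), False (not new) or None (missing item). Returns the
    rank-weighted novelty of the sequence."""
    w = rank_weights(n_top)[: len(new_flags)]
    collected = [i for i, f in enumerate(new_flags) if f is not None]
    z = sum(w[i] for i in collected)  # re-scaling over collected items only
    return sum(w[i] / z for i in collected if new_flags[i])
```

A fully new list yields novelty 1.0 regardless of which ranks are missing, which is exactly the point of the re-scaling.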
Study design and analysis
Our study considers several factors that affect the search results: search engine, region, query (or query category) and period. The first three are described in Table 2 and Table 3.
We define four periods according to three key events (the close of all polls, the call of Michigan's results, and the call of Pennsylvania's results). As the (continuous) dependent variable, we analyze novelty as described above.
To answer the research questions (Table 1), we used linear mixed-effect models (Bates et al., 2015), fitting the interactions between the study factors (query or query category, period, engine and region). We considered the following random intercepts for repeated measures: query term (when query category is one of the factors), agent and round. We only report the models with the lowest Akaike's Information Criterion (AIC) (Akaike, 1974); when models were not statistically different, we kept the simplest of them. For novelty, we tested two types of models according to RQ1 (query category, engine, and region) and RQ2 (query, engine, region, period).
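The AIC-based selection rule above can be illustrated with a short Python sketch; the model names, log-likelihoods and parameter counts below are invented, standing in for values one would read off the fitted lme4 models.

```python
def aic(log_likelihood, n_params):
    # Akaike's Information Criterion: AIC = 2k - 2*ln(L)
    return 2 * n_params - 2 * log_likelihood

def pick_model(candidates):
    """candidates: list of (name, log_likelihood, n_params) tuples.
    Returns the name of the model with the lowest AIC; exact ties in AIC
    go to the model with fewer parameters (the simplest one)."""
    scored = [(aic(ll, k), k, name) for name, ll, k in candidates]
    return min(scored)[2]
```

Here a reduced model can win even with a slightly worse likelihood, because AIC penalises each extra parameter by 2.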
To evaluate our hypotheses, we count the relevant contrasts that are significantly different and support the hypothesis (or contradict it). The contrasts are calculated on the fitted model using the emmeans R package (Lenth, 2021). All our plots include bootstrapped confidence intervals (95%); in the case of time series, rolling averages (and confidence intervals) are calculated based on the observations of the previous 6 hours.
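The two plotting ingredients mentioned here, a percentile bootstrap confidence interval and a trailing 6-hour rolling average, can be sketched as follows. This is an illustrative stdlib-only implementation, not the code used for the paper's figures; the resampling count is an arbitrary choice.

```python
import random
from statistics import mean

def bootstrap_ci(sample, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap confidence interval for the mean (95% by default).
    rng = random.Random(seed)
    means = sorted(mean(rng.choices(sample, k=len(sample))) for _ in range(n_boot))
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

def rolling_mean(timestamps, values, window_hours=6):
    # For each observation, average all observations within the preceding
    # `window_hours` (timestamps in hours, assumed sorted ascending).
    return [
        mean(v for s, v in zip(timestamps, values) if t - window_hours <= s <= t)
        for t in timestamps
    ]
```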
Results
We found a triple interaction between the query category, engine, and region; F(8, 207.266) = 9.205, p < .001 (Appendix S4). The US-related queries displayed significantly more novelty than the topical and stable queries for Bing, DuckDuckGo and Google in both regions (.10 < β < .23, p < .007), except between US- and topical-related queries for DuckDuckGo in Frankfurt. No significant differences were found between the topical and stable queries.
Thus, we found support for H1a, but not for H1b. Figure 1 presents the results by query, indicating that "coronavirus" is carrying the effect of the topical category. To confirm this, we fitted another model (Appendix S2) with an exclusive category for the "coronavirus" query, which was consequently removed from the topical category. In this new model, the US-related queries displayed significantly more novelty than the topical and stable queries for all regions and engines (.08 < β < .22, p < .001), except for Baidu (NS). Additionally, for Google, the US-related queries displayed significantly more novelty than the "coronavirus" query (-.13 < β < -.07, p < .001). Given the generally low novelty of Baidu, we will not consider it for the rest of the analysis.

To analyze the difference between the US-related queries, we fitted a model (Appendix S6) including the three queries and the four periods (Figure 2). We found a quadruple interaction; F(18, 789420.858) = 17.45, p < .001. To understand the patterns of this interaction, we analyzed the contrasts in five steps according to our hypotheses. First, we analyzed the hypothesis that the novelty was higher for Oregon (H2a), which was supported by 9 (out of 16) contrasts for "us elections" (-.11 < β < -.03, p < .001), by 6 (out of 16) for "donald trump" (-.11 < β < -.03, p < .001) and 1 (out of 16) for "joe biden" (β = -.06, p < .001), and rejected by 4 (out of 16) contrasts for "joe biden" (all corresponding to Bing; .03 < β < .15, p < .001), 1 (out of 16) contrast for "donald trump" (β = .037, p = .001) and none for "us elections". Second, we found significant differences between the novelty of different search engines (H2b), as shown in Table 4. Yahoo! consistently displayed the least novelty, while DuckDuckGo always occupied the first or second position. Bing occupied the first position in 4 (out of 6) combinations of query and region but shared the last position with Yahoo!
for "joe biden" in Oregon, and "us elections" in Frankfurt. Google occupied the third position three times, the second twice, and the first once. Therefore, we only find partial support for H2c: Google displayed more novelty than Yahoo! in all cases and less novelty than Bing in 4 out of 6 cases; this held true for all periods regardless of the changes observed in specific periods (see the last column of the table).

Table 4. Engines sorted according to novelty. The first column displays the region, and the second the query. Columns 3 to 6 indicate the position that each search engine took according to its novelty (in parentheses); if there is no statistical difference between two engines, they are displayed in the same cell, separated by a colon. The last column indicates the periods for which the order held true; italics indicate when the order of the non-significant differences was switched.
Third, we analyzed the novelty of subsequent periods on all US-related queries for Oregon (H2d): 65 (out of 144) contrasts supported the hypothesized downward trend as time passed from election day (.04 < β < .21, p < .008). Ten contrasts contradicted the hypothesis (-.15 < β < -.04, p < .001), of which 6 involved Period IV for "joe biden"; these can be explained by the spike of news for "joe biden" after he was declared the winner (Period IV, Figure 3).
Fourth, we analyzed the contrasts between the candidate queries in Oregon for Periods I to III (H2e). For Oregon, 8 out of 12 contrasts contradicted the hypothesis of unbiased novelty; this included all contrasts of Bing (.14 < β < .22, p < .001) and DuckDuckGo (.06 < β < .13, p < .001), where we found more novelty for "donald trump" than for "joe biden", and one each for Google (Period I, β = -.04, p < .001) and Yahoo! (Period II, β = -.03, p < .001), in which we found the opposite. The imbalance for Bing in Oregon is particularly disproportionate: at the end of Period III, there are 3.24 times as many unique news items for "donald trump" (N = 3599) as there are for "joe biden" (N = 1110). This is followed by DuckDuckGo, with 1.99 times as many results for "donald trump" in Oregon (and 1.77 in Frankfurt; see Appendix S7 for other proportions). For Frankfurt, the results were more balanced: only 3 out of 12 contrasts contradicted the hypothesis, in the same directions, according to search engine: Bing (Period II, β = .03, p < .001), DuckDuckGo (Period III, β = -.11, p < .001) and Google (Period II, β = -.04, p < .001).
Fifth, we analyzed the novelty displayed by the candidate queries in Period IV (H2f): 3 (out of 4) contrasts in Frankfurt supported the hypothesis that Biden's query would display more novelty than Trump's (-.15 < β < -.02, p < .003). In Oregon, only one contrast was significant, but contrary to the hypothesis (β = .05, p < .001). Since the evidence to support H2f remained contradictory, we complemented it with a time series visualization (Figure 3). The spike of novelty generated after Pennsylvania was called (Period IV), signaling the victory of Biden, is noticeable in all search engines; at their peaks, "joe biden"'s novelty was highest in all cases (also in Frankfurt, Appendix S8), but we also noticed that its novelty quickly declined and that, at least in Bing and DuckDuckGo, "donald trump"'s novelty increased after the spike (to values similar to previous ones). The latter observation is consistent with the bias noted in Periods I to III (which rejected hypothesis H2e). Additionally, the spike of Biden's novelty in Period IV was strong enough to explain the two contrasts that did not support the hypothesis of a downward trend in novelty as time passes (H2d).

Figure 3. The X-axis shows the day (major ticks) and hour (minor ticks) of the round in which the novelty was measured. The Y-axis shows the novelty truncated to .4 (maximum theoretical value 1.0). Each trace represents one of the query terms indicated in the legend. The green vertical lines divide each plot into the four periods indicated in the label at the top. The gray dotted vertical line in Period II indicates the transition between collection A and B. Only the results collected in Oregon are shown. The bands indicate bootstrapped 95% confidence intervals.
Discussion
Using the novelty of news results, we confirm that the US elections were widely covered by all search engines (H1a), except for Baidu, the only non-US search engine we included. The "coronavirus" query was the only other query that displayed similar novelty; in the other cases, we do not find differences between the topical and stable queries, which highlights the tendency to neglect localized but topical and newsworthy happenings such as the Poland abortion protests and the Nagorno-Karabakh conflict (H1b).
Although we find several differences between the novelty displayed by each search engine (H2b), we only find partial support for the hypothesis that Google displays less novelty than Bing and Yahoo! (H2c), as was the case for the main search results in the 2016 US elections due to spam control mechanisms (Metaxas & Pruksachatkun, 2017). Specifically, there is partial support for Bing, but not for Yahoo!. It is possible that these search engines have now implemented spam control mechanisms similar to those of Google (thus changing the trends from 2016).
We find support for decreasing novelty as searches become more distant from the elections period (H2d). We find that there was a rebound in novelty for "joe biden" in Period IV due to the spike of news caused by the victory of Biden. Nevertheless, the spike did not compensate for the downward trend in all cases, as shown in the time series visualization ( Figure 3).
We find differences in the novelty of the results concerning the election candidates (Benkler et al., 2020; Berlinski et al., 2021; Enders et al., 2021). Independently of the potential for spreading misinformation, for the period before the polls closed (Period I), there were still potential undecided voters seeking last-minute information who might have been exposed to a higher number of articles about Trump.
Crucially, for the period after Biden was declared the President Elect (Period IV), we predicted that there would be more novel news articles for "joe biden" (H2f), but we found the novelty to be resilient: after a spike in novelty that favored Biden at the beginning of the period, it shifted back to values similar to those of previous periods; in Bing, for example, the novelty favored Trump again after the spike. Another result that sets Bing apart is that it was the only search engine that consistently displayed more novelty for Biden in Frankfurt, contradicting H2a. Such attrition of novelty could be attributed to stronger spam mechanisms in Oregon; however, that explanation would make the higher novelty for "donald trump" in Oregon even more puzzling, as it would indicate that more content was blocked for "joe biden" for no apparent reason.
In line with previous research (Urman et al., 2021b), we find multiple differences between search engines. However, we observe some qualitative parallels between two search engine pairs (Kulshrestha et al., 2017, 2019) with respect to our novelty measurement.
We list some limitations of our study. First, our results only cover one region in the US: Oregon. Instead of choosing two US regions, we decided to include Frankfurt, as we were interested in international localization differences. Second, a list of three queries (of the same category) was assigned to each agent. Although the searches are synchronized across agents, the second query of the list is shifted by 7 minutes (in each given round), and the third query by 14 minutes. Nevertheless, we argue that this should not affect the general patterns of the observed results, as these are relatively small shifts. Third, we only include three queries per category. Fourth, we only analyze a small set of queries, but we point to potential spillovers that could emerge given the importance that search engines place on the recency of results. The observed imbalances could emerge (1) in other queries (for example, one could analyze whether the novelty in "us elections" is properly balanced between the candidates) and (2) in other sections of the search engines (for example, similar to the imbalances found for Google's Top Stories section (Kawakami et al., 2020)).
The present work also opens the door to future research regarding the visibility of candidates in news search, for example, the presence of each candidate in general queries (e.g., "us elections") or the news results that infiltrate the main search results. As the literature indicates, not only visibility, but also tonality, is important in terms of political choices (Hopmann et al., 2010), for which natural language processing techniques could be applied (Hutto & Gilbert, 2014). Finally, it is important to study the relation between novelty and information related to fraud claims and, in general, the presence of misinformation.
Conclusion
The existing relationship between news organizations and political campaigns continues its transformation, as digital intermediaries such as search engines leverage their influence to shape political communication. We started this investigation to learn how search engines process the quick turnover of news content generated during highly mediated political events such as the 2020 US elections. We argue that our metric, novelty, allows the investigation of the coverage and visibility of topics in search engines, and we demonstrate differences across search engines, regions, and periods. We find an imbalance in novelty between the candidate queries, particularly large for Bing in Oregon. Contrary to the main web search, in which biases can be explained by the difficulty of balancing the enormous quantity of content available online, the number of available news articles is comparatively small and limited to a more defined set of sources that search engines already control for. Thus, search engines share a larger responsibility in providing balanced coverage, either in their algorithms or in the criteria used in the selection of news sources. Such imbalances in novelty affect the visibility of political candidates in news search.
HL-LHC layout for fixed-target experiments in ALICE based on crystal-assisted beam halo splitting
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) is the world’s largest and most powerful particle accelerator colliding beams of protons and lead ions at energies up to 7 Z TeV, where Z is the atomic number. ALICE is one of the experiments optimised for heavy-ion collisions. A fixed-target experiment in ALICE is being considered to collide a portion of the beam halo, split using a bent crystal inserted in the transverse hierarchy of the LHC collimation system, with an internal target placed a few metres upstream of the existing detector. This study is carried out as a part of the Physics Beyond Collider effort at CERN. Fixed-target collisions offer many physics opportunities related to hadronic matter and the quark-gluon plasma to extend the research potential of the CERN accelerator complex. Production of physics events depends on the particle flux on the target. The machine layout for the fixed-target experiment is developed to provide a flux of particles on the target high enough to exploit the full capabilities of the ALICE detector acquisition system. This paper summarises the fixed-target layout consisting of the crystal assembly, the target and the downstream absorbers. We discuss the conceptual integration of these elements within the LHC ring, the impact on ring losses, and expected performance in terms of particle flux on target.
I. INTRODUCTION
Advancements in the knowledge of fundamental constituents of matter and their interactions are usually driven by the development of experimental techniques and facilities, with a significant role of particle accelerators. The Large Hadron Collider (LHC) [1] at the European Organization for Nuclear Research (CERN) is the world's largest and most powerful particle accelerator colliding opposite beams of protons (p) and lead ions (Pb), allowing for unprecedentedly high centre-of-mass energies of up to 14 TeV and 5.5 TeV per nucleon, respectively. An ALICE fixed-target (ALICE-FT) programme [2] is being considered to extend the research potential of the LHC and the ALICE experiment [3]. The setup of in-beam targets at the LHC is particularly challenging because of the high-intensity frontier of LHC beams [4].
Fixed-target collisions in the LHC are designed to be operated simultaneously with regular head-on collisions without jeopardising the LHC efficiency during its main p-p physics programme. Several unique advantages are offered by the fixed-target mode compared to the collider mode. With a high density of targets, high yearly luminosities can be easily achieved, comparable with the luminosities delivered by the LHC (in the collider mode) and the Tevatron [5]. In terms of collision energy, the ALICE-FT layout would provide the most energetic beam ever in the fixed-target mode, with a centre-of-mass energy per nucleon-nucleon collision of 115 GeV for proton beams and 72 GeV for lead ion beams [5], in between the nominal Relativistic Heavy Ion Collider (RHIC) and Super Proton Synchrotron (SPS) energies. Thanks to the boost between the colliding-nucleon centre-of-mass system and the laboratory system, access to far backward regions of rapidity is possible with the ALICE detector, allowing measurement of any probe even at far ranges of the backward phase space, utterly uncharted with head-on collisions [5]. Moreover, the possibility of using various species of the target material extends the variety of physics cases, especially allowing for unique neutron studies [5]. The physics potential [2,5] of such a fixed-target programme covers an intensive study of strong interaction processes, quark and gluon distributions at high momentum fraction (x), the sea quark and heavy-quark content in the nucleon and nucleus, and the implications for cosmic ray physics. The hot medium created in ultra-relativistic heavy-ion collisions offers novel quarkonium and heavy-quark observables in the energy range between the SPS and RHIC, where the QCD phase transition is postulated.
A significant innovation of our proposal is to bring particles of a high-energy collider into collision with a fixed target by splitting a part of the beam halo using a bent crystal. Particles entering the crystal with a small impact angle (≤ 2.4 µrad for silicon crystals and a proton energy of 7 TeV [6]) undergo the channeling process, resulting in a trajectory deflection equivalent to the geometric bending angle of the crystal body [7,8]. Such a setup enables an in-beam target at a safe distance from the circulating beam. This type of advanced beam manipulation with bent crystals builds on the experience accumulated at different accelerators (see for example [9,10]) and, in particular, on the successful results achieved in the multi-TeV regime for beam collimation at the LHC [11-13]. The halo-splitting technique allows profiting from the circulating beam halo particles that do not contribute to the luminosity production and are typically disposed of by the collimation system.
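The order of magnitude of the quoted angular acceptance can be checked with the standard critical-angle estimate for planar channelling, θ_c ≈ sqrt(2·U0/(p·v)), with p·v ≈ E for ultra-relativistic protons. The potential-well depth U0 ≈ 16 eV for silicon is a commonly quoted textbook value and is our assumption here, not a number from the paper.

```python
import math

U0_eV = 16.0   # planar potential-well depth for Si, assumed order of magnitude
E_eV = 7e12    # proton energy: 7 TeV, so p*v ~ E

# Lindhard-style critical angle for planar channelling (radians)
theta_c = math.sqrt(2 * U0_eV / E_eV)
print(f"critical angle ~ {theta_c * 1e6:.1f} microrad")
```

The result is a couple of microradians, consistent with the ≤ 2.4 µrad acceptance quoted above for 7 TeV protons in silicon.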
The problem that we address is to design the machine layout that provides a number of protons on the target high enough to exploit the full capabilities of the ALICE detector acquisition system without affecting the LHC availability for regular beam-beam collisions. Our proposal of the ALICE-FT layout [14] follows general guidelines on the technical feasibility and impact on the LHC accelerator of potential fixed-target experiments provided by the LHC Fixed Target Working Group of the CERN Physics Beyond Colliders forum [4,15]. We also profit from the preliminary designs reported in [16,17] and from the design study of an analogous fixed-target experiment at the LHC proposed to measure electric and magnetic dipole moments of short-lived baryons [18].
In this paper, we summarise the ALICE-FT machine layout. We report on the conceptual integration of its elements (crystal and target assemblies, downstream absorbers), their impact on ring losses, and the expected performance in terms of particle flux on target. We also discuss a method of increasing the flux of particles on the target by setting the crystal at the optimal betatron phase through a local optics modification in the insertion hosting the ALICE experiment (IR2). This method is independent of the crystal location, allowing for a crystal installation in a place with good space availability. Moreover, it allows recovering the maximum performance of the system in case of changes in beam optics in the LHC.
II. MACHINE CONFIGURATION
A potential installation of the ALICE-FT setup will coincide with a major LHC upgrade in terms of instantaneous luminosity, commonly referred to as the High-Luminosity LHC (HL-LHC) [19], taking place in the Long Shutdown 3 (2026-2028) to make it ready for the LHC Run 4 starting in 2029. Some of the expected beam parameters, having a direct impact on the ALICE-FT experiment performance, are given in Table I. One key beam parameter to be upgraded is the total beam current, which will increase by nearly a factor of two, up to about 1.1 A, leading to about 0.7 GJ of total beam energy stored in the machine. A highly efficient collimation system is therefore required in the LHC [20] to protect its elements, especially the superconducting magnets, from impacts of particles from the beam. The collimation system is organised in a precise multi-stage transverse hierarchy (see Table II) over two dedicated insertions (IRs): IR3 for momentum cleaning and IR7 for betatron cleaning. Each collimation insertion features a three-stage cleaning based on primary collimators (TCPs), secondary collimators (TCSGs) and absorbers (TCLAs). In addition, dedicated collimators are present in specific locations of the ring to provide protection of sensitive equipment (e.g. TCTP for the inner triplets), absorption of physics debris (TCL) and beam injection/dump protection (TDI/TCDQ-TCSP). The collimation system undergoes an upgrade, as described in [21], to make it compatible with HL-LHC requirements, but the general working principle will remain the same. The system is designed to sustain beam losses up to 1 MJ without damage and with no quench of superconducting magnets. The halo-splitting scheme relies on placing a crystal into the transverse hierarchy of the betatron collimation system, in between the primary and secondary stages of IR7 collimators, such that the collimation system efficiency is not affected. Placing the splitting crystal closer to the beam than the primary collimators would not be
possible without designing a downstream system capable of withstanding the collimation design loss scenarios. Retracting the crystal to larger amplitudes avoids this problem while still allowing interception of a significant fraction of the multi-stage halo, as shown below. In this scheme, a fraction of the secondary halo particles redirected toward the target can be used for fixed-target collisions, in a safe manner, instead of being disposed of at the absorbers. Note that, for this scheme to work, the crystal does not need to be installed in IR7. In fact, the halo splitting is done in the vicinity of the experiment, where the target needs to be located.
III. ALICE-FT LAYOUT
A general concept of the ALICE-FT layout is illustrated in Fig. 1. A fraction of the secondary halo particles is intercepted by the crystal and steered toward the target. Collision products are registered by the ALICE detector, which can handle on the order of 10^7 protons on target per second [5]. Possible losses originating either from the crystal or from the target are intercepted by absorbers installed downstream of the detector. The ALICE-FT will act on Beam 1 (B1) due to the ALICE detector geometry. We assume the layout to be operated with the optics envisioned for the HL-LHC (version 1.5 [22] at the moment), although it is not excluded that more optimised optics could be envisaged if needed. Since ALICE operates at low luminosities in proton runs, typically with offset levelling, this insertion does not demand tight optics constraints like the high-luminosity insertions for the general-purpose detectors [23]. Actually, a minor, local modification of the optics in the IR2 region is proposed to enhance the system performance by optimising the betatron oscillation phase at the location of the crystal. This will not affect the rest of the machine.
Safe integration of the crystal into the hierarchy of the collimation system is achieved by setting the crystal in the shadow of the IR7 primary collimators. As discussed in [18], the relative retraction of the crystal with respect to the IR7 primary collimator should not be smaller than 0.5 σ (RMS beam size), mostly to account for optics and orbit errors that, if not under control, could accidentally turn the crystal into a primary collimator. The retraction should be kept as low as possible to maximise the number of protons impacting the target [4], as shown in Fig. 7. Similarly to [18], we also assume that, for machine safety reasons, the distance from the deflected beam to the aperture and the distance from the target to the main beams should be at least 4 mm. The system is to be installed in the vertical plane in order to avoid issues related to the beam dump system operating in the horizontal plane: this is the plane of the dump kickers and is subject to fast losses in case of an over-populated abort gap or of (unlikely) dump failures [24,25]. In the case of a horizontal setup, larger aperture margins on settings would have to be applied, leading to lower rates of impinging halo, which we prefer to avoid.
The beam crossing scheme at IP2 is also in the vertical plane. Furthermore, the main solenoid of the ALICE detector can be operated in two polarities, which affects the slope of both beams at IP2. We mark the negative slope of the LHC Beam 1 (B1) at IP2 as negX and the positive slope as posX. Given the trajectory of B1 for both crossing schemes (negX and posX), aperture restrictions and space availability, the optimal longitudinal coordinate for the crystal installation is 3259 m (with 0 at IP1), as it allows for having just one crystal serving both crossing schemes. A graphical illustration of the proposed layout, which fulfils all the above requirements, is given in Fig. 2, and conditions inside the LHC tunnel are depicted in Fig. 3.
We consider a 16-mm-long crystal made of silicon, with (110) bending planes and a bending radius of 80 m, the same as the bending radius of crystals already used in the LHC, following the parametric studies reported in [12] to ensure optimum crystal channelling performance at the LHC top energy while keeping the nuclear interaction rate as low as possible. The crystal bending angle of 200 µrad is chosen to serve both crossing schemes and to fit the deflected beam within the available aperture. Any upstream location of the crystal would require having two crystals with different bending angles, depending on the crossing scheme.
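The quoted crystal parameters are internally consistent: the geometric bending angle of a crystal of length L bent with radius R is simply θ = L/R, as the following check shows.

```python
# Consistency check of the quoted crystal parameters.
L = 16e-3   # crystal length, m
R = 80.0    # bending radius, m

theta_urad = L / R * 1e6  # geometric bending angle in microradians
print(theta_urad)
```

The result, 200 µrad, matches the bending angle stated above.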
The target assembly is planned to be installed nearly 5 m upstream of IP2, with a target made of either light or heavy material, e.g. carbon or tungsten, of about 5 mm in length. Details on target design studies can be found in [26].
Four absorbers, of the same design as collimators already used in the LHC, are proposed to be installed about 150 m downstream of IP2. The first three are proposed to be made of 1 m long carbon-fibre-carbon composite jaws, as the present TCSGs in the LHC, while the last one is to be made of 1 m long tungsten jaws, as the present TCLAs in the LHC, similarly to [18]. The difference is that in our study we use a large opening of about 50 σ that still intercepts the channelled beam while being well in the shadow of the entire collimation system. Such a choice is driven by minimising the impact of these extra absorbers on the regular collimation system and machine impedance, instead of searching for the minimum gaps that maintain the collimation hierarchy. As will be shown later, we do not experience any cleaning-related issues. The proposed setup of absorbers follows a performance-oriented approach, with the potential to reduce the number of required absorbers based on an energy deposition study to be done in the future.
IV. EXPECTED PERFORMANCE
The MAD-X code [27] is used to manage the HL-LHC model, to prepare suitable lattice and optics descriptions used as input to tracking studies, and to calculate the trajectory of particles experiencing an angular kick equivalent to the crystal bending angle. Detailed evaluation of the layout performance is done using multi-turn particle tracking simulations in SixTrack [28], which allows symplectic, fully chromatic 6D tracking along the magnetic lattice of the LHC, including interactions with collimators and bent crystals, and a detailed aperture model of the machine [29,30]. In our simulations, we use at least two million protons, initially distributed over a narrow ring of radius r + dr slightly above 6.7 σ in the normalised transverse vertical position-angle phase space (y, y'), which allows an estimation of the number of protons impacting the collimation system (including the crystal and the target of the ALICE-FT layout) as well as the density of protons lost per metre in the aperture, with a resolution of 10 cm along the entire ring circumference.
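The initial halo distribution described here, a thin annulus in the normalised vertical phase space, can be sampled as follows. The 6.7 σ radius comes from the text; the ring thickness and the uniform-in-area sampling recipe are our assumptions for illustration, not the SixTrack input generator.

```python
import math
import random

def sample_halo_ring(n, r=6.7, dr=0.1, seed=1):
    """Sample n protons uniformly on a thin annulus [r, r+dr] (in units of
    sigma) in the normalised vertical phase space (y, y')."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # Uniform in area over the annulus, uniform in angle.
        rad = math.sqrt(rng.uniform(r**2, (r + dr) ** 2))
        phi = rng.uniform(0.0, 2.0 * math.pi)
        out.append((rad * math.cos(phi), rad * math.sin(phi)))
    return out
```

Every sampled particle then starts at an amplitude just above the primary collimator cut, so that it is intercepted by the collimation system within a few turns.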
The ALICE-FT experiment must be compatible with the standard physics programme of the LHC, meaning that it cannot add any operational limitations, mostly related to particle losses, which must stay within acceptable limits. This is demonstrated in Fig. 4, where a loss map of the machine including the ALICE-FT system does not contain any abnormal loss spikes compared to the reference loss map of the machine without the ALICE-FT system. The only new spikes correspond to protons impacting the elements of the ALICE-FT setup. The setup is developed to provide as many protons as possible impacting the crystal (N_PoC) in order to maximise, for a given channeling efficiency, the number of protons on target. For a given crystal retraction relative to the primary collimators, the proton flux on the crystal strongly depends on the betatron oscillation phase advance between the primary collimators at IR7 and the IR2 crystal. With the nominal optics, the phase advance is nearly the least favourable, leading to a low N_PoC. This can be corrected by a minor, local modification of the IR2 optics, implemented by changing the strengths of the IR2 quadrupole magnets labelled with natural numbers from 4 to 10 (the lower the number, the closer the magnet to IP2) on both sides of IP2. Quadrupoles upstream of IP2 were constrained to shift the phase advance at the crystal while keeping the IP2 optics parameters unchanged. Quadrupoles downstream of IP2 were constrained to recover the same optical parameters, including the betatron phase, as in the nominal optics. The maximum N_PoC was found for a phase advance shifted by −65°, as shown in Fig. 5. The corresponding optical β_y function in IR2 is given in Fig. 6, and the changes in the strengths of the quadrupoles are summarised in Table III. This modification of the optics is feasible to implement at IR2 and does not affect the rest of the machine, especially losses along the ring, as shown in Fig. 4.
The number of protons hitting the target depends on the number of protons hitting the crystal, their phase-space distribution at the crystal entry face, the crystal parameters, and the crystal position and orientation inside the beam pipe. All these phenomena are treated by the simulation code we use. Protons hitting the crystal emerge from the collimation system. Therefore, their number and phase-space distribution are subject to a complex multi-turn process, which is also simulated. As a result, we obtain PoC and PoT, the fractions of protons hitting the crystal or the target, respectively, over all protons intercepted by the collimation system. Three values of PoT resulting from the performed simulations, depending on the relative crystal retraction, are given in Fig. 7.
These scaling factors allow us to estimate the actual number of protons hitting the crystal and (more interestingly) the target under realistic beam conditions, where the most important factors are beam intensity and lifetime. An exponential decrease of the beam intensity is assumed, characterised by the time coefficient τ, interpreted as a beam lifetime, which depends on beam parameters and machine state. A dominant contribution to the total beam lifetime comes from the beam burn-off due to collisions (τ_BO), while the number of protons on the target N_PoT depends mostly on τ_coll, corresponding to beam core depopulation towards tails that are intercepted by the collimation system. Following the same assumptions as in [18], with I_0 being the initial beam intensity and time coefficients τ_BO ≈ 20 h and τ_coll ≈ 200 h, the number of protons impacting the target per 10 h long fill (T_fill) in 2018 operation conditions can be estimated. This corresponds to an average flux of protons on target of 7.5 × 10^5 p/s, roughly one order of magnitude away from the design goal of 10^7 p/s. These two numbers are summarised in Table IV. Please also note that the intensity of HL-LHC beams will be about a factor of two larger, which will most probably result in a larger proton flux on target, much closer to the design goal. However, the final assessment can be established more precisely by studying the lifetime and beam loss performance of HL-LHC beams.
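As a quick consistency check using only the figures quoted above (no assumption is made about the underlying formula from [18]), the quoted average flux implies the following integrated number of protons on target per fill:

```python
# Back-of-the-envelope check of the integrated protons on target per fill,
# using only numbers stated in the text: 7.5e5 p/s average flux, 10 h fill.
avg_flux = 7.5e5            # protons per second on target
fill_length = 10 * 3600     # fill length in seconds

protons_per_fill = avg_flux * fill_length
print(f"{protons_per_fill:.1e}")  # -> 2.7e+10 protons on target per fill
```

For comparison, the design goal of 10^7 p/s would correspond to about 3.6 × 10^11 protons per fill, consistent with the "one order of magnitude" gap stated in the text.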
V. CONCLUSIONS AND OUTLOOK
This paper summarises the layout for ALICE fixed-target experiments based on crystal-assisted beam halo splitting. We have demonstrated that it can be operated safely without affecting the availability of the LHC for regular beam-beam collisions. The estimated proton flux on target is roughly one order of magnitude away from the design goal of 10^7 p/s, with a possible improvement resulting from the larger intensities of HL-LHC beams, which would allow the full capabilities of the ALICE detector acquisition system to be exploited.
FIG. 1. Working principle of the crystal-based fixed-target experiment (right side of the graphic), embedded in the multi-stage collimation system (left side of the graphic). Graphic based on [18], mostly by D. Mirarchi.
FIG. 2. The proposed layout of the ALICE-FT experiment. Both beams (B1 and B2) with their envelopes (7.3 σ) are shown with solid lines for both ALICE solenoid polarities (posX and negX). Deflected beams are shown as dashed blue lines. The machine aperture is shown as solid black lines. Vertical dashed lines mark the locations of the crystals, the target and IP2, respectively. The location of the absorbers is marked in the bottom right corner.
FIG. 4. Comparison of loss maps for the machine without the ALICE-FT system (top), with the ALICE-FT system without the optics optimisation (middle), and with the ALICE-FT system with the optics optimisation (bottom). The local cleaning inefficiency (vertical axis) is a measure of the number of protons not intercepted by the collimation system and impacting the machine aperture. The simulation limit of 1 proton lost in the machine aperture corresponds to 5 × 10⁻⁷ m⁻¹ in a 10 cm longitudinal bin. No abnormal loss spikes are present in the loss maps.
FIG. 5. The dependence of the fraction of the number of protons impacting the crystal over the number of protons impacting the primary collimator (PoC) on the betatron phase advance from the primary vertical collimator in IR7 (TCP.D) to the IR2 crystal (CRY). The crystal retraction is 7.9 σ. Statistical errors are on the order of 1%, which makes the error bars hardly visible.
FIG. 6.
FIG. 7. Fraction of the number of protons impacting the target over the number of protons impacting the primary collimator (PoT) for selected values of the crystal half-gap, expressed in units of beam sigma. The limits of the horizontal axis correspond to the half-gaps of the primary and secondary collimators in IR7. Statistical errors are on the order of 1%, which makes the error bars hardly visible.
TABLE II. HL-LHC collimation settings expressed in units of RMS beam size (σ), assuming a Gaussian beam distribution and a transverse normalised emittance ε_n = 2.5 µm.
TABLE III. Normalised strengths of the quadrupoles for the nominal and modified optics. "IR2 left" and "IR2 right" stand for regions upstream and downstream of the IP2, respectively.
TABLE IV. Expected proton flux on target based on 2018 operation conditions, compared with the expected capabilities of the ALICE detector acquisition system. The first number may grow by up to a factor of 2 under HL-LHC conditions due to the two times larger initial beam intensity.
Linguistic Profiling of a Neural Language Model
In this paper we investigate the linguistic knowledge learned by a Neural Language Model (NLM) before and after a fine-tuning process, and how this knowledge affects its predictions in several classification problems. We use a wide set of probing tasks, each of which corresponds to a distinct sentence-level feature extracted from different levels of linguistic annotation. We show that BERT is able to encode a wide range of linguistic characteristics, but it tends to lose this information when trained on specific downstream tasks. We also find that BERT's capacity to encode different kinds of linguistic properties has a positive influence on its predictions: the more readable linguistic information it stores about a sentence, the better it predicts the expected label assigned to that sentence.
Introduction
Neural Language Models (NLMs) have become a central component in NLP systems over the last few years, showing outstanding performance and improving the state of the art on many tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019). However, the introduction of such systems has come at the cost of interpretability and, consequently, at the cost of obtaining meaningful explanations when automated decisions take place.
Recent work has begun to study these models in order to understand whether they encode linguistic phenomena even without being explicitly designed to learn such properties (Marvin and Linzen, 2018; Goldberg, 2019; Warstadt et al., 2019). Much of this work focused on the analysis and interpretation of attention mechanisms (Tang et al., 2018; Jain and Wallace, 2019; Clark et al., 2019) and on the definition of probing models trained to predict simple linguistic properties from unsupervised representations.
Probing models trained on different contextual representations provided evidence that such models are able to capture a wide range of linguistic phenomena (Adi et al., 2016; Perone et al., 2018; Tenney et al., 2019b) and even to organize this information in a hierarchical manner (Belinkov et al., 2017; Lin et al., 2019; Jawahar et al., 2019). However, the way in which this knowledge affects the decisions they make when solving specific downstream tasks has been less studied.
In this paper, we extended prior work by studying the linguistic properties encoded by one of the most prominent NLMs, BERT (Devlin et al., 2019), and how these properties affect its predictions when solving a specific downstream task. We defined three research questions aimed at understanding: (i) what kind of linguistic properties are already encoded in a pre-trained version of BERT, and where across its 12 layers; (ii) how the knowledge of these properties is modified after a fine-tuning process; (iii) whether this implicit knowledge affects the ability of the model to solve a specific downstream task, i.e. Native Language Identification (NLI). To tackle the first two questions, we adopted an approach inspired by the 'linguistic profiling' methodology put forth by van Halteren (2004), which assumes that wide counts of linguistic features automatically extracted from parsed corpora allow modeling a specific language variety and detecting how it changes with respect to other varieties, e.g. complex vs simple language, female vs male-authored texts, texts written in the same L2 language by authors with different L1 languages. Particularly relevant for our study is that multi-level linguistic features have been shown to have a highly predictive role in tracking the evolution of learners' linguistic competence across time and developmental levels, both in first and second language acquisition scenarios (Lubetich and Sagae, 2014; Miaschi et al., 2020).
Given the strong informative power of these features in encoding a variety of language phenomena across stages of acquisition, we assume that they can also be helpful to dig into the issues of interpretability of NLMs. In particular, we would like to investigate whether features successfully exploited to model the evolution of language competence can be similarly helpful in profiling how the implicit linguistic knowledge of a NLM changes across layers and before and after tuning on a specific downstream task. We chose the NLI task, i.e. the task of automatically classifying the L1 of a writer based on his/her language production in a learned language (Malmasi et al., 2017). As shown by Cimino et al. (2018), linguistic features play a very important role when NLI is tackled as a sentence-classification task rather than as a traditional document-classification task. This is the reason why we considered sentence-level NLI classification a task particularly suitable for probing the NLM's linguistic knowledge. Finally, we investigated whether and which linguistic information encoded by BERT is involved in discriminating the sentences correctly or incorrectly classified by the fine-tuned models. To this end, we tried to understand if the linguistic knowledge that the model has of a sentence affects its ability to solve a specific downstream task involving that sentence.
Contributions In this paper: (i) we carried out an in-depth linguistic profiling of BERT's internal representations; (ii) we showed that contextualized representations tend to lose their precision in encoding a wide range of linguistic properties after a fine-tuning process; (iii) we showed that the linguistic knowledge stored in the contextualized representations of BERT positively affects its ability to solve the NLI downstream tasks: the more BERT stores information about these features, the better it predicts the correct label.
Related Work
In the last few years, several methods have been devised to obtain meaningful explanations regarding the linguistic information encoded in NLMs (Belinkov and Glass, 2019). They range from techniques to examine the activations of individual neurons (Karpathy et al., 2015; Li et al., 2016; Kádár et al., 2017) to more domain-specific approaches, such as interpreting attention mechanisms (Raganato and Tiedemann, 2018; Kovaleva et al., 2019; Vig and Belinkov, 2019), studying correlations between representations (Saphra and Lopez, 2019), or designing specific probing tasks that a model can solve only if it captures a precise linguistic phenomenon, using the contextual word/sentence embeddings of a pre-trained model as training features (Conneau et al., 2018; Zhang and Bowman, 2018; Hewitt and Liang, 2019; Miaschi and Dell'Orletta, 2020). These latter studies demonstrated that NLMs are able to encode a variety of language properties in a hierarchical manner (Belinkov et al., 2017; Blevins et al., 2018; Tenney et al., 2019b) and even to support the extraction of dependency parse trees (Hewitt and Manning, 2019). Jawahar et al. (2019) investigated the representations learned at different layers of BERT, showing that lower-layer representations are usually better for capturing surface features, while embeddings from higher layers are better for syntactic and semantic properties. Using a suite of probing tasks, Tenney et al. (2019a) found that the linguistic knowledge encoded by BERT through its 12/24 layers follows the traditional NLP pipeline: POS tagging, parsing, NER, semantic roles and then coreference. Liu et al. (2019), instead, quantified differences in the transferability of individual layers between different models, showing that higher layers of RNNs (ELMo) are more task-specific (less general), while transformer layers (BERT) do not exhibit this increase in task-specificity.
Our Approach
To probe the linguistic knowledge encoded by BERT and understand how it affects its predictions in several classification problems, we relied on a suite of 68 probing tasks, each of which corresponds to a distinct feature capturing lexical, morpho-syntactic and syntactic properties of a sentence. Specifically, we defined three sets of experiments. The first consisted in probing the linguistic information learned by a pre-trained version of BERT (BERT-base, cased) using gold sentences annotated according to the Universal Dependencies (UD) framework (Nivre et al., 2016). In particular, we defined a probing model that uses BERT contextual representations for each sentence of the dataset and predicts the actual value of a given linguistic feature across the internal layers. The second set of experiments consisted in investigating variations in the encoded linguistic information between the pre-trained model and 10 different fine-tuned ones, obtained by training BERT on as many Native Language Identification (NLI) binary tasks.
To do so, we again performed all the probing tasks using the 10 fine-tuned models. For the last set of experiments, we investigated how the linguistic competence contained in the models affects the ability of BERT to solve the NLI downstream tasks.
Data
We used two datasets: (i) the UD English treebank (version 2.4) for probing the linguistic information learned before and after a fine-tuning process; (ii) a dataset used for the NLI task, which is exploited both for fine-tuning BERT on the downstream task and for reproducing the probing tasks in the third set of experiments.
UD dataset It includes three UD English treebanks: UD English-ParTUT, a conversion of a multilingual parallel treebank consisting of a variety of text genres, including talks, legal texts and Wikipedia articles (Sanguinetti and Bosco, 2015); the Universal Dependencies annotation of the GUM corpus (Zeldes, 2017); and the English Web Treebank (EWT), a gold standard universal dependencies corpus for English (Silveira et al., 2014). Overall, the final dataset consists of 23,943 sentences.
NLI dataset We used the 2017 NLI shared task dataset, i.e. the TOEFL11 corpus (Blanchard et al., 2013). It contains test responses from 13,200 test takers (one essay and one spoken response transcription per test taker) and includes 11 native languages (L1s), with 1,200 test takers per L1. We selected only the written essays and created pairwise subsets of essays written by Italian L1 native speakers and essays for each of the other languages. At the end of this process, we obtained 10 datasets of 2,400 documents (33,756 sentences on average): 1,200 for the Italian L1 speakers and 1,200 for each of the other L1s included in the TOEFL11 corpus.
Probing Tasks and Linguistic Features
Our experiments are based on the probing-task approach defined in Conneau et al. (2018), which aims to capture linguistic information from the representations learned by a NLM. In our study, each probing task consists in predicting the value of a specific linguistic feature automatically extracted from the parsed sentences in the NLI and UD datasets. The set of features is based on the ones described in Brunato et al. (2020), which are acquired from raw, morpho-syntactic and syntactic levels of annotation and can be categorised in 9 groups corresponding to different linguistic phenomena. As shown in Table 1, these features model linguistic phenomena ranging from raw-text ones, to morpho-syntactic information and inflectional properties of verbs, to more complex aspects of sentence structure modeling global and local properties of the whole parsed tree and of specific subtrees, such as the order of subjects and objects with respect to the verb and the distribution of UD syntactic relations, also including features referring to the use of subordination and to the structure of verbal predicates.

Table 2: BERT ρ scores (average between layers) for all the linguistic features (All) and for the 9 groups corresponding to different linguistic phenomena. Baseline scores are also reported.
Models
NLM We relied on the pre-trained English version of BERT (BERT-base cased, 12 layers, 768 hidden units) both for the extraction of contextual embeddings and for the fine-tuning process on the NLI downstream task. To obtain the embedding representations for our sentence-level tasks, we used, for each of its 12 layers, the activation of the first input token ([CLS]), which somehow summarizes the information from the actual tokens, as suggested in Jawahar et al. (2019).
Probing model As mentioned above, each of our probing tasks consists in predicting the actual value of a given linguistic feature, given the inner sentence representations learned by the NLM for each of its layers. Therefore, we used a linear Support Vector Regression model (LinearSVR) as the probing model.
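A minimal sketch of one such probing step, assuming the per-layer sentence embeddings and gold feature values are already available as arrays. All data below is synthetic and the embedding dimensionality is reduced (the real [CLS] vectors are 768-dimensional) to keep the toy example fast:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)

# Stand-ins for real data: 400 sentences, 64-dim embeddings from one layer,
# and the gold value of one linguistic feature per sentence (synthetic and
# linear by construction, so a linear probe can recover it).
X = rng.normal(size=(400, 64))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=400)

# Train the linear probe on half of the data and evaluate with Spearman's rho,
# the metric used in the paper.
probe = LinearSVR(max_iter=10000).fit(X[:200], y[:200])
rho, _ = spearmanr(y[200:], probe.predict(X[200:]))
print(round(rho, 2))
```

The probe is deliberately linear: a high held-out ρ then indicates that the feature is linearly readable from the representation, rather than that a powerful probe has memorised it.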
Profiling BERT
Our first experiments investigated what kind of linguistic phenomena are encoded in a pre-trained version of BERT. To this end, for each of the 12 layers of the model (from input layer -12 to output layer -1), we first represented each sentence in the UD dataset using the corresponding sentence embedding, according to the criterion defined in Sec. 3.3. We then performed, for each sentence representation, our set of 68 probing tasks using the LinearSVR model. Since most of our probing features are strongly correlated with sentence length, we compared the probing model results with the ones obtained with a baseline computed by measuring the Spearman's rank correlation coefficient (ρ) between the length of the UD dataset sentences and the corresponding probing values. The evaluation is performed with a 5-fold cross-validation, using the Spearman correlation (ρ) between predicted and gold labels as the evaluation metric. As a first analysis, we probed BERT's linguistic competence with respect to the 9 groups of probing features. Table 2 reports BERT (average between layers) and baseline scores for all the linguistic features and for the 9 groups corresponding to different linguistic phenomena. As a general remark, we can notice that the scores obtained by BERT's internal representations always outperform the ones obtained with the correlation baseline. For both BERT and the baseline, the best results are obtained for groups including features highly sensitive to sentence length. For instance, this is the case of syntactic features capturing global aspects of sentence structure (Tree structure). However, differently from the baseline, the abstract representations of BERT are also very good at predicting features related to other linguistic information, such as morpho-syntactic ones (POS, Verb inflection) and syntactic ones, e.g. the structure of the verbal predicate and the order of nuclear sentence elements (Order).
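The sentence-length baseline can be reproduced with a few lines. The data below is synthetic and deliberately length-correlated, as many of the real features are:

```python
from scipy.stats import spearmanr

# Toy data: sentence lengths (in tokens) and a gold feature value per sentence.
# In the paper the feature values come from UD-parsed sentences; here they are
# synthetic, mimicking a feature that grows with sentence length.
lengths = [5, 8, 12, 15, 20, 24, 30, 33]
parse_depth = [2, 2, 3, 4, 4, 5, 6, 6]  # e.g. maximum depth of the parse tree

rho, p = spearmanr(lengths, parse_depth)
print(round(rho, 2))  # -> 0.98: a strongly length-driven feature
```

A probe only demonstrates genuine linguistic knowledge where it beats this baseline, which is why the paper reports both side by side.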
We then focused on how BERT's linguistic competence changes across layers. These results are reported in Figure 1, where we see that the average layerwise ρ scores are lower in the last layers, both for all distinct groups and for all features together. As suggested by Liu et al. (2019), this could be due to the fact that the representations better suited for language modeling (output layer) are also those that exhibit worse probing-task performance, indicating that Transformer layers trade off between encoding general and probed features. However, there are differences between the considered groups: competences about raw-text features (RawText) and the distribution of POS are lost in the very first layers (by layer -10), while the knowledge about the order of subject/object with respect to the verb, the use of subordination, as well as features related to verbal predicate structure, is acquired in the middle layers.
Interestingly, if we consider how the knowledge of each feature changes across layers (Figure 2), we observe that not all features belonging to the same group have a homogeneous behaviour. This is for example the case of the two features included in the RawText group: word length (char per tok) achieves considerably lower scores across all layers than the sent length feature. Similarly, the knowledge about POS differs when we consider more granular distinctions. For instance, within the broad categories of verbs and nouns, worse predictions are obtained for sub-specific classes of verbs based on tense, person and mood features (see especially past participles, xpos dist VBN), and for inflected nouns, both singular and plural (NN, NNS). Within the broad set of features extracted from syntactic annotation, we also see that different scores are reported for features referring e.g. to types of dependency relations: those linking a functional POS to its head (e.g. dep dist case, dep dist cc, dep dist conj, dep dist det) are better predicted than other relations, such as dep dist amod and advcl. Besides, within the VerbPredicate group, lower ρ scores are obtained by features encoding sub-categorization information about verbal predicates, such as the distribution of verbs by arity (verbal arity 2, 3, 4), which also remains almost stable across layers. Since we observed these non-homogeneous scores within the groups we defined a priori, we investigated how BERT hierarchically encodes all the features across layers. To this end, we clustered the 68 linguistic characteristics according to the layerwise probing results: specifically, we performed hierarchical clustering using Euclidean distance as the distance metric and Ward variance minimization as the clustering method. Interestingly enough, Figure 3 shows that the traditional division of features with respect to the linguistic annotation levels has not been maintained. On the contrary, BERT puts together features from all linguistic groups into clusters of different size. In addition, these clusters gather features that are ranked differently according to the baseline scores (ranking positions are bolded in the figure). For example, the first cluster includes features with similar ρ scores, both highly and lower ranked by the baseline. All these features model aspects of global sentence structure, e.g. sent length, functional POSs (e.g. upos dist DET, ADP, CCONJ), parsed tree structures (e.g. parse depth, verbal heads dist, avg links len), and nuclear elements of the sentence such as subjects (dep dist nsubj), verbs (VERBS) and pronouns (PRON).

Table 3: NLI classification results in terms of accuracy. We used the Zero Rule algorithm as baseline. Note that, for each task, sentences of the 10 languages are paired with the Italian ones (e.g. KOR = KOR-ITA).
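The clustering step described above (Euclidean distance, Ward linkage over layerwise ρ profiles) can be sketched as follows, with synthetic stand-ins for the 68 feature profiles:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Stand-in for the real data: 68 features x 12 layerwise rho scores.
# Two synthetic groups with different layer profiles, plus small noise:
# one set of features whose scores decay over layers, one whose scores rise.
profiles = np.vstack([
    np.tile(np.linspace(0.8, 0.3, 12), (34, 1)),
    np.tile(np.linspace(0.4, 0.6, 12), (34, 1)),
]) + rng.normal(scale=0.02, size=(68, 12))

# Ward linkage on Euclidean distances, as in the paper; cut into 2 clusters.
Z = linkage(profiles, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print(len(set(labels)))  # -> 2
```

Because the clustering operates on layerwise score profiles rather than on annotation levels, features from different linguistic groups can end up in the same cluster, which is exactly the behaviour reported for Figure 3.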
The Impact of Fine-Tuning on Linguistic Knowledge
Once we have probed the linguistic knowledge encoded by BERT across its layers, we investigated how it changes after a fine-tuning process. To do so, we started with the same pre-trained version of the model used in the previous experiment and performed a fine-tuning process for each of the 10 subsets built from the original NLI corpus (Sec. 3.1). We decided to use 50% of each NLI subset for training (40% and 10% for training and development set) and the remaining 50% for testing the accuracy of the newly generated models.
Table 3 reports the results for the 10 binary NLI tasks. As we can notice, BERT achieves good results for all downstream tasks, meaning that it is able to discriminate the L1 of a native speaker at the sentence level regardless of the L1 pairs taken into account. The best performance is achieved by the model that was fine-tuned on the Korean and Italian pairwise subset, while the lowest scores are obtained with the model trained on the subset consisting of essays written by Spanish and Italian L1 speakers (SPA-ITA). Interestingly, these results seem to reflect typological distances among L1 pairs, with higher scores for languages that are more distant from Italian (Korean, Telugu or Hindi) and lower scores for L1s belonging to the same language family (FRE-ITA or SPA-ITA).
After fine-tuning the model on NLI, we performed again the suite of probing tasks on the UD dataset using the 10 newly generated models and following the same approach discussed in Section 4. Figure 4 reports layerwise mean ρ correlation values for all probing tasks obtained with BERT-base and the other fine-tuned models. It can be noticed that the representations learned by the NLM tend to lose their precision in encoding our set of linguistic features after the fine-tuning process. This is particularly noticeable at higher layers, and it possibly suggests that the model is storing task-specific information at the expense of its ability to encode general knowledge about the language. Again, this is particularly evident for the models fine-tuned on the classification of language pairs belonging to the same family, SPA-ITA above all. To study which phenomena are mainly involved in this loss, we computed the differences between the probing-task results obtained before and after the fine-tuning process. We focused in particular on the scores obtained on the output-layer representations (layer -1), since it is the most task-specific (Kovaleva et al., 2019). For each subset, Figure 5 reports the difference between the score of each linguistic feature obtained with the pre-trained model and the fine-tuned one. Not surprisingly, the loss of linguistic knowledge reflects the typological trend observed for the overall classification performance. In fact, when the task is to distinguish Italian vs German, French and Spanish L1, BERT loses much of its encoded knowledge for almost all the considered features. This is particularly evident for the morpho-syntactic features (i.e. distribution of upos dist and xpos dist) and for features related to lexical variety (i.e. ttr form, ttr lemma). It seems that for typologically similar languages BERT needs more task-specific knowledge, mostly encoded at the level of morpho-syntactic information rather than at the structural level. On the contrary, the drop is less pronounced, and in most cases not significant, for models fine-tuned on the classification of more distant languages (e.g. models fine-tuned on KOR-ITA or TUR-ITA). In this case, the quite stable performance on the probing tasks may suggest that those features were still useful to perform the downstream task. Interestingly, the class of features that decreases significantly in all models are those encoding knowledge about the tense of verbs. This is particularly the case of third-person singular verbs in the present tense (xpos dist VBZ) and of verbs in the past tense (xpos dist VBD). A possible explanation could be related to the prompts of the essays, which are the same across the NLI dataset. Thus, the textual genre could have favoured a quite homogeneous use of verbal morphology features by students of all L1s. This makes this class of features less useful for the identification of native languages.

Figure 6: % of probing features for which the MSE of the sentences correctly classified by BERT-base (Pre-train) and the fine-tuned models (Fine-tune) is lower than that of the incorrectly classified ones. Results are reported for layers -12, -7 and -1.
6 Are Linguistic Features useful for BERT's predictions?
As a last research question, we investigated whether the implicit linguistic knowledge affects BERT's predictions when solving the NLI downstream task. To answer this question, we split each NLI subset into two groups, i.e. sentences correctly classified according to the L1 and those incorrectly classified. For the two groups of each NLI subset, we performed the probing tasks using the pre-trained BERT-base and the specific NLI fine-tuned model. For each sentence of the two groups, we calculated the variation between the actual and predicted feature value, obtaining two lists of absolute errors. We used the Wilcoxon rank-sum test to verify whether the two lists were drawn from samples with the same distribution. As a general remark, we observed that well over half of the features vary in a statistically significant way between correctly and incorrectly classified sentences. This suggests that BERT's linguistic competence on the two groups of sentences is very different. To deepen the analysis of this difference, we calculated the accuracy achieved by BERT in terms of Mean Square Error (MSE) only for the set of features varying in a significant way.
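That significance check can be sketched with SciPy's rank-sum test, here on two synthetic lists of absolute probing errors:

```python
from scipy.stats import ranksums

# Absolute probing errors for one feature, split by NLI outcome.
# The values are synthetic: the misclassified group is given larger errors,
# mimicking the pattern reported in the paper.
errors_correct = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09, 0.13, 0.10]
errors_incorrect = [0.30, 0.25, 0.40, 0.35, 0.28, 0.33, 0.38, 0.31]

stat, p = ranksums(errors_correct, errors_incorrect)
significant = p < 0.05
print(significant)  # -> True: the two error distributions differ
```

The rank-sum test is a natural choice here because the absolute-error lists need not be normally distributed, so a non-parametric comparison is safer than a t-test.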
Figure 6 reports the percentage of features for which the MSE of the sentences correctly classified (MSE Pos) is lower than that of the incorrectly classified ones (MSE Neg). This percentage is significantly higher, thus showing that BERT's capacity to encode different kinds of linguistic information could have an influence on its predictions: the more BERT stores readable linguistic information in the representations it creates, the higher its capacity of predicting the correct L1. Moreover, we noticed that this is true also (and especially) when using the pre-trained model. In other words, this result suggests that the evaluation of the linguistic knowledge encoded in a pre-trained version of BERT on a specific input sequence could be an insightful indicator of its ability in analyzing that sentence with respect to a downstream task.
Interestingly, if we analyze the average length of correctly and incorrectly classified sentences, the correct ones are much longer than the others for all tasks (from 3 tokens more for SPA-ITA to 9 for TEL-ITA). This is quite expected for the NLI task, since a higher number of linguistic events, which can occur in longer sentences, is needed to classify the L1 of a sentence (Dell'Orletta et al., 2014). At the same time, longer sentences make the probing tasks more complex, because the output space is larger for almost all of them. This is additional evidence that BERT's linguistic knowledge is not strictly related to sentence complexity, but rather to the model's ability to solve a specific downstream task. To confirm this hypothesis and verify that this tendency does not depend only on sentence length, we trained another LinearSVR that takes sentence length as input and predicts our probing tasks separately for correctly and incorrectly classified NLI sentences. Table 4 reports the average Spearman's correlation coefficients between gold and predicted probing features for the two classes of sentences. The results show that, for all the considered language pairs, the LinearSVR achieved higher accuracy for the probing tasks computed on the incorrectly classified NLI sentences. This is further evidence that deeper linguistic knowledge is needed for BERT to correctly classify the L1 of a sentence.
Conclusion
In this paper we studied what kind of linguistic properties are stored in the internal representations learned by BERT before and after a fine-tuning process, and how this implicit knowledge correlates with the model's predictions when it is trained on a specific downstream task. Using a suite of 68 probing tasks, we showed that the pre-trained version of BERT encodes a wide range of linguistic phenomena across its 12 layers, but the order in which probing features are stored in the internal representations does not necessarily reflect the traditional division with respect to the linguistic annotation levels. We also found that BERT tends to lose its precision in encoding our set of probing features after the fine-tuning process, probably because it is storing more task-related information for solving NLI. Finally, we showed that the implicit linguistic knowledge encoded by BERT positively affects its ability to solve the tested downstream tasks.
Figure 2 :
Figure 2: Layerwise ρ scores for the 68 linguistic features. Absolute baseline scores are reported in column B.
Figure 3 :
Figure 3: Hierarchical clustering of the 68 probing tasks based on layerwise ρ values. Bold numbers correspond to the ranking of each probing feature based on the correlation with sentence length.
Figure 4 :
Figure 4: Layerwise mean ρ scores for the pre-trained and fine-tuned models.
Average ρ scores for sentences correctly and incorrectly classified using only sentence length as input feature.
Table 1 :
Linguistic Features used in the probing tasks.
Nanostructured thin films of TiO2 tailored by anodization
Although nanostructured TiO2 layers have been widely prepared by anodization, thin films with thicknesses under 1 μm, grown on substrates other than Ti foils and with structures beyond nanopores, had remained a challenge. In this work, such nanostructured TiO2 thin films were synthesized by anodization of Ti films deposited by sputtering on FTO/glass substrates. Anodization was performed in an electrolyte based on 0.6 wt% of NH4F, with a graphite cathode and an applied voltage of 30 V during lapses ranging from 3 to 14 min. The amorphous TiO2 structures acquired the crystalline anatase phase after a post-annealing treatment at 450 °C/4 h. Porous morphologies were observed for anodizing times of 3 and 4 min, sponges were formed at 5 and 6 min, and vertical tubular structures were achieved from 7 up to 9 min; dissolution was observed for longer times. Pore diameters of the structures were in the range of 27 to 47 nm, lengths were within the 330 to 1000 nm interval, transmittance in the visible range was 70 ± 10%, the energy gap was 3.37 ± 0.02 eV and the water contact angle was between 20 and 27°. One major contribution of these findings is that they can be extended to TiO2 thin films, with a specific nanostructure, grown on a wide range of substrates relevant for particular applications.
Typically, an ETL is a compact layer that can be made from semiconductors such as ZnO, Zn 2 SO 4 , In 2 S 3 , SnO 2 , Zn 2 SnO 4 and Nb 2 O 5 [17], but TiO 2 is the most widely used because of its excellent optoelectronic properties, photoelectrochemical stability [18] and band gap upper level (∼4.05 eV) compatible with the perovskite (3.97 eV) [19]. When a mesoporous layer is placed on top of this compact TiO 2 layer, the conversion energy efficiency of the HPSC is increased, because the generation and conduction of charge carriers are improved as well [20]. This efficiency can be further increased if the mesoporous film is nanostructured, as in the case of TNS rods [21] or cones [22] synthesized using the hydrothermal procedure, and columns obtained through thermal oxidation [23]. However, anodization, a low-cost, simple and reproducible technique [24], has been barely used, and only tubular structures have been reported for ETLs [25][26][27]. In the anodization technique, the concentration of the NH 4 F electrolyte controls the oxidation speed [28], the applied voltage influences the diameter of pores and tubes [29] and the synthesis time determines the length of the TNS [30,31].
TNS for hybrid solar cells are challenging to obtain due to the particular requirements of such applications. Among these requirements are the use of a glass substrate, annealing temperatures below 500 °C, structure lengths in the range of 300 to 1000 nm [27,28] and a desirable (004) preferred orientation for higher efficiencies [32]. These characteristics are fulfilled under the conditions we report in this work, in which TNS with different morphologies were attained by anodization, a valuable technique due to its simplicity and low cost.
Materials and methods
Titanium films were deposited by DC magnetron sputtering, using a 99.99% Ti target (Kurt J Lesker) and FTO/glass substrates (Ossila TEC 15 S304). The deposition conditions were an applied power of 100 W, 38 sccm of Ar (2 mTorr), a target-substrate distance of 5 cm, a growth rate of 15 nm min −1 and deposition times of 20 and 40 min for film thicknesses of 300 and 600 nm, respectively. The base pressure was 6 × 10 −6 Torr. Prior to deposition, the target was sputter-cleaned for 5 min applying 100 W. Morphology and roughness of the as-grown Ti films were characterized by AFM (SPM Park Systems XE-70) and the thickness was verified in a SEM JEOL JIB-4500.
The TNS were synthesized by anodization of the Ti/FTO/glass films, using a graphite rod cathode (6 mm diameter, 4 cm length) placed 1.5 cm apart. The electrolyte solution was 0.6 wt% NH 4 F, 2 wt% of deionized water and the balance of ethylene glycol. A DC voltage of 30 V was applied between the film and the rod during periods ranging from 3 to 14 min, while the solution was magnetically stirred. For the shorter times (3 to 6 min), Ti films 300 nm thick were used to keep the final TiO 2 structure in the length range of 300 to 1000 nm; for times between 7 and 14 min, 600 nm Ti films were needed for the same purpose. The current versus time response was monitored in an automated homemade system.
The amorphous TiO 2 samples, previously rinsed in ethanol and deionized water, were annealed at 450 °C for 4 h. The TNS morphologies and cross-section images were acquired in a SEM JEOL JSM-7600F. XRD analyses were conducted in a Philips X'pert MPD diffractometer, with λ Cu Kα = 1.5405 Å, in a Bragg-Brentano configuration, using steps of 0.02° and 0.5 s. Raman measurements were carried out in a Thermo Scientific DXR Raman Microscope (532 nm wavelength, 10 mW). The water contact angle (WCA) was determined in a homemade device with a high-precision digital camera (108 megapixels) and drops of 9 μl; measurements, corroborated in different areas of the samples, were performed at room temperature and a relative humidity of 40%. A UV-vis spectrometer (Agilent Carry 60) was used to acquire the transmittance spectra in the 200 to 1100 nm range. Band gap energies were obtained from Tauc plots. Pore diameters were determined with the ImageJ v1.53k software; average values were calculated with at least thirty data points per sample. The same software was used for the contact angle determination. All measurements were confirmed in a second set of samples prepared under the same conditions.
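The band-gap extraction from Tauc plots mentioned above can be illustrated numerically: for an indirect gap, (αhν)^(1/2) is plotted against hν and the linear region is extrapolated to zero. The transmittance spectrum below is synthetic (an assumed gap of 3.37 eV and a 500 nm film), standing in for the measured UV-vis data.

```python
# Minimal sketch of indirect band-gap extraction from a Tauc plot.
# Synthetic spectrum: assumed Eg = 3.37 eV, assumed thickness 500 nm.
import numpy as np

h_nu = np.linspace(3.0, 4.0, 200)              # photon energy (eV)
Eg_true, d = 3.37, 500e-7                      # gap (eV), thickness (cm)
# Indirect-gap absorption model: alpha ~ (h*nu - Eg)^2 / (h*nu) above Eg
alpha = np.where(h_nu > Eg_true, 1e4 * (h_nu - Eg_true) ** 2 / h_nu, 0.0)
T = np.exp(-alpha * d)                         # Beer-Lambert transmittance

# Recover alpha from T and build the indirect Tauc variable (alpha*h*nu)^(1/2)
alpha_rec = -np.log(np.clip(T, 1e-12, 1.0)) / d
tauc = (alpha_rec * h_nu) ** 0.5

# Fit the linear region above the gap and extrapolate to tauc = 0
mask = tauc > 0.3 * tauc.max()
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
Eg_est = -intercept / slope
print(f"Estimated indirect band gap: {Eg_est:.2f} eV")
```

With real spectra, the choice of the linear region is the main source of the ± 0.02 eV spread quoted in the results.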
Results and discussion
The Ti films deposited by sputtering showed a granular, homogeneous morphology (as seen in the AFM image of figure 1(a)) with an RMS roughness of 20 nm and a uniform thickness (as observed in the SEM micrograph in figure 1(b) for the sample deposited during 20 min).
Current density versus anodization time curves were recorded during the anodization process as a guide to TNS formation and dissolution. Figure 2 shows a representative curve in which four stages (s1-s4) can be appreciated. The chemical reactions at the Ti-electrolyte interface [33,34] give rise to the formation of a superficial compact oxide layer, known as the barrier layer because it reduces the conductivity of the once purely metallic surface; this is the origin of the current density drop at s1 [33,34]. The appearance of pits (s2) gives rise to a peak [33,34]; in our study, three peaks (s2a-c) are observed, associated with a first, second and even a third family of pits, which act as nucleation sites for pores. As the synthesis time increases, the material at the pore edges gradually dissolves under the influence of F − ions, removing the thin oxide layer that covers the structure formed underneath. The electric field generated by the applied voltage also leads to a gradual increase in pore diameter over time. The nanotubes are formed at a constant current (starting at s3), and then dissolution overcomes the formation rate at s4 [33,34]. Beyond this point, there is an increment in the current, which indicates the approach to the conductive FTO film (needed in HPSCs), in contrast to non-conductive substrates, for which no current should be observed.
SEM images of the samples anodized at different times during the constant current stage (s3 to s4) were taken before and after a post-annealing treatment at 450 °C/4 h, verifying that the annealing did not change the morphology. Porous structures were observed in the 3 min and 4 min samples, as depicted in figures 3(a)-(b). For 5 and 6 min, dissolution of material between pores was appreciated (figures 3(c)-(d)), indicative of the formation of a transition structure between the merely porous and the tubular, known as nanosponges [33]. Films with larger pore sizes show lower contact angles (see table 1), which are preferred for a higher surface wettability [36,37]. The contact angles obtained from images such as the ones in figure 6 (bottom) were between 20 and 27°, values low enough to promote a more homogeneous growth of a hybrid perovskite film.
Raman spectra of all samples exhibit only the characteristic vibrational modes associated with TiO 2 in the anatase phase, as seen in figure 7 (top) for the 7 min sample. The anatase phase was confirmed by XRD using the JCPDS No. 21-1272 card.
Figure 7 (bottom) shows the diffractograms for porous and tubular structures (4 and 8 min). A crystallite size of 25 nm was calculated with the Debye-Scherrer equation [38][39][40]. A preferential (004) orientation was observed in both structures, as also reported for other TNS [32], but the preferential orientation was stronger in the tubular structures, in which the intensity ratio of the (101) to the (004) peaks was 1:3, compared to 1:2 for the porous ones.
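The Debye-Scherrer estimate used above is D = Kλ/(β cos θ). The wavelength is the Cu Kα value from the XRD setup (1.5405 Å); the shape factor K = 0.9 and the peak FWHM β = 0.33° are illustrative assumptions, not values taken from this work.

```python
# Worked Debye-Scherrer example: D = K*lambda / (beta * cos(theta)).
import math

wavelength_nm = 0.15405          # Cu K-alpha (from the XRD setup)
K = 0.9                          # shape factor (assumed)
two_theta_deg = 25.3             # anatase (101) reflection
beta_deg = 0.33                  # peak FWHM in degrees (assumed)

theta = math.radians(two_theta_deg / 2)
beta = math.radians(beta_deg)    # FWHM must be in radians
D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"Crystallite size: {D_nm:.0f} nm")  # ~25 nm with these inputs
```

With these assumed inputs the estimate lands near the 25 nm reported in the text; in practice β should be corrected for instrumental broadening before applying the formula.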
The UV-vis transmittance curves of anodized TiO 2 samples obtained from 3 to 9 min are shown in figure 8 (top). Transmittance was in the 70 ± 10% range for wavelengths above 500 nm, typical and usable values for HPSCs [41]. In general, the differences in transmittance can be associated with the differences in morphology and thickness of the samples [38] (except for the lower transmittance of the 6 min sample, which should be related to an incomplete oxidation of the Ti layer [31,38]). The observed oscillations are attributed to multiple reflections at the interfaces between the film and the substrate as well as between the film and air [42]. The small differences in the intersections may be associated with the morphological differences among samples [43]. The average indirect band gap was 3.37 ± 0.02 eV. Table 1 summarizes the pore and/or tube inner diameters as well as the lengths of the structures. The length of the sample anodized for 3 min (516 nm) is greater than the thickness of the initial Ti film (300 nm), which is due to the formation of the oxide. When the anodizing time increases to 4 min, the pore size increases and the length decreases (to 339 nm) because of the dissolution of the structure. The same trend in length is observed in the TNS obtained by anodization of the 600 nm Ti films for times between 5 and 10 min, but the pore size remains similar, in agreement with the anodization process at a fixed applied voltage [35,51,52]. All the obtained TNS lengths are in the interval of 300 to 1000 nm required for applications in perovskite solar cells [41].
Conclusions
Nanostructured TiO 2 films with porous, sponge and tubular morphologies were attained by anodization of Ti/FTO/glass films, using 30 V and anodizing times ranging from 3 to 9 min. After a heat treatment at 450 °C/4 h, all the structures exhibited the anatase phase with a (004) preferential orientation. Pore diameters were in the 27 ± 4 to 47 ± 6 nm range. Lengths were kept within the 300 to 1000 nm interval. Transmittance in the visible range was 70 ± 10%, the energy gap was 3.37 ± 0.02 eV and the water contact angle was between 20 and 27°. These characteristics fulfill the technical requirements, such as a glass substrate, nanometric pore size, length range, preferred orientation, transmittance, energy gap and water contact angle, that make the achieved nanostructured TiO 2 thin films potential candidates for the electron transport layers in hybrid perovskite solar cells. In addition, the results of this work open the possibility of preparing, at low cost, TiO 2 in thin-film form, with a specific nanostructure, on diverse substrates relevant for particular applications (as mentioned in references [6,7] and [12][13][14][15][16]).
Figure 1 .
Figure 1. (a) AFM image and (b) SEM cross-section image of a Ti film deposited by sputtering during 20 min.
Figure 2 .
Figure 2. Current density versus anodization time curve taken during the anodization of a 300 nm Ti film.
For longer anodizing times, 7 to 9 min, the morphologies were as shown in figure 4. The well-defined structures attained at 8 and 9 min (figures 4(b)-(c)), in contrast to the fuzzy ones obtained at 7 min (figure 4(a)), are associated with the complete formation of nanotubes, in accordance with the literature [31]. The formation of tubes was corroborated by SEM images of the cross section, such as the one in figure 4(d). The tubular structures begin to collapse at 10 min because of their dissolution (figure 5(a)), which increases for longer times, as observed in figure 5(b) for the sample anodized for 14 min.
Figure 3 .
Figure 3. SEM images of TiO 2 structures obtained by anodization of Ti films 300 nm thick during (a) 3 min and (b) 4 min, and of Ti films 600 nm thick anodized for (c) 5 min and (d) 6 min; all of them post-annealed at 450 °C/4 h.
Figure 4 .
Figure 4. SEM images of the TiO 2 structures obtained by anodization of 600 nm Ti films during (a) 7 min, (b) 8 min and (c) 9 min, post-annealed at 450 °C/4 h. (d) SEM image of the cross section of the sample in (b).
Figure 6 .
Figure 6. Wet contact angles of the FTO substrate and compact TiO 2 (top), and of nanostructured TiO 2 samples obtained by anodization for 3 and 7 min and post-annealed at 450 °C/4 h (bottom).
Figure 7 .
Figure 7. Top: Raman spectrum of the sample with 7 min of anodization. Bottom: diffractograms of (a) the Ti/FTO/glass substrate and the TiO 2 structures obtained by anodization during (b) 4 min and (c) 8 min, post-annealed at 450 °C/4 h.
Table 1 .
Pore and/or inner diameters, lengths, band gaps and contact angles of the TNS samples prepared by anodization of Ti films, using different times and post-annealed at 450 °C/4 h.
DYRE: a DYnamic REconfigurable solution to increase GPGPU’s reliability
General-purpose graphics processing units (GPGPUs) are extensively used in high-performance computing. However, it is well known that these devices' reliability may be limited by faults arising at the hardware level. This work introduces a flexible solution to detect and mitigate permanent faults affecting the execution units in these parallel devices. The proposed solution is based on adding some spare modules to perform two in-field operations: detecting and mitigating faults. The solution takes advantage of the regularity of the execution units in the device to avoid significant design changes and reduce the overhead. The proposed solution was evaluated in terms of reliability improvement and area, performance, and power overhead costs. For this purpose, we resorted to a micro-architectural open-source GPGPU model (FlexGripPlus). Experimental results show that the proposed solution can extend the reliability by up to 57%, with overhead costs lower than 2% and 8% in area and power, respectively.
Introduction
Currently, GPGPUs are a major workhorse in applications involving data-intensive operations, such as multimedia and high-performance computing (HPC). Moreover, GPGPUs are now widely used in the electronic equipment of safety-critical systems, for example, in the automotive and robotics fields [1]. In all these cases, the reliability can be limited by the effects of transient and permanent faults affecting the GPGPU hardware [2]. In fact, GPGPUs are implemented using the latest technology scaling to increase performance and reduce power consumption. However, some studies [3] show that devices manufactured with cutting-edge technologies can be particularly prone to faults arising during the operational life, caused by aging, wear-out, or external effects (e.g., radiation) [4], so compromising their reliability [5]. As a result, traditional end-of-line testing is no longer sufficient to properly address these emerging reliability challenges.
In order to tackle reliability issues, some solutions have been proposed in the literature [6]. These can be divided into three main categories: software, hardware, and hybrid.
The software solutions rely on modified versions of the application code to harden it and mitigate fault effects [7]. These solutions are noninvasive, flexible, and have been proven in GPGPUs [8], but can be very costly in terms of performance [9]. In [10], the authors developed fault-tolerance solutions for parallel processors by adjusting the instruction-level parallelism, increasing the reliability at the cost of workload performance. On the other hand, the authors in [11] propose a reduced-precision Duplication with Comparison (DWC) approach to increase the reliability in GPUs by replicating instructions and executing them in execution units at different precisions, obtaining redundancy at zero hardware cost but degrading performance and output precision.
The hardware solutions are based on special structures devoted to verifying the correct operation of the modules and mitigating errors in them. They may use design for testability (DfT) structures, e.g., Built-In Self-Test (BIST), to detect faults, and hardware redundancy and spare modules to provide fault tolerance (Built-In Self-Repair, or BISR). Among the possible hardware solutions, a popular one relies on including redundancy to guarantee the long-term reliability of GPGPU devices. Special strategies, such as Error-Correcting Codes (ECCs), contribute to reducing the sensitivity to faults in some large structures, such as memories, register files, and communication interfaces [12]. However, the mitigation of faults in other modules, e.g., execution cores, scheduling controllers, and dispatchers, is more complex. The authors in [13] employed a dual-lockstep structure to provide fault-tolerance capabilities to processor-based devices against transient fault effects. In contrast, the authors in [14] explore several fault-tolerant strategies to harden a processor against faults. The results show that complex devices can take advantage of several strategies, depending on the affected module, to reduce the total overhead, which in principle can also be adopted in GPU devices. Other strategies, such as DWC [15], Triple Modular Redundancy (TMR), BISR, or combinations of redundancy, custom controllers and hardware diversity [16,17], are adopted for elaborate modules with considerable costs in terms of hardware and power overhead [18]. However, the additional cost in terms of design and manufacturing of these detection and mitigation strategies may be unaffordable. In the past, several solutions have been proposed (at different levels of granularity) for hardening processor-based systems.
These solutions include the repair of pipeline stage modules [19], reconfigurable structures for processing elements in a device [20], repair of embedded SRAM memories [21] and the test and repair of different modules in parallel architectures [22]. However, their extension to GPGPUs has not been fully explored.
Hybrid mechanisms are optimized solutions that combine hardware structures and software mechanisms to detect [23] and mitigate faults [24,25]. The hardware and hybrid solutions must be adopted during the design stages of a device and may significantly impact the devices. Nevertheless, their main advantage is the low-performance overhead during the operation [26].
A compelling case is when we aim at protecting the execution cores of a GPGPU. These are regular structures that represent a considerable percentage of the area and are the principal operative elements inside the GPGPU. In [27], the authors propose a fault detection and mitigation technique for large modules by employing a DWC mechanism. In [28], a hybrid approach called Dynamic Duplication with Comparison (DDWC) is presented, aimed at detecting faults in the execution cores during in-field operation. Similarly, in [29] and [30], the authors propose mitigation solutions for similar structures by adapting the BISR mechanism to replace faulty modules during the manufacturing process and the in-field operation, respectively. Nevertheless, most currently adopted fault-tolerance solutions for GPGPUs do not provide both the detection and the mitigation of faults using the same architecture. Moreover, only in rare cases are the solutions intended to operate and be flexibly activated during the in-field execution of a device.
In this work, we propose a solution called DYnamic REconfigurable structure for in-field detection and mitigation of faults (DYRE), based on the combination of the classical DWC mechanism and the BISR approach. DYRE is intended to increase the reliability and operative life by supporting both detection and mitigation of permanent faults in the execution cores of a GPGPU. This mechanism allows the reconfiguration of the GPGPU to identify (through comparisons) and mitigate (by module replacement) possible faults arising during the in-field operation. The architecture of a GPGPU adopting DYRE can be dynamically reconfigured using custom instructions purposely added to the instruction set. Finally, the DYRE architecture is designed to avoid significant changes in the original GPGPU design and minimize its impact on execution performance.
The main contributions of this work are as follows:

1. the proposal of an architecture to detect permanent faults in the execution units of GPGPU cores and mitigate their effects during in-field operation;
2. the evaluation of the hardware, power, and performance costs of the DYRE architecture, as well as of its benefits in terms of reliability enhancement.

The results and analyses show that the overall GPGPU reliability in the execution cores improves by up to 57% when the DYRE architecture is used. Moreover, DYRE introduces less than 1% of performance degradation, less than 5% of hardware (area) cost, and less than 8% of additional power consumption. Hence, we claim that the proposed approach may represent a viable and promising solution to develop highly reliable GPGPUs with minimum design and overhead costs with respect to commercially available GPUs. The manuscript is organized as follows: Sect. 2 introduces the architecture of a GPGPU, also detailing the model employed to implement the proposed approach (FlexGripPlus). Section 3 describes the proposed fault-tolerance technique. Section 4 reports the experimental results and the performance features of the proposed fault-tolerance mechanism. Finally, Sect. 5 draws the main conclusions of the work and highlights some future work.
Classical fault-tolerance mechanisms
This subsection describes the two classical approaches providing fault detection (Duplication with Comparison) and fault mitigation (Built-In Self-Repair) in digital devices. These approaches are the basis for the development of the proposed DYRE structure.
Duplication with comparison
DWC is a classical fault-tolerance approach and can be employed at several levels of abstraction. In this work, we describe the DWC approach used at the hardware abstraction level; however, the same concept can also be applied from the hardware up to the system level. At these levels, DWC employs the concept of Sphere of Redundancy/Replication [31], which is based on replicating one or more components (modules or systems) with the purpose of increasing the fault-tolerance capabilities of a device or system.
The replication of components is used to perform simultaneous parallel operations in the original and redundant components. The output results of both (the original and the redundant one) are compared to identify any mismatch, which is used to flag errors or indicate the presence of a fault. Thus, the active use of DWC increases the fault detection capabilities during the system's operation. The DWC structure is commonly complemented by other modules, including input and output selector switches, one comparison unit, and one general controller.
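The DWC scheme above can be sketched behaviorally: the same operation runs on a primary unit and its replica, and a mismatch between the two results raises a fault flag. The "units" here are plain Python callables standing in for hardware modules, and the fault model is an illustrative assumption.

```python
# Behavioral sketch of Duplication with Comparison (DWC): run the same
# operation on a primary unit and a replica, compare the results, and
# raise a fault flag on any mismatch.
def dwc_execute(primary, replica, op, a, b):
    """Run op on both units and compare; return (result, fault_flag)."""
    r1 = primary(op, a, b)
    r2 = replica(op, a, b)
    return r1, (r1 != r2)

def healthy_sp(op, a, b):
    return a + b if op == "ADD" else a * b

def faulty_sp(op, a, b):
    # Assumed stuck-at style fault: the adder always drops the lowest bit.
    r = healthy_sp(op, a, b)
    return r & ~1 if op == "ADD" else r

result, fault = dwc_execute(healthy_sp, healthy_sp, "ADD", 2, 3)
print(result, fault)      # 5 False (units agree)
result, fault = dwc_execute(healthy_sp, faulty_sp, "ADD", 2, 3)
print(result, fault)      # 5 True  (mismatch detected)
```

Note that a bare comparison cannot tell which of the two units is faulty; that is why the hardware scheme only raises a flag and leaves diagnosis to a higher level.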
Built-in self-repair
BISR is a fault-tolerance mechanism that allows the mitigation of faults in a system by replacing an affected component (module or system) with a spare, fault-free copy. BISR follows a regular switching strategy that consists of adding one or more spare copies of a component and activating them (switching the inputs and outputs of the component) when a faulty component is detected, so correcting the fault effect and extending the operative life and reliability of the system.
The BISR strategy includes input and output switching modules attached to the target components and the spare ones, which are placed in parallel. One general redundancy manager is employed to manage the operation of the active components in the system and to perform the swapping among them.
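The BISR switching strategy can be sketched as a redundancy manager that keeps a map from logical channels to physical units and reroutes a channel to a spare when the unit behind it is flagged faulty. Units are plain callables for illustration; the class and method names are hypothetical.

```python
# Behavioral sketch of Built-In Self-Repair (BISR): a redundancy manager
# reroutes a logical channel from a faulty unit to a cold-standby spare.
class RedundancyManager:
    def __init__(self, units, spares):
        self.units = list(units)              # active physical units
        self.spares = list(spares)            # cold-standby spare units
        self.route = list(range(len(units)))  # logical -> physical index

    def execute(self, channel, x):
        return self.units[self.route[channel]](x)

    def repair(self, channel):
        """Swap the unit behind `channel` for the next available spare."""
        if not self.spares:
            raise RuntimeError("no spare units left")
        self.units.append(self.spares.pop(0))
        self.route[channel] = len(self.units) - 1

good = lambda x: x * 2
bad = lambda x: 0                 # assumed permanently faulty unit
mgr = RedundancyManager([good, bad], spares=[good])

print(mgr.execute(1, 21))         # 0  (faulty result)
mgr.repair(1)                     # switch channel 1 to the spare
print(mgr.execute(1, 21))         # 42 (spare takes over)
```

The routing table plays the role of the hardware switching modules: callers keep addressing the same logical channel while the physical unit behind it changes.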
Fundamentals of GPGPU organization
A GPGPU is organized following the single instruction multiple data (SIMD) paradigm and its adaptations, such as the single instruction multiple thread (SIMT) by NVIDIA. The microarchitecture of a GPGPU is based on multiple processing modules (also known as Streaming Multiprocessors, or SMs). The SMs are composed of local controllers, schedulers, cache memories, a register file, and several execution units (EUs, CUDA cores, or Scalar/Streaming Processors (SPs)) devoted to simultaneously executing one instruction on multiple data.
One general controller (Block scheduler controller) submits tasks to each SM in the GPGPU. The local controllers in the SM manage and trace the operation of the assigned tasks by submitting groups of threads (Warps) to the available SPs for parallel operation. Each SP can be composed of an integer and a floating-point module. Modern SMs also include hardware accelerators for specific tasks, e.g., matrix operations and neural networks processing.
FlexGripPlus [32] is an open-source RT-level model of a soft GPGPU described in VHDL, and is an extended version of the FlexGrip model [33] initially designed by the University of Massachusetts. FlexGripPlus corrected some significant architectural restrictions and programming bugs present in FlexGrip, while preserving the original description.
FlexGripPlus implements the G80 architecture of NVIDIA and is also compatible with the CUDA programming environment with the compute architecture SM_1.0. The architecture of FlexGripPlus is based on a custom SM core composed of five pipeline stages (Fetch, Decode, Read, Execute, and Write), as shown in Fig. 1. The SM includes a warp scheduler controller that manages the execution of the instructions and controls the operations of each thread. One warp instruction is fetched, decoded, and dispatched to be executed into the available SPs in the SM. In the Read and Write stages, the operands are loaded and stored from/to one of the memory resources (register file, or shared, constant and global memory) in the system. The number of SPs in the SM can be selected among 8, 16, or 32.
The SPs are composed of multiple sub-modules to perform signed and unsigned arithmetic and logic operations. The inputs to each SP core are organized as data channels (iDCx) consisting of 32-bit input data operands (SRC1, SRC2, and SRC3) and predicate flags (4 bits). The output data channel (oDCx) is composed of the 32-bit result and the output predicate flag from each SP. These output channels are connected statically to the next pipeline stage in the SM. Similarly, the SPs are statically assigned to any thread/task by the controller in the SM. In the fault-tolerant architecture proposed in this work, the external control signals are redundantly used to configure each SP with the instruction to be executed.
Proposed solution
DYRE is a fault-tolerance architecture intended to detect permanent faults in the SP cores of an SM (in the GPGPU) and mitigate their effects. This mechanism takes advantage of the high regularity and homogeneous composition of the SP cores, the parallel execution of threads/tasks on the SPs, and the distribution of tasks among the SPs to reduce the cost in terms of hardware and performance. The DYRE architecture is based on the addition of one or more spare SPs (SSPs) in the Execute stage of the SM. Each additional SSP can be employed for result comparison or for replacement purposes. It includes a mechanism for dynamically deciding how to use the available SSPs, thanks to the introduction of two additional instructions in the GPGPU instruction set. The execution of these instructions allows pairing an SSP with a given SP (comparing the results they produce) or substituting one SP with a given SSP, respectively.
In particular, an SSP can (1) be paired to an SP, so that it performs the same operations on the same input data; the results produced by the paired SP and SSP can then be compared, which allows detecting possible faults affecting one of the two modules; or (2) replace a faulty SP core.
The architecture of a DYRE GPGPU differs from a normal one only in the Execute stage (see Fig. 2). It includes one or more SSP cores, three crossbar units (input, middle, and output), some configuration registers, one comparator block (COMP), a controller unit, and some decoding logic. This structure provides flexibility allowing two non-exclusive operational features: (1) the in-field detection of faults and (2) the in-field mitigation of faults in the SPs.
The specific architecture of a DYRE GPGPU can be flexibly and dynamically decided by executing ad hoc assembly instructions introduced in the GPGPU instruction set to activate the fault detection and fault mitigation features. The DETection Trigger (DETT) instruction enables the DYRE comparison structure and configures and selects an SSP to be paired with an SP. The SSP and SP to be compared are included as part of the instruction format. Thus, when active, the DETT instructions enable comparing the selected pair (SP and SSP) results for fault detection purposes. Similarly, the MITigation Trigger (MITT) instruction enables the replacement structure of DYRE and reconfigures the GPGPU, substituting one SP with an SSP for mitigation purposes. The instruction format in MITT includes fields to select the SP and SSP to be commuted. For both cases, a programmer can employ any selection policy to control the comparison and replacement among the available SPs and SSPs.
Moreover, both instructions can reconfigure the DYRE architecture at the cost of only one instruction cycle and are intended to be included in a running application, dynamically enabling both features of a DYRE GPGPU during in-field operation. Both operational features (detection and mitigation) use the same hardware structures, thus reducing the overall hardware cost. However, more than one SSP is required to use both operational features of DYRE simultaneously.
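The effect of the two instructions can be sketched as a behavioral model of the DYRE execute stage: DETT pairs a spare SP with an active SP for result comparison, and MITT reroutes an SP's data channel to a spare. SPs are modeled as callables and the crossbars become routing tables; this is an illustrative model under assumed fault behavior, not the RTL implementation in FlexGripPlus.

```python
# Behavioral sketch of the DYRE execute stage: DETT enables comparison
# against an SSP, MITT reroutes a data channel from a faulty SP to an SSP.
class DyreExecuteStage:
    def __init__(self, sps, ssps):
        self.sps, self.ssps = list(sps), list(ssps)
        self.route = list(range(len(sps)))    # input/output crossbars
        self.paired = {}                      # SP index -> SSP index (DETT)

    def dett(self, sp, ssp):                  # enable fault detection
        self.paired[sp] = ssp

    def mitt(self, sp, ssp):                  # enable fault mitigation
        self.route[sp] = len(self.sps) + ssp  # redirect channel to the SSP
        self.paired.pop(sp, None)

    def execute(self, sp, a, b):
        units = self.sps + self.ssps
        result = units[self.route[sp]](a, b)
        fault = False
        if sp in self.paired:                 # middle crossbar -> comparator
            fault = result != self.ssps[self.paired[sp]](a, b)
        return result, fault

add = lambda a, b: a + b
stuck = lambda a, b: (a + b) | 8              # assumed fault: bit 3 stuck at 1
stage = DyreExecuteStage(sps=[add, stuck], ssps=[add])

stage.dett(sp=1, ssp=0)                       # pair SP1 with SSP0
print(stage.execute(1, 2, 3))                 # (13, True): fault detected
stage.mitt(sp=1, ssp=0)                       # replace SP1 with SSP0
print(stage.execute(1, 2, 3))                 # (5, False): fault mitigated
```

The single routing-table update per call mirrors the one-instruction-cycle reconfiguration cost claimed for DETT and MITT.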
Fault detection
This operational feature is inspired by the DWC mechanism and uses a sphere of redundancy composed of the active SPs in the SM. The DYRE architecture uses this feature to detect faults through the comparison of results. When a DETT instruction is executed, the local controller enables the fault detection feature, and one SSP and one SP are selected to perform all the following instructions in parallel. This procedure is transparent to the execution of the application. The SP and the SSP can be paired for a time interval or for the entire execution of the application. Moreover, the target SSP or SP can be replaced with another core at any moment of the in-field operation by executing a new DETT instruction.
More in detail, the DYRE architecture uses two crossbars (input and middle) to select a target SP. Both crossbars select and duplicate the input and output data channels to feed the SSPs core and the comparator block, respectively.
After each operation, the results of the SP and the SSP are compared. The comparator raises a fault flag when a mismatch is detected. The flag is propagated to the next stage and sent to the exceptions unit in the GPGPU or to the host.
Fault mitigation
This operational feature is based on an adaptation of the BISR mechanism, and it is intended to mitigate the effect of faults in the cores by disabling and replacing one affected SP core with one of the available SSPs in the system. The SSPs are organized as cold standby modules and are active only when required. Correspondingly, the inactive SP cores are disabled to reduce the power consumption during inactivity.
The static distribution of tasks among the SPs allows the correction of faults by switching the input data from a faulty unit to a fault-free unit. This behavior also reduces further changes in other modules of the GPGPU. For this purpose, it is possible to mask the replacement of a faulty SP by an SSP. Thus, the fault-mitigation structure operates transparently from the memory and scheduling controller's point of view.
More in detail, the execution of the MITT instruction activates two crossbars (input and output, as depicted in Fig. 2) to redirect the data-flow of the data channel from one active SP (the faulty core) to the selected SSP (fault-free), so mitigating the fault effect. The effect of the MITT remains active for all subsequent instructions.
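The crossbar rewiring can be modeled as a channel-to-core routing table that a MITT instruction updates; this is a minimal sketch with invented names, not the hardware implementation:

```python
# Minimal model of the mitigation remapping: the input/output crossbars
# are represented as a channel->core routing table, and a MITT rewires
# one faulty SP's data channel to a spare SSP. Illustrative only.

class DyreCrossbar:
    def __init__(self, n_sps, n_ssps):
        # Channel i initially feeds SP i; SSPs start as cold spares.
        self.route = {ch: ("SP", ch) for ch in range(n_sps)}
        self.free_ssps = list(range(n_ssps))

    def mitt(self, faulty_sp, ssp):
        """Replace `faulty_sp` with `ssp` for all subsequent instructions."""
        self.free_ssps.remove(ssp)
        self.route[faulty_sp] = ("SSP", ssp)  # transparent to the scheduler

xbar = DyreCrossbar(n_sps=8, n_ssps=2)
xbar.mitt(faulty_sp=3, ssp=0)
# Channel 3 now feeds SSP 0; all other channels are untouched.
```

Because only the routing entry changes, the memory and scheduling controllers keep addressing the same logical channel, matching the transparency described above.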
Suggested methods of use
The DYRE architecture is intended to operate in two cases: (1) in the Power-on/reset phase of the device and (2) during the in-field operation of an application.
At power-on, the DYRE architecture is inactive; the SSPs are initially idle as cold standby modules. A specially crafted test program applies patterns to check for the possible presence of permanent faults in each SP. This program includes several DETT instructions that activate one SSP and successively pair it with the available SPs to perform comparisons while executing the same instructions on the same data. If a mismatch is found, the SP is labeled as faulty. The program then replaces the faulty SPs with SSPs through MITT instructions, and the application starts. It is worth noting that the generation of suitable test programs for the SPs is outside the scope of this work; however, previous works [34] showed that generating them is feasible.
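The power-on check described above can be summarized as a small loop; the pattern set, core outputs, and selection policy below are placeholders, not the paper's actual test program:

```python
# Sketch of the power-on procedure: each SP runs the same stimuli as a
# known-good reference SSP (the DETT pairing), mismatching SPs are
# labeled faulty, and each faulty SP is mapped to a spare (the MITT
# replacement). Inputs are placeholder values.

def power_on_check(sp_outputs, ssp_output, n_ssps):
    """Return the list of faulty SPs and the SP->SSP replacement map."""
    faulty = [i for i, out in enumerate(sp_outputs) if out != ssp_output]
    spares = list(range(n_ssps))
    replacement = {}
    for sp in faulty:
        if not spares:
            break                       # more faults than spares
        replacement[sp] = spares.pop(0)  # MITT: sp -> ssp
    return faulty, replacement

# 8 SPs running the same pattern; SP 5 produces a wrong result.
outs = [0xAB] * 8
outs[5] = 0xAF
faulty, repl = power_on_check(outs, ssp_output=0xAB, n_ssps=2)
```

Note the first-free-spare policy here is one possible choice; as stated earlier, any selection policy can be employed.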
In contrast, using DYRE during in-field operation requires adding one or several DETT instructions to the application code. Each DETT instruction selects one SSP, activating fault detection through comparisons. When a comparison produces a mismatch during the execution of the application's instructions, a fault is detected. A subroutine then issues a MITT instruction to replace the faulty SP with one SSP. This subroutine can be launched as soon as a mismatch is raised or during the idle times of the application. Given the low reconfiguration cost of the mitigation feature, the replacement subroutine is intended to substitute the faulty core with minimal latency in the execution of the application.
It is worth noting that the DYRE architecture does not include any fault administration structure to store the current configuration state so that it could be restored after a device power-off or reset. Such a structure could be composed of a non-volatile memory (NVM) and some controllers to store the state and role of the SPs and SSPs for both operational features. Hence, at each power-on, a complete test is required to rebuild the map of faulty/fault-free cores. Alternatively, the map can be updated at a given frequency, depending on the characteristics of the application and on parameters of the structure, such as the number of SSPs.
Implementation
DYRE was implemented in FlexGripPlus, modifying the Decode, Read, and Execute stages. The hardware to support the DETT and MITT instructions was added in the Decode stage. Similarly, a bypass mechanism was added and some changes were made to the memory controllers in the Read stage to add flexibility to the instructions. The implementation allows the adoption of the DYRE architecture with any of the three SP configurations (8, 16, and 32) of the model.
The Execute stage includes the additional SSPs, the crossbars, and the controllers of the DYRE architecture. The main purpose of the crossbars is the selection of the input and output data channels (iDCx and oDCx) that feed the SPs and SSPs in the system. The input crossbar selects one of the iDCx feeding the active SP cores and can duplicate or switch the input data to one of the SSPs. In the case of duplication, the selected SSP redundantly executes precisely the same operation as the selected SP. In the case of switching, the input crossbar substitutes the iDCx of one SP core and feeds a selected SSP. The control signals of the SP cores are statically shared among the SPs and SSPs in the system.
The middle crossbar is composed of two independent crossbars used to feed the two inputs of the COMP module. COMP is only used during the fault detection operation and consists of a bitwise comparator that compares the results and output flags from two execution units (SPs or SSPs). On the other hand, the output crossbar manages the results coming from the active SPs and SSPs. This crossbar is used to select the output channels (osDCx and ossDCx) from the active SPs and SSPs and feed the next pipeline stage. The flexibility of the middle crossbar allows the comparison of two SSPs when the mitigation and duplication modes are simultaneously activated.
The input and output crossbars are in fact meta-crossbar and multiplexer structures used to preserve the same type of input and output data channels in the Execute stage and to and from the other stages of the SM.
Some configuration registers are employed to select among the operational features (detection, mitigation, or both). The local controller configures the DYRE architecture using decoded commands that come from the DETT and MITT instructions. Some decoding logic is included to manage the two operational features when controlling the crossbar structures.
The DETT and MITT instructions were designed to select the channels or target cores using operands coming from an immediate value or a general-purpose register. This flexibility in the instruction format allows the dynamic selection of the target core during the in-field operation. Both instructions use a format composed of six bits stating the instruction type. The other five bits select the input data channel to be switched for duplication or replacement, and five bits select the target SSP core to be used.
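The bit-level layout just described (a 6-bit instruction type, 5 bits for the input data channel, and 5 bits for the target SSP) can be sketched as follows; the field ordering and the opcode values are assumptions chosen for illustration, not the actual encoding:

```python
# Hypothetical packing of the DETT/MITT fields: 6-bit opcode,
# 5-bit channel selector, 5-bit SSP selector (field order assumed).

DETT, MITT = 0b000001, 0b000010   # illustrative opcode values

def encode(opcode, channel, ssp):
    """Pack the three fields into a 16-bit instruction word."""
    assert channel < 32 and ssp < 32          # 5-bit fields
    return (opcode << 10) | (channel << 5) | ssp

def decode(word):
    """Unpack (opcode, channel, ssp) from an instruction word."""
    return (word >> 10) & 0x3F, (word >> 5) & 0x1F, word & 0x1F

w = encode(MITT, channel=3, ssp=1)
assert decode(w) == (MITT, 3, 1)
```

The 5-bit fields are consistent with addressing up to 32 SPs per SM, matching the largest configuration of the model.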
In order to use DETT and MITT, a programmer only needs to add either or both instructions, as part of the application, to activate the detection (DETT) or mitigation (MITT) features of DYRE. In the simplest case, DETT and MITT can be added before the original application code, activating static detection and mitigation. Advanced use of DYRE requires adapting the application to include the instructions, so that one or several DETT instructions enable different comparisons among the cores and spare cores; once a fault is detected, a MITT instruction replaces the faulty core with an available spare. Finally, there is also the possibility of developing special test routines, including DETT and MITT, to perform functional testing on the cores before starting an application's execution. This alternative is intended for the power-on/off stages of the system.
Experimental evaluation
Two evaluations are performed on the proposed mechanism. First, an overhead assessment determines the cost of the DYRE architecture in terms of hardware, power, and performance. For this purpose, the DYRE architecture is compared against the original design, against DDWC, which is based only on fault detection [7], and against BISR, which is based only on fault mitigation [9]. The original GPGPU and the three fault-tolerance mechanisms were synthesized with the Design Compiler tool using the 15 nm NanGate Open Cell Library and a clock frequency of 500 MHz. It is worth noting that the internal memories were not synthesized. Figure 3 reports the hardware and power overhead results for each setup. A second evaluation then analyses the reliability features of the proposed mechanism.
Hardware overhead analysis
Two cases were considered for the hardware overhead evaluation: (1) considering only the affected modules, and (2) considering the whole system. In the first case, the assessment was performed considering the modules affected by modifications when implementing DYRE. In the second case, the cost of the entire design is evaluated. All evaluations were performed using the three configurations with 8, 16, and 32 SPs.
According to the results, the hardware cost of implementing the instructions in the Decode stage is lower than 5% and almost negligible for the Read stage ( ≈ 0.3% ). Nevertheless, the implementation of the SSPs directly affects the hardware cost in the Execute module. For a configuration of 8 SPs, the cost of using two SSPs is lower than 13%, but it increases to 42% when DYRE is configured with the same number of SPs and SSPs. Across the SP configurations, the hardware overhead in the Execute module and in the entire design is inversely proportional to the number of SPs: larger SP configurations present lower hardware overhead. In Execute, when adding 25% of SSPs, the cell and area costs are around 10% and 8%, respectively; when adding 50% of SSPs, these costs are about 22% and 17%.
On the other hand, the hardware overhead in the design's logic is lower than 7% for all configurations. In the case of two SSPs, the cell and area overhead are lower than 2%, causing a minimal impact on the design when using DYRE. When the SM is configured with 32 SPs, the addition of one or two SSPs produced negative percentages of hardware overhead (see Fig. 3, which reports the percentage overhead cost of the DYRE architecture on each adapted module and in the entire GPGPU). However, these values are due to the optimization constraints in the synthesis tool, and the effect is translated into power overhead for these configurations.
Power and performance analysis
Figure 3 shows that the power overhead in the Decode module is minimal ( < 5% ) and almost negligible in the Read module for all SP and SSP configurations. In the Execute module, the addition of one or two SSPs causes a moderate average power cost of 14 to 17% across all SP configurations. When DYRE is configured to include 50% of SSPs for each SP configuration, the power cost is moderate (around 23.7% and 25.9%), and the overhead reaches up to 34% when the numbers of SSPs and SPs are equal. Nevertheless, the cost for the entire logic remains stable and lower than 8% in all configurations.
In terms of performance, the DYRE architecture does not introduce more than 1% of degradation in the critical path for all the evaluated configurations.
Although the synthesis of the model used only the clock constraint, the results in Fig. 3 show the distribution and the trend to consider when implementing the DYRE solution. In this way, the addition of two SSPs can be affordable in terms of hardware ( < 2% ) and power ( < 8% ) costs.
An overhead comparison of the DYRE structure with the DDWC and BISR architectures is reported in Fig. 4. Each strategy was implemented and synthesized for the three possible SP configurations. The results show that the hardware overhead of DYRE is the lowest of the three strategies ( < 5% ) and decreases as the number of SPs in the design increases.
It must be noted that DYRE is a reconfigurable structure, so the power consumption directly depends on the number of active features in the structure. When the comparison mechanism is active, the additional power cost is mainly caused by the active SSP. In contrast, when only the mitigation feature is employed, the power consumption remains the same as in the original design: replacing one SP with one SSP does not add any power load. It is worth noting that an SSP paired for detection operates as a hot spare, while the unused SSPs remain as cold standby modules. During the configuration phase, however, a transient power increment occurs when activating the controller and managing the switches.
Nevertheless, this transient power cost is almost negligible, as most of the consumption is due to the active SPs in the SMs. From the synthesis results, the SP cores consume about 70% of the power in the Execute pipeline of the SM and about 55% for the entire design. Thus, using the DYRE structure with one active SSP for fault detection adds on average 15.75% of power in the Execute pipeline of the SM; see Fig. 3 (DYRE with 8 SPs and 1 SSP).
Balancing the trade-off between performance and power consumption requires evaluating the power consumed by a workload. This consumption serves as the basis for selecting a feasible switching period for the fault detection feature in the DYRE structure, so that the detection capabilities of DYRE remain active with a controlled cost in terms of performance (due to the added instructions) and power.
More in detail, the DYRE architecture increases the overall power consumption of the system by 4.55% (8 SPs) to 8.72% (32 SPs) with respect to the original design. This behavior can be explained by considering that the additional structures (controller, multiplexers, and the comparator block) remain active and consume static power even when the DYRE architecture is inactive. However, this cost might be reduced by including power optimization strategies during synthesis; it should be noted that power optimization techniques were not used in the experiments.
Reliability analysis
The reliability of the DYRE architecture is estimated by determining the probability of correct operation, which depends on the number of available, fault-free SP and SSP modules. The system executes properly when all thread operations are performed without failures affecting the execution cores. This probability of correct operation can be complemented and expressed as the probability of failure (when some SPs or SSPs fail). The dual-module feature of the DYRE architecture influences the reliability calculation and the number of cumulative faults affecting SPs or SSPs that can be sustained before the overall architecture produces a failure.
During fault-free operation, the SP and SSP modules are identical and operate in parallel, independently of each other. Considering this scenario, the probability of correct operation of the DYRE architecture (R_DYRE) can be computed by adopting a binomial distribution function over the n SP and m SSP modules. R_DYRE depends on the probability of a fault in an SP (P_core(t)) at a given time t and on a limit K related to the active operational features (mitigation and detection), as reported in Eq. 1.
In detail, when the fault mitigation feature is active, a failure of the overall system occurs when k = (m + 1) execution units (SPs or SSPs) are faulty, since it is then no longer possible to complete the thread operations without errors. However, if both features are active, the system produces a failure when k = m execution units fail, since one SSP is used for comparison during in-field fault detection. Finally, when only the fault detection feature is enabled, or both features are disabled, there are no SSPs available for fault mitigation, so k = 0 and m = 0.
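Equation 1 itself was not captured in this text. A binomial form consistent with the description above (an assumption reconstructed from the surrounding definitions, not the verbatim equation) would be:

```latex
R_{\mathrm{DYRE}}(t) = \sum_{k=0}^{K} \binom{n+m}{k}\, P_{core}(t)^{k}\, \bigl(1 - P_{core}(t)\bigr)^{n+m-k}
```

where the sum counts the tolerable numbers of faulty units: K = m when only mitigation is active (failure at k = m + 1), K = m − 1 when both features are active (failure at k = m), and K = 0 with m = 0 when no SSPs are available for mitigation.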
In order to determine the advantage, in terms of probability of correct operation, for the SM using the DYRE architecture, we introduce Eq. 2. Equation 2 is composed of two terms. The first term corresponds to the probability of correct operation of the SM without the DYRE architecture (P_SM(t)). The second term represents the improvement in the probability of correct operation obtained by adopting the DYRE architecture (ΔR_DYRE). This term also includes the probabilities of correct operation of the switching modules (P_sw(t)) and of the controller (P_c(t)).
As can be noted in Eq. 2, the number of SSPs (m) determines the probability of correct operation of the GPGPU. The behavior of ΔR_DYRE with respect to the probability of correct operation of the SPs (P_core(t)) is plotted in Fig. 5, which describes the relationship between P_core(t) and ΔR_DYRE for multiple values of m: ΔR_DYRE shows an almost stable positive increment of about 20-40% as m increases and P_core(t) decreases. Fig. 5 thus reports the reliability benefit in the system for multiple probabilities of correct operation and the different benefits obtained when selecting a limited number of additional SSPs (m). According to the results, and considering a probability of correct operation between 0.9 and 1.0, the best trade-off is observed when two additional SSPs are used in the DYRE structure. The reliability relation shown in Fig. 5 therefore enables the design exploration of a potential DYRE composition. Furthermore, Fig. 6 compares the reliability behavior of a standard GPGPU (P_SM(t)) with that of an architecture adopting the two DYRE features (mitigation only (R1_DYRE) and detection+mitigation (R2_DYRE)), using a typical probability function (P_core(t) = e^{-t}) in both cases and adding two SSPs to the system. As can be observed, the reliability of a DYRE GPGPU (R1 and R2) remains higher than that of the baseline, so extending its operative life. A detailed analysis revealed that at some points the reliability is increased by up to 57% when the mitigation and detection features are active, and by 72% with the mitigation-only feature.
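As a numerical illustration of why spares help, assume each of the n SPs and m SSPs fails independently with probability p (a simple binomial reading of the reliability description; the exact formulas of Eqs. 1-2 may differ):

```python
from math import comb, exp

def r_plain(p, n):
    """SM without DYRE: all n SPs must be fault-free."""
    return (1 - p) ** n

def r_mitigation(p, n, m):
    """With m spares: tolerate up to m faulty units out of n + m."""
    q = 1 - p
    return sum(comb(n + m, k) * p**k * q**(n + m - k) for k in range(m + 1))

p = 1 - exp(-0.05)           # per-core fault probability at some time t
base = r_plain(p, 8)         # 8 SPs, no spares
hardened = r_mitigation(p, 8, 2)   # 8 SPs plus 2 SSPs
```

With these placeholder numbers, the two-spare configuration tolerates up to two faulty units instead of none, which is the mechanism behind the reliability gap plotted in Fig. 6.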
Although the DYRE structure was validated using the FlexGripPlus model with the G80 architecture, we claim that the proposed structure can be adopted in modern GPGPU architectures. DYRE targets the SP cores, which are also present in modern devices; the implementation requires no changes to the memory hierarchy or scheduling mechanisms, and only minimal effort is needed to implement the custom control instructions. Furthermore, modern GPGPU architectures tend to include more SPs per SM, increasing the number of transistors in the device as well as the area and power consumption. However, as reported in the results of Fig. 4, the DYRE structure requires a limited percentage of additional area, which is inversely proportional to the number of SPs to harden. Thus, DYRE provides reliability benefits, and its cost drops as the number of SPs increases.
In contrast, DYRE may require special adaptation procedures (splitting or replication of structures) when the target hardening scope includes modules with different features, such as floating-point units (FPUs), integer units (INTs), and special function units (SFUs), or with different precision formats (i.e., 32 or 64 bits).
Finally, the DYRE structure employs active redundancy and hot sparing strategies to increase the fault tolerance of the SP cores in GPGPU devices. A direct comparison with classical passive fault-tolerance strategies, such as DWC and TMR, shows that DYRE is less expensive in terms of hardware overhead and power consumption.
Conclusions
We introduced an in-field dynamic architecture (DYRE) to detect and mitigate permanent faults affecting the execution units of GPGPUs. DYRE provides a solution that can be employed during the operative life of a GPGPU and extends the reliability by up to 57% for most configurations of the execution units of these devices. The proposed solution (targeting execution units only) can easily be integrated with others targeting the remaining modules of a GPGPU.
The proposed strategy was implemented in a representative GPGPU, and the hardware and power overhead were measured. The results let us affirm that adding the proposed mechanism into a GPGPU design requires a minimum to moderate cost that directly depends on the number of additional cores included to support fault detection and mitigation.
As future work, we plan to extend the proposed mechanism, exploring reliable architectures for in-field detection and mitigation of faults in other modules of GPGPU devices, including special function units (SFUs), controllers, and other unprotected structures. Moreover, the proposed architecture can also be adapted to other parallel architectures, so additional analyses and evaluations can be performed as future activities.
Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks
Actor nodes are the spine of wireless sensor and actor networks (WSANs), collaborating to perform a specific task in an unverified and uneven environment. Thus, there is a possibility of a high failure rate in such unfriendly scenarios due to several factors, such as power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, or inter-actor connectivity problems. Therefore, it is extremely important to discover the failure of a cut-vertex actor and the resulting network partitioning in order to improve the Quality of Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical nodes. In addition, it replaces a critical node with a backup node prior to complete node failure, which helps balance the network performance. The packets are handled using the Network Integration and Message Forwarding (NIMF) algorithm, which determines the source of the forwarded packets, either an actor or a sensor. This decision-making capability controls the packet-forwarding rate to maintain the network for a longer time. Furthermore, to handle the routing strategy, the Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm decides the priority of the packets to be forwarded based on the significance of the information available in each packet. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNeT++ simulations.
In WSANs, the actor nodes possess sophisticated features that increase the power capability and network usage. Thus, maintaining inter-actor connectivity is indispensable in WSANs. The failure of an actor may cause loss of communication or a network disconnect, so actors must communicate with each other to guarantee full network coverage and to harmonize their actions for the best response. In case of actor failure, adjacent actors should restore the process, or the failed actor may be replaced by a backup actor; this solution, however, can be costly and infeasible. Alternatively, the actor can be replaced by one of its neighbor actors. Distributed localization algorithms are used to handle these situations: "hello" messages are exchanged by the neighbor actors despite the limitation of the network resources, and the nodes can compute their distances and positions from the information carried in these messages. Distributed localization techniques are used for calculating the distance between nodes, identifying the physical locations of actor nodes, reallocating the locations, and recovering from sensor or actor node failure.
Received Signal Strength Indicator (RSSI) is a range-based localization technique that uses the strength of the received signal to estimate the distance between the sender and the receiver nodes. At the receiver side, the RSSI technique is used to measure the signal strength [4]; higher RSSI levels indicate stronger signals. Some localization approaches utilize the RSSI for measuring the distance between nodes. In WSANs, most node localization and routing approaches depend on hop-count information rather than table-based routing protocols. With these motivations, this study introduces a mathematical model for determining the actor forwarding capacity in a WSAN using RSSI message information. The model aims at guaranteeing contention-free forwarding capacity. The paper also contributes to the literature by providing the best RSSI value to improve the traffic-forwarding process. State-of-the-art research is used to provide the best node-failure recovery process to extend the network lifetime. A trade-off between QoS improvement and reduction in power consumption is considered.
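As background for RSSI ranging, the standard log-distance path-loss model (a textbook model, not a formula taken from this paper) states RSSI(d) = P0 − 10·n·log10(d/d0), where P0 is the RSSI at the reference distance d0 and n is the path-loss exponent; inverting it gives a distance estimate:

```python
# Log-distance path-loss inversion for RSSI-based ranging.
# P0, d0, and n are environment-dependent; the defaults below are
# illustrative values, not calibrated measurements.

def rssi_to_distance(rssi, p0=-40.0, d0=1.0, n=2.0):
    """Estimate sender-receiver distance (metres) from an RSSI reading."""
    return d0 * 10 ** ((p0 - rssi) / (10 * n))

# A stronger signal (higher RSSI) maps to a shorter estimated distance.
print(rssi_to_distance(-40.0))   # at the reference distance d0
print(rssi_to_distance(-60.0))   # a 20 dB drop -> 10x the distance (n = 2)
```

In practice n must be fitted per environment (free space ≈ 2, indoor often 2.7-4), which is why RSSI-only ranging is noisy and is typically combined with other information, as the approaches above do with hop counts.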
The paper introduces three novel algorithms and an optimized low-latency deterministic model based on RSSI. First, the Node Monitoring and Critical Node Detection algorithm checks the entire network to determine a critical node before it fails, which greatly improves and balances the network performance. Second, the Network Integration and Message Forwarding algorithm focuses on improving the QoS by handling the packet-forwarding process, deciding the source of the forwarded packet (either a sensor node or an actor node); this simplifies the packet-forwarding flow, and an accurate forwarding process reduces the latency and bandwidth consumption. Third, the Priority-Based Routing for Node Failure Avoidance algorithm evaluates the significant information available in each forwarded packet and routes packets to the next node based on the nature of that information; in particular, it handles redundant data prior to routing the packets. Finally, an optimized RSSI model is introduced that selects different power strengths for each beacon in order to ensure its proper delivery to each node, aiming to reduce latency and to estimate the node energy level. As a result, QoS provisioning is maintained while extending the network lifetime.
Contribution and Paper Organization
The main difference between a WSAN and a WSN is the existence of actor nodes. Actor nodes add major enhancements and robustness to the sensor network. In WSNs, detection, processing, and handling are usually done by sensor nodes, which makes those nodes subject to failure and degrades the network performance; in WSANs, sensor nodes detect events and forward their information to actor nodes, which then handle the processing of information, decision making, and event handling. Thus, the actor node is a major component of a WSAN. The failure of an actor node can degrade the overall network performance; furthermore, it may partition the WSAN and limit event detection and handling. In this paper, a new method is proposed for an Efficient Actor Recovery (EAR) paradigm, which guarantees contention-free traffic-forwarding capacity. Unlike previous studies, EAR aims to provide an efficient failure detection and recovery mechanism while maintaining the quality of service. The proposed paradigm thus contributes three novel algorithms and an optimized low-latency deterministic model based on RSSI, which work as follows:
• Node Monitoring and Critical Node Detection (NMCND) algorithm monitors the activities of the nodes to determine the node types and distinguish critical nodes. NMCND checks the entire network to determine the critical nodes during the network lifetime and pre-assigns a backup node for each critical node, so that in case of critical-node failure the backup takes its place, improving and balancing the network performance.
• Optimized RSSI model selects different power strengths for each beacon in order to ensure its proper delivery to each node. This aims to reduce the latency and to estimate the node energy level. As a result, QoS provisioning is maintained while extending the network lifetime.
• Network Integration and Message Forwarding (NIMF) algorithm improves the QoS by handling the packet-forwarding process. NIMF works to reduce packet forwarding through critical nodes and enhances the network lifetime. Moreover, NIMF can decide the source of the forwarded packet, which enhances the packet-forwarding flow; an accurate packet-forwarding process reduces the latency and bandwidth consumption.
• Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm handles the routing process. PRNFA analyzes and evaluates the information in each packet in order to route it to the next node and determines the priority of the forwarded packets. In addition, PRNFA eliminates redundant data prior to routing the packets.
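The critical nodes NMCND looks for are cut vertices: actors whose removal partitions the connectivity graph. A brute-force check (illustrative only; this is not the paper's algorithm, which works on distributed 1-hop/2-hop information) makes the definition concrete:

```python
# A node is critical (a cut vertex) if removing it disconnects the
# remaining actors. Brute force: drop each node and test connectivity.

def is_connected(adj, nodes):
    """True if the induced subgraph on `nodes` is connected."""
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v in nodes and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def critical_nodes(adj):
    nodes = set(adj)
    return {u for u in nodes if not is_connected(adj, nodes - {u})}

# A chain topology: actors B and C each bridge two parts of the network.
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
```

Real WSAN algorithms avoid this global O(V·(V+E)) sweep and instead approximate criticality from local neighborhood information, which is exactly the distinction drawn in the related-work discussion below.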
The remainder of the paper is organized as follows: Section 3 presents a comprehensive review of the relevant literature. Section 4 formulates the problem and provides details of the optimized deterministic actor recovery system model. Section 5 provides the simulation setup and presents the analysis of our findings. Finally, the conclusions summarize the paper.
Related Work
In this section, the salient features of relevant related approaches are discussed. Although WSANs are known to improve the overall network performance, they are difficult to deploy. One shortcoming of WSANs is that these networks are adversely affected by inadequate positioning, power restraints, and routing limitations. To avoid these issues, the sensor and actor nodes should be deployed randomly or at fixed positions, depending on the application requirements. Actor nodes can be either mobile or static; node mobility improves network performance metrics such as coverage, connectivity, and lifetime [5]. A number of localization techniques have been studied and introduced for WSANs. Some techniques focusing on node positioning are provided in [6], while other studies focus on the failure-node recovery process [7][8][9][10][11][12][13][14][15].
WSANs are mainly deployed in harsh areas and are expected to support a long network lifetime. As mentioned earlier, actor nodes are essential in WSANs. Thus, WSAN algorithms should not only support actor node deployment and mobility, but also be robust enough to provide failure detection and self-healing network recovery.
Some works presented in the literature manage actor node deployment and mobility [16][17][18][19][20][21][22][23]. Nevertheless, they do not provide solutions in the case of actor node failure. The study in [20] presents a framework for real-time data report and task execution (RTRE), which aims to achieve event assignment through multi-actor coordination and real-time sensor-actor data collection. The authors of [23] present a self-organizing mobility control scheme for WSANs based on virtual electrostatic interactions, which aims to enhance actor node deployment and control the actors' mobility. The main purpose of these techniques is to manage actor mobility during deployment or event handling. Meanwhile, the mobility feature may cause continuous changes to the network topology; an actor may move outside a specific region range and is then defined as failed.
In WSANs, actor failure can be due to limited power, mobility, or topology changes. The mobility feature can cause actors to move outside the communication range, and the network topology may be affected by such behavior; effective topology management techniques should therefore be implemented. Several mechanisms were introduced to manage network failure concerning topology management [5,11,13,[24][25][26][27][28][29][30][31][32]].
Fault detection mechanisms are classified into proactive and reactive methods. In proactive methods, fault and restoration mechanisms are addressed during network setup: some implement fault-tolerant topologies at setup time, while others use redundant and backup nodes to ensure fault tolerance [33]. A reactive scheme, on the other hand, seeks to utilize the network resources and performs recovery through node repositioning. Reactive schemes require network monitoring in order to maintain node status; network status, recovery algorithms, and recovery scope are important factors in such schemes. Reactive recovery algorithms are classified into centralized and distributed algorithms and have been widely used in the literature [26,29,30,[32][33][34][35][36][37][38]. The scope of recovery refers to how many nodes are involved in the recovery: some mechanisms require a single node [35,36], while others identify a block of nodes for the recovery process [10,39].
Moreover, the impact of an actor fault can vary depending on the node's importance and type. Some fault management, detection, and recovery procedures classify actor nodes into critical and non-critical nodes [10,24,37]. A critical node is a node whose failure causes network partitioning. Most algorithms define the critical nodes using 2-hop message exchange information [8,36]. In contrast, a study conducted by Imran et al. [40] used 1-hop message exchange to identify the critical actors. This is performed by calculating the distance from the actor to its adjacent nodes: if the distance is less than the neighbor's communication range, the actor is defined as non-critical; otherwise, the actor is defined as critical.
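As a rough illustration of such distance-based criticality checks, the sketch below treats an actor as non-critical when every pair of its neighbors could still communicate directly without it. This is a simplified reading; the function and the pairwise test over neighbor positions are our own illustration, not the exact rule of [40].

```python
import math
from itertools import combinations

def is_critical_1hop(neighbor_positions, comm_range):
    """Actor is non-critical if every pair of its neighbors can still
    communicate directly (distance < comm_range) without the actor."""
    for a, b in combinations(neighbor_positions, 2):
        if math.dist(a, b) >= comm_range:
            return True   # some neighbors depend on this actor
    return False
```

The appeal of a 1-hop test is that it needs only locally available position information, at the cost of occasionally misclassifying nodes that a full 2-hop exchange would catch.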
The Distributed Recovery from Network Partitioning in Movable WSANs (DRNPM) [40] and Actor Positioning with Minimal Movement (APMM) [7] approaches were introduced for node recovery. These studies apply pre-assigned backup procedures to recover from actor node failure. However, both approaches fall short in addressing energy efficiency during backup node selection.
In [35], a two-hop actor node failure recovery algorithm is introduced in which an adjacent actor and a best candidate are selected to handle the failed node. However, this approach suffers from overhead. Haider et al. [36] introduced the Nearest Non-critical Neighbor (NNN) algorithm, which attempts to restore inter-actor connectivity lost through network splitting. The algorithm uses a localized and distributed approach: when the neighbors of a critical actor detect the failed actor, they initiate the reinstatement process, which replaces the critical node with the nearest non-critical actor to control any further splitting overhead. Overhead may occur when a critical actor's movement is selected for node replacement. Distinguishing between critical and non-critical actor nodes gives the NNN procedure a smaller recovery scope than [35]; however, the network is still adversely affected by split-relocation overhead. An energy-efficient method for recovery of lost connectivity (RLC), based on a two-point crossover genetic algorithm (GA), was proposed to reconnect the partitioned network after actor node failure [41].
A study that handles tolerating simultaneous failures (TSF) in WSANs is proposed in [42]. In this study, TSF is based on ranking the network nodes with respect to a pre-assigned root actor. The ranking uses a tree that helps the coordination process among the nodes, and the nodes are virtually grouped in order to minimize the recovery overhead. Cluster-based node failure algorithms based on a 1-hop node failure recovery process were proposed in [43,44].
Furthermore, the route duration improvement (RDI) algorithm, based on a decision tree, is incorporated into a reactive routing protocol to support WSANs [45]. It aims to select the most long-lived routes, with the decision tree supporting node mobility. Improved flooding control is used to improve route performance and reduce overhead; however, the power consumption caused by control packets is not addressed.
Similarly, recovering from node failure (RNF) based on the Least-Disruptive Topology Repair (LeDiR) algorithm is used to handle actor failure and recovery. RNF handles the autonomous repositioning of a subset of the actor nodes to restore connectivity [10]. The LeDiR algorithm depends on the local view of the actor node to develop a recovery plan that relocates the least number of nodes and ensures that no path between any pair of nodes is lengthened. In addition, LeDiR attempts to detect and manage cut-vertex node failure and performs recovery using path discovery and routing information. In the case of node failure, neighboring nodes re-compute their routing tables and derive their enrollment decisions for the recovery process. In response to the failure of a critical node, the neighbor belonging to the smallest block replaces this node. LeDiR assumes that each node calculates the shortest path to every other node and stores this information in its routing table. When a node failure occurs, the 1-hop neighbors identify whether the node is critical or non-critical using the shortest-path routing table; then the smallest block is identified. Within the smallest block, a neighbor of the failed node is chosen as the candidate node (CN) to replace it. If more than one neighbor belongs to the smallest block, the neighbor nearest the failed actor is chosen to manage the block movement. RNF aims to reuse existing route discovery activities in the network and enforces no extra pre-failure communication overhead. However, RNF consumes additional power. Moreover, a large number of nodes is involved in the recovery process, which leads to more topology changes; thus, the network is subject to extended failure, causing cascaded recovery.
Grid-based approaches have been introduced in many WSANs. In one such approach, actor-supported repositioning relies on a single static actor: each grid consists of a single static actor together with several sensor nodes that sense events. The static actor is responsible for obtaining the sensors' location information along with the grid information, which collects the region information. Once an event occurs, the sensor nodes forward the sensed data to the static actor node. The overhead of this approach increases when a single static actor node monitors multiple grid regions [46], and the problem worsens when multiple reporting regions simultaneously report to the same static node [14,30,[47][48][49].
The distributed prioritized connectivity restoration algorithm (DPCRA) [11] is introduced to cover the partitions and reinstate node connectivity using a small number of nodes. The algorithm aims to identify the negative effect of the actor on the partitions; repair is done locally while storing minimal information in each node. The main focus of the work is to use multiple backup nodes for partition recovery. Nevertheless, the algorithm does not address backup node selection criteria, so the chosen nodes may themselves have a high probability of failure. This can affect overall network performance, especially energy consumption, which in turn raises the probability of node failure throughout the network.
The actor critical recovery (ACR) algorithm is proposed for efficient resource utilization [7]. The algorithm aims to minimize the delay and determine the primary backup node so as to satisfy the application requirements. Akkaya et al. proposed the Distributed Partition Detection and Recovery algorithm (DPDR) to handle cut-vertex node failure recovery. The main objective of that work is to minimize node movement distance during the recovery process. Cut-vertex determination is done using Depth-First Search (DFS). DPDR assigns a failure handler (FH) node for each cut vertex; the FH is responsible for network recovery when the cut-vertex failure occurs, and it replaces the failed node with the node closest to it. The main drawbacks of DPDR are the communication and calculation overheads involved, and the FH recovery assignment criteria.
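The DFS-based cut-vertex determination that DPDR builds on corresponds to the standard articulation-point computation; a minimal, self-contained sketch (not DPDR's actual implementation) is:

```python
def cut_vertices(adj):
    """Articulation points of an undirected graph via DFS.
    adj: dict mapping node -> iterable of neighbors."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # No subtree of v reaches above u: u is a cut vertex.
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:     # DFS root rule
            cuts.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cuts
```

Removing any node returned by this function disconnects the graph, which is exactly the property that makes an actor critical in the sense used above.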
The Advanced Self-healing Connectivity Recovery Algorithm (ACRA) is introduced to recover failed actors. ACRA determines the nature of the actor, i.e., whether it is a cut-vertex or a node-connectivity actor [13], by applying a depth-first search algorithm. To handle suddenly failed nodes, the minimal-block backup nodes are used until the network is restored: the actor node with high transmission power and a larger coverage area is selected and connectivity is recovered. This type of process consumes more energy, since a cluster head and an actor node have to be selected when they are absent from the network. The algorithm is based on a two-point crossover Genetic Algorithm (GA) to reconnect the partitioned network. Sensor and actor nodes are scattered randomly in the area of interest and form clusters. All nodes are equipped with a failure detection system and are able to detect the failure of cut-vertex actor nodes by using the shared stored information of their 1-hop neighbors. Whenever a cut-vertex failure is detected, the neighbor CHs broadcast recovery messages to all their neighboring nodes toward the sink node until the next actor node or the next CH is found for the lost connectivity. The recovery phase is executed by finding a stable sensor with high transmission power and larger coverage; the stable sensor CH (as per the GA-based criterion) among the neighbor nodes acts as the bridging router for connecting the disjoint network. Even though actor nodes have higher transmission power, the process still consumes more energy because of the use of clustering. Moreover, sensor involvement in the recovery process impacts the overall network lifetime and performance due to the sensors' limitations.
The Partitioning detection and Connectivity Restoration (PCR) algorithm is introduced to tolerate critical actor failure [50]. PCR identifies critical and non-critical actors using localized information and replaces each critical node with a suitable backup. The pre-designated backup detects the failure of its primary actor node and starts the post-failure recovery process, which includes coordinated multi-actor relocation. The authors constructed a formal specification of PCR using Z notation. An update to the selection of the actors' polling points is proposed in [51]; the updated selection process involves the residual energy and the locations of the nodes. The approach dynamically generates the multi-hop routing trees used by the polling points in order to balance the nodes' energy consumption and prolong the network lifetime. However, that paper focused on sensor nodes and did not address the actual node-failure recovery process.
All existing approaches either attempt to recover the failed actor or try to reduce the overhead [8,9,[30][31][32][33][34][35]38,[52][53][54]. We conclude that existing approaches have attempted to replace the critical node with another backup node, but they fail to maintain the QoS parameters and energy consumption. To guarantee QoS in our proposed EAR algorithm, the Node Monitoring and Critical Node Detection algorithm (NMCND) monitors the activities of the nodes to determine their types and distinguish critical nodes. Additionally, our proposed approach not only determines the critical nodes but also handles the packet-forwarding process when a primary node fails; for this, the Network Integration and Message Forwarding (NIMF) algorithm is introduced. In addition, the Process-Based Routing for Node Failure Avoidance algorithm (PRNFA) is developed to handle the routing process and to eliminate the routing of redundant packets to other nodes, thereby avoiding network congestion and reducing latency. The goal of this work is therefore to improve the node recovery process while maintaining QoS provisioning and power efficiency.
Problem Formulation
WSANs comprise actors with powerful resources and sensor nodes with limited computation, power, and communication capabilities. The sensors and actors in WSANs collaborate to monitor and respond to the surrounding world. WSANs can be applied to a wide range of applications, such as health or environmental monitoring, chemical attack detection, battlefield surveillance, space missions, and intrusion detection. However, WSANs are greatly affected by environmental changes, frequent changes in event detection, and actor failure. The failure of an actor node can result in partitioning of the WSAN and may limit event detection and handling. Actors may fail due to hardware failure, attacks, energy depletion, or communication link issues. Sensor node failure may cause loss of event detection in the environment covered by the sensor. The probability of actor failure is lower than that of sensor failure and can be controlled through the relocation of mobile nodes thanks to their powerful characteristics; however, actor failure can cause more damage than sensor failure: a loss of coordination and connectivity between nodes, limitations in event handling, and ultimately a disjoint WSAN.
The occurrence of actor failure is critical because it degrades network performance, and the failure of a critical actor may have a high impact on the whole network. Critical actor nodes are actors whose failure causes network partitioning. Figure 2 illustrates the concept: assume that actor A3 fails; its failure will cause the network to become disjoint, so A3 is a critical node. Actor nodes A2, A6, and A7 are critical nodes as well. Most of the existing approaches attempted to replace the critical node with another backup node, but they failed to maintain the QoS parameters and energy consumption. For instance, RNF handles failure by moving a small block of neighbor actors toward the failed node in order to recover the communication among them; although this recovers the network, it enlarges the recovery scope and causes cascaded relocation. Such behavior should be eliminated in recovery algorithms. In addition, because WSANs are deployed in harsh areas and require a long-term monitoring/acting process, proposed methods should offer robust self-healing failure detection and recovery techniques that maximize the network lifetime as much as possible while maintaining QoS.
In WSANs, the nodes track their neighbors using heartbeat messages to avoid any possible interruption. Algorithms define the critical nodes using 1-hop or 2-hop message exchange information [8,36,40] and identify actor node failure through the interaction of heartbeat messages with the particular actor node; hence there is a possibility of interruption if the trail of heartbeat messages is lost. Monitoring actor failure detection using a 2-hop neighbor list is efficient once it is combined with QoS measurement capabilities, i.e., packet delivery and forwarding techniques should support efficient packet handling and forwarding, and the traffic forwarded through critical actor nodes should be minimized. Thus, in our proposed model, we assume that each actor node stores information up to 2 hops in order to keep the extended trail information. This helps determine the forwarding capability of the actor nodes. The model aims to ensure a contention-free forwarding capability that minimizes packet loss in case of node failure. To determine an actor's forwarding capability, each actor conveys a group of beacon messages using different power strengths, and each neighbor listens and returns a value in response: after a neighbor actor receives the message, it calculates its RSSI value and sends it back to the sender actor. The RSSI model is used to calculate the distance; it has also been combined with further techniques for better accuracy and to find the relative error. Equations (1)-(5) illustrate applying RSSI in actor nodes [55]; RSSI can also be used for link-quality measurement in wireless sensor networks [56]. The RSSI shows the relationship between the received energy of the wireless signals, the transmitted energy, and the distance between the actor and sensor nodes. This process helps determine the failure-node recovery process given in Definition 1.
The relationship is given by Equation (1):

E_r = E_t / r^β    (1)

where E_r is the received energy of the wireless signals, E_t is the transmitted energy of the wireless signals, r is the distance between the forwarding and receiving nodes, and β is the path-loss transmission factor, whose value depends on the environment. Taking the logarithm of Equation (1) provides:

10 log E_r = 10 log E_t − 10 β log r    (2)

where 10 log E describes the energy converted into dBm. Therefore, Equation (2) can be written in its dBm form as:

E_r (dBm) = γ − 10 β log r    (3)

where γ is a transmission parameter.
Here, γ and β represent the relationship between the strength of the received signals and the distance of the signal transmission among sensor-to-sensor, actor-to-sensor or actor-to-actor.
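To make the roles of γ and β concrete, the following sketch evaluates the dBm form of the log-distance model and inverts it to a distance estimate. The function names and the split of the transmitted energy into `e_t_dbm` and `gamma` are our illustration, not part of the original formulation.

```python
import math

def rssi_dbm(e_t_dbm, r, beta, gamma=0.0):
    # Received signal strength under the log-distance model:
    # E_r(dBm) = E_t(dBm) + gamma - 10 * beta * log10(r)
    return e_t_dbm + gamma - 10 * beta * math.log10(r)

def distance_from_rssi(e_r_dbm, e_t_dbm, beta, gamma=0.0):
    # Invert the model to estimate the actor-sensor distance r.
    return 10 ** ((e_t_dbm + gamma - e_r_dbm) / (10 * beta))
```

Round-tripping a known distance recovers it, which is how the ranging step uses the RSSI value returned by a neighbor.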
RSSI propagation models include the free-space model, the log-normal shadowing model, and the two-ray ground model [20]. In this study the free-space model is used, under the following conditions: the transmission distance is larger than the carrier wavelength and the antenna size, and there is no obstacle between the forwarding actor and either the receiving actor or the sensor.
The transmission energy of the wireless signals and the energy of the signals received by sensor nodes located at a distance r can be obtained by Equations (4) and (5), where λ is 1/(frequency of the actor node), A_gt and A_gr are the antenna gains, ω is the failure factor of the actor, and r is the distance of the node. Equation (5) represents the signal attenuation using a logarithmic expression. Assume a field with k actor nodes a_1, a_2, a_3, ..., a_k, whose coordinates are (p_i, q_i) for i = 1, 2, ..., k.
The actor nodes transmit information about their location, together with their signal strength, to the sensor nodes {s_1, s_2, ..., s_n}. The locations of the sensor nodes are unknown; the distances to the actor nodes are estimated from the received signals. In the proposed model, the actor nodes broadcast signals to all sensors and are responsible for estimating the distances between themselves and the sensor nodes. Let a_i be an actor node located at (y_i, z_i) and let the sensor node be located at (y, z). Focusing on the relative error r_e relating to a_i, suppose that the actor node reads a distance r_i, but the correct distance is r_j. The relative error can then be obtained by Equation (6):

r_e = (r_i − r_j) / r_j    (6)

The relation between the actual distance and the measured distance can be obtained by Equation (7):

r_i = r_j (1 + r_e)    (7)

which can be reduced to Equation (8):

r_j = r_i / (1 + r_e)    (8)

The probability distribution of the location of the actor node based on beacon messages is described in the following definition.

Definition 1. Let a_i be an actor node located at (y_i, z_i) that sends information to a sensor using the RSSI model with standard deviation σ and path loss β.
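The relative-error bookkeeping can be exercised directly. A small sketch, assuming the standard definition r_e = (r_i − r_j)/r_j; the function names are ours:

```python
def relative_error(r_measured, r_true):
    # Relative ranging error of a measured distance r_i vs. the true r_j.
    return (r_measured - r_true) / r_true

def true_from_measured(r_measured, r_e):
    # Rearranged relation: r_j = r_i / (1 + r_e).
    return r_measured / (1 + r_e)
```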
Let r_i be the calculated distance from the actor node a_i at the sensor node. The probability density function for the correct location (y, z) of the sensor node is obtained by Equation (9), which can be simplified for a single actor a_i as Equation (10). This can further be extended to a finite set of actors A = {a_1, a_2, a_3, ..., a_k}, which produces Definition 2.

Definition 2. Let A = {a_1, a_2, a_3, ..., a_k} be the set of actors sending information to the set of sensor nodes using the RSSI model with path loss exponent β. If the calculated distance from each actor node a_i is available at the sensor nodes S = {s_1, s_2, ..., s_n}, then the probability density function of the correct location (y, z) of the sensor nodes can be obtained by:

Ψ(y, z) = ∏_{i=1}^{k} Ψ_{a_i}(y, z)    (11)

where Ψ_{a_i}(y, z) is the probability distribution due to an actor a_i.
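Definitions 1 and 2 combine one likelihood term per actor into a joint density for the sensor position. The sketch below assumes the usual Gaussian range-error model for the per-actor term; the Gaussian form and the function signature are our illustration.

```python
import math

def location_likelihood(y, z, actors, readings, sigma):
    """Unnormalized joint likelihood of a sensor at (y, z), given range
    readings r_i from actors at (y_i, z_i); Gaussian error model assumed."""
    likelihood = 1.0
    for (yi, zi), ri in zip(actors, readings):
        d = math.hypot(y - yi, z - zi)   # true actor-sensor distance
        likelihood *= math.exp(-((d - ri) ** 2) / (2 * sigma ** 2))
    return likelihood
```

The likelihood peaks where every reading matches the geometric distance, so maximizing it over (y, z) yields the sensor's position estimate.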
Definition 3. Assume an actor node a_i reads a sample of distances R = {r_1, r_2, r_3, ..., r_n} using beacon messages B = {b_1, b_2, b_3, ..., b_n}, modeled with RSSI with path loss β and standard deviation σ. If R̄ is the mean of the sample distances and σ_i is the mean standard deviation, then the square of the actual distance from the actor to the sensor, and the corresponding squared standard deviation, can be determined from these sample statistics. The definition shows that the actual distance depends strongly on the distribution of the measured ranges.
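One simple realization of Definition 3, assuming zero-mean Gaussian ranging noise with a known standard deviation σ; this is our illustration, and the definition's exact formulas may differ in detail:

```python
import statistics

def estimate_distance_sq(samples, sigma):
    """Estimate the squared actor-sensor distance from repeated beacon
    ranges r_i, assuming r_i = d + noise with noise ~ N(0, sigma^2),
    so E[r_i^2] = d^2 + sigma^2."""
    mean_sq = statistics.fmean(r * r for r in samples)
    return mean_sq - sigma ** 2
```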
Hence, our proposed formulas for RSSI-based wireless node location are optimized and modified versions of the original RSSI-based formulas. We focused particularly on the energy consumed for transmitting and receiving data, on determining the distance between actor and sensor nodes, and on the error rate in finding a node's location, which helps identify the accurate positions of the deployed actor nodes for events. This model is used by our proposed algorithm to identify node locations during deployment as well as throughout the network lifetime.
Optimized Deterministic Actor Recovery System Model
The network consists of multiple actor and sensor nodes arranged in a hierarchical structure. The hierarchical structure provides efficient, fast, and logical packet-forwarding patterns, and it determines the features of all nodes connected to the WSAN. Another advantage of the hierarchical structure is that intra-domain routing starts with little multiplexing; as packets travel further from the source node, the model develops a higher degree of multiplexing. The node categories of the WSAN, depicted in Figure 3, comprise assorted node types. The network aims to use resources efficiently for each packet forwarded by an actor node, and it reduces latency while keeping the network more stable. The EAR consists of actor nodes, sensor nodes, and a base station. An actor node can be critical or non-critical: a critical actor node is one whose failure causes network partitioning, while non-critical nodes are regular actor nodes. Sensor nodes are used to monitor the network for event detection.
Figure 3. Node categories in the EAR model: actors (critical or non-critical), sensors, and the base station.
Definition 4. A critical actor node is an actor node whose failure causes network partitioning.

Definition 5. Non-critical nodes (NCNs) are regular actor nodes.

Definition 6. Cut Vertex Nodes (CVNs) are nodes that have a cut-vertex link with a critical node, i.e., neighbors of a critical node.

Definition 7. Critical Backup Nodes (CBNs) are actor nodes assigned as backup nodes for a critical node.
In this topology, Cut Vertex Nodes (CVNs) are responsible for removing the paths that lead to critical nodes. When an actor node becomes critical, the traffic of its neighbor nodes must be redirected to non-critical nodes; this is done by removing the vertex (a path leading to the critical node) and redirecting the traffic, which helps improve throughput and reduce latency. The Critical Backup Nodes (CBNs) replace actor nodes when they become critical. We assume that the number of critical backup nodes is greater than the number of actor nodes in the network, so that even if all actor nodes become critical, replacement remains easy enough to avoid any interruption or data loss. There is a possibility of the direct communication links from the actor nodes to the backup nodes being disconnected when the actor nodes start moving; thus, we also assume that the links from the actor nodes to the backup nodes remain stable despite mobility. Therefore, it is highly possible to replace critical nodes with CVNs easily. An actor node has the privilege of collecting data from the event-monitoring (sensor) nodes, and it then forwards the packets either to the base station or to sensor/actor nodes in the network. Finally, the least-degree Non-Critical Nodes (NCNs) are preferred as backup nodes for event-monitoring nodes.
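The redirection of neighbor traffic away from critical actors can be pictured as constrained shortest-path routing. A minimal breadth-first-search sketch (our illustration; it ignores EAR's vertex-removal details and simply refuses to route interior hops through critical actors):

```python
from collections import deque

def route_avoiding_critical(adj, src, dst, critical):
    """Shortest hop-count path from src to dst whose interior nodes
    avoid the critical actor set."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in adj[node]:
            if nxt in seen or (nxt in critical and nxt != dst):
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no route that avoids the critical actors
```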
A neighbor node that is available within the Node Distance (ND) range has a similar Cut Vertex Node Distance (CVND). This helps reduce the recovery time and overhead, which is important for resource-constrained, mission-critical applications.
In the network, each actor node maintains its 2-hop neighbors' information using heartbeat messages. This information helps maintain the network state, define critical actors, and assign a backup node for each critical actor. Each actor node saves its neighbors' information, which includes the node ID, RSSI value, number of neighbors (the node degree), node criticality (critical/non-critical actor), and node distance. Once a critical node is detected, the backup assignment process is executed in order to assign a backup node for this critical node, using the 2-hop node information. Depending on the RSSI value (extracted from the mathematical model), the non-critical node with the least node degree is preferred as the backup node; if more than one neighbor has the same node degree, the neighbor with the least distance is preferred. For each critical actor, a pre-assigned backup actor node, called the Critical Backup Node (CBN), is selected. The CBN monitors its critical node through heartbeat messages and handles the backup process if its critical node fails; missing a number of successive heartbeat messages at the CBN indicates the failure of the primary.
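The backup preference just described (non-critical first, then least node degree, ties broken by least distance) can be sketched over the 2-hop neighbor table; the tuple layout of a table entry here is our illustration:

```python
def select_backup(neighbors):
    """Pick a backup for a critical actor.
    Each neighbor entry: (node_id, is_critical, degree, distance)."""
    candidates = [n for n in neighbors if not n[1]]   # non-critical only
    if not candidates:
        return None
    # Least degree first; ties broken by least distance.
    return min(candidates, key=lambda n: (n[2], n[3]))
```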
The topology of the WSAN can change during the network lifetime due to actor mobility, actor node failure, or event handling, and backup nodes are subject to failure as well. Therefore, primary backup nodes select other backup nodes in case they themselves fail or move beyond the ND range. To ensure the effectiveness and availability of the backup node, we introduce a novel backup node selection process, given in Algorithm 1, for the case where the primary backup has failed or is in a critical condition. In this process, the condition of the primary backup node P_b is checked. If the primary backup node is in a critical condition or ready to move, a secondary backup node S_b is chosen. If the secondary backup node S_b is in turn in a critical condition or ready to move, the tertiary backup node T_b is notified to act as the primary backup node. If the tertiary backup node is also in a critical condition, the backup assignment algorithm executes and a new backup node is assigned.
Since actor nodes are resource-rich, we further use a least squares approximation to estimate the power strength. This involves little computational overhead, which can easily be accommodated on an actor; estimating the power strength matters because there is a probability that the node will lose power at some particular time.
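The paper's exact least-squares formulation is not reproduced in this excerpt, so the following is only an illustrative sketch: an ordinary least squares line is fitted to sampled battery readings, and the fitted trend is used to estimate when the level would cross a depletion threshold.

```python
# Illustrative only: fit a line y = a*x + b to (time, battery level) samples
# and estimate the time at which the level would reach a threshold.
def least_squares_line(xs, ys):
    """Ordinary least squares fit y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    a = num / den
    b = mean_y - a * mean_x
    return a, b

def predicted_depletion_time(times, levels, threshold=0.0):
    """Solve a*t + b = threshold for t; None if the level is not decreasing."""
    a, b = least_squares_line(times, levels)
    if a >= 0:
        return None
    return (threshold - b) / a
```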
Algorithm 1. Backup selection process:
1. Input: (P_b)
2. Output:
3. If P_b == Pb_C || Pb_m // The condition of the primary backup node is checked
4. Notify S_b and Set (S_b, P_b) // Secondary backup node is assigned as primary backup node
5. end if
6. If S_b == Sb_C || Sb_m // The condition of the secondary backup node is checked
7. Notify T_b and Set (T_b, P_b) // Tertiary backup node is notified to play the role of primary backup node
8. end if
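The cascade in Algorithm 1 can be sketched as follows. This is a minimal sketch under simplified assumptions: each node's condition is reduced to two flags, and `reassign` stands in for re-running the full backup assignment algorithm.

```python
# Sketch of the Algorithm 1 cascade: if the primary backup P_b is critical or
# about to move, promote the secondary S_b; if S_b is also unavailable,
# promote the tertiary T_b; otherwise run a fresh backup-assignment round.
def promote_backup(primary, secondary, tertiary, reassign):
    """Each node argument is a dict like {'id': ..., 'critical': bool,
    'moving': bool} or None; `reassign` is a callable that runs the full
    backup-assignment algorithm and returns a new backup node."""
    def unavailable(node):
        return node is None or node['critical'] or node['moving']
    if not unavailable(primary):
        return primary
    if not unavailable(secondary):
        return secondary          # S_b takes over as primary backup
    if not unavailable(tertiary):
        return tertiary           # T_b takes over as primary backup
    return reassign()             # all tiers exhausted: reassign
```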
We use a node monitoring process to monitor the pre-failure causes and the post-failure causes of a node and to allocate the recovery options. Once each critical actor node picks a suitable backup, the backup is informed through regular heartbeat messages (special signals sent to the neighbor node asking it to play the role of backup node for the critical node). Furthermore, the pre-designated backup starts monitoring its primary actor node through heartbeats. If a number of consecutive heartbeats from the primary actor are missed, the backup concludes that the primary actor has failed, and the backup node replacement process given in Algorithm 1 is started. Before carrying out the post-failure process, we must ensure that the connection was not interrupted by the network itself. In addition, any redundant action in the network must be controlled to avoid a possible increase in network overhead. The pre-failure backup node process is given in Algorithm 2.
6. Determine N_c // Initiate critical node discovery process
7. If ∀ A_p : A_p ∈ N_c then
8. N_b assigns N_c
9. Set N_c = M_hb
10. end if
11. If M_hb NotDelivered N_c then
12. Set N_b for data delivery
13. end if
14. Process A_p ∃ R_pr // Primary actor node recovery process is conducted
15. end if

As shown in Algorithm 2, A_p represents the actor nodes in the network. The sink node N_s broadcasts a message to all primary actor nodes to determine pre-failure actor nodes. If a primary actor is not identified as pre-failure, then the process of determining the critical actor nodes is started in order to choose the backup nodes, and the critical node discovery process is initiated. For each primary node, if the node is found to be critical, it is marked as a critical node N_c and a backup node N_b is assigned to it. The critical node then broadcasts a message to its neighbors containing the information of its backup node. This information is stored by the neighbors and is used to start the network recovery process in case they detect the failure of their critical actor neighbor. If consecutive heartbeat messages are not received from the critical node, the backup node starts replacing the critical node to avoid any packet-forwarding delay. In addition, the neighbors of the critical node use the stored information to communicate with the backup node in order to restore connectivity. Finally, the recovery process is conducted.
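The heartbeat-based failure detection used here can be sketched as follows; the miss threshold is an assumed parameter, not a value given in the text.

```python
# Sketch: the backup declares its critical node failed after a run of
# consecutive missed heartbeats, then starts the replacement process.
MISS_THRESHOLD = 3  # assumed number of consecutive missed heartbeats

class HeartbeatMonitor:
    def __init__(self, threshold=MISS_THRESHOLD):
        self.threshold = threshold
        self.missed = 0
        self.failed = False

    def on_heartbeat(self):
        """Any received heartbeat resets the miss counter."""
        self.missed = 0

    def on_interval_elapsed(self):
        """Called once per heartbeat interval with no message received;
        returns True once the critical node is considered failed."""
        self.missed += 1
        if self.missed >= self.threshold:
            self.failed = True  # backup starts replacing the critical node
        return self.failed
```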
After the node monitoring process is executed, we proceed with checking the backup assignment and the critical backup assignment of the nodes. This maintains network connectivity without creating any disjoint partitions of the network. The possibility of recovery depends on the cut-vertex node. If the backup is a non-critical node, it simply substitutes for the primary actor node, and the recovery process is initiated to confirm the backup actor node. If the backup is itself a critical node, then a cut-vertex node replacement is performed. The pre-assigned backup actor node activates a recovery process as soon as it senses the failure of its primary actor node. The complete node monitoring process, including the failure, recovery, and replacement processes, is depicted in Figure 4. In the complete node monitoring process, first, the node identification process is initiated. The node is identified based on local neighbor information (LNI), which involves global data position, node property, and node degree. The critical node selection process is decided using Algorithms 1 and 2. Once a critical node is identified, it is assigned a backup node. Second, the backup node selection process is started if an actor node fails; the selection is based on the monitoring algorithms explained earlier. Once the backup node selection process is complete, the backup process takes over in case of node failure. If an actor node does not fail, then the node connectivity monitoring process is started and the routing connectivity metrics are checked to ensure there is no problem with the router. After completion of the actor node failure and node assignment processes, the actor nodes should be linked to forward the collected data to the base station.
In response, the base station sends its identity (ID) using a network integration message (NIM). When an actor node receives a NIM from the base station, it saves the destination address of the base station for packet forwarding (PF). Subsequently, the NIM is broadcast through the network among all the actors. At least one actor node must be within the range of the base station to avoid bottlenecks; otherwise, the base station receives data through sensor nodes, which can cause packet delay and loss. The actor node saves the information of the first actor node from which it receives the NIM for use in the PF process, and then forwards the NIM with its own ID. If an actor receives the NIM from multiple actors, it stores the identities of the additional actors in a buffer list. The identities of the saved actors are used in case of topology changes due to mobility or node failure. The detailed process followed by an actor node that receives a NIM is presented in Algorithm 3.
Our protocol applies a simple algorithm to process the NIM. The actor node first transmits the NIM to its higher-hop neighbor actors/sensors. When it receives the first NIM from a higher-hop actor/sensor, it forwards it to its lower-hop neighbor actors/sensors to ensure the dissemination of NIMs across the entire network. All other NIMs are then dropped by the actor nodes. Therefore, if each actor is guaranteed to be in the communication range of at least one other actor, the NIMs need not be managed at the sensor nodes.
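The NIM handling described above (Algorithm 3 itself is not reproduced in this excerpt) can be sketched as follows; the state fields are illustrative assumptions.

```python
# Sketch: an actor keeps the first NIM sender as its route toward the base
# station, buffers later senders as alternates for use under mobility or node
# failure, and rebroadcasts only the first NIM it receives (duplicates drop).
class ActorNIMState:
    def __init__(self):
        self.next_hop = None    # first actor the NIM arrived from
        self.alternates = []    # later senders, kept for topology changes
        self.forwarded = False

    def on_nim(self, sender_id):
        """Process one NIM; returns True when it should be rebroadcast."""
        if self.next_hop is None:
            self.next_hop = sender_id
        elif sender_id != self.next_hop and sender_id not in self.alternates:
            self.alternates.append(sender_id)
        if not self.forwarded:
            self.forwarded = True
            return True   # forward this first NIM onward with our ID
        return False      # all other NIMs are dropped
```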
Let us assume that an actor node transmits a number of bits in each packet P_r, using an encoding mechanism to reduce the complexity of each packet. The sensor nodes monitor the events and check their contribution table, which specifies the important events. If the events are of significant interest, the sensor nodes generate packets and forward them to the actor node. The complete process of monitoring the events and routing the data packets is given in Algorithm 4.

Algorithm 4. Event monitoring and data packet routing:
1. Input: {P_r, P}
2. Output:
3. If P received by N_a // If the actor node receives the packet
4. If P ∈ S_i then // If the received packet is of significant interest
5. Set F_i // The first received packet is marked as of significant interest
6. Set N_sc + 1 & decrease N_c by P_r // The sharing capacity of the node is increased
7. If N_c > 0 then // Determine the power of the node
8. Forward P // The received packet is forwarded
9. Set F_i // The flag of interest is set in the buffer
10. Else if N_c < 0 then
11. Process P_r > E_r & P ∈ F_u // The packet rate is higher than the efficient packet rate
12. Increase N_c by P_r // The remaining capacity of the actor node is increased
13. Reduce N_f // With increased capacity, there is less possibility of node failure
14. end else
15. end if
16. end if
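The capacity check at the heart of Algorithm 4 can be sketched as follows. This is a simplified sketch: the counters mirror the algorithm's notation (N_c, N_sc), but the state representation is an assumption.

```python
# Sketch of the Algorithm 4 capacity check: a significant packet decrements
# the node's remaining capacity N_c by the packet rate P_r; with capacity
# left, the packet is forwarded; otherwise capacity is restored and the
# packet held back, lowering the chance of node failure.
def handle_packet(state, packet_rate, significant):
    """state: dict with 'capacity' (N_c) and 'shared' (N_sc) counters.
    Returns 'forwarded', 'held', or 'ignored'."""
    if not significant:
        return 'ignored'              # packet not of significant interest
    state['shared'] += 1              # sharing capacity increased (N_sc + 1)
    state['capacity'] -= packet_rate  # decrease N_c by P_r
    if state['capacity'] > 0:
        return 'forwarded'            # Forward P; flag of interest is set
    state['capacity'] += packet_rate  # restore the remaining capacity
    return 'held'                     # avoid overloading a near-failure node
```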
Simulation Setup and Experimental Results
There are two processes running throughout the network's deployment and monitoring: the first obtains individual node properties, while the second monitors network consistency. Our goal is to prolong the network lifetime while maintaining minimum overhead and determining the causes of node failures. We have implemented and simulated the efficient actor recovery protocol over wireless sensor and actor networks using the OMNeT++ simulator. The network size is 1400 × 1400 square meters, and nodes are deployed randomly. The main objective of the simulation is to evaluate the performance of the proposed EAR algorithm, ensuring the effectiveness of the protocol with respect to QoS parameters and energy efficiency when node failures occur. In addition, the performance of the proposed EAR algorithm is compared with similar known schemes: RNF, DPCRA, ACR, and ACRA.
RNF, DPCRA, ACR, and ACRA are state-of-the-art actor failure recovery algorithms; a detailed description was provided in Section 3. All of these algorithms manage cut-vertex actor failure and recovery, but they differ in their selection criteria and recovery objectives, as given in Table 1. The same parameters and properties have been used for testing purposes.
The simulation scenario consists of 400 nodes, including 27-54 actor nodes and 173-356 sensor nodes, with a transmission range of 70 m. The sensor/actor nodes are arbitrarily deployed in a mesh fashion. The initial energy of the actor nodes is set to 20 J and that of the sensor nodes to 4 J. The bandwidth of an actor node is 4 Mbps, and the maximum power consumption of a sensor/actor node for receiving and transmitting data is set to 13.5 mW and 15.0 mW, respectively. The sensing and idle modes consume 12.4 mW and 0.60 mW, respectively. The total simulation time is 36 min, which is sufficient to determine the effectiveness of the proposed scheme versus the state-of-the-art schemes; the simulation time could be reduced or extended, and a pause time of 20 s is set to warm up the nodes before the simulation begins. The results presented here are the average of 10 simulation runs. The simulation parameters are summarized in Table 2. The simulation consists of three scenarios that replicate a realistic wireless sensor and actor network environment, and the obtained simulation results are fairly significant and comparable to realistic experimental results.
• Scenario-I: Sensor-to-actor communications. In this scenario, the source nodes are sensors, while the destination nodes are actors. Multiple connections are set up with one actor, so the actor node acts as the sink of the communication. The sensor-to-actor ratio is 86.5%:13.5%, and 20% of the sensor nodes are mobile. In this scenario, we used different network sizes: 1000 × 1000 m², 1200 × 1200 m², and 1400 × 1400 m². • Scenario-II: Actor-to-actor communications. In this scenario, the distance between two actors is 300 m, covered in fewer than 4 hops. This scenario involves multi-hop communication among the actors, with a maximum of 54 actors. • Scenario-III: Actor-to-sensor communications. In this scenario, communication takes place between actors and sensors. The distance between actor and sensor is set to 250 m, the number of hops is 5, and 20% of the nodes are mobile. The sensor-to-actor ratio is 13.5%:86.5%. In this scenario, we used different network sizes: 1000 × 1000 m², 1200 × 1200 m², and 1400 × 1400 m².
Between 15 and 70 connections are set up among the nodes. The connections start randomly during the warm-up time, and the source and destination nodes are chosen randomly in each scenario. From the simulation, we obtained results for the following parameters.
Number of Alive Days
An extended network lifetime plays a significant role in improving application performance. Figures 5-7 show the performance of EAR compared with RNF, DPCRA, ACR, and ACRA in terms of the number of alive nodes. In these experiments, we used the results of three scenarios with different network topologies. In scenario-1, we used a 1200 × 1200 m² network with a maximum of 200 nodes, comprising 27 actor and 173 sensor nodes; sixty-five connections were established to cover the entire scenario. Based on the results, we observed that with 24 nodes all approaches achieve 78 alive days, but as the number of nodes increases towards the maximum of 200, the numbers of alive days diverge and our approach gains a slight edge over the competing approaches. With our approach the nodes stay alive up to 367 days, whereas the other approaches achieve fewer alive days: RNF reaches 323 alive days and ACRA 362. In scenario-1, the proposed EAR thus shows an improvement of 1.36-11.98% over the competing approaches. In scenario-2, we used a 1000 × 1000 m² network with a maximum of 54 actor nodes and 15 connections. Based on the results, the actors stay alive for 643 days with our approach, while the other approaches achieve actor lifetimes of 512-592 days; ACR has the fewest at 512 alive days, and RNF reaches 588. In scenario-2, the proposed EAR shows an improvement of 7.93-20.37% over the competing approaches. In scenario-3, we used a 1400 × 1400 m² network with a maximum of 400 nodes, comprising 54 actor and 346 sensor nodes; seventy-five connections were established to cover the entire network. The nodes achieve a lifetime of 671 days with the proposed EAR approach, whereas the other approaches achieve 485-571 days; the network performance of ACRA is most affected, with a minimum of 485 alive days. Therefore, the proposed EAR shows an improvement of 14.9-27.71% over the competing approaches in scenario-3.
The reason for the better stability of our approach is the use of the RSSI model, which helps to determine the proper distance between sensor-to-sensor, sensor-to-actor, and actor-to-actor nodes. Furthermore, the network integration message process connects the entire network, so bottlenecks are avoided. In case of node failure, the backup node discovery process is initiated, which not only improves the throughput but also extends the nodes' lifetime.
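The RSSI-to-distance relation can be sketched with the standard log-distance path-loss model; the paper's own mathematical model may differ, so the reference values below are assumptions.

```python
# Sketch: invert the log-distance path-loss model
#   RSSI(d) = P_ref - 10 * n * log10(d / d0), with d0 = 1 m,
# to estimate the distance between two nodes from a measured RSSI.
P_REF = -40.0      # assumed RSSI at the reference distance d0 = 1 m (dBm)
PATH_LOSS_N = 2.7  # assumed path-loss exponent for the deployment area

def distance_from_rssi(rssi_dbm, p_ref=P_REF, n=PATH_LOSS_N):
    """Return the estimated distance in meters for a measured RSSI."""
    return 10 ** ((p_ref - rssi_dbm) / (10.0 * n))
```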
Residual Energy
The residual energy is the energy remaining in the actor/sensor nodes at the conclusion of the event(s). Here, we discuss the average residual energy of the actor/sensor nodes after monitoring different numbers of events. Figures 8-10 compare the residual energy of EAR with those of RNF, DPCRA, ACR, and ACRA at nine, 18, and 27 events, respectively. The sensor/actor nodes retain a higher residual energy with EAR after completion of the events. In this experiment, the results are obtained based on the three scenarios.
In Figure 8, 70 connections are established for nine events: 12 actor-to-actor, 32 actor-to-sensor, and 26 sensor-to-sensor connections. Each connection consumes a different amount of energy, but we obtained an average of the overall residual energy for the entire network based on the number of connections. We observed in Figure 8 that the residual energy of our proposed approach is 8.4 J with nine events, while the other approaches have residual energies ranging from 6.9 to 8.2 J. When we increased the number of events to 18 in Figure 9, the residual energy of our approach dropped marginally to 7.4 J, whereas the competing approaches range from 4.2 to 5.9 J. In Figures 8 and 9, RNF has the least residual energy because of the additional control messages it sends during the actor node failure process. In Figure 10, EAR has 6.7 J of residual energy, whereas the other competing approaches have 3.4-5.2 J; ACR has the minimum residual energy as the number of events increases, because of the decision tree incorporated in its reactive routing protocol. Our approach has higher residual energy for all events because our proposed model determines the forwarding capacity of each sensor/actor node prior to transmission, which helps to avoid node failure.
The residual energy of a sensor/actor is calculated using Equation (14); the notation used is described in Table 3.
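Equation (14) and Table 3 are not reproduced in this excerpt, so the following sketch uses a common first-order radio energy model as a stand-in: residual energy is the initial energy minus transmit/receive costs over the monitored events. All constants are assumptions, not the paper's values.

```python
# Stand-in sketch of a residual-energy computation (NOT Equation (14)):
# first-order radio model with assumed per-bit energy constants.
E_ELEC = 50e-9   # assumed electronics energy per bit (J/bit)
E_AMP = 100e-12  # assumed amplifier energy per bit per m^2 (J/bit/m^2)

def residual_energy(e_init, tx_bits, rx_bits, distance_m):
    """Initial energy minus transmit and receive energy consumed."""
    e_tx = tx_bits * (E_ELEC + E_AMP * distance_m ** 2)
    e_rx = rx_bits * E_ELEC
    return e_init - e_tx - e_rx

def average_residual(nodes):
    """nodes: list of (e_init, tx_bits, rx_bits, distance_m) tuples."""
    return sum(residual_energy(*n) for n in nodes) / len(nodes)
```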
Actor/Sensor Recovery Time
The actor recovery time is highly significant for the network and for the applications running on it. When an actor fails, it is important to initiate the recovery process promptly to avoid a reduction in network performance. Figures 11 and 12 show the actor recovery time of the proposed EAR algorithm and the competing approaches RNF, DPCRA, ACR, and ACRA. In these experiments, we used two network topologies: 1200 × 1200 m² and 1400 × 1400 m². In Figure 11, we used the 1200 × 1200 m² topology with 48 connections.
Based on the results, we observed that EAR has the minimum overall actor/sensor recovery time. We determined the recovery time for a maximum of 27 failed nodes, including 11 actors and 16 sensors. At the maximum of 27 failed nodes, EAR has a recovery time of 3.25 s, while the other approaches take 3.6-4.7 s; EAR thus shows a 3.19-20% improvement over the competing approaches. In Figure 12, we used the 1400 × 1400 m² topology with 60 connections. Again, EAR has the minimum overall actor recovery time for a maximum of 27 failed nodes (11 actors and 16 sensors). At the maximum of 27 failed nodes, EAR achieves the same 3.25 s as with the 1200 × 1200 m² topology, confirming that an increase in network size does not affect the actor/sensor recovery time, while the other approaches take 3.62-4.75 s; EAR thus shows a 3.21-20.8% improvement over the competing approaches.

The results confirm the soundness of EAR in terms of actor recovery time, owing to the contention-free forwarding capacity of the nodes. In addition, a particular RSSI value is selected for the traffic forwarding process, which makes actor recovery much easier. Whereas the existing approaches either attempt to recover the failed actor or try to reduce the overhead, our proposed approach reduces power consumption and delivers the data without contention. Furthermore, it improves the backup node selection process in case a node fails or becomes disjoint. These characteristics help EAR reduce the actor recovery time compared with the other approaches.
Data Recovery
Although data loss is a very critical issue, very little information is publicly released even when substantial data is lost. A wide variety of failures can physically degrade the quality of service of the applications. Backup recovery approaches play a vital role in retrieving lost data; however, existing data recovery methods are not sufficiently capable, particularly in wireless sensor and actor networks. Our proposed approach includes a node monitoring algorithm that monitors the status of a node both prior to failure and post-failure.
As a result, backup nodes take responsibility for storing the data. Figures 13 and 14 show the data lost and recovered with the 1400 × 1400 m² topology using 72 connections. In Figure 13, the total data loss is 15 KB when monitoring 10 events. Based on the results, we observed that EAR lost 15 KB of data and recovered all 15 KB, showing that our data recovery scheme is fault-tolerant, whereas the other approaches lost the same amount of data but recovered only 11.1-13.5 KB. This confirms that EAR has a 10-26% improvement over the competing approaches.
In Figure 14, the data loss is 30 KB with 20 events. Since some of the events are not highly critical, relatively less data is lost per event with 20 events. EAR recovers 29.82 KB out of 30 KB, a considerably better recovery than the competing approaches, which are strongly affected by the increase in events and recover only 23.8-29.1 KB; the least adaptable data recovery algorithm with 20 events is DPCRA. The results validate that EAR has a 2.41-20.66% improvement over the competing approaches.
Time Complexity
The quality of the running applications depends on the time complexity of the algorithm. Time complexity is normally measured by counting the basic operations performed by the algorithm and the time consumed by those operations; an algorithm that takes less time improves the performance of the running applications. In Figure 15, we show the average time consumed for input data processing by the EAR algorithm in comparison with RNF, ACRA, ACR, and DPCRA. Based on the experimental results, we observed that EAR sent more input data in less time than the competing algorithms: EAR sent a maximum of 54 KB of input data within 0.065 s, whereas the other protocols took 0.067-0.094 s to send the same amount of data. EAR achieves the minimum time because using a single operation for either the pre-failure or the post-failure recovery process helps reduce the time complexity.
To analyze the time complexity of EAR, let us examine the processes involved in the pre-failure and post-failure cases. In EAR, each critical actor node has a pre-assigned backup actor node that monitors it. If consecutive heartbeat messages are not received from the critical node, the backup actor node handles the recovery process. Denoting the critical actor by AC and its backup node by AB, the following instructions illustrate the pre-failure process:

If (AB.HeartbeatMonitor(AC) == false)
    AB.Recover(AC);

The post-failure process is also handled by the backup node AB: AB moves towards the failed actor's (AC) location in order to recover the network partition, and the neighbors of the critical node use the stored information to communicate with AB in order to restore connectivity.
PostFailure(AC, AB) {
    Move(AB, AC);
    Connect(AB, Neighbors(AC));
}
Time Complexity
The quality of a running application depends on the time complexity of its algorithms. Time complexity is normally measured by counting the number of basic operations performed by the algorithm and the time consumed by those operations. An algorithm that takes less time improves the performance of the running application. In Figure 15, we show the average time consumed for input data processing by the EAR algorithm in comparison with RNF, ACRA, ACR and DPCRA. Based on the experimental results, we observed that EAR sent more input data in less time than the other competing algorithms: EAR sent a maximum of 54 KB of input data within 0.065 s, whereas the other protocols took 0.067-0.094 s to send the same amount of data. EAR achieves the minimum time because using a single operation for either the pre-failure or the post-failure recovery process helps reduce the time complexity.
To analyze the time complexity of EAR, let us determine the processes involved in the pre-failure and post-failure phases. In EAR, each critical actor node has a pre-assigned backup actor node that monitors it. If consecutive heartbeat messages are not received from the critical node, the backup actor node handles the recovery process. Assuming the critical actor is designated AC and its backup node AB, the following instructions illustrate the pre-failure process: { If (AB.HeartbeatMonitor(AC) == false) AB.Recover(AC); } The post-failure process is handled by the backup node AB: AB moves to the failed actor's (AC's) location in order to recover the network partition, and the neighbors of the critical node use the stored information to communicate with AB in order to restore connectivity. To calculate the time complexity of these operations, we assume that each process takes a time T(n), as illustrated in Table 4. As shown in Table 4, each statement takes O(1):
Table 4. Time complexity of EAR recovery operations:
Pre-failure: AB.Recover(AC), T(n) = 1, O(1)
Post-failure, step 1: Move(AB, AC), T(n) = 1, O(1)
Post-failure, step 2: Connect(AB, Neighbors(AC)), T(n) = 1, O(1)
Figure 16 represents the time complexity analysis of the different algorithms and illustrates their complexity descriptions. The time complexity of EAR and the other competing algorithms, obtained using Big O notation, is given in Table 5.
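The constant-time pre-/post-failure steps above can be sketched as follows. This is an illustrative Python rendering of the pseudocode, not the authors' implementation; the class and function names (`ActorNode`, `pre_failure`, `heartbeat_ok`) are our own:

```python
# Illustrative sketch of EAR's recovery steps (assumed names, not the paper's code).

class ActorNode:
    def __init__(self, name, location):
        self.name = name
        self.location = location
        self.neighbors = []   # neighboring ActorNode objects

def post_failure(backup, critical):
    """Post-failure: Move(AB, AC) then Connect(AB, Neighbors(AC)).
    Each step is treated as a single operation, as in Table 4."""
    backup.location = critical.location          # Move(AB, AC)
    for n in critical.neighbors:                 # Connect(AB, Neighbors(AC))
        n.neighbors.append(backup)
    return critical.neighbors

def pre_failure(backup, critical, heartbeat_ok):
    """Pre-failure: the backup monitors its critical node; a missed
    heartbeat (AB.HeartbeatMonitor(AC) == false) triggers recovery."""
    if not heartbeat_ok:
        return post_failure(backup, critical)    # AB.Recover(AC)
    return []
```

A missed heartbeat moves the backup to the failed node's location and reconnects the failed node's neighbors to it, restoring connectivity with a fixed number of operations.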
Table 5. Time complexity comparison:
EAR: O(log n), Good
ACRA [13]: O(n log n), Bad
ACR [7]: O(n), Fair
DPCRA [11]: O(2n), Worst
Overall Performance of EAR
Based on the experimental results, Table 6 summarizes the significance and improvement of the EAR approach in comparison with the other approaches.
Conclusions
An Efficient Actor Recovery (EAR) algorithm is introduced in this paper. The approach is based on the received signal strength indicator (RSSI). Unlike most published approaches, EAR differentiates between critical and non-critical nodes and allocates a suitable backup node from the neighbors of each critical node; the backup is also chosen based on signal strength and regulates the nodes in its surrounding locality. EAR includes a novel RSSI model that applies a probability density function to find the correct location of a sensor node; the model also captures the relationship between the received and transmitted energy of the wireless signals, including the required distance between the actor and sensor nodes. Furthermore, EAR is supported by three algorithms that perform the network monitoring process, the network integration and message forwarding process, and the routing process that lets an actor node avoid the failed node. The EAR approach has been validated using OMNET++ simulation and compared with other known approaches: RNF, DPCRA, ACR, and ACRA. The experimental results demonstrate that EAR outperforms the competing approaches in terms of data recovery, number of alive days of the nodes, residual energy and data loss. | 18,040.4 | 2017-04-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Performance Monitoring of Energy Flow in the Power Transmission and Distribution System Using Grid Computing
INTRODUCTION
The growth of the Internet, along with the availability of powerful computers and high-speed networks as low-cost commodity components, is changing the way scientists and engineers do computing, and is also changing how society in general manages information. Grid computing has been identified as an important new technology by a remarkable breadth of scientific and engineering fields, as well as by many commercial and industrial enterprises. Grid computing is a form of distributed computing that involves coordination and sharing of computing applications, data storage or network resources across dynamic and geographically dispersed organizations. In short, it involves virtualizing computing resources [2]. The Grid has been defined as 1. flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions and resources [4,5,6], and 2. a type of parallel and distributed system that enables the sharing, selection and aggregation of geographically distributed autonomous resources dynamically at runtime, depending on their availability, capability, performance, cost and users' quality-of-service requirements [1]. While the notion of grid computing is simple enough, the practical realization of grids poses a number of challenges. The key issues that need to be dealt with are security, heterogeneity, reliability, application composition, scheduling, and resource management [1]. The Microsoft .NET Framework provides a powerful toolset for all of these, in particular support for remote execution, multithreading, security, asynchronous programming, managed execution and cross-language development, making it an ideal platform for grid computing middleware. Alchemi [3] is a Windows-based desktop grid computing framework implemented on top of the Microsoft .NET platform and developed at the University of Melbourne; it provides the runtime machinery for constructing and managing desktop grids.
It also provides an object-oriented programming model along with web service interfaces that enable its services to be accessed from any programming environment that supports the SOAP-XML abstraction.
While transmitting and distributing power to meet consumers' demand, power losses occur in the EHV, HV and LV lines and also in the transformers. This technical loss in the power system is an inherent characteristic and cannot be totally eliminated, but it can be reduced to an optimum level, which will increase revenue when the lines are monitored and controlled continuously using the emerging technology of grid computing.
Our work focuses on developing an application for monitoring the Power Transmission and Distribution system using the Alchemi Framework.
Architecture of Alchemi Framework:
Alchemi follows the master-worker parallel programming paradigm [3] in which a central component dispatches independent units of parallel execution to workers and manages them. This smallest unit of parallel execution is a grid thread, which is conceptually and programmatically similar to a thread object (in the object-oriented sense) that wraps a "normal" multitasking operating system thread. A grid application is defined simply as an application that is to be executed on a grid and that consists of a number of grid threads. Grid applications and grid threads are exposed to the grid application developer via the objectoriented Alchemi .NET API .
Alchemi offers four distributed components, as shown in Fig. 1, designed to operate under three usage patterns. Manager: The Manager manages the execution of grid applications and provides services associated with managing thread execution. The Executors register themselves with the Manager, which in turn keeps track of their availability. Threads received from the Owner are placed in a pool and scheduled for execution on the various available Executors. A priority for each thread can be explicitly specified when it is created within the Owner; if none is specified, the thread is assigned the highest priority by default. Threads are scheduled on a priority and first-come-first-served (FCFS) basis, in that order. The Executors return completed threads to the Manager, from which they are subsequently collected by the respective Owner.
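The scheduling policy described above (priority first, then FCFS within a priority) can be modeled with a heap keyed on a (priority, arrival-order) pair. This is an illustrative sketch, not Alchemi's actual .NET code; the convention that a lower number means higher priority is our assumption:

```python
import heapq
import itertools

class ThreadPool:
    """Toy model of a Manager's pool: dispatch grid threads by priority,
    breaking ties in first-come-first-served order."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # monotonically increasing arrival order

    def submit(self, thread_id, priority=0):
        # Lower number = higher priority here (an assumption, not Alchemi's API).
        heapq.heappush(self._heap, (priority, next(self._seq), thread_id))

    def next_thread(self):
        # Pop the highest-priority, earliest-submitted thread.
        return heapq.heappop(self._heap)[2]
```

For example, two threads submitted with equal priority are returned in submission order, while a higher-priority thread submitted later still jumps the queue.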
Executor: The Executor accepts threads from the Manager and executes them. An Executor can be configured to be dedicated, meaning the resource is centrally managed by the Manager, or non-dedicated, meaning that the resource is managed on a volunteer basis via a screen saver or by the user. Thus, Alchemi's execution model provides the dual benefit of:
1. flexible resource management, and 2. flexible deployment under network constraints.
Owner: Grid applications created using the Alchemi API are executed on the Owner component. The Owner provides an interface with respect to grid applications between the application developer and the grid. Hence it "owns" the application and provides services associated with the ownership of an application and its constituent threads. The Owner submits threads to the Manager and collects completed threads on behalf of the application developer via the Alchemi API.
Cross-Platform Manager:
The Cross-Platform Manager, an optional sub-component of the Manager, is a generic web services interface that exposes a portion of the Manager's functionality in order to enable the execution of platform-independent grid jobs. Jobs submitted to the Cross-Platform Manager are translated into a form accepted by the Manager (i.e. grid threads), which are then scheduled and executed. Thus Alchemi can be used to create different grid configurations such as a desktop cluster grid, a multi-cluster grid, and a cross-platform grid (global grid).
Cluster (Desktop Grid):
In the basic deployment scenario, a cluster consists of a single Manager and multiple Executors that are configured to connect to the Manager, as shown in Fig. 2. One or more Owners can execute their applications on the cluster by connecting to the Manager. Such an environment is appropriate for deployment on Local Area Networks as well as the Internet.
Multi-Cluster:
A multi-cluster environment is created by connecting Managers in a hierarchical fashion, as shown in Fig. 3. As in a single-cluster environment, any number of Executors and Owners can connect to a Manager at any level in the hierarchy. An Executor and Owner in a multi-cluster environment connect to a Manager in the same fashion as in a cluster, and correspondingly their operation is no different from that in a cluster. The key to accomplishing multi-clustering in Alchemi's architecture is the fact that a Manager behaves like an Executor towards another Manager, since the Manager implements the interface of the Executor. A Manager at each level except the topmost in the hierarchy is configured to connect to a higher-level Manager as an "intermediate" Manager and is treated by the higher-level Manager as an Executor. Such an environment is more appropriate for deployment on the Internet. In a cross-platform grid (Fig. 4), a grid middleware component such as a broker can use the Cross-Platform Manager web service to execute cross-platform applications (jobs within tasks) on an Alchemi node (cluster or multi-cluster) as well as on resources grid-enabled using other technologies such as Globus.
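The key idea above, that a Manager implements the Executor interface so a higher-level Manager can treat a whole sub-cluster as a single Executor, can be sketched as a composite pattern. The class names below are hypothetical, not Alchemi's API:

```python
# Hypothetical sketch of "a Manager behaves like an Executor towards
# another Manager" (composite pattern); not Alchemi's real classes.

class Executor:
    def execute(self, thread):
        raise NotImplementedError

class Worker(Executor):
    """A leaf resource that actually runs a grid thread."""
    def execute(self, thread):
        return f"ran {thread}"

class Manager(Executor):
    """Implements the Executor interface, so a higher-level Manager
    can register a whole lower-level cluster as one Executor."""
    def __init__(self):
        self.executors = []

    def register(self, executor):
        self.executors.append(executor)

    def execute(self, thread):
        # Delegate to the first registered executor (a toy dispatch policy).
        return self.executors[0].execute(thread)
```

A top-level Manager can then register a sub-Manager exactly as it would register an ordinary Executor, which is what makes hierarchical multi-clustering possible.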
Fig. 4: Cross-Platform Grid
Grid Thread Programming Model: Alchemi simplifies the development of grid applications by providing an object-oriented programming model. To develop and execute a grid application, a custom grid class is created that derives from the abstract GThread class. An instance of the GApplication object is created, and any dependencies required by the application are added to its DependencyCollection.
Instances of the GThread-derived class are then added to the GApplication's ThreadCollection.
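The workflow just described can be mimicked in a short local sketch. The class and member names mirror the text (GThread, GApplication, ThreadCollection), but this is a local simulation in Python, not Alchemi's actual .NET API:

```python
# Local sketch of the Alchemi grid-thread programming model (assumed names).

class GThread:
    """Abstract unit of parallel execution; subclasses override start()."""
    def start(self):
        raise NotImplementedError

class GApplication:
    def __init__(self):
        self.dependencies = []     # stands in for DependencyCollection
        self.threads = []          # stands in for ThreadCollection

    def add_dependency(self, dep):
        self.dependencies.append(dep)

    def add_thread(self, thread):
        self.threads.append(thread)

    def run(self):
        # In Alchemi, threads would be dispatched to remote Executors;
        # here we simply run them locally in order.
        return [t.start() for t in self.threads]

class SquareThread(GThread):
    """Example custom grid class derived from GThread."""
    def __init__(self, n):
        self.n = n

    def start(self):
        return self.n * self.n
```

The application developer only writes the GThread subclass and adds instances to the application; dispatching and result collection are handled by the framework.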
The Lifecycle of Grid application is shown in Fig.5.
Model of Transmission and Distribution System:
Electric power is normally generated at 11-25KV in a power station [7]. To transmit it over long distances, it is stepped up to 400KV or 220KV as necessary. Power is carried through a transmission network of high-voltage lines. Usually, these lines run for hundreds of kilometers and deliver the power into a common power pool called the power grid. The power grid is connected to load centers (cities) through a sub-transmission network of normally 33KV (or sometimes 66KV) lines. These lines terminate at a 33KV (or 66KV) substation, where the voltage is stepped down to 11KV for power distribution to load points through a distribution network of lines at 11KV and lower. The power network that generally concerns the common man is the distribution network of 11KV lines or feeders downstream of the 33KV substation. Each 11KV feeder that emanates from the 33KV substation branches further into several subsidiary 11KV feeders to carry power close to the load points (localities, industrial areas, villages, etc.).
Fig. 5: Interaction between owner and Manager nodes
At these load points, a transformer further reduces the voltage from 11KV to 415V to provide the last-mile connection through 415V feeders (also called Low Tension (LT) feeders) to individual customers, either at 240V (as single-phase supply) or at 415V (as three-phase supply).
Lack of information at the base station (33KV substation) on the loading and health status of the 11KV/415V transformer and associated feeders is one primary cause of inefficient power distribution. Due to absence of monitoring, overloading occurs, which results in low voltage at the customer end and increases the risk of frequent breakdowns of transformers and feeders.
The model of the transmission and distribution system is shown in Fig. 6. The data present in any substation can be accessed using our application, which provides several benefits [8,9]. Fig. 9 shows the % of HT line loss for Table 2; it can be seen that when the voltage is increased, the line loss is reduced. Fig. 10 shows the % of HT line loss for Table 3; it can be seen that when the power factor is increased, the line loss is decreased.
Fig. 10: Power factor Vs. HT line loss
The existing HT line loss in TNEB is 18%. At present, HT line loss is calculated for every quarter of the year. By using our grid computing framework it is easy to monitor and control the energy flow and line loss in the electrical power grid dynamically, and decisions can be made faster than with the existing method. This will improve the electrical system's reliability, availability and maintainability.
If the line loss is reduced from 18% to 1% using the grid computing method, revenue will increase to an appreciable level.
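To illustrate why higher voltage and higher power factor reduce HT line loss, a small worked example uses the standard three-phase relations (P = sqrt(3) * V * I * pf, loss = 3 * I^2 * R). All numbers below are illustrative assumptions, not TNEB data:

```python
import math

def line_loss_kw(p_kw, v_kv, pf, r_ohm):
    """I^2 R loss for a balanced 3-phase line.
    Line current: I = P / (sqrt(3) * V * pf); loss = 3 * I^2 * R."""
    i = (p_kw * 1e3) / (math.sqrt(3) * v_kv * 1e3 * pf)
    return 3 * i ** 2 * r_ohm / 1e3   # result in kW

# Same 5 MW load on a line with 2 ohm per-phase resistance (illustrative):
low       = line_loss_kw(5000, 11, 0.80, 2.0)   # base case
high_v    = line_loss_kw(5000, 22, 0.80, 2.0)   # doubled voltage
better_pf = line_loss_kw(5000, 11, 0.95, 2.0)   # improved power factor
```

Doubling the voltage halves the current and therefore quarters the I^2 R loss, and raising the power factor likewise lowers the current and the loss, matching the trends seen in Figs. 9 and 10.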
CONCLUSION
By monitoring and controlling parameters such as voltage, load, frequency, power factor, KVA, KW, KWH and KVAR in electrical power transmission and distribution systems using our grid computing application, the line loss is reduced. Our experimental results show that when the power factor and voltage are increased, the line loss can be reduced. Thus, by reducing the line loss, the performance of the system is improved and the revenue automatically increases. | 2,450 | 2007-05-31T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
NEAT1 knockdown enhances the sensitivity of human non-small-cell lung cancer cells to anlotinib
Anlotinib treatment of non-small cell lung cancer (NSCLC) is hindered by drug insensitivity. Downregulation of the long non-coding RNA (lncRNA) NEAT1 can suppress the proliferation and invasion of NSCLC cells. This study explored the role of the combination of anlotinib with NEAT1 knockdown in NSCLC progression. A549 and NCI-H1975 cells were used to evaluate the effect of anlotinib with NEAT1 knockdown on NSCLC cells in vitro. The proliferation, invasion, migration, and apoptosis of NSCLC cells were evaluated with CCK-8 assays, EdU staining, Transwell assays, and flow cytometry. The antitumor effect of anlotinib with NEAT1 knockdown was further explored in a mouse xenograft model. NEAT1 knockdown enhanced the inhibitory effect of anlotinib on NSCLC cell proliferation, migration, and invasion. NEAT1 knockdown also increased the pro-apoptotic and cytotoxic effects of anlotinib through downregulation of the Wnt/β-catenin signaling pathway. The inhibitory effect of anlotinib on tumor growth was boosted in the presence of NEAT1 knockdown in vivo. NEAT1 knockdown promoted NSCLC cell sensitivity to anlotinib in vitro and in vivo. Thus, the combined treatment of anlotinib with NEAT1 knockdown may provide a new therapeutic approach for NSCLC patients.
INTRODUCTION
Lung cancer is the most frequently diagnosed cancer and a leading cause of cancer-related deaths worldwide [1,2,3]. Non-small cell lung cancer (NSCLC) accounts for 85% of lung cancer cases [4,5]. For patients with NSCLC of stage I to IIIA, the five-year survival rate ranges from 14% to 49% [6]. However, for patients of stage IIIB/IV, the five-year survival rate is less than 5% [6]. Despite advancements in radiotherapy, immunotherapy, and chemotherapy on NSCLC patient outcomes, the prognosis remains unsatisfactory [6,7]. Thus, improvements to current NSCLC therapeutic strategies are needed.
AGING
Long non-coding RNAs (lncRNAs) are a group of non-protein-coding RNAs that are more than 200 nucleotides in length [15]. LncRNAs regulate gene expression at multiple levels, including transcriptional, translational, and post-translational [15,16]. Aberrantly expressed lncRNAs have recently emerged as biomarkers, prognostic factors, and therapeutic targets for cancers [17]. In a previous study by Zhao et al., the knockdown of the long noncoding RNA nuclear paraspeckle assembly transcript 1 (NEAT1) inhibited NSCLC progression in vitro [18]. However, the role of combined NEAT1 knockdown and anlotinib treatment in NSCLC progression is not well known. In this study, we explored the effect of anlotinib and NEAT1 knockdown on NSCLC in vitro and in vivo.
Cell culture of A549 and NCI-H1975
A549 and NCI-H1975 cells were acquired from ATCC (American Type Culture Collection, Manassas, VA, USA) and cultured in RPMI 1640 (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 100 U/ml penicillin, 0.1 mg/ml streptomycin, and 10% FBS. The cells were maintained in a humidified incubator at 37°C with 5% CO2.
Cell counting kit-8 (CCK-8) assay
The CCK-8 kit (Dojindo, Kumamoto, Japan) was used to measure cell viability. A549 and NCI-H1975 cells from each group were seeded onto 96-well plates at a density of 3,500 cells/well and cultured overnight. After the indicated treatments, the cells were washed twice with PBS and then incubated with the CCK-8 reagent at 37°C for 60 min following the manufacturer's procedures. The optical density at 450 nm, measured by a microplate reader, indicated cell viability.
Combination studies
The drug combination study was conducted by calculating the combination index (CI) using the Chou-Talalay method [19]. A549 and NCI-H1975 cells were exposed to anlotinib at 0, 10, 20, 30, or 40 µM with or without NEAT1 siRNA3 (10 nM). The CI value for the combination of anlotinib and NEAT1 siRNA3 was calculated as previously described using the following formula: CI = D_A/IC_x,A + D_B/IC_x,B [20].
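The CI formula above can be computed directly. The sketch below is illustrative only; the function name and example doses are our assumptions, and the interpretation thresholds follow the convention that CI < 1 indicates synergy, CI = 1 an additive effect, and CI > 1 antagonism:

```python
def combination_index(dose_a, ic_x_a, dose_b, ic_x_b):
    """Chou-Talalay combination index at effect level x:
    CI = D_A / IC_x,A + D_B / IC_x,B
    where D_A, D_B are the doses used in combination and IC_x,A, IC_x,B
    are the single-agent doses producing the same effect x.
    CI < 1: synergy; CI = 1: additive; CI > 1: antagonism."""
    return dose_a / ic_x_a + dose_b / ic_x_b
```

For instance, if each drug in the combination is used at half of its single-agent IC50, the formula returns exactly 1.0 (additive); any combination achieving the same effect at lower doses yields CI < 1.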
5-ethynyl-2′-deoxyuridine (EdU) fluorescence staining
EdU staining was employed using the Cell-Light EdU DNA cell proliferation kit (RiboBio, Guangzhou, China) to determine the proliferation of NCI-H1975 and A549 cells after the indicated treatments. All experimental steps were conducted according to the manufacturer's instructions. The frequency of EdU-positive cells was calculated from three random fields.
Cell invasion assay
Cell invasion assays were conducted with Transwell chambers (Corning, NY, USA) coated with Matrigel (BD Biosciences, Franklin Lakes, NJ, USA). After 24 h of transfection, A549 and NCI-H1975 cells were harvested, resuspended in 200 µL serum-free medium, and seeded onto the upper chamber at a density of 3 × 10^4 cells/well. Then, anlotinib (20 μM) was used to treat the cells for 24 h. Cell culture medium supplemented with 10% FBS was added to the lower chamber to stimulate cell invasion. After 24 h of incubation, invaded cells were fixed with 4% paraformaldehyde and stained with crystal violet (0.1%) for 15 min. Images were captured under an Olympus microscope, and the invaded cells were counted from three microscopic fields.
Wound healing assay
After 24 h of transfection, A549 and NCI-H1975 cells were seeded onto 6-well plates and cultured until the cells reached 90% confluence. A sterile 100 μL pipette tip was used to scrape the wound. The cells were treated with culture medium supplemented with anlotinib (20 μM). After 24 h of incubation, the cells were fixed with 4% paraformaldehyde. The migration rate was measured based on the migration distances.
Apoptosis assay
Apoptotic cells were determined by flow cytometry (FACScan, BD Biosciences) after the cells were double-stained with Annexin V and propidium iodide (PI). The Annexin V-FITC Apoptosis Detection Kit (Thermo Fisher Scientific, Waltham) was used for the apoptosis assay as per the manufacturer's protocols.
Western blot assay
Cells from each group were lysed with RIPA buffer (Beyotime Biotechnology, Shanghai, China). The concentration of total protein was determined using a BCA kit (Pierce Biotechnology, Rockford, IL, USA). Total protein of 30 µg was resolved by SDS-PAGE, transferred onto a PVDF membrane, blocked with 5% non-fat milk, and immunoblotted with antibodies. TBST was used as the washing buffer. Immunoblots were visualized using ECL detection kits (Merck Millipore, Billerica, MA, USA).
Mouse xenograft models
Four-week-old BALB/c nude mice were acquired from Vital River (Beijing, China), housed in a standard animal laboratory, and allowed free access to water and food. All procedures involving animals were performed in accordance with the NIH Guide for the Care and Use of Laboratory Animals. The protocol was approved by the ethics committee for laboratory animal care and use of the Affiliated Lianyungang Hospital of Xuzhou Medical University. To induce tumor formation, 5 × 10^6 A549 cells were injected subcutaneously into the right flank of each mouse. When the tumor volume reached 180 mm^3, the mice were randomly separated into three groups (n = 6 per group). Anlotinib or the combination of anlotinib and NEAT1 siRNA3 was used to treat the mice. Anlotinib was administered at 6 mg/kg/day for 14 days. NEAT1 siRNA3 (50 nM) was injected into the tumor twice per week. Mice injected with physiological saline were used as controls. All mice were sacrificed at week 4 after treatment. Tumors were harvested and weighed immediately after sacrifice. Tumor volumes were calculated using the following standard formula: length × width^2 / 2.
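The standard xenograft volume estimate used above can be written as a one-line helper. This is a sketch of the stated formula, not the authors' analysis code:

```python
def tumor_volume(length_mm, width_mm):
    """Standard xenograft tumor-volume estimate: V = length * width^2 / 2,
    with length and width in mm and the result in mm^3."""
    return length_mm * width_mm ** 2 / 2
```

For example, a tumor measuring 10 mm by 6 mm is estimated at 10 * 36 / 2 = 180 mm^3.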
Statistical analysis
All experimental data are presented as the mean ± SD. At least three replicates were performed for all experiments. Significant differences between groups were evaluated by one-way ANOVA followed by a post hoc Tukey's test using GraphPad Prism 8 software.
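The one-way ANOVA used for the group comparisons computes an F statistic as the ratio of between-group to within-group mean squares. The minimal pure-Python sketch below illustrates only that F calculation (not GraphPad Prism's implementation, and without the Tukey post hoc step):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic:
    F = (between-group sum of squares / (k - 1)) /
        (within-group sum of squares / (n - k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group variability: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group variability: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

When all group means coincide, F is 0; well-separated groups with small within-group spread produce a large F, which is then compared against the F distribution with (k - 1, n - k) degrees of freedom.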
NEAT1 knockdown increases the inhibitory effect of anlotinib on cell viability
NEAT1 siRNA1, siRNA2, or siRNA3 was transfected into A549 and NCI-H1975 cells to knock down NEAT1. The knockdown efficacy of each siRNA was detected with RT-qPCR after 24 h of transfection. NEAT1 siRNA3 achieved the best knockdown efficacy in A549 and NCI-H1975 cells (Figure 1A, 1B) and was therefore utilized for the subsequent knockdown experiments. Next, NEAT1 siRNA3 (0, 5, 10, 20, or 40 nM) was transfected into A549 and NCI-H1975 cells, and its influence on the viability of NSCLC cells was evaluated by CCK-8 assays. As shown in Figure 1C and Supplementary Figure 1A, NEAT1 siRNA3 repressed the viability of A549, NCI-H1975 and BEAS-2B cells. A 10 nM concentration of NEAT1 siRNA3 induced a moderate decline in cell viability and was subsequently used to treat NSCLC cells together with anlotinib. The effect of NEAT1 siRNA3 combined with anlotinib (from 0 to 40 μM) on cell viability was detected by CCK-8 assays.
As indicated in Figure 1D and 1E, anlotinib alone suppressed the viability of A549 and NCI-H1975 cells, and NEAT1 siRNA3 further enhanced this inhibitory effect. However, BEAS-2B cell viability did not differ significantly between the anlotinib group and the anlotinib + NEAT1 siRNA3 group (Supplementary Figure 1B). NEAT1 siRNA2 exhibited similar effects on NCI-H1975 cell viability (Supplementary Figure 1A). In addition, anlotinib had no effect on the expression of NEAT1 in NCI-H1975 cells (Figure 1F).
The IC50 value of anlotinib was 23.73 μM and 24.72 μM in A549 and NCI-H1975 cells, respectively. When combined with NEAT1 siRNA3 (10 nM), the IC50 value was decreased to 8.32 μM and 8.55 μM, respectively. Moreover, the CI value of the combination of anlotinib and NEAT1 siRNA3 was 0.67 in A549 cells and 0.65 in NCI-H1975 cells ( Table 1). The CI values were > 0.6 but < 0.8, which indicates a moderate synergism effect [20]. These results suggest that combining anlotinib with NEAT1 siRNA3 synergistically inhibits the viability of A549 and NCI-H1975 cells. In contrast, overexpression of NEAT1 reversed the inhibitory effect of anlotinib on NCI-H1975 cell viability (Supplementary Figure 3A and 3B).
NEAT1 knockdown enhances the inhibitory effect of anlotinib on cell proliferation
EdU staining was used to determine the effect of NEAT1 siRNA3 in combination with anlotinib on cell proliferation. Anlotinib inhibited proliferation in A549 and NCI-H1975 cells (Figure 2A, 2B). Sole treatment with NEAT1 siRNA3 (10 nM) had no significant effect on the proliferation (Figure 2A, 2B). However, the combined treatment of NEAT1 siRNA3 with anlotinib remarkably enhanced the anti-proliferation effect of anlotinib in A549 and NCI-H1975 cells (Figure 2A, 2B).
NEAT1 knockdown increases the anti-invasion and anti-migration effect of anlotinib in A549 and NCI-H1975 cells
To explore the effect of NEAT1 siRNA3 and anlotinib on the invasion and migration of A549 and NCI-H1975 cells, cell invasion and wound healing assays were conducted, respectively. Anlotinib or NEAT1 siRNA3 suppressed the invasion of A549 and NCI-H1975 cells (Figure 3A, 3B). The inhibitory effect of anlotinib on cell invasion was improved by combined treatment with NEAT1 siRNA3 (Figure 3A, 3B and Supplementary Figure 1B). Anlotinib or NEAT1 siRNA3 repressed the migration of A549 and NCI-H1975 cells (Figure 3C, 3D). The combined treatment exhibited a greater inhibitory effect on cell migration than anlotinib alone. These results demonstrate that NEAT1 knockdown increases the anti-invasion and anti-migration effect of anlotinib in A549 and NCI-H1975 cells.
NEAT1 knockdown enhances the pro-apoptotic effect of anlotinib
We examined the effect of NEAT1 siRNA3 and anlotinib on cell apoptosis through Annexin V/PI staining. NEAT1 siRNA3 had little effect on apoptosis in the A549 and NCI-H1975 cells (Figure 4A, 4B). In contrast, anlotinib increased cell apoptosis, and this increase was enhanced by the presence of NEAT1 siRNA3 (Figure 4A, 4B). In addition, restoration of NEAT1 expression could reverse the effect of anlotinib on cell viability (Supplementary Figure 2A, 2B). These results illustrate that NEAT1 knockdown enhances the pro-apoptotic effect of anlotinib in A549 and NCI-H1975 cells.
NEAT1 siRNA3 facilitates the cytotoxic effect of anlotinib through downregulation of the Wnt/β-catenin signaling pathway
Western blot was utilized to explore the mechanism of the combination treatment in A549 and NCI-H1975 cells. NEAT1 can act as an oncogenic lncRNA in NSCLC by modulating the Wnt/β-catenin signaling pathway [21]; thus, the expression of β-catenin and c-Myc was detected. Since anlotinib is a VEGFR2 inhibitor, the expression of VEGFR2 and p-VEGFR2 was also detected by western blot [9]. As shown in Figure 5A and 5B, anlotinib inhibited the expression of p-VEGFR2 but had no influence on the expression of VEGFR2, β-catenin, or c-Myc in A549 and NCI-H1975 cells. NEAT1 siRNA3 alone had no effect on p-VEGFR2 expression but suppressed the expression of VEGFR2, β-catenin, and c-Myc in both cell lines (Figure 5A, 5B). In addition, the combination of NEAT1 siRNA3 with anlotinib repressed the expression of p-VEGFR2, VEGFR2, β-catenin, and c-Myc in A549 and NCI-H1975 cells (Figure 5A, 5B). Consistently, NEAT1 overexpression reversed the anti-proliferative effect of anlotinib (Supplementary Figure 3A, 3B). These results demonstrate that NEAT1 siRNA3 enhances the sensitivity of NSCLC cells to anlotinib by regulating the Wnt/β-catenin signaling pathway.
NEAT1 knockdown enhances the anti-tumor effect of anlotinib in vivo
The effect of NEAT1 siRNA3 combined with anlotinib on tumor growth was investigated in a mouse xenograft model. As shown in Figure 6A-6C, sole treatment with anlotinib inhibited the increase in tumor volume and weight compared to the control group. Moreover, the combined treatment of NEAT1 siRNA3 with anlotinib remarkably decreased the tumor volume and weight compared to anlotinib alone (Figure 6A-6C). The reduction in tumor weight after anlotinib treatment improved from approximately 40% to nearly 80% with the combined NEAT1 siRNA3 therapy (Figure 6D). In addition, NEAT1 siRNA3 effectively inhibited the level of NEAT1 in tumor tissues (Figure 6E). These findings suggest that NEAT1 knockdown enhances the anti-tumor effect of anlotinib in vivo.
DISCUSSION
Anlotinib is used as a third-line or later treatment for advanced refractory NSCLC due to its efficacy in extending PFS and OS [12,13]. However, drug insensitivity obstructs its anti-tumor effectiveness. Knockdown of NEAT1 can inhibit NSCLC progression in vitro, but the role of NEAT1 in enhancing sensitivity to anlotinib remained unexplored [18]. Our findings show for the first time that combined NEAT1 knockdown and anlotinib treatment inhibited the progression of NSCLC.
In addition, it was previously reported that NEAT1 affects NSCLC cells by regulating Wnt/β-catenin signaling [21]. In particular, the downregulation of NEAT1 repressed the activity of Wnt/β-catenin signaling pathway, which suppressed the proliferation, migration, and invasion of NSCLC cells [21]. Our findings demonstrated that NEAT1 siRNA3 enhanced the sensitivity of NSCLC cells to anlotinib by inhibiting the Wnt/β-catenin signaling pathway. Thus, our findings further validate the involvement of Wnt/β-catenin signaling pathway in NEAT1 on NSCLC progression.
However, the regulatory mechanism of NEAT1 in combination with anlotinib on NSCLC cells has not been fully revealed. Yu et al. reported that Krüppellike factor 3 (KLF3) was associated with the role of NEAT1 in regulating the proliferation, apoptosis, and invasion of NSCLC cells [22]. According to the findings of Yu et al., LncRNA NEAT1 sponges microRNA(miR)-1224. miR-1224 then binds to 3′UTR of Krüppel-like factor 3 (KLF3), affecting the proliferation, apoptosis, and invasion of A549 cells [22]. Kong et al. reported that NEAT1 promoted NSCLC progression through the miR-101-3p/SOX9/Wnt/β-Catenin signaling pathway [23]. Taken together, these mechanisms of NEAT1 on NSCLC progression need to be further investigated when NEAT1 is used in combination with anlotinib.
In conclusion, we demonstrated that NEAT1 knockdown promotes the sensitivity of NSCLC cells to anlotinib through downregulation of the Wnt/β-catenin signaling pathway. The combined treatment of anlotinib with NEAT1 knockdown provides novel insights into developing a combined therapeutic approach against anlotinib insensitivity in NSCLC patients.

[Figure legend] NCI-H1975 cells were transfected with pcDNA3.1-NEAT1 for 24 h. (A) The level of NEAT1 in cells was detected with RT-qPCR. (B) Cell viability was detected with a CCK8 assay. **P < 0.01 compared with the control group; ##P < 0.01 compared with the anlotinib group.
Adoption Conceptual Model for Intelligent Waste Management in Smart Cities: Theoretical Review
Purpose – Adoption of technologies in waste management in developing countries has largely lagged, leading to poor waste collection and disposal and exposing city dwellers to health hazards and points of extortion. The delay has been occasioned by several technology adoption inhibitors. This paper therefore proposes an integration of three adoption models: diffusion of innovation (DoI), technology acceptance model (TAM), and technology readiness index (TRI), towards enhancing understanding of the factors that may influence acceptance and use of a smart waste management system in a smart city.
Method – This paper critically reviewed the available literature on the DoI, TAM, and TRI models, highlighted the challenges of applying each model, and thereafter proposed an integrated model based on the strengths exhibited by each model.
Smart cities
The International Telecommunication Union (ITU) describes a smart sustainable city as an innovative city that uses ICTs and other means to improve quality of life, the efficiency of urban operations, service provision, and competitiveness, while ensuring that it meets the needs of present and future generations concerning economic, social, and environmental aspects (ITU, 2014). This means that smart cities gather and use real-time data analytics for prediction and planning of future growth, infrastructure development, and maintenance to meet the ever-changing demands of citizens. Smart cities are characterized by integrated systems which facilitate smart mobility (transport systems), energy-efficient buildings (Ernst & Young, 2015), healthcare, waste management (Hoornweg & Bhada-Tata, 2012), e-governance (Nielsen, 2017), economic activities, water resource management, and smart users. This integration promotes data generation, processing, mining, and shared use for improved performance and optimal utilization of resources. By leveraging digital technologies, smart cities can overcome the limitations of managing urban infrastructure and isolated developments (Economic and Social Council, 2016). Thus, the cities are meticulously planned to allow future expansion.
According to McKinsey & Company (2016), 70% of the sustainable development goals will be realized through smart cities. This is because smart cities are progressive, resource-efficient, and provide high-quality services to city dwellers. This has been achieved by creating synergies across systems to provide objectives and solutions for the dynamic environment of cities. For instance, in Gujarat International Finance Tec-City in India, multiple utilities are co-located into a single data stream forming an integrated intelligent system that can be managed centrally (Economic and Social Council, 2016); in Barcelona, Spain, through the GrowSmarter Project, the city installed fiber optics interconnecting major installations and services, enabling open, efficient, and user-friendly services (European Commission, 2017); in Bristol, United Kingdom, the city implemented the Replicate Project, which deployed smart integrated energy, mobility, and ICT solutions in a bid to curb carbon emissions through efficient use of clean energy (European Commission, 2017); and similarly, in Cologne, Germany, the city is implementing smart solutions and integrated infrastructure to reduce carbon emissions and thus become sustainable (European Commission, 2017). These are just some of the modern cities that have applied ICT solutions to enhance the quality of service provision to citizens. Unlike in these modern cities, the adoption of technology and the success of smart cities in developing countries have generally lagged as a result of inadequate ICT infrastructure (inconsistent network connectivity), low ICT literacy levels, poor government policies supporting automation, resistance to change, lack of experienced professionals, and insufficient funding (Vu & Hartley, 2018).
Smart cities have particularly flourished with the advent of cloud computing, open data, and the Internet of Things (IoT), which integrates data from smart objects and applies analytics tools to provide highly specialized data-driven decisions. According to Gartner (2017), IoT devices will increase from 8.4 billion in 2017 to 20.4 billion by 2020. In turn, this will push the number of smart city devices past 1 billion by 2025 (IHS Markit, 2016). To successfully implement a smart city, the following digital infrastructure is key: a comprehensive and high-speed network; big data; IoT devices, sensors, and platforms (Economic and Social Council, 2016); applications and tools with data analysis capabilities to run on the physical infrastructure; and user adoption characteristics and experiences for better decisions and behavior change among city dwellers. This paper critically reviews technology adoption as a key driver of smart cities, with specific regard to the adoption of intelligent waste management systems. This is because smart cities require users to adopt and actively use the technologies productively in their day-to-day life, services, and business: for example, the use of parking apps to guide users to available parking spaces, enhanced mobility using taxi apps, the use of clean energy, and waste collection and disposal systems. These services only make sense when used by the intended users. The section below discusses waste collection and disposal as a key aspect of a smart city.
Waste Collection and Disposal
Waste generation is increasing at an alarming rate, and cities therefore experience challenges in sorting, recycling, and disposing of waste, especially solid waste (Wilson & Velis, 2014). In Norway, various municipalities manage waste efficiently through increased recycling and incineration for energy generation, to the extent of importing waste from other countries. However, in developing countries, most cities grapple with the problem of increased waste generation arising from a ballooning population, poor disposal attitudes, lack of disposal facilities, the inability of governments to enforce waste disposal laws and regulations, failure to prioritize waste management, insincerity among private-sector waste operators, and inadequate infrastructure (Becidan et al., 2015). In Nairobi City, 2,475 tonnes of waste are produced every day, but due to poor monitoring, collection, transportation, processing, recycling, and disposal, the waste has remained an eyesore in the city (Leah, 2018).
According to Otieno and Omwenga (2015), one of the key challenges in waste management is the inability to predict when waste bins are full for disposal, so that garbage collection trucks can be appropriately scheduled, reducing cases of waste spillage or the misuse of resources whereby a truck is sent to collect waste when the bins are not full. Introducing smart waste management systems will therefore leverage IoT sensors to send real-time data from the source to aid in the smart management of waste. Such a system creates methods for proper handling of waste, including enhanced efficiency in waste collection, categorization at the source, pick-up, reuse, and recycling. Systems of this kind are already in use in Santander, Spain (Urban Waste, 2017), and Sharjah, United Arab Emirates (NS Energy, 2020).
A typical smart waste management system uses sensors to measure the fill levels of waste collection bins. The measured data is transferred via cloud services to a central system, or to an onboard system connected to garbage trucks, for processing and analysis (Pardini et al., 2020). The sensors can segregate waste by separating solid waste from liquid waste to ease transportation. Through analysis, trash collections are planned and truck routes are optimized to reduce the cost associated with collecting from bins that are not yet full (Golubovic, 2018). The system can also employ a digital tracking and payment system which encourages users to dispose of waste correctly and receive payment in cash or in kind. According to a 2018 estimate, smart cities can reduce the volume of solid waste per capita by 10-20% and achieve a 30-130 kg/person annual reduction in unrecycled solid waste, thus delivering a cleaner and more sustainable environment. Therefore, the need to implement an intelligent waste management system to enhance the city's capability to collect and dispose of waste, especially in Nairobi City, is long overdue. However, due to the inherent adoption challenges (such as resistance to change by users/stakeholders, endemic corruption, inadequate ICT infrastructure, and low ICT literacy levels (Leah, 2018)), there is a need to apply an appropriate technology adoption framework for widespread acceptance and use of the system. This paper therefore reviews three technology adoption models, namely diffusion of innovation, technology acceptance model, and technology readiness index, and proposes the integration of the three models to enhance understanding of the factors that may influence acceptance and use of a smart waste management system as a critical aspect of a smart city.
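The threshold-based scheduling idea described above can be sketched in a few lines. The function name and the 75% threshold below are illustrative assumptions, not details of any deployed system:

```python
# Sketch: dispatch trucks only to bins whose sensor-reported fill level
# exceeds a threshold, avoiding pickups from bins that are not yet full.
# The 0.75 threshold and bin IDs are hypothetical.

def bins_to_collect(fill_levels, threshold=0.75):
    """Return IDs of bins whose fill level (0.0-1.0) warrants a pickup."""
    return [bin_id for bin_id, level in fill_levels.items()
            if level >= threshold]

readings = {"bin_01": 0.92, "bin_02": 0.40, "bin_03": 0.78}
print(bins_to_collect(readings))  # ['bin_01', 'bin_03']
```

In a real deployment the selected bins would then feed a route optimizer; here the sketch only shows the filtering step that saves unnecessary trips.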
Diffusion of Innovation (DoI) Model
The DoI model deals with the speed at which an innovation is adopted by members of a social system, measured by the number of users adopting the innovation over some period of time (Rogers, 2003). DoI proposes that the adoption of innovations is shaped by time, channels of communication, the innovation itself, and social systems (Sila, 2015). Dillon and Morris (1996) opined that a technology or innovation spreads at a rate proportionate to its level of integration with the existing beliefs, practices, norms, and culture of the society. The theory holds that adoption of an innovation is a decision to make full use of the innovation as the best course of action (Rogers, 2003). Considering the heterogeneity of society, the level of acceptance varies based on adopters' characteristics, ranging from the earliest to the latest adopters. Rogers (1983) categorizes members of social systems as innovators (2.5%), early adopters (13.5%), early majorities (34%), late majorities (34%), and laggards (16%). Each member of society therefore plays a critical role in the adoption of technology. These roles are affected by user-perceived adoption factors (Rogers, 1983), including complexity, the perceived effort users must put in to use the technology; trialability, the initial phase of familiarization with and experiencing the functionality of the system before deciding whether to adopt it; observability, whereby the system provides observable results; relative advantage, the perceived benefits that accrue to users from using the system; and perceived compatibility, the level of integration of the system with existing technologies and the users' way of life (Lundblad, 2003).
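The adopter shares quoted above (2.5%, 13.5%, 34%, 34%, 16%) imply cumulative cut-offs at 2.5%, 16%, 50%, and 84% of the adopter population. A minimal sketch of that mapping follows; the function name and interface are our own, not part of Rogers's work:

```python
# Map an adopter's position in the adoption order (as a percentile,
# earliest first) to Rogers's (1983) adopter categories using the
# cumulative shares: 2.5%, 16%, 50%, 84%, 100%.

def rogers_category(percentile):
    """percentile: adopter's position in [0, 100), earliest adopters first."""
    bounds = [(2.5, "innovator"), (16.0, "early adopter"),
              (50.0, "early majority"), (84.0, "late majority"),
              (100.0, "laggard")]
    for upper, label in bounds:
        if percentile < upper:
            return label
    return "laggard"

print(rogers_category(1.0))   # innovator
print(rogers_category(40.0))  # early majority
```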
Several scholars have applied DoI to study technology acceptance and use. For instance, Zhang et al. (2015) applied DoI to understand the factors impacting patient acceptance and use of consumer e-health innovations; the study found that the majority of patients did not adopt the innovation due to insufficient communication, lack of value of the service, incompatibility of the new service, and limitations of the characteristics of the patients. Xue (2017) applied DoI to characterize faculty attending professional development programs and found that characterizing and leveraging the types of adopters, and targeting the needs of each adopter present in groups of participants, can enhance the effectiveness of a program and increase adoption. Sasaki (2018) applied DoI to educational accountability; the study showed that targeted aspects of curriculum policies were affected by all characteristics of DoI, even though relative advantage and observability stood out compared with the rest.
DoI can be applicable in determining the adoption of a smart waste management system because:
i. It provides a tool for measuring how, why, and how fast an innovation meets its intended goals.
ii. It would be important to determine the level of fit (integration) of the new intelligent waste management system with the existing beliefs, practices, norms, and culture of the society (city dwellers).
iii. It would be necessary to assess the new intelligent system's complexity, in terms of the perceived effort users must put in to use the smart waste collection technology.
iv. It should enable trialability by different types of users (intelligent system developers, system administrators, support staff, and end-users).
v. Observability: the system should provide visible output, e.g. chart-based reports and alerts, or even smell- or taste-based reports for the various types of wastes.
vi. Relative advantage: the intelligent system should be advantageous to use compared with the current manual waste detection and management system.
vii. Perceived compatibility: the system should have both backward compatibility (allowing current manual-system users the option to continue with the manual system) and forward compatibility with an automated intelligent system.
Therefore, a system which is reliable, user-friendly, provides observable results, allows trials before full implementation, and is compatible with the practices/norms of the city dwellers tends to be trusted and accepted for use by many users, unlike systems which are likely to introduce a new social order.
The weakness of this theory is that it focuses more on innovation than on information technology, and it does not support the participatory adoption of technology. DoI is also less practical in predicting outcomes, since it focuses more on system characteristics, organizational attributes, and environmental aspects. The model therefore lacks a psychometric characterization of users' behavioral intentions, such as perceived ease of use, perceived usefulness, and actual use, which are an outgrowth of attitude and thus can influence a user's decision to accept or reject a system, as proposed in TAM.
Technology Acceptance Model (TAM)
TAM has increasingly been applied in understanding technology adoption because the model outlines a psychometric characterization of users' behavioral intention to use technology (Davis, Bagozzi, & Warshaw, 1989) based on:
a) Perceived enjoyment: the degree to which users of the smart waste management system perceive use of the system as pleasant or enjoyable.
b) Perceived ease of use: users of the system need not undergo extreme training or skills enrichment to interact with the waste management system. The system should be user-friendly, with an intuitive and interactive interface and support services.
c) Perceived usefulness: the extent to which the system transforms an input into the desired output. The smart waste management system should be effective and efficient in managing waste collection and disposal. This agrees with the DoI model.
d) Attitude towards using the system: perceived ease of use, enjoyment, and usefulness of the system would shape users' attitudes towards either adopting or rejecting the smart waste management system.
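As a hypothetical illustration of how these constructs might be operationalized in a survey instrument, the sketch below averages 1-5 Likert items per construct and combines them with equal weights. The equal weighting and the 5-point scale are our assumptions for illustration, not part of TAM itself:

```python
# Sketch: score each TAM construct as the mean of its Likert items,
# then combine the constructs into a single attitude score.
# Equal weights are an illustrative assumption.

def construct_score(items):
    """Mean of 1-5 Likert responses for one construct (e.g. ease of use)."""
    return sum(items) / len(items)

def attitude_score(ease, usefulness, enjoyment):
    # Combine the three constructs from the list above with equal weights.
    return (construct_score(ease) + construct_score(usefulness)
            + construct_score(enjoyment)) / 3

# Two respondents' items per construct, on a 1-5 scale.
print(round(attitude_score([4, 5], [3, 4], [5, 5]), 2))  # 4.33
```

In practice, validated TAM questionnaires use multiple items per construct and estimate weights statistically (e.g. via regression or structural equation modeling) rather than assuming them.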
Perceived ease of use and perceived usefulness are moderated by the user's experience while working with the technology, which in turn influences the user's decision to accept or reject it. Asiri, Mohamud, Abu-Bakar, and Ayub (2012), cited in Alharbi and Steve (2014), confirmed that a positive attitude towards technology will likely motivate a user to utilize it. Other studies also found that beliefs were important in determining the use of technology. Alharbi and Steve (2014) note that the use of technology can be predicted by the competency level, which affects utilization. Technology acceptance can also be influenced by organizational, technological, and social barriers, and by demographic factors such as gender, computer self-efficacy, and level of training.
Scholars have applied TAM in understanding and explaining user behavior in the adoption of technology. For instance, Kalina and Marina (2017) applied TAM to study online shopping adoption among youth in the Republic of Macedonia and found that TAM served as a model for explaining online shopping behavior by presenting the current situation. Lule, Omwansa, and Waema (2012) applied TAM to M-banking adoption in Kenya and found that TAM constructs significantly influenced the adoption of M-banking services; the framework can thus be used as a guide when assessing the adoption of an M-banking service and, being generic, can be used in any developing country. Mugo, Njagi, Chamwei, and Motanya (2017) applied TAM in predicting the acceptance and utilization of various technologies in teaching and learning; the study found challenges of attitude towards technology, and that educators must work hard to address attitudinal issues arising from learners, staff, management, and policymakers. Waleed et al. (2019) integrated DoI and TAM to evaluate students' attitudes towards a MOOC learning management system and recommended that system developers, designers, and procurers should carefully study the needs of students and confirm that the chosen system successfully meets their expectations, since the MOOC system features significantly affected user adoption. Lee (2009) combined TAM with the theory of planned behavior to understand the perceived risks and benefits in the adoption of internet banking and found that perceived ease of use, perceived usefulness, attitude, subjective norm, and perceived behavioral control are the important determinants of online banking adoption. Moon and Young-Gul (2001) introduced a new variable, "playfulness", into TAM to study acceptance of the World Wide Web (WWW) and found that perceived playfulness influenced users' attitudes towards using the WWW and should therefore be a consideration in designing future WWW systems, by providing more concentration, curiosity, and enjoyment.
Despite this wide use, TAM does not measure a user's readiness, and what cannot be measured cannot be known. It is therefore not possible to predict the behavior of semi-skilled users in the early stages of using a new system. The model also deals with the perception of using technology rather than actual use. Moreover, TAM was built to predict adoption in the work environment and is thus less applicable in an environment where users are autonomous, such as a city (Lin et al., 2007). It would therefore be important to index users' readiness to accept and adopt the smart system, as proposed by the TRI model.
Technology Readiness Index (TRI)
This model measures a user's readiness to accept new systems, as influenced by the contributor factors of optimism and innovativeness and the inhibitor factors of discomfort and insecurity. The model describes how fast, and at what rate, users adopt technologies. According to Parasuraman (2000), users have increasingly amassed technology products and services, most of which did not provide any benefit to them. This is corroborated by the findings of Lin, Shih, and Sher (2007), whose study concluded that the higher the technology readiness of customers, the higher the satisfaction and behavioral intentions generated when using self-service technologies. Parasuraman and Colby (2001) categorized customers into explorers, pioneers, skeptics, paranoids, and laggards, whereby explorers are the early adopters of innovation while laggards are the late adopters. Explorers are driven by the technology-contributing factors of optimism and innovativeness, while laggards are driven by the technology-inhibiting factors of discomfort and insecurity. Pioneers tend to display similar beliefs to explorers, but also exhibit high discomfort and insecurity. Skeptics are dispassionate about technology, but also have few inhibitions; thus, they need to be convinced of the benefits of technology. Paranoids may find technology interesting, but they are also concerned about risks and exhibit high degrees of discomfort and insecurity (Massey, Khatri & Montoya-Weiss, 2007).
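The contributor/inhibitor arithmetic implied above can be sketched as follows. Reverse-scoring the inhibitor dimensions on a 1-5 scale is a common convention for readiness indices; the exact survey items and any weighting used in the published TRI instrument are omitted here, so this is an illustration of the structure rather than the validated index:

```python
# Sketch: readiness rises with the contributor dimensions (optimism,
# innovativeness) and falls with the inhibitor dimensions (discomfort,
# insecurity). Inhibitors are reverse-scored on a 1-5 scale.

def tri_index(optimism, innovativeness, discomfort, insecurity, scale_max=5):
    contributors = (optimism + innovativeness) / 2
    # Reverse-score inhibitors so a high raw score lowers readiness.
    inhibitors = ((scale_max + 1 - discomfort)
                  + (scale_max + 1 - insecurity)) / 2
    return (contributors + inhibitors) / 2

# An "explorer" (high contributors, low inhibitors) scores near the top;
# a "laggard" (the reverse) scores near the bottom.
print(tri_index(5, 5, 1, 1))  # 5.0
print(tri_index(1, 1, 5, 5))  # 1.0
```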
TRI has been used in many studies as an explanatory variable or as a moderator of a behavior, intention, or attitude. Pires, Costa Filho, and Cunha (2011) used TRI factors as differentiating elements between users and non-users of internet banking and found that the technology factors of optimism, security, and discomfort presented significant differences between users and non-users. Nihat and Murat (2011) applied TRI to investigate technology acceptance in e-HRIM and found that optimism and innovativeness positively influenced perceived usefulness and perceived ease of use, but discomfort and insecurity did not have a positive effect on adoption of the system.
TRI provides alternative perspectives and views on the adoption of and satisfaction with the technologies by identifying: the techno-ready users who champion and can influence adoption; the users who are thrilled about adoption but must be reassured of the benefits of adoption; and users who require strong conviction and proof of concept before they adopt.
The challenge of TRI is that it focuses mainly on experiences and demographics, and presupposes that for widespread adoption of technology users must be well equipped with the required infrastructure, skills, beliefs, and attitude. Therefore, beyond rating and diffusing smart waste management system users into early adopters, early majorities, late majorities, and laggards, there is a need to assess acceptance variation across and within these diffusion levels and to index the techno-readiness among users. The next section depicts the integration of the three adoption models towards understanding the factors that would influence the adoption of a smart waste management system in a smart city.
Graphical representation of the proposed model
Implementing smart city technologies often requires a robust, reliable, and affordable ICT infrastructure, an efficient ICT ecosystem, as well as the right attitude for users to accept, adopt, and use the technology. To gain a deeper understanding of the factors that would influence the adoption of an intelligent waste management system in a city in a developing country, for ease of planning, against the backdrop of the inherent technology adoption inhibitors discussed above, this paper proposes to integrate the DoI, TAM, and TRI models into a model intended to result in widespread actual use of the waste management system (Figure 1). It is envisaged that the integrated model will inform the government and city planners on strategies for implementing a system that will be widely accepted by users, for greater impact in waste management, which has become a challenge to many cities.
TRI was chosen because it can easily be applied to determine whether a city dweller is a technology user or not; TAM is ideal because it can determine users' perceptions of the ease of use and usefulness of a technology, which shape the willingness to accept or reject the innovation; and DoI was used because it captures the compatibility of a technology with current user activities and beliefs, supporting a positive behavioral intention to use it. Such an approach was initially proposed by Lin et al. (2007), who combined TAM and TRI, and by Waleed et al. (2019), who integrated DoI and TAM, while Walczuch, Lemmink, and Streukens (2007) associated the technology readiness construct directly with TAM's dimensions of perceived usefulness and perceived ease of use. However, the combination of two models would not adequately solve the adoption challenges. For instance: (i) whereas TRI antecedents may correlate with DoI constructs, their combination does not take into account mediating factors such as the psychometric characterization of users' behavioral intentions; (ii) a study by Pires et al. (2011) found that combining TRI and TAM led to only a 3% increase in the intention to use technology, and a study by Godoe and Johansen (2012) found that only optimism and innovativeness significantly affected perceived ease of use and perceived usefulness when TAM and TRI are combined; (iii) the DoI variables of complexity and relative advantage overlap with the TAM variables of perceived ease of use and perceived usefulness, respectively (Carter & Bélanger, 2005), so their combination may not provide a good prediction of adoption and use of technology. Therefore, the integration of the three models seeks to resolve the weaknesses of each model, whether applied independently or in a combination of two, by focusing on innovation (DoI), perceptions (TAM), and readiness (TRI).
Discussion of the Model
Previous studies discussed here have not explicitly dealt with how the DoI model relates to the behavioral intention to use a system as suggested in the TRI model. While such a link is worth exploring, the proposed model hypothesizes that, to maximize adoption of an intelligent waste management system that meets the DoI constructs of complexity, trialability, observability, compatibility, and relative advantage, both the intended users' readiness and their perceptions have to be considered, rather than just one of the two. As shown in Table 1 above, the readiness index is determined by the technology contributors of innovativeness and optimism and the inhibitors of discomfort and insecurity, which enable the identification of the techno-ready users who are likely to be the early adopters, and of the explorers, pioneers, skeptics, paranoids, and laggards, the last of whom tend to be the late adopters. Index level 1 indicates users who are extremely likely to adopt the technology, while level 5 represents those extremely unlikely to adopt. Explorers exhibit both optimism and innovativeness. Pioneers tend to display similar beliefs to explorers, but also exhibit high discomfort and insecurity. Skeptics are dispassionate about technology but also have few inhibitions; thus, they need to be convinced of the benefits of technology. Paranoids may find technology interesting, but they are also concerned about risks and exhibit high degrees of discomfort and insecurity. Laggards merely exhibit the factors of discomfort and insecurity. In this model, it is hypothesized that the DoI construct of compatibility is ideal for assessing the innovation in terms of backward and forward integration with existing or newer systems; observability assesses the system's provision of visible output for easier use; and trialability allows users to try and test the functionalities of the system before full deployment.
Perception with regard to ease of use, enjoyment, and usefulness provides the framework for understanding users' attitudes towards using the system. While perceived ease of use and perceived usefulness are likely to significantly influence the actual adoption/use of innovations, as has been validated by studies in TAM (Kalina & Marina, 2017; Lule et al., 2012; Mugo et al., 2017; Waleed et al., 2019; Lee, 2009; Moon & Young-Gul, 2001), the proposed model intends to create an understanding of the interconnectedness of the DoI, TAM, and TRI models by drawing on the strengths of each model, focusing on innovation (DoI), perceptions (TAM), and readiness (TRI), to help city planners formulate an appropriate strategy mix for the intended users of an intelligent waste management system. A techno-ready user at index level 1 is extremely likely to adopt the innovation, since such a user exhibits the ideal adoptive perception and behavior, unlike at level 5. The researchers intend to further this study by testing the hypothesis through actual implementation of the proposed model.
CONCLUSION
The DoI, TRI, and TAM models, in this theoretical review, prove to be complementary to each other. The TAM model supplies what is lacking in the DoI model, namely the psychometric characterization of users' behavioral intention, while the TRI model enables the measurement of users' readiness, which is not covered in TAM.
Therefore, the integration of these three models covers the two key actors in the adoption of an intelligent waste management system: the innovation itself, as addressed by the DoI model, and the intended users, characterized both by their perceptions through the TAM model and by their readiness through the TRI model.
The knowledge gained from this proposed integrated model is deemed advantageous for city planners in crafting more appropriate strategies for the adoption of a smart waste management system by the intended users, thus enabling developing countries to experience the benefits of intelligent systems and, consequently, to embrace further the concept of smart cities.
The study recommends the actual application of the model to test the hypothesis that integrating the models would enhance the adoption and use of intelligent waste management systems in smart cities.
Designing an Efficient and Secure Message Exchange Protocol for Internet of Vehicles
With the advancements in computation and communication technologies and the increasing number of vehicles, the concept of the Internet of Vehicles (IoV) has emerged as an integral part of daily life, and it can be used to acquire vehicle-related information including road congestion, road description, vehicle location, and speed. Such information is very vital and can benefit users in a variety of ways, including route selection. However, without proper security measures, the information transmitted among entities of the IoV can be exposed and used for malicious purposes. Recently, many authentication schemes were proposed, but most of those schemes are prone to insecurities or suffer from heavy communication and computation costs. Therefore, a secure message authentication protocol is proposed in this study for information exchange among entities of the IoV (SMEP-IoV). Based on secure symmetric lightweight hash functions and encryption operations, the proposed SMEP-IoV meets IoV security and performance requirements. For formal security analysis of the proposed SMEP-IoV, BAN logic is used. The performance comparisons show that SMEP-IoV is lightweight and completes the authentication process in just 0.198 ms.
Introduction
The Internet of Vehicles (IoV) is a self-organized network of vehicles on the road and the road side units (RSUs). The IoV provides inter-vehicle (V2V) and vehicle-to-RSU (V2R) communication infrastructure [1], which can benefit users in many ways, including information relating to road congestion and traffic issues, parking information, alternative routes, and warnings of potential accidents. Using this information, drivers can quickly make decisions relating to vehicles and/or roads. It can further help unmanned vehicles with accuracy and safety through the use of more sophisticated information and artificial intelligence techniques. The information exchanged, or the communication among entities of the IoV, always travels over a public wireless channel, which makes it prone to several attacks. An attacker can easily listen in and extract meaningful information from the exchanged messages. Such information can be crucial to the accuracy and safety of the vehicles in an IoV. The attacker can replay an old message or inject a message with entirely fake information, which can cause severe consequences for vehicles and riders, including accidents. Moreover, the overheard information can be used by an attacker to trace/track a vehicle or rider, and such information can be used for criminal or terrorist purposes. The information can also be faked for marketing purposes to gain the attention of riders, who may be attracted to a specific route through false traffic information or drawn to compete for parking lots [2]. Therefore, the security and privacy of the entire IoV, including the communicating entities, are more important than all other factors. This goal can be achieved through authentication of the entities, including vehicles, before communication among the entities of the IoV is initiated. In this study, we propose a lightweight symmetric key-based authentication scheme to secure message exchange among the entities of the IoV.
We organize the rest of the study as follows: Table 1 provides the notation guide. In Subsection 1.1, the system model is described. The motivations and contributions of the study are explained in Subsection 1.2, while Subsection 1.3 discusses the adopted adversarial model. Section 2 summarizes the existing related literature, whereas our proposed secure message exchange protocol for IoV (SMEP-IoV) is explained in Section 3. Using BAN logic, Section 4 formally proves the security of SMEP-IoV. In Section 5, a discussion of the functional security and attack resilience of the proposed SMEP-IoV is given. Security and performance comparisons of the proposed SMEP-IoV with related schemes are given in Section 6. The study is concluded in Section 7. Figure 1 shows a typical IoV scenario. It consists of vehicles, each with an installed processing unit called an on-board unit, which is responsible for communication and processing of the data exchanged between the vehicle and the other entities of an IoV. Along with the vehicles, there are road side units (RSUs), which are the infrastructure deployed along the road. Typically, communication takes place between vehicles and a nearby RSU. Moreover, inter-vehicle communication is also an important component of the IoV. The whole network is administered by a trusted authority called the vehicle server (VS). All vehicles and related entities (RSUs) join the IoV by registering with the VS. After registering with the VS, two entities can communicate with each other, for which both have to authenticate each other; this authentication ensures that both communicating entities are legitimate.
Motivation and Contributions.
Recently, many authentication schemes have been proposed to secure message exchange among the entities of an IoV. However, many of these schemes lack the required security features and resistance to known attacks. In this connection, the contributions of this study are as follows: (i) initially, we unveiled the insecurities of the IoV authentication scheme proposed by Yu et al.; we then proposed a robust authentication scheme using symmetric key-based encryption and hash functions; (ii) the security of the proposed scheme is proved through formal analysis; (iii) a comparative study of efficiency and security between the proposed scheme and several existing schemes is also provided.
Attack Model.
We have adopted the eCK adversary model [3], which assumes a stronger adversary than the DY [4] and CK [5] models. The eCK model is an extension of the CK model with a stronger adversary capable of launching a key compromise impersonation attack, in addition to controlling the communication channel, launching power analysis to extract secrets stored in a smart card, and accessing all public parameters [6,7].
Related Literature
In their survey, Contreras-Castillo et al. [8] pointed out some security requirements and suggested addressing authentication, integrity, confidentiality, and related security requirements before the IoV gains popularity. Some future directions were also discussed in [8]. In addition to the security requirements mentioned in [8], Mokhtar and Azab [9] stressed vehicle privacy, untraceability, access control, and resistance against tampering/forgery and jamming attacks.
In recent times, several authentication schemes have been proposed [10][11][12][13]. Two different schemes were proposed by Lin et al. [14] and Yin et al. [15] using hash chains. Both schemes provided efficient and rapid authentication but lacked vehicle/user anonymity. The absence of anonymity could lead to the leakage of sensitive vehicle/user information in an IoV. In 2015, the scheme of Li et al. [16] was shown to be weak against a session key disclosure attack by Dua et al. [17]. Afterwards, in 2016, Wang et al. proposed a smartcard-based two-factor authentication scheme for IoV [18], which Amin et al. [19] proved to be weak against many attacks, including vehicle/user forgery and smart card stolen attacks, and to lack anonymity. A pairing-based scheme was also proposed by Liu et al. [20]. However, due to the use of expensive pairing operations, a considerable delay can occur, which is unsuitable for fast-moving vehicles. Another lightweight scheme was proposed by Ying et al. [21]. However, Chen et al. [2] showed that the master secret key K_VS of the vehicle server can be extracted. Using K_VS, any dishonest vehicle in the system can launch attacks on any device. For example: (i) a dishonest vehicle with the extracted K_VS can disclose any session key shared between two vehicles. Suppose V_x initiates a login request by sending {M_i1, M_i2, M_AE, T_1}. By merely listening to the request, the dishonest vehicle can use M_i1, M_i2, and T_1 to compute the session key.
Proposed SMEP-IoV
The proposed secure message exchange protocol for IoV (SMEP-IoV) consists of four phases. Table 1 provides the notation guide for understanding the technical details of the proposed SMEP-IoV, which is described in the following subsections.
SMEP-IoV: Initialization.
The vehicle server (VS) selects its secret key K_vs and a one-way hash function h(·).
SMEP-IoV: RSU Registration.
During this phase, the VS registers all road side units by assigning each a unique identity ID_rj and a shared secret key K_rj = h(ID_rj‖K_VS).
The VS stores ID_rj in its database.
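The key derivation above can be sketched in a few lines of Python; SHA-256 stands in for the unspecified hash h, and the identities and master key are illustrative byte strings, not values from the scheme:

```python
import hashlib

def h(*parts):
    """One-way hash over the concatenation of the inputs; SHA-256 stands in for h."""
    return hashlib.sha256(b"||".join(parts)).digest()

def register_rsu(id_rj, k_vs):
    """VS-side derivation of the shared secret K_rj = h(ID_rj || K_VS)."""
    return h(id_rj, k_vs)

k_vs = b"master-secret-of-VS"         # hypothetical master key K_VS
k_rj = register_rsu(b"RSU-17", k_vs)  # hypothetical identity ID_rj
# Because K_rj is recomputable on demand from ID_rj, the VS needs to store only ID_rj.
```

Note the design consequence: the VS keeps no per-RSU key table, which is what later makes the stolen verifier attack moot.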
SMEP-IoV: Vehicle Registration. During this phase, the VS registers all vehicles by assigning each a unique identity ID_vi.
The VS stores ID_vi, PID_vi, A_vi, and B_vi in the memory of the vehicle V_i. Furthermore, the VS stores ID_vi in its own memory. Please note that, except for ID_vi, the VS does not store any other parameter relating to a vehicle, say V_i. Specifically, PID_vi, A_vi, and B_vi are not stored in the memory of the VS.
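A minimal sketch of the registration computations that produce PID_vi, A_vi, and B_vi (their definitions appear in the message-authentication phase: PID_vi = E_Kvs(ID_vi, r_0), A_vi = h(K_vs‖ID_vi), B_vi = h(PID_vi‖K_vs)). SHA-256 stands in for h, and a toy XOR stream cipher stands in for the unspecified symmetric cipher E; all names and values are illustrative:

```python
import hashlib
import os

def h(*parts):
    """SHA-256 stand-in for the scheme's one-way hash h."""
    return hashlib.sha256(b"||".join(parts)).digest()

def E(key, data):
    """Toy XOR stream cipher (SHA-256 keystream) standing in for a real
    symmetric cipher such as AES; encryption and decryption are identical."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def register_vehicle(id_vi, k_vs):
    """VS computes the vehicle's registration parameters."""
    r0 = os.urandom(16)                  # registration nonce r_0
    pid_vi = E(k_vs, id_vi + b"," + r0)  # pseudo-identity PID_vi = E_Kvs(ID_vi, r_0)
    a_vi = h(k_vs, id_vi)                # A_vi = h(K_vs || ID_vi)
    b_vi = h(pid_vi, k_vs)               # B_vi = h(PID_vi || K_vs)
    return pid_vi, a_vi, b_vi            # stored on the vehicle, not on the VS
```

Because A_vi and B_vi are recomputable from K_vs and the (decrypted) identity, the VS can later verify a vehicle without ever storing these secrets itself.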
SMEP-IoV: Message Authentication.
For message authentication, the vehicle V_i carries out the following steps with RSU_j and the vehicle server VS, in the sequence shown in Figure 2. V_i initiates the message authentication process by generating a fresh timestamp t_i and a random number r_i. On receiving M_vi = {PID_vi, M_i1, M_i2, t_i}, the RSU_j checks the freshness of t_i by comparing it with the current timestamp; if the delay is not within a predefined tolerable range ΔT, RSU_j terminates the process; otherwise, RSU_j generates a new timestamp t_j and a random number r_j. Moreover, RSU_j computes M_j1 = E_Krj(r_j, t_j) and sends M_rj1 = {ID_rj, M_vi, M_j1, t_j} to VS.
Security and Communication Networks
After receiving M_rj1 = {ID_rj, M_vi, M_j1, t_j}, the VS checks the freshness of t_j by comparing it with the current timestamp; if the delay is not within the predefined tolerable range ΔT, the VS terminates the process; otherwise, the VS computes K_rj = h(ID_rj‖K_VS) and decrypts M_j1 using K_rj to obtain the pair (r_j′, t_j′). The VS also verifies that the received t_j matches the decrypted t_j′, and if both are the same, the VS uses its secret key K_vs to decrypt PID_vi and obtain (ID_vi, r_0). Now, the VS computes A_vi = h(K_vs‖ID_vi) and B_vi = h(PID_vi‖K_vs) and gets r_i = B_vi ⊕ M_i2. After receiving M_vs = {RSK, VSK, RSV, t_vs}, the RSU_j checks the freshness of t_vs by comparing it with the current timestamp; if the delay is not within the predefined tolerable range ΔT, the RSU_j terminates the process.
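The timestamp freshness check applied by every receiver in the steps above can be sketched as follows; the tolerance DELTA_T and the example timestamps are assumed values, not ones specified by the scheme:

```python
import time

DELTA_T = 2.0  # tolerable transmission delay, in seconds (assumed value)

def fresh(t_received, now=None):
    """Accept a message only if its timestamp is within DELTA_T of local time."""
    if now is None:
        now = time.time()
    return abs(now - t_received) <= DELTA_T

# A receiver such as RSU_j or the VS drops any message failing this check
# before doing any cryptographic work, which is also the replay/DoS defence.
assert fresh(100.0, now=101.5)        # within tolerance: continue the protocol
assert not fresh(100.0, now=105.0)    # stale: terminate the process
```

Performing this cheap check first means replayed or flooded messages are discarded before any hash or decryption is computed.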
Formal Security Analysis through BAN
The Burrows-Abadi-Needham (BAN) logic analysis is performed to test the protocol from various security aspects, with a focus on mutual key agreement, key sharing, and protection against session key exposure. We used the following symbolic notation to perform this analysis.
(i) L |≡ w: L believes w
(ii) L ⊲ w: L sees w
(iii) L |∼ w: L once said w, some time ago
(iv) L |⇒ w: L has full jurisdiction over w
(v) #(w): the message w is fresh
(vi) (w)_L: L is used in formulae with w
(vii) (w, w′)_K: w or w′ is symmetrically encrypted with the key K
(viii) ⟨w, w′⟩_K: w or w′ is hashed with the key K
(ix) {L, w}_K: K is used in formulae with w and L
(x) L ↔(K) L′: L communicates with L′ via the key K
The following BAN logic rules are used to verify the security features. Rule 2: Nonce verification.
In accordance with the above rules and assumptions, we establish the following goals in the BAN logic analysis. The symbols used here, i.e., (g, RSU_j, V_i, VS), represent the goal, road side unit, vehicle, and vehicle server, respectively.
Next, employing the above assumptions, we further analyze the idealized forms.
Taking the idealized versions of M1 and M2 and applying the seeing rule, the required beliefs are derived by successive application of D1, D2, P8, B9, and R1; X3, B1, R2, and R4; X5, B12, and R3; X6, B14, and R3; X5, X7, and R6; and X5, X7, B8, and R2. Taking the idealized version of M3 with X11, B7, and R1, followed by X12, B3, B13, R2, and R4, the corresponding beliefs follow. Next, considering the idealized form of M4 and referring to X17 and then X18 with B2, we apply R6. The cases discussed for proving the protocol in BAN logic make it evident that the contributed scheme fully supports mutual authentication and protects the established session key among the three participating members.
Informal Security Analysis
An informal discussion of the security features of the proposed scheme is provided in the following subsections.
Mutual Authentication.
The SMEP-IoV ensures mutual authenticity for all participating entities of the system. In particular, RSU_j authenticates both entities, VS and V_i, by means of an equality check comparing RSV against the computed parameter h(SK‖t_vs). RSU_j relies on the fact that the generated session key SK can only be constructed by a legitimate VS entity having access to the master secret key K_vs. Using K_vs, the VS can access the factors r_i, r_j, A_vi, and K_rj to compute a valid SK. Likewise, V_i authenticates RSU_j on the basis of the VSV equality check, comparing it with the computed h(SK‖t_jn). Similarly, the VS authenticates V_i by computing h(A_vi‖ID_vi‖t_i‖r_i) and comparing it against M_i1. Since A_vi is only held by a valid V_i entity, this check validates the vehicle V_i. If these equality checks fail, mutual authentication is not granted by the protocol.
Stolen Verifier Attack.
In the proposed scheme, the vehicle server VS stores only the public identities (ID_vi: i = 1, 2, ..., n) of all the registered vehicles in its memory. The VS does not store any other vehicle-related secret parameter, and the verifier resides with the vehicle. Therefore, the possibility of a stolen verifier attack on the proposed SMEP-IoV is negligible.
Vehicle Anonymity.
The SMEP-IoV employs a pseudo-identity PID_vi for each vehicle, which is renewed and replaced after the termination of each session. In this manner, the vehicle and user remain anonymous during the execution of the protocol. Moreover, no desynchronization is possible if an adversary holds or blocks a message in transit.
VS Impersonation Attack.
No adversary A can impersonate the VS in the SMEP-IoV scheme. This is because, if an adversary attempts to do so towards V_i, the latter will detect the attack by comparing VSV against the computed factor h(SK‖t_jn). Similarly, if A attempts to impersonate the VS against RSU_j, the RSU_j can successfully thwart this attack by comparing RSV with the calculated h(SK‖t_vs). Hence, the SMEP-IoV is immune to VS impersonation attacks.
RSU Impersonation Attack.
The SMEP-IoV is immune to RSU impersonation attacks. Both entities V_i and VS can easily prevent any attempt by an adversary to impersonate an RSU. This is due to the fact that the VS shares a secret with RSU_j. The use of fresh timestamps along with the shared secrets helps the VS authenticate a legitimate RSU. Similarly, V_i authenticates RSU_j on account of the session key SK derived from the VSK message submitted by a valid VS, which is further used in the later comparison of VSV. In this manner, both entities validate a legitimate RSU_j through the stated equality checks.
Man-in-the-Middle Attack (MiDM).
To launch a successful MiDM attack on SMEP-IoV, the adversary needs access either to the registration parameters of V_i, such as A_vi and B_vi, or to the secret key K_rj or the master secret key K_vs. On the other hand, as shown earlier, it is infeasible for an adversary to mount an impersonation attack on the protocol.
Session Key Security.
As shown earlier, no adversary can engage in the mutual authentication process without gaining access to the secure credentials of the system held by either the registration authority or the registered entities. Since the SMEP-IoV provides mutual authentication to all participants, the established session key is only known to the legitimate members involved in the protocol execution.
Denial of Service.
Our scheme is resistant to denial of service attacks, since it employs fresh timestamps for the generation of M_vi and M_rj1. Due to these timestamps, the receiving entity can check the freshness of an incoming message and discard it immediately if the latency is beyond the preset threshold.
Replay Attack.
If an adversary attempts to launch a replay attack against any entity V_i, RSU_j, or VS, the SMEP-IoV foils this attempt immediately by checking the freshness of the timestamps t_i, t_j/t_jn, and t_vs, respectively. Hence, our scheme is immune to this threat.
Performance and Security Comparisons
The performance and security comparisons of the proposed scheme with the related existing schemes [22][23][24] are explained in the following subsections.
Performance Comparisons.
For measuring the computation time and cost, a Raspberry Pi 3 B+ is used, with a Cortex-A53 (ARMv8) 64-bit SoC running at 1.4 GHz and 1 GB of LPDDR2 SDRAM. The simulation results of the basic operations executed on the Pi 3 are given in Table 2. To complete authentication and key agreement (AKA) between a vehicle V_i and an RSU_j through the intermediate agent VS (vehicle server), V_i executes 2C_hs and 3C_ed operations. Likewise, RSU_j performs 2C_hs + 2C_ed operations, while the VS accomplishes 7C_hs and 7C_ed operations. Hence, the total computational operations performed to complete one AKA cycle are 11C_hs + 12C_ed. Using the computational times reported in Table 2, the performance comparisons are summarized in Table 3. The proposed scheme completes a single AKA cycle in ≈0.198 ms. In contrast, the scheme of Yu et al. [23] completes a single AKA cycle in ≈0.132 ms, and the schemes of Vasudev et al. [22] and Mohit et al. [24] complete one AKA cycle in ≈0.082 ms and ≈0.108 ms, respectively.
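The operation tally above can be reproduced as follows. Note that the per-operation times C_HS and C_ED below are illustrative placeholders chosen so that the total matches the reported ≈0.198 ms; they are not the measured values from Table 2:

```python
# Illustrative per-operation times in ms (placeholders, NOT the paper's
# measured Table 2 values): C_hs = one hash, C_ed = one symmetric enc/dec.
C_HS = 0.006
C_ED = 0.011

ops = {                  # (hash ops, enc/dec ops) per entity for one AKA cycle
    "V_i":   (2, 3),     # 2 C_hs + 3 C_ed
    "RSU_j": (2, 2),     # 2 C_hs + 2 C_ed
    "VS":    (7, 7),     # 7 C_hs + 7 C_ed
}

n_hs = sum(hs for hs, _ in ops.values())   # total hash operations
n_ed = sum(ed for _, ed in ops.values())   # total enc/dec operations
total_ms = n_hs * C_HS + n_ed * C_ED

print(f"total AKA cost: {n_hs}C_hs + {n_ed}C_ed = {total_ms:.3f} ms")
```

The entity counts sum to the 11C_hs + 12C_ed reported in the text, and the weighted total illustrates how the single-cycle time is assembled from the per-operation benchmarks.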
For the communication cost comparisons, the following parameter sizes are assumed. Timestamps and identities are taken as 32 and 64 bits, respectively, whereas the outputs of symmetric-key and asymmetric-key operations are taken as 128 and 1024 bits. The hash output is fixed at 160 bits. Moreover, random numbers are also assumed to be 160 bits in length. The communication costs of SMEP-IoV and the related schemes of Yu et al. [23], Vasudev et al. [22], and Mohit et al. [24] are compared on this basis.
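With these sizes, the bit counts of the first two protocol messages can be tallied as below. The field composition of each message is assumed from the message definitions in Section 3 (M_i1 and M_i2 are counted as hash-sized outputs and PID_vi as a symmetric ciphertext); this is an illustration of the accounting, not the paper's Table of costs:

```python
# Parameter sizes in bits, as assumed in the text.
SIZES = {"timestamp": 32, "identity": 64, "sym_cipher": 128, "hash": 160}

def msg_bits(fields):
    """Total size of a message given its field types."""
    return sum(SIZES[f] for f in fields)

# M_vi = {PID_vi, M_i1, M_i2, t_i}
m_vi = msg_bits(["sym_cipher", "hash", "hash", "timestamp"])

# M_rj1 = {ID_rj, M_j1, M_vi, t_j}  (M_vi is forwarded whole)
m_rj1 = msg_bits(["identity", "sym_cipher"]) + m_vi + SIZES["timestamp"]

print(m_vi, m_rj1)  # bit counts of the two uplink messages
```

Tallying per message like this makes it easy to compare total round-trip communication cost against schemes that, e.g., use 1024-bit asymmetric ciphertexts in place of 128-bit symmetric ones.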
Security Features.
The security comparisons of the SMEP-IoV and the related existing schemes [22][23][24] are provided in this subsection. Table 4 summarizes the security comparisons. Due to the disclosure of the master secret key K_VS, the scheme of Yu et al. [23] is vulnerable to many attacks, including impersonation of the vehicle, RSU, and vehicle server, along with session key disclosure and violation of vehicle/user anonymity. The scheme of Vasudev et al. [22] lacks mutual authentication and is insecure against vehicle, RSU, and vehicle server impersonation attacks. Moreover, Vasudev et al.'s scheme is insecure against the man-in-the-middle attack. The scheme of Mohit et al. [24] is also weak against the man-in-the-middle attack. In contrast, the proposed SMEP-IoV provides all security features and is robust against the known attacks.
Conclusion
Initially, this study reviewed some of the recent authentication schemes for securing IoVs.
Then, we developed a symmetric key-based authentication scheme through which a vehicle can share a secret key with the corresponding RSU through the mediation of the vehicle server. The proposed secure message exchange protocol for IoV (SMEP-IoV) uses only lightweight symmetric encryption and hash functions. The comparisons show that the proposed SMEP-IoV incurs a slight performance overhead but provides adequate security, which the competing schemes do not. Hence, owing to its performance and security provisions, SMEP-IoV best suits the security requirements of fast-moving vehicles in the IoV scenario.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest.
Diazaoxatriangulenium: synthesis of reactive derivatives and conjugation to bovine serum albumin
The azaoxa-triangulenium dyes are characterised by emission in the red and a long fluorescence lifetime (up to 25 ns). These properties have been widely explored for the azadioxatriangulenium (ADOTA) dye. Here, the syntheses of reactive maleimide and NHS-ester forms of the diazaoxatriangulenium (DAOTA) system are reported. The DAOTA fluorophore was conjugated to bovine serum albumin (BSA) and investigated in comparison to the corresponding ADOTA-BSA conjugate. It was found that the fluorescence of DAOTA experienced a significantly higher degree of solvent quenching than ADOTA as a non-conjugated dye in aqueous solution, while the fluorescence quenching observed upon conjugation to BSA was significantly reduced for DAOTA compared to ADOTA. The differences in the observed quenching of the conjugates can be explained by the different electronic structures of the dyes, which renders DAOTA significantly less prone to reductive photoinduced electron transfer (PET) quenching from, e.g., tryptophan. We conclude that DAOTA, with emission in the red and inherent resistance to PET quenching, is an ideal platform for the development of long-fluorescence-lifetime probes for time-resolved imaging and fluorescence polarisation assays.
Among organic dyes, the triangulenium dyes are different. 32 In these molecules, a small absorption cross-section, and the resulting low fluorescence rate constant, does not imply low photostability or a low quantum yield. While donor-substituted triangulenium dyes are bright emitters similar to rhodamine and fluorescein dyes, [33][34][35][36] the azaoxatriangulenium dyes are highly photostable, highly emissive, long-fluorescence-lifetime dyes. [37][38][39][40][41] This group of triangulenium dyes includes azadioxatriangulenium (ADOTA) and diazaoxatriangulenium (DAOTA), shown in Chart 1 (for details on the triangulenium nomenclature, see ESI†). The aza-bridges are readily functionalised with groups compatible with the reaction conditions used to form the aromatic core.
The triangulenium dyes are synthesised from common precursors using sequences of highly selective nucleophilic aromatic substitution reactions (SNAr). [33][34][35]38,42 For the azaoxatriangulenium dyes, each substitution step occurs in a cascade of two SNAr reactions forming a heteroatom bridge. 37,38 A similar approach has recently been used to form the aza-bridge in carbazoles. 43 Alternatively, ether-cleaving reaction conditions can be used to initiate an intramolecular SNAr reaction with the formation of an oxygen bridge. 44 Here, we elaborate on the synthesis of diazaoxatriangulenium (DAOTA) in an effort to make DAOTA derivatives with reactive linkers for conjugation of the DAOTA fluorophore to biomolecules.
We chose to use bovine serum albumin (BSA) as a demonstrator for bioconjugation; although native BSA is not an ideal model system, 26,29,45 the results allow for a direct comparison between the DAOTA- and ADOTA-BSA conjugates. 46 We have previously investigated bioconjugates of ADOTA to BSA and IgG, and we have used both of the azaoxa-triangulenium dyes in bioimaging. [46][47][48][49] When conjugated to a biomolecule, the ADOTA fluorescence is significantly quenched. This has also been observed for other fluorophores and rationalised as photoinduced electron transfer (PET) quenching by tryptophan. [50][51][52] As DAOTA is less electron deficient than ADOTA, 38,53 it was expected that DAOTA would be much less prone to reductive PET quenching by tryptophan. This difference in PET activity between ADOTA and DAOTA was recently highlighted in a study of a DAOTA-based DNA G-quadruplex fluorescence lifetime probe. 41 Here, we report a significant reduction in the fluorescence quenching of DAOTA upon bioconjugation, but also that the DAOTA dyes undergo significant solvent quenching, which leaves room for further improvements of this long-fluorescence-lifetime fluorophore. In particular, the solubility of DAOTA in water needs to be improved for several applications, in close analogy to what was achieved with the Alexa® dyes. [54][55][56][57]
Experimental
Materials and methods
Absorption spectra were recorded with a double-beam spectrophotometer using the pure solvent as baseline. Steady-state fluorescence spectra were recorded with a standard L-configuration fluorimeter equipped with single-grating monochromators. All solvents used for spectroscopic experiments were of HPLC grade and used as received. Phosphate buffered saline (PBS) was prepared from salts according to common protocols. The pH value of the buffer was determined and subsequently adjusted to 7.4. Molar absorption coefficients were determined for each of the dyes using the Lambert-Beer law by measuring the absorption spectra of three stock solutions with different concentrations of the dye. Quantum yields were determined using the relative method, 58 using rhodamine 6G as standard (ϕ_fl = 0.95). 59 Details on the quantum yield measurements are given in the ESI.† Fluorescence lifetimes were measured using a FluoTime 300 (PicoQuant, Berlin, Germany) system. The emission signal was measured with a hybrid-PMT detector with a spectral range of 220-650 nm. The dyes were excited at 510 nm using a solid-state laser excitation source. The instrument response function was recorded at the excitation wavelength using a dilute solution of Ludox®. The fluorescence decays were analysed using the FluoFit software package. The decay data were all found to be monoexponential and were fitted by iterative reconvolution with a single exponential model, I(t) = α exp(−t/τ) (1). In eqn (1), α is the amplitude and τ is the fluorescence lifetime. All time-resolved emission decay profiles and fits are shown in the ESI.†
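A simplified illustration of eqn (1): the sketch below fits a noiseless synthetic monoexponential decay by log-linear least squares. A real analysis, as described above, would use iterative reconvolution with the measured instrument response function (e.g., in FluoFit); the amplitude and the 20 ns lifetime here are arbitrary example values:

```python
import math

def mono_exp(t, alpha, tau):
    """Eqn (1): I(t) = alpha * exp(-t / tau)."""
    return alpha * math.exp(-t / tau)

# Synthetic, noiseless decay with an illustrative lifetime of 20 ns.
alpha_true, tau_true = 1000.0, 20.0
ts = [i * 0.5 for i in range(1, 200)]                  # time axis in ns
counts = [mono_exp(t, alpha_true, tau_true) for t in ts]

# Log-linear least squares: ln I(t) = ln(alpha) - t / tau, so the slope
# of ln I versus t is -1/tau.
n = len(ts)
sx = sum(ts)
sy = sum(math.log(c) for c in counts)
sxx = sum(t * t for t in ts)
sxy = sum(t * math.log(c) for t, c in zip(ts, counts))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
tau_fit = -1.0 / slope

print(f"fitted lifetime: {tau_fit:.1f} ns")
```

For real photon-counting data this tail fit is only a first approximation: Poisson noise and the finite instrument response make reconvolution fitting the appropriate method, which is why the decays here were analysed with FluoFit.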
Synthetic procedures
Unless otherwise stated, all starting materials were obtained from commercial suppliers and used as received. Solvents were of HPLC grade for reactions and recrystallisations and of technical grade for column chromatography, and were used as received. 1H NMR and 13C NMR spectra were recorded on a 500 MHz or a 300 MHz instrument (500/300 MHz for 1H NMR and 126 MHz for 13C NMR). Proton and carbon chemical shifts are reported in ppm downfield from tetramethylsilane (TMS), using the resonance of the residual solvent peak as internal standard. High-resolution mass spectra (HRMS) were recorded with an ESP-MALDI-FT-ICR spectrometer equipped with a 7 T magnet (prior to experiments, the instrument was calibrated using NaTFA cluster ions) using dithranol as matrix. Column chromatographic purifications were performed on Kieselgel 60 (0.040-0.063 mm particle size). Dry column vacuum chromatography was performed on Kieselgel 60 (0.015-0.040 mm particle size). Thin layer chromatography was carried out using aluminium sheets pre-coated with silica gel 60F.
Synthesis of 2 and 4. Compound 1 was prepared according to the published procedure, see ref. 37. The compounds 2a-d and 4a-d were prepared as reported in ref. 47.
N-(3-Carboxypropyl)-N′-methyl-1,13-dimethoxy-quin[2,3,4-kl]acridinium tetrafluoroborate 3a. 1,8-Dimethoxy-10-(2,6-dimethoxyphenyl)-9-(3-carboxypropyl)-acridinium methyl ester tetrafluoroborate 2a (0.5 g, 0.9 mmol) was placed in a sealable tube and dissolved in acetonitrile (5 mL), and excess methylamine (15 mL, 33 wt% in ethanol) was added. The solution was stirred at 60 °C for five days. After it had cooled to ambient temperature, it was poured into diethyl ether (500 mL) to precipitate the product. The crude material was dissolved in potassium hydroxide solution (1 M, 0.1 L) and stirred at reflux for 5 h. After cooling, aqueous tetrafluoroboric acid (50 wt% in water) was used to acidify the solution, whereupon a dark precipitate formed, which was filtered off. The crude compound was dissolved in warm acetonitrile, filtered through a paper filter, and precipitated twice from a solution of acetonitrile with diethyl ether. Recrystallisation from i-propanol/acetonitrile yielded dark green crystals, which were washed with dichloromethane and dried in vacuum (0.315 g, 69%). 1,8-Dimethoxy-10-(2,6-dimethoxyphenyl)-9-(2-(4-carboxyphenyl)-ethyl)-acridinium tetrafluoroborate 2b (0.35 g, 0.60 mmol) was placed in a sealable tube and dissolved in acetonitrile (10 mL), and methylamine (12 mL, 33 wt% in ethanol) was added. The tube was sealed and the mixture was stirred at 65 °C for three days. After cooling to ambient temperature, the solution was poured into diethyl ether to precipitate the compound. It was then dissolved in sodium hydroxide solution (1 M, 50 mL) and extracted three times with dichloromethane. The pH value was then adjusted to ∼3 with tetrafluoroboric acid (50 wt% in water) to precipitate the product. The material was dissolved in dichloromethane and filtered. The solvent was removed in vacuum.
The material was subsequently precipitated twice from a solution of acetonitrile with diethyl ether to give the compound as a blue powder, which was washed with dichloromethane and dried in vacuum (0.271 g, 78%). 1,13-Dimethoxy-N-(4-(methylcarbamoyl)phenyl)-N′-methyl-quin[2,3,4-kl]acridinium tetrafluoroborate 3c′. 1,8-Dimethoxy-10-(2,6-dimethoxyphenyl)-9-(4-carboxyphenyl)-acridinium methyl ester tetrafluoroborate 2c (0.69 g, 1.16 mmol) was placed in a round bottom flask (100 mL) and dissolved in methylamine (50 mL, 33 wt% in ethanol). The flask was equipped with a reflux condenser fitted with a balloon to trap the evaporating gas. The solution was stirred at 70 °C for three days. After cooling, the mixture was poured into diethyl ether to precipitate the crude product. Column chromatography over silica (dichloromethane/methanol 10/1) yielded the product as a green powder, which was washed with diethyl ether and dried in vacuum (0.31 g, 48%). 1H NMR (500 MHz, methanol-d4); 476.1972. N-(4-Aminophenyl)-N′-methyl-1,13-dimethoxy-quin[2,3,4-kl]acridinium tetrafluoroborate 3d. 1,8-Dimethoxy-10-(2,6-dimethoxyphenyl)-9-(4-aminophenyl)-acridinium tetrafluoroborate 2d (0.55 g, 1.00 mmol) was placed in a sealed round bottom flask, equipped with a reflux condenser fitted with a balloon to trap evaporating gas, and dissolved in acetonitrile (10 mL) and methylamine (14 mL, 33% in EtOH). The mixture was stirred under refluxing conditions. Over two days the solution turned dark green, and additional methylamine (15 mL) was added. After a total of three days under refluxing conditions, the solution was cooled to ambient temperature and poured into diethyl ether (500 mL) to precipitate the product. Precipitation was repeated from a solution of acetonitrile with diethyl ether to give the crude green product.
The material was first recrystallized from i-propanol/acetonitrile and further purified by dry column chromatography (silica; heptane → CH2Cl2 → CH3CN → i-propanol) to give the dark green product (0.387 g, 74%). 1H NMR (500 MHz, acetonitrile-d3) (4 mL) and methylamine (1 mL, 8.7 mmol, 33 wt% in ethanol). The solution was stirred at 85 °C overnight. After 15 h, the solution had turned red and a precipitate had formed, which was filtered off. The solution was poured into sodium tetrafluoroborate solution (0.2 M, 0.1 L) and the pH was adjusted to ∼3 with tetrafluoroboric acid (50 wt% in water) to precipitate the remaining product. The crude product was washed with water, dissolved in hot acetonitrile, and filtered through a paper filter. After cooling, the material was precipitated with diethyl ether. Repeated precipitation from a solution of acetonitrile with diethyl ether yielded the pure compound as a fine red powder, which was dried under vacuum (0.04 g, 43%). N-(4-Carboxyphenyl)-N′-methyl-diazaoxatriangulenium tetrafluoroborate 5c. N-(4-Carboxyphenyl)-azadioxatriangulenium tetrafluoroborate 4c (0.11 g, 0.23 mmol) and benzoic acid (0.6 g, 4.9 mmol) were placed in a round bottom flask and dissolved in ethanol (5 mL) and methylamine (0.57 mL, 4.6 mmol, 33 wt% in ethanol). The mixture was heated to reflux overnight. After 18 h, additional methylamine (0.5 mL) was added and reflux of the mixture was continued for another day. Sodium tetrafluoroborate solution (0.2 M, 0.1 L) was added to precipitate the product. The red material was dissolved in dichloromethane, dried over sodium sulfate, and filtered. Repeated precipitation from a solution of acetonitrile with diethyl ether and drying in vacuum gave the pure product as a dark red powder (0.1 g, 86%). 1H NMR (500 MHz, DMSO-d6). N-(4-Aminophenyl)-N′-methyl-diazaoxatriangulenium tetrafluoroborate 5d.
Pyridinium chloride (36 g) was heated to 185 °C, and N-(4-aminophenyl)-N′-methyl-1,13-dimethoxy-quin[2,3,4-kl]acridinium tetrafluoroborate 3d (0.94 g, 1.84 mmol) was added and stirred for 45 minutes. After completion of the reaction, sodium tetrafluoroborate solution (0.2 M, 0.4 L) was added (pH adjusted to 9 with sodium hydroxide solution) to precipitate the product. After cooling to ambient temperature, the material was filtered off and washed with additional sodium tetrafluoroborate solution and water. Precipitation of the compound from a solution of acetonitrile with diethyl ether gave a fine powder, which was recrystallized from i-propanol/methanol to yield the pure compound as dark crystals, which were washed with cold acetonitrile and methanol (0.15 g). 13C NMR: 152.1, 151.8, 150.6, 142.8, 141.6, 140.9, 140.3, 139.7, 139.2, 138.6, 138.1, 128.5, 124.5, 115.8, 111.0, 110.5, 109.9, 108.3, 108.2, 107.3, 107.1, 107.0, 106.1, 35.6. Anal. Calcd for C26. N-(4-Carboxyphenyl)-N′-methyl-diazaoxatriangulenium NHS ester tetrafluoroborate 6c. N-(4-Carboxyphenyl)-N′-methyl-diazaoxatriangulenium tetrafluoroborate 5c (0.04 g, 0.08 mmol) was placed in a flask and dissolved in DMSO (10 mL) and diisopropylethylamine (0.05 mL, 0.32 mmol). N,N,N′,N′-Tetramethyl-O-(N-succinimidyl)uronium tetrafluoroborate (0.05 g, 0.16 mmol) was added and the solution was stirred at ambient temperature overnight. After confirmation of product formation by MALDI-TOF analysis, the material was precipitated by addition of sodium tetrafluoroborate solution (0.2 M, 0.1 L) and filtered off. The crude red material was dissolved in dichloromethane, dried over magnesium sulfate, and filtered, and the solvent was evaporated in vacuum. The material was then precipitated from a solution of acetonitrile with diethyl ether and dried in vacuum to give a red powder (0.016 g, 34%). 1H NMR (500 MHz, acetonitrile-d3). N-(4-Maleimidophenyl)-N′-methyl-diazaoxatriangulenium tetrafluoroborate 7.
N-(4-Aminophenyl)-N′-methyl-diazaoxatriangulenium tetrafluoroborate 5d (0.1 g, 0.2 mmol) was placed in a round bottom flask and dissolved in acetonitrile (25 mL) and 2,6-lutidine (0.2 mL). After the compound had dissolved completely, maleic anhydride (0.1 g, 1 mmol) was added and the solution was refluxed for 4 h until all starting material had been converted to the acid. The solvent was removed by evaporation, and the acid was dissolved in acetic anhydride (8 mL) and stirred at 100 °C. After 30 min the reaction was complete and the solution was cooled to ambient temperature. The crude product was precipitated by addition of sodium tetrafluoroborate solution (0.2 M, 0.3 L). After filtration, the material was dissolved in warm acetonitrile, filtered, and precipitated with diethyl ether twice to yield the product as a fine red powder (0.08 g, 66%). Labelling procedure. Labelling of bovine serum albumin (BSA) was achieved either by activating the free carboxylic acid-substituted dyes with O-(N-succinimidyl)-N,N,N′,N′-tetramethyluronium tetrafluoroborate (TSTU) in the presence of diisopropylethylamine (DIPEA), resulting in in situ formation of the N-hydroxysuccinimide (NHS) ester, which was subsequently reacted with BSA, or by using the NHS esters of the dyes directly. The BSA conjugates were subsequently purified by dialysis. Consult ref. 46 for the full labelling protocols and procedures. Despite the tendency of BSA to bind small molecules electrostatically, we did not observe unbound dye in the optical experiments.
Synthesis
The synthesis of azaoxa-triangulenium dyes was developed in our lab, 37,38 based on the work of Martin and Smith, 44 who first synthesised the trioxatriangulenium (TOTA) system. In our early work, azadioxa-, diazaoxa-, and triaza-triangulenium (ADOTA, DAOTA, and TATA) were reported. 37,38 Lacour and coworkers later expanded the series of triangulenium salts to include derivatives with a single sulfur bridge. 60 The incorporation of more than one sulfur atom in the triangulenium core appears not to be possible, owing to the distortion enforced by the significantly different C-S bond length as compared to C-N and C-O bonds. 61 The azaoxa-triangulenium dyes are all made from a common precursor: tris(2,6-dimethoxyphenyl)carbenium tetrafluoroborate (1), which can be reacted stepwise with primary amines to form between one and three aza-bridges. Compound 1, or each intermediate, may be reacted under ether-cleaving conditions at elevated temperatures to form the fully ring-closed triangulenium core with one, two, or three oxa-bridges. The oxa-bridges are themselves reactive towards primary amines; this substitution reaction is slow compared to attack on the methoxy groups of the open precursors, but it is still highly selective. 38 Post-functionalisation of the substituents on the aza-bridges has previously been reported, 62,63 and we have synthesised and explored reactive derivatives of ADOTA for bioconjugation. 46,48,49 The syntheses of ADOTA with reactive NHS esters and maleimide groups (Scheme 1) are straightforward, and proceed by an SNAr reaction between 1 and a suitable amino acid or diamine. The primary amine attacks one of the methoxy-substituted carbon atoms in 1; the resulting transient intermediate is set up for an intramolecular SNAr reaction, which eliminates methanol and forms an aza-bridge. The product is an N-substituted tetramethoxy-acridinium salt (2).
Reacting 2 in molten pyridinium chloride yields the fully ring-closed azadioxa-triangulenium (4, ADOTA). 37,38 Alternatively, a second aza-bridge can be introduced by reacting 2 with a second primary amine, forming a [4]helicenium ion. The product is an N,N′-substituted 1,13-dimethoxyquinacridinium (DMQA) salt (3, for details on this nomenclature see ESI †), which in molten pyridinium chloride yields the fully ring-closed diazaoxa-triangulenium core (5, DAOTA). The steps to ADOTA derivatives suitable for bioconjugation as active esters and maleimides follow the direct path from 1 over 2 to 4. 47 Similarly, the 3-carboxypropyl derivative of DAOTA (5a) could be synthesised by introducing first one aza-bridge (2a), and then a second to give the DMQA derivative (3a). Subsequent reaction of the DMQA compound in molten pyridinium chloride, followed by basic hydrolysis of the intermediate amide, yielded the desired N-(3-carboxypropyl)-N′-methyl-DAOTA tetrafluoroborate (5a, Scheme 1) as a red powder in good yield. 49 The synthesis of the 4-aminophenyl DAOTA derivative (5d) was also performed via the route through the DMQA derivative (3d, Scheme 1). This gave 3d in high yield, whereas the ring-closure reaction converting 3d into 5d proceeded in low yield. Attempts to prepare the 2-(4-carboxyphenyl)ethyl and 4-carboxyphenyl DAOTA derivatives (5b and 5c, respectively) via the same synthetic route as described for 5a and 5d proved difficult. The compound 3b was isolated in high yield; however, the subsequent ring-closure reaction in molten pyridinium chloride yielded the desired product (5b) together with impurities that were inseparable from 5b. Similarly, the DMQA derivative 3c could not be prepared, as the basic hydrolysis of the intermediate amide derivative (3c′, Scheme 2) was not successful.
Thus, 5b and 5c were synthesised via an alternative route, which to our knowledge has only been reported for the N,N′-dipropyl-DAOTA salt by Laursen and Krebs. 38 The compounds 5b and 5c were obtained in good yields via reaction of the ADOTA derivatives (4b and 4c) with methylamine in acetonitrile/ethanol or NMP/ethanol mixtures. The acid derivatives 5a-c were reacted with N,N,N′,N′-tetramethyl-O-(N-succinimidyl)uronium tetrafluoroborate (TSTU) in acetonitrile or DMSO solution to form the reactive NHS esters (6a-c) in good yields. The NHS ester functional group is used to conjugate fluorescent dyes to biomacromolecules via coupling to a primary amine residue of the biomacromolecule. 64 Compound 5d was reacted with maleic anhydride to form the corresponding maleimide 7 in a two-step/one-pot reaction (Scheme 1). 47 The maleimide group is used to selectively label thiol groups in biomolecules. 64
Optical spectroscopy
In the following, N-(4-carboxyphenyl)-N′-methyl-DAOTA tetrafluoroborate 5c is used to demonstrate the spectroscopic properties of the reactive DAOTA derivatives. The properties of 5c are compared to those of the corresponding N-(4-carboxyphenyl)-ADOTA tetrafluoroborate 4c. 46,47 The spectra displayed in Fig. 1 show that DAOTA (5c) has more desirable absorption and emission properties: they lie in the red region of the spectrum, which is better for measurements on biological systems, as compared to ADOTA (4c). 49,53,65 Table 1 summarises the photophysical properties of 4c and 5c in acetonitrile, dimethyl sulfoxide (DMSO), and phosphate buffered saline at pH = 7.4 (PBS). The differences between 4c and 5c in organic solvents are closely related to the oscillator strengths of the lowest energy transition of the two parent chromophores. The higher molar absorption coefficient at the lowest energy transition of DAOTA as compared to that of ADOTA (ε = 16 000 vs. 10 000 M−1 cm−1 in acetonitrile solution) 65 results in a shorter fluorescence lifetime (τfl = 19 vs. 21 ns in MeCN). 65,66 In PBS solution, DAOTA 5c exhibits a significant change in fluorescence lifetime (τfl) and fluorescence quantum yield (ϕfl), where both are reduced: ϕfl from 55% to 35% and τfl from 19 ns to 14 ns. We have found that chloride is not a specific quencher of DAOTA, and suggest that the observed 30% reduction in ϕfl must be due to unspecific solvent quenching, likely due to the high hydrophobicity of the DAOTA core of 5c. ADOTA 4c is less hydrophobic and does not show more than an 18% reduction in ϕfl in PBS solution as compared to the organic solvents.
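The radiative lifetime reported in Table 1 is defined (see the table footnote) as τ0 = τfl/ϕfl, so it can be recomputed directly from the lifetime/quantum-yield pairs quoted above for 5c (19 ns / 55% in organic solvent, 14 ns / 35% in PBS):

```python
def radiative_lifetime(tau_fl_ns, phi_fl):
    """tau_0 = tau_fl / phi_fl, per the definition in the Table 1 footnote."""
    return tau_fl_ns / phi_fl

print(round(radiative_lifetime(19.0, 0.55), 1))  # 34.5 ns (organic solvent pair)
print(round(radiative_lifetime(14.0, 0.35), 1))  # 40.0 ns (PBS pair)
```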
The fluorescence lifetime (τ fl ) for DAOTA 5c is exceptionally long for a red emitting organic dye, even when reduced to 14 ns by unspecific solvent quenching in PBS solution. The combination of a long fluorescence lifetime (>10 ns) and emission in the red (>600 nm) makes DAOTA ideally suited to monitor rotational correlation times of biomolecules and as a fluorescent probe for fluorescence polarisation based assays. 8,[47][48][49]
DAOTA-BSA conjugates
To test the ability of DAOTA as a probe for measuring the rotational motion of proteins, the reactive ester of DAOTA 5c, N-(4-carboxyphenyl)-N′-methyl-diazaoxatriangulenium NHS ester tetrafluoroborate 6c, was conjugated to BSA. This reaction is expected to predominantly result in conjugation of the triangulenium-based probe to the N-terminus of the protein. 46 The labelling protocol was optimised to give a low degree of labelling (DOL) to ensure that complications arising from multiple labels, such as energy transfer between labels, were minimal. When developing a fluorescence polarisation assay, these effects can be probed by looking for Weber's red-edge effect. 67,68 A DOL of 0.9 DAOTA dyes per BSA was used to obtain the results presented below.
The Perrin equation (eqn (2)) describes the ideal relationship between the observed fluorescence anisotropy (r), the fundamental anisotropy of the dye (r0), the rotational correlation time of the rotating volume (θ), and the fluorescence lifetime (τfl). The rotational correlation time (θ) is directly related (eqn (3)) to the rotational volume (V) and the viscosity of the surrounding medium (η). For proteins, the rotational correlation time can be related (eqn (3)) to the molecular mass (M), the specific density (ν), and the average hydration (h) of the protein. 66 [Footnote to Table 1: λabs is the wavelength of the lowest energy absorption maximum given in nm, λem is the wavelength at the emission maximum given in nm, ΔStokes' is the Stokes' shift given in nm, τfl is the fluorescence lifetime given in ns, ϕfl is the measured fluorescence quantum yield using rhodamine 6G as a reference, and τ0 is the radiative lifetime, τfl/ϕfl, given in ns.]
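In their standard textbook form, which presumably corresponds to the eqns (2) and (3) referenced above, these relations read (with k_B the Boltzmann constant and R the gas constant):

```latex
\frac{r_0}{r} \;=\; 1 + \frac{\tau_{\mathrm{fl}}}{\theta}
\qquad\text{(Perrin equation)},
\qquad
\theta \;=\; \frac{\eta V}{k_B T} \;=\; \frac{\eta M\,(\bar{\nu}+h)}{R T},
```

where the first expression for θ uses the hydrodynamic volume V of a single rotor, and the second, protein-specific form uses the molar mass M, specific volume ν̄, and hydration h.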
Organic & Biomolecular Chemistry Paper
Two factors related to the fluorescent probe are found in the equations: the fluorescence lifetime (τfl) and the fundamental fluorescence anisotropy (r0). The fluorescence lifetime determines the range of rotational correlation times that may be probed, in other words the range of molecular weights that can be investigated. The fundamental anisotropy, ranging from 0.4 to −0.2, determines the dynamic range of the experiments. Dyes with a low r0 value are poor probes for fluorescence polarisation-based methods. DAOTA has r0 = 0.38, 65 which is very close to the maximal value (0.4), while the fluorescence lifetime allows for probing biomolecules with molecular weights up towards 1000 kDa. 46 Fig. 2 shows a Perrin plot of 5c conjugated to BSA (5c-BSA), where the steady-state fluorescence anisotropy (r) measured at four different temperatures (T) is plotted as 1/r against T/η and used to determine θ/V and, by extrapolation of T/η to zero, the apparent anisotropy (r0app). The latter is a measure of the flexibility of the dye label when conjugated to the biomolecule. Ideally, if the label upon conjugation loses all degrees of freedom except co-rotation with the biomolecule, the apparent and fundamental anisotropies will be identical. Any local flexibility of the dye label will induce a pathway for fast scrambling of the photoselection not related to the motion of the biomolecule. The result is a lowering of the apparent anisotropy r0app in a Perrin plot, and a loss of dynamic range in any fluorescence polarisation based experiment. For 5c-BSA the apparent anisotropy is surprisingly high at r0app = 0.36, clearly indicating that the DAOTA label is immobilised on the surface of BSA.
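The Perrin-plot analysis just described amounts to a straight-line fit of 1/r against T/η, with the intercept at T/η → 0 giving 1/r0app. A sketch on synthetic data (the T/η values and the slope are illustrative assumptions, not the measured data of Fig. 2):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic Perrin-plot data: 1/r versus T/eta, built so that r0_app = 0.36.
T_over_eta = [2.5e5, 3.0e5, 3.5e5, 4.0e5]               # K/(Pa*s), assumed values
inv_r = [1.0 / 0.36 + 1.0e-6 * x for x in T_over_eta]   # assumed linear response

slope, intercept = linear_fit(T_over_eta, inv_r)
r0_app = 1.0 / intercept        # extrapolation of T/eta to zero
print(round(r0_app, 2))  # 0.36
```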
While the effect of the long fluorescence lifetime of 5c may be hard to identify in the steady state spectra, it is directly visible in time-resolved experiments. Fig. 3 (top) shows time-resolved emission decay profiles for 5c and 5c-BSA measured in PBS solution, obtained using time-correlated single photon counting (TCSPC). Cursory inspection of the fluorescence decay profiles in Fig. 3 (top) shows that 5c-BSA has a longer fluorescence lifetime than 5c in PBS solution (τfl,5c = 14.0 ns vs. τfl,5c-BSA = 21.2 ns, see ESI †), and that photons can be detected well beyond 150 ns when using the standard settings of TCSPC with a maximum count of 10,000. By using longer acquisition times (higher maximum number of counts), photons arising from emission of the 5c-BSA conjugates may be detected up towards 250 ns after excitation; this is without equal among organic dyes with emission in the red. Fig. 3 (bottom) shows the time-resolved anisotropy decay for 5c-BSA. The data allow for direct determination of the rotational correlation time of the rotating volume (θ). The long fluorescence lifetime is important in obtaining these data, as photons must be emitted over a time interval long enough to describe the rotational motion of the biomolecule. The decay shows that in the case of 5c-BSA the photoselection is fully scrambled by rotational motion in ∼100 ns; that is, the anisotropy has decayed to zero. The rotational correlation time determined for BSA from these data is θBSA = 40 ns (see ESI for details †), a number identical to the average literature value of θBSA = 40 ns. 69 Note that BSA is not a perfect spherical rotor, and the value determined will be influenced by the position and relative orientation of the dye in the bioconjugate.
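For an ideal single-exponential anisotropy decay r(t) = r0·exp(−t/θ), extracting θ reduces to a log-linear fit. A sketch on synthetic data generated with the literature value θ = 40 ns (the time points and r0 are illustrative assumptions, not the measured decay of Fig. 3):

```python
import math

# Synthetic single-exponential anisotropy decay, theta = 40 ns assumed.
r0, theta_true = 0.36, 40.0
ts = [10.0, 30.0, 50.0, 70.0, 90.0]                      # ns
rs = [r0 * math.exp(-t / theta_true) for t in ts]

# Log-linear fit: ln r(t) = ln r0 - t/theta, hence theta = -1/slope.
n = len(ts)
mx = sum(ts) / n
ys = [math.log(r) for r in rs]
my = sum(ys) / n
slope = (sum((t - mx) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mx) ** 2 for t in ts))
theta = -1.0 / slope
print(round(theta, 1))  # 40.0
```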
While emission from either of the azaoxa-triangulenium dyes can be used to follow the rotational motion of large biomolecules for more than 100 ns, 27 there is a clear difference in the behaviour of 4a-BSA and 5c-BSA. Whereas ADOTA fluorescence is quenched in the conjugates (4a-BSA) compared to the unbound dye (4a), the DAOTA fluorescence appears to be enhanced upon conjugation, as seen by the significantly increased fluorescence lifetime (Fig. 3). As the fluorescence lifetime only reports on the emitting population of dyes, the actual emission intensity of each dye was determined. Fig. 4 shows the results for 4a-BSA and 5c-BSA. The total emission intensity of either dye is decreased upon conjugation to BSA, but the effect is much less pronounced for the DAOTA derivative. We rationalised the quenching of the ADOTA fluorescence as a result of reductive PET quenching by tryptophan, 50 a process that the increased cation stability of DAOTA makes less favoured (see ESI for details †). 53 Thus, we see less quenching of the DAOTA fluorescence in the 5c-BSA conjugates. The significant increase in lifetime upon conjugation of 5c to BSA must be due to a reduction in the non-specific solvent-induced quenching of a population of DAOTA dyes that is partially shielded by the protein surface, while the reduction in overall intensity must be due to an almost fully quenched population of 5c-BSA. The quenched population does not contribute to the time-resolved emission decay profile, and will not influence the fluorescence anisotropy. The net result is that the fluorescence quantum yield of the 5c-BSA conjugate, at ϕfl = 0.34, is very close to that of the free dye 5c in PBS at ϕfl = 0.35, although with a more complicated time-resolved fluorescence decay profile (see ESI †).
Conclusions
The syntheses of six new derivatives of diazaoxatriangulenium (DAOTA) salts were reported, and it was shown that these dyes can be accessed via two synthetic routes; depending on the substituents, one route may be favoured over the other.
The photophysical properties of the DAOTA fluorophore were investigated in view of using the red emitting, long fluorescence lifetime dye in fluorescence polarisation assays. We showed that the DAOTA fluorophore undergoes unspecific solvent fluorescence quenching in aqueous buffer, reducing the fluorescence lifetime from 19 ns in acetonitrile solution to 14 ns in PBS solution. 5c was conjugated to BSA, and we found that 5c-BSA had a significantly increased longest fluorescence lifetime component at 21.2 ns (as well as an intensity weighted average fluorescence lifetime of 19.2 ns). Furthermore, the overall emission intensity of the conjugates (5c-BSA) was found to be equal to that of the free dye (5c) measured in PBS solution, as 5c is less quenched by tryptophan. Thus the DAOTA fluorophore, with emission further in the red and a longer fluorescence lifetime in biomolecule conjugates, was found to be superior to the ADOTA fluorophore as a probe for developing fluorescence polarisation assays for large biomolecules.
Vector valued Banach limits and generalizations applied to the inhomogeneous Cauchy equation
In Prager and Schwaiger (Grazer Math Ber 363:171–178, 2015) the classical notion of Banach limits was used to solve the inhomogeneous Cauchy equation f(x+y) − f(x) − f(y) = φ(x, y) for real functions of one real variable. Here these methods are generalized to more general target spaces, namely Banach spaces which admit vector valued Banach limits.
Introduction
The main topic of this paper is the investigation of the inhomogeneous Cauchy equation where φ : V × V → W is given and f : V → W is to be determined. The emphasis lies on giving explicit formulas for solutions when certain hypotheses on the inhomogeneity are fulfilled. It is well known (see [7]) that the solvability of this equation is equivalent to certain properties of the inhomogeneity φ. Solving the equation in general is based on the application of Zorn's Lemma. In our case, if the domain and co-domain are rational vector spaces, assuming the existence of a basis for such vector spaces is enough to give explicit solutions. In the generic case the existence of such a basis is granted only by also applying Zorn's Lemma; nevertheless, situations may occur in which Zorn's Lemma can be dispensed with. In what follows, given a (Hamel) basis of V over Q, all solutions of (1) will be given. The ideas for the definition of the solution f arise from some necessary conditions on f in (1).
Rewriting this equation as f(x + y) = f(x) + f(y) + φ(x, y), we get by induction a formula for f(x1 + ··· + xn), which for x1 = x2 = ··· = xn = x gives an expression for f(nx). Now we gather some auxiliary results needed for the construction. Let φ satisfy (2) and (3); then φ satisfies the identities used below.
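Written out explicitly, the induction gives (the base case n = 2 being eqn (1) itself):

```latex
f\!\Big(\sum_{i=1}^{n} x_i\Big)
  \;=\; \sum_{i=1}^{n} f(x_i)
  \;+\; \sum_{k=1}^{n-1} \varphi\!\Big(\sum_{i=1}^{k} x_i,\; x_{k+1}\Big),
```

and, setting x_1 = ··· = x_n = x,

```latex
f(nx) \;=\; n\,f(x) \;+\; \sum_{k=1}^{n-1} \varphi(kx,\, x).
```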
Let B ⊂ V be a Hamel basis of V over Q. Then for each x ∈ V there is a unique finite subset B_x ⊆ B and a unique family (λ_b)_{b∈B_x} of nonzero rationals such that x = Σ_{b∈B_x} λ_b b. (For x = 0 the set B_x is the empty set and the corresponding family of λ_b is the empty family.) This implies a corresponding statement for any C ⊆ B and any family (μ_c)_{c∈C} ∈ Q^C such that x = Σ_{c∈C} μ_c c. Proof. This follows immediately from Lemma 3.
According to this lemma the function F is well defined. Proof. Let n := #C, m := #B and let α : {1, 2, ..., n} → C (which is always possible) be such that α({1, 2, ..., m}) = B. Then the desired equality follows. Proof. If p = 0 then r = 0, too. Thus G*(b, p, q) = G*(b, r, s) in this case. If p ≠ 0 then also r ≠ 0, and we may assume that, say, p and q are coprime. Then, since p/q = r/s, there is some l ∈ N such that r = lp and s = lq. We have to show that
Vol. 93 (2019) Vector valued Banach limits and generalizations 263
After division by sgn(λ), observing Lemma 3 and (1), we get the required expression for the right-hand side of (9); hence (9) is true.
Then, using the considerations above, we may define the required functions. Taking for granted that f is a solution of (1), the second part obviously becomes true.
It remains to show that f indeed is a solution. We will show that for all b ∈ B the defining identity holds, with an analogous application for H. Corresponding to the possible distributions of signs we consider the cases listed in the following table. A column without a numeral contains a case which is obtained from the immediately preceding one, considering (2), by renaming λ_b as μ_b and vice versa. Therefore these cases need not be treated separately.
Adding together we thus obtain which with (10) yields the result.
Different types of Banach limits
In [1] one may find the following definition of a vector valued Banach limit. It is also shown there that this includes the usual definition of a Banach limit on bounded sequences of reals (see [3]).
Definition 1.
Let X be a normed space and let ℓ∞(X) be the space of bounded sequences in X equipped with the supremum norm. Then L : ℓ∞(X) → X is a Banach limit if (i) L is linear and continuous, (ii) L(x) = lim_{n→∞} x_n for any convergent sequence x = (x_n)_{n∈N} in X, (iii) L is shift invariant, i.e., L((x_{n+1})_{n∈N}) = L((x_n)_{n∈N}), and (iv) the operator norm of L equals 1: ‖L‖ = 1.
In [1] it was also proved that Banach limits exist on the dual X* of any normed space X. The proof used an ultrafilter U on N containing {A ⊆ N | N\A is finite}, together with a definition of L via limits along U; this is meaningful with respect to the w*-topology, since every closed ball in X* is w*-compact.
Theorem 2. Let X be a normed space and X* its dual. Then the Banach limit L defined by (16) has the property that for any sequence ξ = (x*_n)_{n∈N} ∈ ℓ∞(X*) the value L(ξ) is contained in cl_{w*} conv{x*_n | n ∈ N}, the closure with respect to the w*-topology of the convex hull of the elements of the sequence. But there are Banach limits which do not fulfill this property.
Proof. The first part is clear from the lines above. The second part follows from the last item of the following remark, since by [2] there are different Banach limits on the reals.
Proof of the remark above. The first part follows immediately from the definition. This implies the second item. A proof of the third one is contained in [11].
Regarding the fourth item one may proceed in the following manner. Let (f_n)_{n∈N} be any sequence in ℓ∞(X*). Then the sequences f_n(e_1) and f_n(e_2) are bounded real sequences, so their Banach limits exist for all Banach limits on R. Accordingly L((f_n)_{n∈N}) ∈ X*. Obviously L is shift invariant since L_1, L_2 are. If the sequence f_n converges to f, the sequences f_n(e_i) converge to f(e_i), i = 1, 2. Thus L_i((f_n(e_i))) = f(e_i) for i = 1, 2. Since obviously L is also linear, it remains to show that ‖L‖ = 1. Note that ‖L((f_n)_{n∈N})‖ = max{|L_1((f_n(e_1))_{n∈N})|, |L_2((f_n(e_2))_{n∈N})|}. Note also that ‖L_1‖ = ‖L_2‖ = 1. This implies |L_1((f_n(e_1))_{n∈N})| ≤ 1 · ‖(f_n(e_1))_{n∈N}‖_∞ and |L_2((f_n(e_2))_{n∈N})| ≤ 1 · ‖(f_n(e_2))_{n∈N}‖_∞. Using 2., this also gives ‖L‖ ≤ 1; considering constant sequences implies ‖L‖ ≥ 1, and altogether ‖L‖ = 1. Now we deal with the final part. Let c = (c_n)_{n∈N} ∈ ℓ∞(R) be such that L_1(c) ≠ L_2(c), and define f_n := c_n(π_1 − π_2). Then ‖f_n‖ = |c_n| for all n.
Let A := {f ∈ X* | f(x) = 0} for some x ≠ 0. By 2. this set is also closed with respect to the norm topology. It is also convex, since it is a linear subspace of X*. Using Banach limits L_1, L_2, ..., L_m it is easy to construct Banach limits on R^m. The following theorem answers the question whether a result similar to that of Theorem 2 holds true.
This Banach limit has the property that
Proof. The properties (i)-(iii) for L follow immediately from the corresponding properties of the L_i. Since ‖L_i‖ = 1 and since we use the supremum norm on R^m, it follows that ‖L‖ ≤ 1. Now we consider the constant sequence determined by 1_m := (1, 1, ..., 1). The L_i are Banach limits, thus L_i((1, 1, ...)) = 1. Therefore L((1_m, 1_m, ...)) = 1_m. Thus ‖L‖ ≥ ‖1_m‖_∞ = 1 and altogether ‖L‖ = 1, as desired.
Suppose that L_1 = L_2 = ··· = L_m and assume, to the contrary, that L((x_n)_{n∈N}) ∉ K. Then by the separation theorem for closed convex sets (see [4, Théorème 1.7]) there is some a = (a_1, a_2, ..., a_m) ∈ R^m and some α ∈ R separating L((x_n)_{n∈N}) from K. Applying L to the real sequence (⟨a, x_n⟩)_{n∈N}, by the property of a Banach limit this value must be ≤ α, since ⟨a, x_n⟩ < α for all n. Thus L((x_n)_{n∈N}) ∈ K, as desired.
Finally assume (without loss of generality) that L_1 ≠ L_2 and that L_1((z_n)_{n∈N}) ≠ L_2((z_n)_{n∈N}) for some (z_n)_{n∈N} ∈ ℓ∞(R^m). Then the sequence x_n, x_n := (z_n, z_n, 0, ..., 0), is contained in ℓ∞(R^m). Moreover all x_n are contained in a (closed) linear subspace E, and it follows that L_1((z_n)_{n∈N}) = L_2((z_n)_{n∈N}), a contradiction.
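Banach limits themselves are non-constructive, but the coordinatewise construction used in the theorem above can be illustrated with a computable stand-in: the Cesàro mean, which is likewise shift invariant in the limit and agrees with lim on convergent sequences where it exists. A sketch (the stand-in is an illustration, not an actual Banach limit):

```python
def cesaro(seq):
    """Cesaro mean of a finite truncation of a real sequence."""
    return sum(seq) / len(seq)

def coordinatewise_limit(vec_seq):
    """Apply a (stand-in) limit functional L_i to each coordinate separately,
    mirroring the construction of a limit on R^m from limits L_1, ..., L_m."""
    m = len(vec_seq[0])
    return tuple(cesaro([v[i] for v in vec_seq]) for i in range(m))

# First coordinate oscillates as (-1)^n (Cesaro mean 0); second is constant 1.
seq = [((-1) ** n, 1.0) for n in range(1, 10001)]
print(coordinatewise_limit(seq))  # (0.0, 1.0)
```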
Solutions of the inhomogeneous Cauchy equation expressed in terms of Banach limits
In [10] it was shown that for V = W = R the relations (2) and (3) imply (17), provided that (1) has a solution at all. The proof for arbitrary rational vector spaces V, W is the same. Thus (17) holds true in view of Theorem 1. The following generalizes Theorem 2 of [10].
Theorem 4.
Let V be a rational vector space and X a normed space which admits a Banach limit L : ℓ∞(X) → X. Assume that φ : V × V → X satisfies (2) and (3) and that the sequences (φ(nx, x))_{n∈N} and (φ(nx, ny))_{n∈N} are contained in ℓ∞(X). Then a solution of (1) is obtained by applying L to the sequences in (17). Proof. Note that L may be applied to all sequences generated by (17) separately. Then the result follows from the fact that L is shift invariant and that L maps constant sequences to the corresponding constant.
Example 1 (Unbounded φ). Let the rational vector space V as above be of infinite dimension and let X admit a Banach limit L : ℓ∞(X) → X. For x ∈ V let C_x := {qx | q ∈ Q_{>0}}, where Q_{>0} denotes the set of all positive rational numbers. Then the set of all C_x gives a partition of V. Let R ⊆ V be a set of representatives for this partition, i.e., V = ∪_{r∈R} C_r and C_r ∩ C_s = ∅ for r, s ∈ R, r ≠ s. Take g : R → X arbitrary and h : V → X bounded, and define f : V → X accordingly. Then Theorem 4 works for φ: if x ∈ C_r, y ∈ C_s, x + y ∈ C_t, the Cauchy difference for f is given by the corresponding expression. Since nx, (n + 1)x ∈ C_r, we get φ(nx, x) = h((n + 1)x) − h(nx) − h(x). A similar calculation shows that φ(nx, ny) = g(t) − g(r) − g(s) + h(n(x + y)) − h(nx) − h(ny). Thus the hypotheses of the theorem are satisfied.
g may be chosen unbounded, since by the assumption on V the set R of representatives has to be infinite.

This is well defined, since sin ∘ f is measurable, and even square integrable since sin is bounded. Moreover φ itself is bounded because ∫_0^1 sin^2(f(x)) dx ≤ 1. Thus a fortiori φ satisfies the hypotheses of the theorem above.
Abstract Banach limits and solutions of the inhomogeneous Cauchy equation
Analyzing the considerations of the previous section, it turns out that condition (iv) of a Banach limit is not used when proving Theorem 4. Here we want to discuss a more general situation. Throughout we consider an Abelian semigroup S, an Abelian group G ≠ {0}, and an inhomogeneity φ : S × S → G which satisfies (2) and (3).
Optimizing PV Power Output with Higher-Voltage Modules: A Comparative Analysis with Traditional MPPT Methods
Introduction
Due to rising energy needs and the need to generate power in an environmentally sound way, photovoltaic (PV) generation is the most widely used renewable source. Maximum power point tracking (MPPT) techniques are used in [1] to extract the maximum power possible from the PV panel while accounting for variations in actual voltage and current. Current-based, voltage-based, curve-fitting, gradient-descent, sliding-mode, and incremental-resistance MPPT, as well as AI-based methods such as fuzzy logic, neural networks, ANFIS, and hybrid methods, were discussed, and the benefits and drawbacks of each were tallied based on different criteria [2].
The performance of PV panels is primarily influenced by irradiance, ambient PV cell temperature, and load impedance. In the study of [3], MPPT characteristics are compared. For a ramp change in irradiance (600-1000-700 W/m2) and step changes in irradiance (300-600-300 W/m2), (400-1000-400 W/m2), and (500-1000 W/m2) [4, 5], and for fixed and variable increment techniques, the responses of several MPPT techniques are compared. For a small-scale PV power system, the effect of temperature is taken into account and an MPPT is constructed [6].
Most commonly, standalone operation of a single PV source is used to develop MPPT approaches. To reach the MPP using the output values of the system as reference, the incremental conductance (INC), perturb and observe (P&O), and constant voltage controller (CVC) methods have been implemented [7]. A modified IC method based on a new average global peak value was implemented in the existing INC method, and the improved efficiencies are tabulated [8]. A direct control structure and buck-boost converter were implemented in both the P&O and INC methods; the results yield 98.3% efficiency for the P&O method and 98.5% for the INC method [9]. In these same methods, a field-programmable gate array (FPGA) was used in the high-speed algorithm-in-the-loop test feature [10]. A soft MPPT technique is proposed for both the P&O and INC methods to solve the issue of continuous steady-state oscillations [11]. Grid-integrated PV systems suffer from problems such as low active power, poor MPPT performance, and power loss [12]. Intelligent controllers with converters are used with fundamental MPPT techniques like IC, P&O, and CVC to address this complexity [13]. In the perturb and observe (P&O) approach, the PV terminal voltage is periodically perturbed and the resulting power is compared to the prior value. The P&O output fluctuates slowly as the atmospheric conditions approach the maximum power operating point (MPOP), and the MPOP is missed when the meteorological conditions change quickly [14]. In the constant voltage (CV) approach, the PV voltage is matched to a reference voltage close to the maximum power point. In contrast to other approaches in [15], the incremental conductance (IC) method draws maximum power at lower and rapidly varying irradiance conditions.
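The P&O rule described above (perturb the operating voltage, keep the direction if power increased, reverse it otherwise) can be sketched as follows. The PV curve is a toy model with its maximum power point near 29 V, an illustrative assumption rather than a panel from any cited study:

```python
def perturb_and_observe(v, i, v_prev, p_prev, step=0.5):
    """One P&O iteration: keep perturbing in the direction that increased
    power, reverse otherwise. Returns (new voltage reference, current power)."""
    p = v * i
    if p >= p_prev:
        direction = 1.0 if v >= v_prev else -1.0
    else:
        direction = -1.0 if v >= v_prev else 1.0
    return v + direction * step, p

def pv_current(v):
    """Toy PV I-V curve (illustrative only); its MPP is near 29 V."""
    return max(0.0, 8.0 * (1.0 - (v / 42.0) ** 5))

v, v_prev = 20.0, 19.5
p_prev = 19.5 * pv_current(19.5)
for _ in range(200):
    v_new, p_prev = perturb_and_observe(v, pv_current(v), v_prev, p_prev)
    v_prev, v = v, v_new
print(round(v, 1))  # settles into a small oscillation around ~29 V
```

Note the characteristic steady-state behaviour the text mentions: with a fixed step, the operating point never rests exactly at the MPP but oscillates around it.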
Power converters serve as MPP trackers by changing the power switches' duty cycle [16]. Three dynamic test operation procedures, namely day-by-day operation, stepped operation, and the EN50530 operation procedure, were introduced for P&O, VSSIC, and the hybrid step-size beta method [17]. Soft computing methods such as the Kalman filter, fuzzy logic control (FLC), neural networks, particle swarm optimisation (PSO), ant colony optimisation (ACO), artificial bee colony (ABC), the bat algorithm, and hybrid PSO-FLC have been introduced, and the results computed [18]. In standalone or grid-connected PV systems, single or double power converters are used depending on the load requirements [19, 20]. The single stage inverter (SSI) is used in place of the double stage converter to reduce costs and the number of components [21]. The PV parameters are adjusted to keep the DC motor parameter constant at various levels of irradiation [22]. Using PWM pulses from the MPPT, the conductance value of the power converter can be changed [23]. A standalone PV system with a basic resistive load was simulated under varying weather conditions (−25 °C to −50 °C) [24, 25]. A variable step size incremental conductance method has been created to enhance the dynamic and steady-state performance of the incremental conductance approach under load impedance change [26].
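The variable step size incremental conductance idea of [26] can be illustrated in a common textbook form (not necessarily the exact scheme of [26]): compare the incremental conductance dI/dV with −I/V to decide the direction, and scale the step by |dP/dV| so the step shrinks automatically near the MPP. The scaling constants and the toy PV curve below are illustrative assumptions:

```python
def pv_current(v):
    """Toy PV I-V curve (illustrative only); its MPP is near 29 V."""
    return max(0.0, 8.0 * (1.0 - (v / 42.0) ** 5))

def inc_mppt_step(v, i, v_prev, i_prev, scaling=0.05, max_step=1.0):
    """Variable-step INC update: direction from the sign of dI/dV + I/V,
    step size proportional to |dP/dV| (capped at max_step)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        return v if di == 0.0 else v + (max_step if di > 0.0 else -max_step)
    dp = v * i - v_prev * i_prev
    step = min(max_step, scaling * abs(dp / dv))
    if di / dv > -i / v:       # left of the MPP: raise the voltage
        return v + step
    if di / dv < -i / v:       # right of the MPP: lower it
        return v - step
    return v                    # at the MPP: hold

v_prev, v = 20.0, 20.5
for _ in range(500):
    v_next = inc_mppt_step(v, pv_current(v), v_prev, pv_current(v_prev))
    v_prev, v = v, v_next
print(round(v, 1))  # converges close to the toy curve's MPP
```

Because the step is proportional to |dP/dV|, the steady-state oscillation of the fixed-step variant largely disappears.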
A battery energy storage system (BESS) is added to the PV system for residential applications in order to maintain the voltage level and prevent power supply interruptions [27]. To extract the most power possible, the PV is matched with the residential load line. For a single stage inverter PV system, constant and changing insolation (300 W/m2 to 700 W/m2) is taken into account, and the MPPT controller is built to run in standalone or grid-linked mode depending on the PV output availability. By integrating one-cycle control with MPPT, a single stage inverter is obtained that increases power output by 3%, and its efficiency of more than 90% is raised to 95.67% [28].
The computational complexity of fuzzy logic controllers is lower than that of traditional controllers. The performance of a fixed step size (1000 W/m2 to 500 W/m2) controller is evaluated against fuzzy logic with particle-swarm-optimisation-based MPPT. Cell temperature does not have an impact on the power factor and the DC link voltage, as they are independently managed [29] for the grid-connected single stage inverter under step changes in solar irradiation. The fuzzy controller's membership function shape is changed to regulate the distance between its operating and maximum power points, and the increment in conductance is changed to be proportional to solar irradiation [30]. An adaptive fuzzy logic MPPT controller is proposed and tested for grid-connected PV systems [31] in order to enhance simple-fuzzy-logic-based MPPT. Several single stage and multistage inverters are discussed. Grid-connected PV generators use unidirectional inverters to allow power to flow from the source to the grid. When using grid-connected PV with local loads or stand-alone PV with reactive loads, a bidirectional converter is used. The development of an adaptive neuro-fuzzy inference system (ANFIS) PSO-based MPPT approach took into account ten different patterns of temperature and solar insolation [32]. To improve short-term PV power forecasting, an ANFIS-based MPPT controller with a combination of GA-PSO is implemented.
The use of voltage/current-based ANN MPPT algorithms is encouraged because the difficulty of hardware implementation and the cost of sensors prevented consideration of irradiance and temperature fluctuation [33]. A long-term study is conducted to determine the accuracy. An ANN-based MPPT is constructed, and its effectiveness in locating the ideal operating point for stationary modules with a slope of 30 degrees is shown for various seasonal variations [34].
To address power quality improvement and safety concerns, various isolated microinverter topologies were considered for grid-connected systems [35]. The incremental conductance method is chosen from the literature review because it performs better than other MPPT techniques, particularly for variable irradiance in grid-connected PV systems. The majority of studies used constant irradiance, while some of them also used irradiance changes of more than 400 W/m². In order to obtain the greatest power for a sudden change in irradiance, a grid-connected PV system with a new MPPT technique is suggested in this research. The servo mechanism and new MPPT approach are proposed at lower irradiances (250 W/m²) to obtain the maximum power, and the uplift of power from source to grid is thoroughly studied [36]. The biased three-winding transformer performance was analyzed, and the result obtained through the biased MPPT technique in INC MPPT methods is taken as reference [37].
According to the literature review, no additional equipment has been employed thus far to get the most power possible from the panel. In this study, a novel method for obtaining MPPT is presented, employing additional devices.
This section gives a quick overview of various PV systems and their MPPT methods that are described in the literature. Problems encountered when connecting the PV system to the grid and various approaches taken to achieve optimal efficiency are highlighted. The reference papers provide a brief description of the benefits and drawbacks. It introduces the idea of extracting maximum power using additional devices.
Functional Block Diagram of the Proposed System.
In the proposed method, the PV generator is connected to the microgrid through a pair of inverters and transformers. During low irradiance, the auxiliary transformer output voltage is added to boost the PV voltage and produce power with the help of feedback power from the grid through the impulse integrated inverter MPPT method. The proposed impulse integrated inverter MPPT approach for grid-tied PV generators is shown functionally in Figure 1. Through a pair of inverters, filter circuits, and transformers, the PV module is connected to the single-phase grid. An inverter and a filter circuit are used at the front stage of both transformers to convert the DC voltage of the PV to AC voltage. The main inverter is rated highly, while the auxiliary inverter is rated lower. The secondary windings of the main transformer and the auxiliary transformer are linked together in series and connected to the grid. The secondary voltage of the auxiliary transformer is added to increase the PV voltage to match the grid voltage level. The MPPT controller receives I_PV and V_PV from the PV array output terminals along with the I_RR reference. The MPPT controller's pulses cause the inverter circuit's input conductance to change accordingly. The inverter switches enable the PV to produce the most power possible. The correct sensors are positioned in the correct locations in order to obtain the reference values of voltage and current.
In this proposed model, the PV panel is divided into two parts: (1) the main panel and (2) the auxiliary panel. During normal irradiation, switches S1 and S2 are in the closed state, and both panels are connected to primary winding 1.
When there is not enough sunlight, the voltage present across the primary winding of the transformer is not high enough to allow the power available from the photovoltaic panel to be transferred to the secondary side of the transformer, which is connected to the load. At this point, switches S1 and S2 are turned to the OFF position. Because of this switching operation, the auxiliary photovoltaic panel is connected to the second primary winding of the transformer, and an adequate voltage is produced across the primary side of the transformer.
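The switching behaviour described above can be summarised in a small supervisory rule. The following is a minimal sketch only, assuming a simple voltage threshold; the name `v_required` and the dictionary representation are illustrative, not from the paper:

```python
def configure_switches(v_pv, v_required):
    """Hypothetical supervisory rule for switches S1/S2: while the PV
    voltage can support power transfer to the secondary side, both
    switches stay closed and both panels feed primary winding 1; once
    it cannot, S1/S2 open and the auxiliary PV panel feeds the second
    primary winding instead."""
    if v_pv >= v_required:
        return {"S1": "closed", "S2": "closed", "aux_winding_engaged": False}
    return {"S1": "open", "S2": "open", "aux_winding_engaged": True}
```

In practice such a rule would also need hysteresis around the threshold to avoid chattering when the irradiance hovers near the changeover point.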
Proposed System
Modeling. The equivalent circuit for a solar cell shown in Figure 2 is represented as a current source, governed by the incident photons, connected in parallel with a diode, which represents the barrier in the PV cell, together with the series-connected resistor RS and the parallel-connected resistor RP. Kirchhoff's law is applied in equation (1) to get the current obtained from the PV. The PV current varies with environmental changes such as irradiance and temperature, which leads to nonlinear behavior. Hence, the maximum power from the PV is not easily captured, and the curve changes with respect to atmospheric conditions. To overcome this difficulty, maximum power point tracking (MPPT) algorithms are developed to track the curve and extract the available maximum power from the PV. The conductance of the PV is matched with that of the load. The relationship between the impedance values is given in equation (2). Along with the conventional power equation, the conditions for power transfer from source to grid are depicted in equations (3) and (4).
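Equation (1) applies Kirchhoff's current law to the single-diode equivalent circuit of Figure 2, i.e. I_PV = I_PH − I_D − I_R. As a numerical illustration only — all parameter values below are generic textbook values, not the panel parameters used in the paper — the model can be evaluated as:

```python
import math

def pv_current(v, i_ph=8.0, i_0=1e-9, n=1.3, v_t=0.02585,
               r_s=0.005, r_p=100.0, iters=200):
    """Single-diode PV cell model via Kirchhoff's current law:
    I = I_PH - I_D - I_R.  All parameter values are illustrative.
    A damped fixed-point iteration is used because I appears inside
    the diode term through the series-resistance drop I * R_s."""
    i = i_ph
    for _ in range(iters):
        v_d = v + i * r_s                              # diode / shunt node voltage
        i_d = i_0 * (math.exp(v_d / (n * v_t)) - 1.0)  # diode current
        i_r = v_d / r_p                                # parallel-resistor current
        i = 0.5 * i + 0.5 * (i_ph - i_d - i_r)         # damped update
    return i
```

Sweeping `v` from 0 V to the open-circuit voltage reproduces the nonlinear P-V curve whose maximum the MPPT algorithms in this paper are designed to track.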
where I_PV is the output current of the PV panel, I_PH is the photogenerated current, I_D is the current through the diode, and I_R is the current through the parallel resistor.
where Z_TOTAL is the total impedance, Z_MAININV is the impedance of the main inverter circuit, Z_AUXINV is the impedance of the auxiliary inverter circuit, Z_LCMain is the impedance of the main filter circuit, Z_LCAux is the impedance of the auxiliary filter circuit, Z_PyMain is the impedance of the main primary side of the transformer, and Z_PyAux is the impedance of the auxiliary primary side of the transformer. The conditions to transmit power from source to grid follow, where V_PV is the output voltage of the PV, V_G is the grid voltage, and I_G is the grid current.
The proper operation of the PV system at higher irradiance values is shown in Figure 3. At lower irradiance values, however, the power transfer from source to grid is affected, as the condition cannot be satisfied. The obtained relation between the PV voltage and the grid voltage is given in (5), which does not match (3) and (4). The voltage and current equations for the PV system during lower irradiance are given in equations (6) to (11).
At lower irradiance, where V_OMain is the output voltage of the main PV and V_OAux is the output voltage of the auxiliary PV.
International Transactions on Electrical Energy Systems
where V_LMain is the voltage across the inductor in the main circuit, V_CMain is the voltage across the capacitor in the main circuit, V_PyMain is the main circuit primary voltage, V_LAux is the voltage across the inductor in the auxiliary circuit, V_CAux is the voltage across the capacitor in the auxiliary circuit, and V_PyAux is the auxiliary circuit primary voltage.
At higher irradiance, the condition is satisfied. To satisfy the condition for power transfer from source to grid at lower irradiance, an auxiliary transformer is brought into the system. The required modification is obtained in equations (13) and (14).
At lower irradiance, new conditions for power transfer from source to grid are obtained from (15) and (16). The turns ratio of the auxiliary transformer is given in equations (17) and (18).
Condition for power transfer from PV source to grid: the overall simplified diagram is given in Figure 3. Equation (19) shows the computation of the load angle δ from the q component of the auxiliary capacitor voltage, V_CAuxq, and the auxiliary capacitor voltage V_CAux. Equation (20) gives the relationship between the voltage components V_CMainαβ and V_CMaindq.
The difference between the actual value P_PV^a and the measured value P_PV^M gives ∆P_PV. Equations (21) and (22) give the values of P_PV^a and ∆P_PV.
Let us consider I_PV = constant; the required modification in the impedance of the PV's output side is obtained as follows. The load angle δ1 is obtained from the output power of the main inverter and the maximum PV power.
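The extracted text does not reproduce the authors' exact expression for δ1. For reference, a minimal sketch of the standard two-source power-transfer relation P = V1·V2·sin(δ)/X, inverted for the load angle, is given below; this is the textbook form only, not the paper's formula:

```python
import math

def load_angle(p, v1, v2, x):
    """Invert the standard power-transfer relation P = V1*V2*sin(delta)/X
    for the load angle delta (radians). p is the transferred active power,
    v1 and v2 the two source voltages, x the linking reactance. Raises if
    the requested power exceeds the transfer capability V1*V2/X."""
    s = p * x / (v1 * v2)
    if not -1.0 <= s <= 1.0:
        raise ValueError("operating point beyond transfer capability")
    return math.asin(s)
```

The larger the power pushed from the PV side to the grid, the larger the required angle, which is why the controller recomputes δ1 as the main-inverter output power changes.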
The current mode of a grid-connected PV converter is considered in Figure 4. The relationship between the input vector [U_in U_out U_C]^T and the output vector [Y_in Y_out]^T of the PV inverter can be represented as follows. The converter output impedance matrix is constructed as follows. To analyze the converter system, the following model is constructed. The input admittance of the converter derived from the PV impedance is as follows. The cross-coupling coefficients are constructed as follows. From the analysis of equation (29), the required values of G_22 and I_PV^a are calculated with the help of δ1 and δ2, whose values are generated with the help of a phase-locked loop.
The MPPT controller, shown in Figure 5, controls the voltages that come out of the main PV panel and the auxiliary panel. In this controller, the tabulated available input voltage is taken as a reference and compared with the output voltage; from this difference, the MPPT controller computes the value of the power angle for inverter 2, which is connected between the auxiliary PV panel and primary winding 2. Also, the V_PV and I_PV values are calculated through suitable sensors, the current I_RR value is taken as a reference value, and then the MPP value is computed. The driver circuit sends the PWM signal to both inverters and obtains the maximum power from the PV panel.
In Figure 6, the load angles of the main PV panel inverter (δ1) and the auxiliary panel inverter (δ2) are generated through the PLL circuit. If the ratios are not equal, the ∆ ratio is compared with the actual parameter ratio (step 8). If the ∆ ratio is greater than the negative of the actual parameter ratio, or the ∆ value of current is greater than zero, the PV voltage is increased (step 9). If the ∆ ratio is less than the negative of the actual parameter ratio, or the ∆ value of current is less than zero, the PV voltage is decreased (step 10). If the ∆ value of V_Sy is not equal to zero, ∆V_Sy is added to the secondary voltage of the transformer by engaging the auxiliary circuit to obtain the new value.
Working
The additional PV panel current (I_PV^a) is injected from the additional PV panel with the help of the controllers. Due to I_PV^a, the required voltage across the primary winding is successfully created, and it drives the available power from the main PV panel to the grid. The proposed simulation setup includes a 20 kW panel. Table 1 provides a list of the simulated system parameters for the proposed technique.
Simulink Block Diagram and Resultant Waveforms.
Three elements make up the suggested simulation base model for the proposed MPPT. As shown in Figure 8, they are the PV array block, the control circuit, and the power circuit. Figure 9 depicts the simulation of the suggested system. It is split up into three main sections. The first installation has two 20 kW PV arrays. Grid-integrated circuits and a transformer with two primary windings and one secondary winding make up the power circuit. All control components and units are present in the control circuit. Table 2 describes the varying solar radiation between 7.00 am and 5.30 pm.
Power Circuit.
The power circuit is made up of two primary windings and one output winding on the PI INC transformer. One input winding is linked to the inverter from PV panel 1, and the other input winding is connected to the inverter from PV panel 2 and is controlled by the appropriate controller. The setup is called the PI INC block. The current generated by the second winding is used to raise the transformer's output voltage. The voltage is smoothed using an LC filter. Figure 10 depicts this configuration.
Control Circuit.
The filter circuit is employed in the simulated power circuit depicted in Figure 11 to cancel out the undesirable signal. For MPPT, a mathematical algorithm is created to determine the value of the needed second PV panel voltage with the help of the computation block. The error values are calculated and provided to the PID controller in this stage. The PWM generator sends output from the PID controller to the VSI. Finally, the secondary winding from the second PV panel receives the computed voltage. The PV panel's terminal voltage and current are measured, and ripples are eliminated using an appropriate filter. The PV terminal voltage and current are used to instantly calculate the change in conductance values. Figure 11 illustrates the difference between the actual and reference voltages, which is used to compute the required second PV panel voltage. Because of the proposed system's grid connectivity, synchronization is crucial to grid integration. To track the values of δ1 and δ2 in this method, a single-phase PLL is employed. In the suggested inverter system, this value is used as a reference to generate the modulating signal.
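The change-in-conductance computation described here is the core of the incremental-conductance method: at the maximum power point dP/dV = 0, which is equivalent to ΔI/ΔV = −I/V. A minimal sketch of one update step follows; the variable names and fixed step size are illustrative, not the paper's implementation:

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One update of the incremental-conductance rule.
    Compares the incremental conductance dI/dV with the negated
    instantaneous conductance -I/V and nudges the voltage reference
    toward the point where they are equal (dP/dV = 0)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step          # irradiance rose: move up
        elif di < 0:
            v_ref -= step          # irradiance fell: move down
    else:
        g_inc, g = di / dv, i / v
        if g_inc > -g:             # left of the MPP: raise voltage
            v_ref += step
        elif g_inc < -g:           # right of the MPP: lower voltage
            v_ref -= step
    return v_ref
```

Repeated application drives the operating voltage toward the knee of the P-V curve, oscillating within one step size of the maximum power point.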
The simulation control circuit depicted in Figure 11 is essential to the suggested system methodology. Through the specific computational block, it regulates the system's output. The PV panel voltage and current waveforms are shown in Figure 13. Initially, during low irradiation from 7.00 am to 9.00 am (0 s to 1 s) and from 4.00 pm to 5.30 pm (2 s to 3 s) under the no-load condition, the current and voltage values of the PV panel are very low. Under normal irradiation, both values improve. The electricity drawn from the PV is depicted in Figure 14. The proposed method extracts more power than the traditional method when the irradiance is low.
The load power that is maintained at both low and high levels of irradiation is shown in Figure 15. Figure 16 shows the grid power for each irradiance level. The grid provides the necessary extra power to the load when the PV power is insufficient to meet the load's needs. If there is more PV power available than needed to meet the load, the extra PV electricity is injected into the grid. The grid power is positive and lower than with the traditional incremental MPPT during 0 s to 1 s, when power flows from the grid to the load. The extra PV power is delivered into the grid between 1 and 2 seconds, making the grid power negative. The grid power is negative in the proposed method, whereas it is positive in the conventional method, during the time interval of 2 to 3.5 s, when the available PV power is marginally larger than the load requirement.
The auxiliary inverter's power angle variation is seen in Figure 17. During the low irradiation period, the auxiliary unit is connected to the auxiliary inverter. The auxiliary inverter power angle is generated through the PI INC MPPT controller, and this power angle δ is used to improve the transformer primary side voltage.
The main and auxiliary inverters' peak-to-peak and RMS voltages are displayed in Figure 18, respectively. While the auxiliary inverter voltage level depends on the PV voltage, the main inverter maintains its voltage level. The auxiliary inverter output voltage is 0, since it is not necessary when the PV voltage is adequate to power both the load and the grid.
The auxiliary inverter current is shown in Figure 19. No current is drawn from the inverter when the radiation level is high. The inverter current is almost 1 A between 0 and 1 second. Due to biasing, the inverter current is high between 2 seconds and 3.5 seconds.
Power and Cost Analysis for the Proposed System
The power extracted from the 20 kW PV panel is examined and contrasted with the existing incremental conductance MPPT method. The investigation is organized into the following two categories: (1) low irradiation, for which Table 3 compares the cost of the proposed system to the present system during the low-irradiation period as well as power extraction per hour, and (2) irradiation under cloudy and misty conditions, for which Table 4 compares the existing system's cost analysis and electricity extraction per hour during the overcast and misty period with the proposed system.
Low Irradiation.
A comparison of the cost of the proposed system to the present system during the low-irradiation period, as well as power extraction per hour, is given in Table 3.
Cloudy and Misty Conditions.
A comparison of the existing system's cost analysis and electricity extraction per hour during the overcast and misty period with the proposed system is given in Table 4.
Extracted Power Analysis.
The annual power extraction analysis of the proposed and existing MPPT methods is given in Table 5.
Cost Analysis.
The annual cost analysis of the proposed and existing MPPT methods is given in Table 6. Figures 20 and 21 contrast the proposed PI INC approach with the current INC method by providing a visual depiction of energy extraction and cost analysis.
The suggested methodology is explained in this section. The choice is made between the transformer and the additional panel; the transformer was chosen to preserve the voltage profile by adding the necessary voltage. The implementation plan for the increased voltage approach is presented, and a functional block diagram of the suggested system is provided. The PI INC MPPT method's dynamic equation is derived. In addition, the model's algorithm and flowchart are provided. In terms of power and cost analysis, a comparison is made between the present and proposed approaches.
Comparative Analysis of Power Extraction during Varied Irradiation.
Table 7 describes the improvement of power developed using the proposed methodology (20 kW panel). In Section 3, where simulation parameters and their corresponding values are presented, the suggested method is simulated in full. The method's simulation block module, control scheme, and power circuit are described. The PV array's V-I and P-V characteristics are displayed. Simulation results for the PI INC MPPT method are obtained and compared to INC MPPT. The outcomes demonstrated the advantages of the PI INC MPPT. The proposed system's daily extracted power from the PV was substantially improved according to the simulated output waveforms and data values.
System Setup.
A simplified simulation model was used to create the tangible prototype for the proposed method. Table 8 gives the component ratings for the trial system. Two single-phase inverters constructed using an IGBT power module from SEMIKRON and one auxiliary unit make up this circuit. A secondary winding and two input primary windings make up the proposed three-winding transformer. One voltage-smoothing capacitor and the secondary winding are linked in parallel to the grid. Through the inverter circuit, one primary winding is linked to the PV source, while the second winding is linked to the auxiliary PV source. The required supply is acquired from the auxiliary PV source through the inverter and is driven with a SEMIKRON-based voltage control circuit. The voltage and current values of PV source 1 and the auxiliary PV source (PV source 2) are measured by suitable sensors, with the corresponding sensor ratings listed in Table 8. The suggested control strategy was created in MATLAB/Simulink using the Simulink Coder, as shown in Figure 22, and was then implemented on the TI LaunchPad F28379D. The power control circuit in Figure 23 was isolated using the TLP250, which also served as the power module's driver. Figure 24 depicts the deployment of the whole experimental evaluation system.
Experimental Evaluation Setup of Hardware Photo.
The hardware input supply for the proposed system comes from a 1 kW PV panel, as shown in Figure 24. While drawing power from the PV, a three-winding transformer is needed (two primary windings and one secondary winding). Two inverters are connected between the PV panel and the primary windings of the transformer, rated (0-15) V and (0-15) V/230 V. The following components are used for the control unit, as illustrated in Figure 22: a rectifier, a voltage controller, and a TI LaunchPad F28379D controller board.
Control Circuit Implementation Using Simulink Coder.
The terminal voltage and current of the PV panel are measured, and any ripples in the signal are removed by an appropriate filter. The voltage and current readings at the PV terminals are immediately used to calculate the values of the change in conductance. The auxiliary voltage provided by the PV system is determined by the immediate power availability. The control circuit is responsible for determining the required auxiliary voltage, with the difference between the actual voltage and the reference voltage depicted in Figure 22. Since the proposed system will be connected to the grid, synchronization plays an important part in grid integration. To achieve frequency tracking of the value of δ, this method makes use of a single-phase phase-locked loop (PLL). Within the framework of the proposed inverter system, this value of δ serves as a reference for the generation of the modulating signal.
An optocoupler TLP250 and a bridge rectifier are used in the driver circuit depicted in Figure 25 to primarily control the voltage.
Hardware Result.
The auxiliary PV panel voltage is shown in Figure 25. During normal irradiation, the auxiliary voltage is connected to the main panel. During low irradiation, the voltage value is reduced in 1 to 2 seconds, and then the auxiliary PV panel is connected to the second primary winding in 2 to 3 seconds. In Figure 26, the voltage peaks during the initial period. When the auxiliary unit is connected to the main panel, there is no voltage flow to the second winding through the auxiliary inverter on the primary side of the transformer. When the switches S1 and S2 open, the auxiliary unit is connected to transformer primary winding 2. The RMS value of PV panel 2 is shown in Figure 27. When PV panels 1 and 2 are connected to the primary winding, the RMS value of the auxiliary unit is near zero. The RMS value increases after PV panel 2 is connected to the auxiliary unit.
Initially, PV panel 1 and PV panel 2 are connected together, and the combined PV output is connected to the load through the transformer. At no load, the combined PV output is high. During low irradiation with load, the output value of PV 1 is reduced, as shown in Figure 28. Figure 29 shows how the power is boosted due to the improved voltage from the auxiliary unit. After the switches are opened, the auxiliary panel is connected to the auxiliary unit. Figures 30(a) and 30(b) depict the entire impact of the proposed PV system. The integrated PV system produces its maximum electricity under typical irradiation, seen explicitly during 0-1 seconds. From 1 s to 2 s, the irradiation is low and there is no power transmission. The PV panel then supplies the load with the available power following the separation of the auxiliary panel, which takes place in 2 to 3 seconds.
The PLL value of power and its angle are shown in Figures 31(a) and 31(b). Compared to the loading condition, the power angle obtained at low irradiation after one second is superior.
The hardware of the proposed system was developed, and the voltage and current of the proposed PI INC MPPT technique were obtained. Extraction of continuous power irrespective of the change in irradiation was presented. The hardware results justify the implementation of the proposed MPPT technique, as it ensures maximum power extraction under varied environmental conditions.
Conclusion
The proposed system, PI INC MPPT, was successful in drawing energy from the solar PV panel during periods of low irradiance. Through research, simulation modelling, and effective hardware outcomes, this technique was verified. This unique proposed MPPT approach using an auxiliary unit extracts an additional 3 MWh of electricity per year when compared to the current method. The planned three-winding transformer was built with an auxiliary unit that produces the right output, achieving the ultimate objective. The Texas board controller proved successful in controlling the voltage needed for the primary winding. Further research is possible using the proposed MPPT with auxiliary unit model, because there is much room for development in the field. This methodology is thought to be well suited to the current economy and electricity demand.
Data Availability
The author gathered the information for their manuscript from the following text books. Data on the world's use of
Carcharocles -bitten odontocete caudal vertebrae from the Coastal Eastern United States
A description and analysis is given of three Neogene odontocete caudal vertebrae that were bitten by the extinct megatooth sharks Carcharocles megalodon or Carcharocles chubutensis. The peduncular caudal vertebrae show bilateral gouge marks consistent with having been actively bitten and wedged between adjacent teeth of C. megalodon or C. chubutensis. None of the vertebrae show signs of healing. The occurrence of bite marks on distal caudals suggests active predation (vs. scavenging) in order to immobilize even relatively small prey prior to consumption.
Geological and geographic settings
The vertebrae described herein were collected separately from Mio-Pliocene sediments along the Atlantic Coastal Plain. CMM-V-4360 (Fig. 1) was surface collected by ME from within Pliocene Yorktown Formation sediments in the PCS phosphate mine in Aurora, North Carolina, USA. The geology and paleontology of this site has been thoroughly described elsewhere (Ray 1983, 1987; Purdy et al. 2001; Ray and Bohaska 2001; Ray et al. 2008).
CMM-V-8405 (Fig. 2) was collected by SG as beach float from Bayfront Park (formerly known as Brownie's Beach), Calvert Cliffs, Maryland, USA. Although it was not found in situ, there is no reason to believe that it was not locally derived from the adjacent cliffs that comprise Miocene sediments from the Plum Point Member of the Calvert Formation. The local Miocene geology has been described most recently by Kidwell et al. (2015).
CMM-V-8406 (Fig. 3) was collected by MSV as beach float from Willows Beach, Calvert Cliffs, Maryland, USA. Although it was also not found in situ, it is thought to have been locally derived from the adjacent cliffs that comprise Miocene sediments from the Plum Point Member of the Calvert Formation.
Description
Figures 1-3 illustrate three peduncular caudal vertebrae of eurhinodelphinid-size odontocetes (Table 1). In the details of their morphology, they match those of eurhinodelphinid-grade odontocetes (Fig. 4; Abel 1931: pls. 25-26, 29). These vertebrae are identified as distal caudals because of their lack of transverse processes and the diminutive size of their neural canal. Furthermore, they are peduncular vertebrae, characteristic of cetaceans, that occupy the region of the vertebral column immediately anterior to the fluke (Fig. 4). Peduncular vertebrae are those in which the body of the vertebra is higher than it is wide (Table 1) and the transverse processes, neural arch, and pedicles are very reduced or absent (Uhen 2004). The vertebrae illustrated in Figs. 1-3 derive from fully mature individuals; their circular epiphyses are fully fused. In all of these vertebrae, length is their greatest dimension, followed by their height, then width (Table 1). In the mysticetes from along Calvert Cliffs and from Aurora, and at least in odontocetes more derived than Xiphiacetus and Zarhachis (Lambert et al. 2017: fig. 14) in which peduncular vertebrae are known, they are all both higher and wider than they are long.
Cylindrical best describes the shape of these vertebrae. Their neural arches are small (CMM-V-4360, Fig. 1) to virtually non-existent (CMM-V-8406, Fig. 3). The diameter of the neural canals in these vertebrae ranges from 2-5 mm. In all three vertebrae, at about the midpoint in the length of the centrum, the bilateral arterial foramen passes vertically through the body of the centrum.
In lateral views of CMM-V-4360 (Fig. 1), deep gouges are present on both antero-ventrolateral sides of the vertebra. The gouges begin at the anteroventral base of the centrum and pass diagonally across the body of the centrum, where they end at about the midpoint in both length and height of the centrum. The longest gouge is approximately 46 mm. On the left side of the centrum, there is one major gouge, whereas on the right side there are two; shorter and shallower incisions are also present on both ventrolateral sides of this vertebra.
In lateral views of CMM-V-8405 (Fig. 2), multiple gouges are present on both lateral sides of the vertebra. As in CMM-V-4360, the gouges also cross the body of the centrum diagonally in an anterodorsal-posteroventral direction. Both sides of this vertebra (mostly within the lower half of the centrum) are marked by six variably spaced and prominent gouges. The longest gouge on the right side of the centrum is approximately 37 mm (Fig. 2B).
CMM-V-8406 (Fig. 3) is also marked by multiple gouges, but they are not as pronounced (as elongate or as deeply incised into the body of the centrum) as in the two aforementioned specimens. The longest gouge is approximately 19 mm (Fig. 3A). The anterior end of the vanishing neural arch is marked by two gouge marks, whereas the right posterior end is marked by four bite traces. The right posteroventral end of the centrum is also scored by several short gouges (Fig. 3B).
Discussion and conclusions
Of the Neogene predators known, only Carcharocles megalodon and Carcharocles chubutensis had teeth large enough, with sufficient spacing (Fig. 5A) between adjacent teeth, for the dolphin vertebrae described herein to have been gouged on both sides simultaneously in the manner in which they were. C. megalodon is known from both localities (Purdy et al. 2001; Kent in press). In the Miocene sediments along Calvert Cliffs and in the PCS phosphate mine in Aurora, C. chubutensis (which in Aurora, Purdy et al. (2001) list as Carcharodon subauriculatus) teeth occur in some of the same beds as teeth of C. megalodon. Individuals of C. chubutensis would also have been large enough to have created bite marks such as those preserved on the odontocete vertebrae described here. C. chubutensis is considered to have been the immediate predecessor and a chronomorph of C. megalodon (Perez et al. in press). Smaller predators like Carcharodon hastalis (Bianucci et al. 2010; Ehret et al. 2012; Kent in press), which occurs in both localities (in Aurora, Purdy et al. 2001 describe it as Isurus hastalis and its synonym Isurus xiphodon), were also considered. In C. hastalis, only the largest individuals, with teeth up to 75 mm in length, might have had tooth size and spacing large enough to make the gouges present on CMM-V-8406 (Fig. 3), but not on either CMM-V-4360 (Fig. 1) or CMM-V-8405 (Fig. 2).
None of the bite traces show evidence of the cutting tooth having had serrations. However, because the teeth cut into the bone parallel to the cutting edge rather than raking across its surface, there was little opportunity for serration marks to be preserved had they been present. Furthermore, the spongy texture of the bone surface did not provide the necessary material resolution for serration marks to be preserved. Many other shark-tooth-marked bones are known but, to our knowledge, no other fossils are known where markings were made on single bones from having been wedged between two adjacent teeth.
Although we are confident that the gouges in CMM-V-4360 and CMM-V-8405 were made by Carcharocles spp. and that none of the vertebrae show signs of healing, we unfortunately do not know for sure whether the tooth traces result from scavenging or active predation. Nevertheless, we think that a stronger case can be made for active predation over scavenging. Extant great white sharks (Carcharodon carcharias) do not actively prey upon large adult baleen whales, although they will scavenge their carcasses (Dickens 2008; Fallows et al. 2013; Collareta et al. 2017). Fallows et al. (2013) also observed that, during scavenging events, great white sharks generally show an initial preference for foraging on the caudal peduncle and fluke of a Bryde's whale (Balaenoptera edeni) carcass before proceeding to blubber-rich regions of the body of the cetacean. Conversely, C. carcharias rarely scavenge smaller marine mammals (seals or diminutive odontocetes); however, they do actively prey upon these smaller prey items. Based on the feeding habits of extant great white sharks and on cetacean and pinniped bones exhibiting large bite marks attributed to the activity of mega-toothed sharks, Collareta et al. (2017) proposed that megalodon preyed upon relatively small marine mammals (e.g., small-sized mysticetes) while also scavenging on the carcasses of larger whales. The odontocete vertebrae described here would have come from individual cetaceans no longer than 4 m, considerably smaller than large megalodon in the 15-18 m body-length range (Gottfried et al. 1996; Shimada 2003; Pimiento and Balk 2015; Grant et al. 2017). The reconstructed jaws shown in Fig. 5A accompany a reconstructed skeleton of a shark with a body length of about 11 m.
Modern large sharks attack small, echolocating toothed whales in such a way as to avoid detection by both the lateral visual field and the anteriorly directed biosonar (Long and Jones 1996; Bianucci et al. 2010). Extant great white sharks are known to disable dolphins by biting their caudal peduncle (Long and Jones 1996: fig. 8). At a minimum, Carcharocles megalodon could certainly have done the same (Fig. 5B). That the caudal vertebrae show multiple gouges suggests that the peduncle of these odontocetes was jammed forcefully and repeatedly between adjacent teeth by powerful bite forces applied by teeth in the opposing jaw (Fig. 5A). The application of such repeated force seems more in keeping with the disabling of struggling prey than with the dismembering of a small carcass so close to its fluke (Fig. 5B). Therefore, these Carcharocles-bitten odontocete caudal vertebrae suggest that this apex predator included this disabling tactic in its predatory repertoire, and that it actively preyed upon relatively small odontocetes.
Fig. 1. A Neogene odontocete caudal vertebra, CMM-V-4360, from a spoil pile within the PCS phosphate mine in Aurora, North Carolina, USA. Peduncular vertebra in right (A) and left (B) lateral views and anteroventral (C) view. The deep diagonal gouges on the anteroventral margin of the vertebra were created when the vertebra was repeatedly wedged between two adjacent teeth.
Fig. 2. A Miocene odontocete caudal vertebra, CMM-V-8405, from beach float at Bayfront Park, Calvert Cliffs, Maryland, USA. Peduncular vertebra in left (A) and right (B) lateral views and anteroventral (C) view. Multiple gouges are present on both sides of the vertebra.
"Geology"
] |
Single-site-resolved imaging of ultracold atoms in a triangular optical lattice
We demonstrate single-site-resolved fluorescence imaging of ultracold $^{87}\mathrm{Rb}$ atoms in a triangular optical lattice by employing Raman sideband cooling. Combining a Raman transition at the D1 line and a photon scattering through an optical pumping of the D2 line, we obtain images with low background noise. The Bayesian optimisation of 11 experimental parameters for fluorescence imaging with Raman sideband cooling enables us to achieve single-atom detection with a high fidelity of $(96.3 \pm 1.3)$%. Single-atom and single-site resolved detection in a triangular optical lattice paves the way for the direct observation of spin correlations or entanglement in geometrically frustrated systems.
Introduction
Frustrated quantum magnets are one of the most challenging subjects in condensed matter physics. The simplest example of frustrated magnetism is a triangular lattice with antiferromagnetic interactions. A competition of the interactions among the spins often leads to exotic liquid states, in which the spins are strongly correlated while their long-range order is suppressed [1].
Quantum simulation with ultracold atoms in optical lattices [2,3] is a promising approach for understanding such systems. The simulation of classical magnets of frustrated systems has been demonstrated using Bose-Einstein condensates in a triangular optical lattice [4]. Further, theoretical proposals for the quantum simulation of frustrated quantum magnetism have been reported [5,6,7,8]. However, several technical challenges, such as the preparation of a low-entropy state, need to be overcome to realise frustrated quantum magnets in ultracold-atom experiments.
Recently, there have been considerable advances in the simulation of quantum magnetism using the quantum gas microscope technique [3]. An antiferromagnetic order was realised in the Fermi-Hubbard model on a square lattice [9]. To achieve a low-entropy state, which is essential for observing a magnetic order or correlation, manipulation of the local potential has been demonstrated [10]. In addition to two-spin correlations, string order [11,12] and entanglement entropy [13] are accessible; such measurements provide a unique probe of quantum spin liquid states [14]. Exotic excitations such as spinons, arising from quantum spin liquid states, may be captured through direct observation of spin dynamics [15,16,17].
In this study, we demonstrate the single-site-resolved imaging of ultracold 87 Rb atoms in a triangular optical lattice. Although polarisation gradient cooling has been successfully employed to realise a quantum gas microscope of 87 Rb atoms in a square lattice [18,19], we used Raman sideband cooling, which has been applied to microscopes of fermionic 6 Li and 40 K atoms [20,21,22], for the following reasons. Generally, it is challenging to provide optical access for polarisation gradient cooling beams, except in the square lattice case, where one can apply the cooling light by superimposing it on the optical lattice beams [19]. From the perspective of optical access, Raman sideband cooling is relatively easy to implement. Furthermore, in our scheme, scattered light from the cooling beams near the D1 line can be removed using an optical filter, so the fluorescence photons of the D2 line are detected with little stray light. A drawback of this scheme is the relatively large number of experimental parameters required by the cooling method. However, we have tuned them efficiently using machine learning based on Bayesian optimisation [23,24,25,26,27].
Experimental setup and atom preparation
We first describe the apparatus used for preparing and detecting ultracold 87 Rb atoms in a triangular optical lattice. To set up a high-spatial-resolution imaging system compatible with a laser cooling system, we employed the optical transport technique [28]. The laser-cooled atoms were directly loaded into an optical dipole trap generated by a laser beam at a wavelength of 817 nm. The power of the beam was 450 mW, and the beam waist was 30 µm. To transport the atoms to the imaging region above a high numerical aperture (NA) objective of 0.65 (Mitsutoyo M Plan Apo NIR HR50x (custom)), we shifted the focus position of the trap beam by 119 mm in 0.9 s with an air-bearing moving stage. Subsequently, the atoms were loaded into a crossed dipole trap generated by two beams with a beam waist of 40 µm and a wavelength of 1064 nm. Here the internal atomic state was randomly populated owing to photon scattering induced by the transport beam. To cool the atoms efficiently by evaporative cooling, the atoms were initialised into the |F = 2, m_F = −2⟩ state via optical pumping using two circularly polarised beams.

Figure 1. (a) Top view of the high-resolution imaging system. L1, L2, and L3 denote the triangular lattice beams; R1(H/D) and R2 denote the Raman beams; and OP1, OP2, and OP3 denote the optical pumping light with σ− polarisation. The angles between lattice beams L1-3 are 120°. The polarisation of R1(H/D) is parallel to the x-axis and corresponds to π polarisation. The polarisation of R2 is oriented along the z-axis and consists of σ+ and σ− polarisation. The applied magnetic field B is 1 G along the x-direction. (b) Side view of the high-resolution imaging system. The angle between the vertical lattice beams is approximately 18°. R1(H) and R2 propagate in the xy-plane. R1(D) is irradiated at an angle of 24° from the xy-plane. Spontaneously scattered photons, resulting from the optical pumping beams (OP1-3), are collected through a high-NA objective of 0.65.
Then, we conducted evaporative cooling by lowering the potential depth of the crossed dipole trap. The typical temperature of the atoms after the evaporative cooling was approximately 1 µK.
We loaded the ultracold atoms into a lattice system consisting of a triangular and a vertical optical lattice (see figures 1(a) and (b)). We first ramped up the vertical optical lattice (along the z-direction), which was formed by two 810 nm beams at a relative angle of approximately 18°, as illustrated in figure 1(b). The waists of the beams were approximately (100, 43) µm along the horizontal and vertical axes, respectively. The lattice separation of the vertical lattice was 2.58 µm, which is smaller than the atomic cloud size in the optical trap; thus, the atoms were distributed over several layers of the vertical lattice. Under this circumstance, we could not obtain clear images because of the contribution of considerably blurred fluorescence signals from the out-of-focus layers. To obtain a site-resolved image, we prepared atoms trapped in a single layer using the following procedure. We applied a magnetic field gradient of 21.4 G/cm along the z-direction. The position-dependent Zeeman shift caused by this gradient made it possible to resolve each layer of the vertical lattice with a microwave transition. The atoms trapped in the selected layer were transferred to the |F = 1, m_F = −1⟩ state, and the remaining atoms in the |F = 2, m_F = −2⟩ state were then blown away by a laser pulse resonant with the transition from F = 2 to F = 3. Finally, we ramped up the triangular optical lattice formed by three 1064 nm beams intersecting at relative angles of 120° in the horizontal xy-plane (L1-3), as depicted in figure 1(a). Notably, the shape of the triangular lattice beams was elliptical to reduce the harmonic confinement, with a typical beam waist of (130, 44) µm along the horizontal and vertical axes, respectively. For site-resolved imaging in the triangular lattice, we rapidly increased the potential depths of the triangular and vertical lattices to (120, 170) µK, respectively, pinning the atoms to their lattice sites.
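As a cross-check on the layer-selection scheme, the per-layer splitting of the microwave transition can be estimated from the quoted gradient and lattice spacing. This is a hedged sketch: the 87 Rb ground-state g-factors and the linear-Zeeman approximation are our assumptions, not values given in the text.

```python
# Hedged estimate of the layer-selection energy scale.
# Assumptions: linear Zeeman regime, g_F = +1/2 for F = 2 and
# g_F = -1/2 for F = 1 in 87Rb.
MU_B_MHZ_PER_G = 1.3996  # Bohr magneton in frequency units, MHz/G

def layer_splitting_khz(gradient_g_per_cm, spacing_um):
    """Differential shift of the |2,-2> -> |1,-1> microwave transition
    between adjacent layers of the vertical lattice, in kHz."""
    delta_b = gradient_g_per_cm * spacing_um * 1e-4  # field step per layer (G)
    # m_F * g_F: -2 * (+1/2) for |2,-2>, -1 * (-1/2) for |1,-1>
    sensitivity_mhz_per_g = ((-1) * (-0.5) - (-2) * 0.5) * MU_B_MHZ_PER_G
    return sensitivity_mhz_per_g * delta_b * 1e3  # MHz -> kHz

# The quoted 21.4 G/cm gradient and 2.58 um spacing give roughly 12 kHz
# per layer, comfortably resolvable with a narrow microwave pulse.
splitting = layer_splitting_khz(21.4, 2.58)
```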
Fluorescence imaging with Raman sideband cooling
We adopted fluorescence imaging to achieve single-atom sensitivity. However, light scattering during imaging heats the atoms, which leads to atom loss. To suppress this undesired heating, we utilised Raman sideband cooling, which we describe here. Raman beams R1 and R2, depicted in figure 2(a), drive a transition that decreases the vibrational level from the |F = 2, m_F = −2, ν⟩ state to the |F = 1, m_F = −1, ν − 1⟩ state, where ν is the vibrational level. The atoms transferred to the |F = 1, m_F = −1, ν − 1⟩ state are optically pumped back to the |F = 2, m_F = −2, ν − 1⟩ state using optical pumping (OP) light with σ− polarisation (see figure 2(a)). The OP1 (OP2) light transfers atoms from F = 1 (F = 2) to F = 2. In the optical pumping process, changes in the vibrational level are suppressed because the atoms are trapped tightly enough in the potential wells to be in the Lamb-Dicke regime [29]. Through repetition of the above processes, the atoms are cooled down to the vibrational ground state |F = 2, m_F = −2, ν = 0⟩, while the photons scattered during the optical pumping are used for fluorescence imaging. The photons collected with the high-NA objective were recorded on an electron-multiplying CCD camera. The atoms stop scattering photons when they populate the vibrational ground state. For continuous fluorescence imaging, we applied an additional beam (OP3), which transferred the atoms from F = 2 to F = 3.
The wavelength of the fluorescence light induced by the optical pumping was 780 nm (the D2 line), whereas the Raman beams were detuned by −100 GHz from the D1 line at 795 nm. Therefore, the stray light from the Raman beams can be removed from the fluorescence light using an interference filter. Furthermore, the optical pumping beams OP1-3 propagated perpendicularly to the objective axis, which reduced their stray light incident on the imaging system (see figure 1). Owing to these advantages, we obtained fluorescence images with only a few background photons. We used two Raman beam pairs to independently adjust the coupling strengths of the Raman transition along the horizontal and vertical directions. The R1(H) and R2 beams drove the Raman transition along only the horizontal direction, and the R1(D) and R2 beams drove the transition mainly along the vertical direction. Here R1(H) propagated along the y-axis, and R1(D) was irradiated at an angle of 24° from the horizontal plane (see figure 1). To experimentally confirm the role of each Raman beam pair (R1(H)-R2 and R1(D)-R2), we took Raman spectra. The spectra of the former and latter pairs are presented in the top and bottom panels of figure 2(b), respectively. The Raman spectra indicate that the separations of the vibrational levels of the lattice potential wells were (ω_H, ω_V) = 2π × (108, 35) kHz in the horizontal and vertical directions, respectively. We evaluated the Lamb-Dicke parameters using these separations. For the spontaneous emission induced by the optical pumping, the Lamb-Dicke parameters, η_OP = k_OP a, were 0.19 and 0.33 in the horizontal and vertical directions, respectively, where k_OP is the momentum of the optical pumping beams and a = √(ℏ/2mω) is the harmonic oscillator length. The Lamb-Dicke parameter for the Raman beams is described as η_R = Δk a, where Δk is the momentum transfer due to the Raman beams.
Here the Raman momentum transfers for the R1(H)-R2 and R1(D)-R2 pairs are expressed as ∆k H and ∆k D , respectively. The momentum transfer ∆k H included only the horizontal component of 9.0 µm −1 , whereas ∆k D contained a horizontal component of 8.6 µm −1 as well as a vertical component of 3.2 µm −1 . Therefore, the Lamb-Dicke parameter, η H = ∆k H a, is 0.21 for the beam pair R1(H)-R2. Similarly, for the beam pair R1(D)-R2, the Lamb-Dicke parameters, η D = ∆k D a, are 0.20 in the horizontal direction and 0.13 in the vertical direction. All the Lamb-Dicke parameters (η OP , η H , and η D ) are significantly lower than 1, indicating that the atoms were in the Lamb-Dicke regime for the cooling process.
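The harmonic-oscillator lengths and Lamb-Dicke parameters quoted above follow directly from the measured trap frequencies. A short sketch, assuming the standard 87 Rb mass and taking the 780 nm D2 wavevector for the optical-pumping photon:

```python
import math

HBAR = 1.0545718e-34            # J s
M_RB87 = 86.909 * 1.66054e-27   # 87Rb mass in kg (assumed standard value)

def ho_length(omega):
    """Harmonic-oscillator length a = sqrt(hbar / (2 m omega))."""
    return math.sqrt(HBAR / (2 * M_RB87 * omega))

def lamb_dicke(k, omega):
    """eta = k * a for a wavevector (or Raman momentum transfer) k."""
    return k * ho_length(omega)

omega_h = 2 * math.pi * 108e3   # horizontal trap frequency (rad/s)
omega_v = 2 * math.pi * 35e3    # vertical trap frequency (rad/s)
k_op = 2 * math.pi / 780e-9     # D2 optical-pumping photon wavevector

eta_op_h = lamb_dicke(k_op, omega_h)   # ≈ 0.19, as quoted
eta_op_v = lamb_dicke(k_op, omega_v)   # ≈ 0.33, as quoted
eta_h = lamb_dicke(9.0e6, omega_h)     # R1(H)-R2 pair, Δk_H = 9.0 µm⁻¹, ≈ 0.21
```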
In the preceding discussion, we assumed that each site of the triangular lattice is well approximated by a two-dimensional isotropic harmonic trap. Under this assumption, there is no angular dependence of the coupling strength of the Raman transition in the horizontal plane. However, considering the spatial symmetry of a triangular lattice, a relative angle of 15° between the incident direction of a lattice beam and the Raman momentum provides the best Raman transfer efficiency for the triangular lattice. Owing to limitations of our apparatus, the relative angle was approximately 19°. Although this angle differs slightly from the optimum, we successfully cooled the atoms in the triangular lattice through Raman sideband cooling.
Automatic optimisation of cooling parameters
To improve the quality of the fluorescence images, the laser parameters of the Raman beams and optical pumping beams need to be fine-tuned. Because of experimental imperfections, such as the limited polarisation purity of the optical pumping beams or heating from the optical lattices, it is generally difficult to determine the desirable parameters without experimental optimisation. Moreover, correlations between multiple parameters often compel experimenters to expend considerable time and effort. We employed Bayesian optimisation to overcome this difficulty; the details are explained in our previous work on the Bayesian optimisation of evaporative cooling [25]. To improve the signal-to-noise ratio of the fluorescence image, it is necessary to increase photon scattering, which inevitably heats the atoms. To ensure high-fidelity detection, it is also necessary to suppress hopping of the atoms, which likewise results in atom loss. Therefore, Raman sideband cooling needs to sufficiently surpass the heating. To this end, we adopted N2 × (N2/N1) as the score to be maximised by the optimiser. Here N1 is the fluorescence count of the first image with an exposure time of 0.5 s, and N2 is that of the second image with the same exposure time taken 2 s after the first (see figure 3(a)). The factor N2 in the score motivates the optimiser to increase the number of fluorescence photons. The ratio N2/N1 corresponds to the lifetime of the atoms during the Raman sideband cooling process; less hopping yields a longer lifetime. Under these conditions, the balance between increased photon scattering and efficient cooling is maintained. Notably, our score based on the fluorescence counts is robust and easy to evaluate. Furthermore, instead of using only the atoms in a single layer, we used those in all the layers to stabilise the atom number and increase the fluorescence count.
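The optimisation score described above can be written as a one-line function; the comments restate its interpretation from the text:

```python
def imaging_score(n1, n2):
    """Score maximised by the Bayesian optimiser: N2 * (N2 / N1).

    n1: fluorescence count of the first 0.5 s image,
    n2: count of the second image taken 2 s later.
    The factor n2 rewards bright images; the ratio n2/n1 rewards a long
    atom lifetime (little loss or hopping between the two images)."""
    return n2 * (n2 / n1)
```

For example, a run with counts (100, 100) scores higher than one with (100, 80), since the latter lost a fifth of its signal between images.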
We optimised 11 parameters involving the intensities and frequencies of Raman beams and optical pumping beams. The optimisation was conducted automatically and almost converged after 1700 trials within 13 h (see figure 3(b)). Therefore, this method can also be used to compensate for the slow drift of the experimental environment, such as the temperature change over the year or deterioration of optical components with age, and even to recover from slight accidental changes in the experimental apparatus.
Based on the outcome of the Bayesian optimisation, the two-photon Rabi frequencies of the Raman transition on the carrier were set to 2π ×1.1 kHz for R1(H)-R2 and 2π × 6.8 kHz for R1(D)-R2. Using the optimised parameters, we achieved a lifetime of (6.5 ± 0.4) s of atoms during fluorescence imaging (see figure 3(c)).
Site-resolved imaging
Using the optimised parameters explained in the preceding section, we realised single-site-resolved imaging of 87 Rb atoms in a single-layer triangular optical lattice, as exhibited in figure 4(a). The performance of a quantum gas microscope is characterised by the detection fidelity, which consists of pinning, hopping, and loss rates. To evaluate these rates, the atom distribution needs to be determined, which requires the profile of the point spread function (PSF) and the lattice geometry, as characterised by the lattice vectors a_1 and a_2.

Figure 4. (b) The inset presents the PSF averaged using subpixel shifting [18]. (c) Two-dimensional histogram of relative positions of atoms. The red arrows with labels L1-3 denote the incident direction of triangular lattice beams. The black arrows, a_1 and a_2, represent the lattice vectors of our lattice system. Using these lattice vectors, each lattice site can be expressed as na_1 + ma_2, where n and m are integers.

First, we evaluated the PSF of our imaging system by averaging over 2500 isolated atoms in the fluorescence images (depicted in the inset of figure 4(b)). The radial profile of the measured PSF is presented in figure 4(b). We approximated the PSF as a Gaussian. The fitting result is represented by the blue curve in figure 4(b); the full width at half maximum was determined as 679 nm, which corresponds to an effective NA of 0.59. The obtained resolution of our imaging system was only 1.1 times larger than the diffraction-limited resolution of 617 nm given by the designed NA of 0.65. Converting the integrated PSF count into the number of photons, we found that ∼240 photons per atom were detected during a 0.5 s exposure. This count corresponds to a photon scattering rate of 7.8 kHz with an estimated collection efficiency of 6.0%.
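The quoted effective NA follows from inverting a FWHM-NA relation. A sketch, assuming an Airy-like PSF with FWHM ≈ 0.51 λ/NA (a coefficient we infer from the quoted 617 nm diffraction limit at NA = 0.65, not one stated in the text):

```python
def effective_na(fwhm_nm, wavelength_nm=780.0, c=0.514):
    """Invert FWHM ≈ c * lambda / NA for an Airy-like PSF.

    The coefficient c ≈ 0.51 is an assumption, chosen to be consistent
    with the quoted 617 nm diffraction limit at NA = 0.65 and 780 nm."""
    return c * wavelength_nm / fwhm_nm

# Measured FWHM of 679 nm -> effective NA of about 0.59, as quoted.
na_eff = effective_na(679.0)
```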
In the case of square lattices, the lattice vectors, comprising the lattice axes and spacing, can be determined by searching for the projection axis that minimises the width of the one-dimensional histogram [30], because the lattice axes are orthogonal. In our case of the triangular lattice, however, an additional coordinate transformation would be necessary to extract the lattice vectors with the same method because the two lattice vectors are not orthogonal. Therefore, we used a two-dimensional histogram of the relative coordinates between individual atoms (see figure 4(c)). We fitted the two-dimensional histogram with a sum of Gaussians with the same amplitude A and width w,

f(r) = A Σ_{n,m} exp(−|r − r_{n,m}|² / 2w²),

where r_{n,m} = n a_1 + m a_2 indicates the centre coordinate of the lattice site (n, m). From the fitting result, we directly extracted the lattice vectors of our system. We then determined the atom distribution in the images using a deconvolution method based on Richardson-Lucy deconvolution with the obtained PSF and lattice vectors a_1 and a_2 [30]. We applied this method to a limited region of 30 × 30 µm, including approximately 2000 sites. A typical example of the deconvolution process is presented in figure 5(a)-(c). The histogram of the reconstructed amplitude at each site is shown in figure 5(d), in which the distributions of the empty and occupied sites are clearly separated. We defined the threshold for an occupied site as 2σ on the lower side of the distribution of occupied sites, which is indicated by the dashed line in figure 5(d).
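The sum-of-Gaussians model used for the lattice-vector fit can be sketched as follows; the function signature and the truncation of the (n, m) sum are illustrative choices, not details from the paper:

```python
import numpy as np

def lattice_gaussian_sum(r, A, w, a1, a2, n_range=2):
    """Model for the 2D histogram of relative atom positions:
    a sum of identical Gaussians of amplitude A and width w centred
    on the lattice points r_{n,m} = n*a1 + m*a2.

    r: position (..., 2) array; a1, a2: lattice vectors (length-2).
    The (n, m) sum is truncated at |n|, |m| <= n_range for illustration."""
    r = np.asarray(r, dtype=float)
    total = np.zeros(r.shape[:-1])
    for n in range(-n_range, n_range + 1):
        for m in range(-n_range, n_range + 1):
            c = n * np.asarray(a1, float) + m * np.asarray(a2, float)
            d2 = np.sum((r - c) ** 2, axis=-1)
            total += A * np.exp(-d2 / (2 * w ** 2))
    return total
```

In practice such a model would be handed to a least-squares fitter (e.g. `scipy.optimize.curve_fit`) with A, w, a1, and a2 as free parameters; the fitted a1 and a2 are the lattice vectors.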
Finally, we evaluated the detection fidelity by taking two images of the same atomic cloud and comparing their reconstructed atom distributions (figure 5(e)). In this measurement, the exposure time of each image was 0.5 s, and the wait time between the images was 0.15 s. The pinning rate was defined as the fraction of atoms that stayed pinned to their sites, the hopping rate as the fraction of occupied lattice sites detected only in the second image, and the loss rate as the relative difference in atom number between the images. Note that our definition of the loss rate allows for an increase in atom number due to atoms entering from outside the analysis area; thus, the loss rate sometimes became negative. We achieved a pinning rate of (96.3 ± 1.3)%, a loss rate of (0.5 ± 1.4)%, and a hopping rate of (3.2 ± 1.0)% for 0.5 s exposures of clouds with a lattice filling of approximately 0.10.
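The three rates can be sketched from two reconstructed occupation maps. This is a hedged sketch: the choice of the first image's atom number as the common denominator is our assumption, since the text does not spell out the normalisation.

```python
import numpy as np

def detection_rates(occ1, occ2):
    """Compare reconstructed site occupations of two consecutive images.

    occ1, occ2: boolean arrays over the same sites, True = occupied.
    Returns (pinning, hopping, loss) rates, all normalised (by
    assumption) to the atom number in the first image."""
    occ1 = np.asarray(occ1, dtype=bool)
    occ2 = np.asarray(occ2, dtype=bool)
    n1 = occ1.sum()
    pinned = (occ1 & occ2).sum() / n1    # atoms that stayed on their site
    hopped = (occ2 & ~occ1).sum() / n1   # sites occupied only in image 2
    loss = (n1 - occ2.sum()) / n1        # net atom-number change; can be < 0
    return pinned, hopped, loss
```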
Conclusion
In conclusion, we have demonstrated the site-resolved imaging of ultracold 87 Rb atoms in a triangular optical lattice. Although we used the fluorescence counts of an atomic ensemble as the optimisation score, a high pinning rate of (96.3 ± 1.3)% during an imaging time of 0.5 s has been realised through the machine-learning-based automatic optimisation of the Raman sideband cooling parameters. The detection fidelity could be improved further if the pinning rate itself were used as the optimisation score, which would require a more robust deconvolution algorithm. Although the present experiment was performed with 87 Rb atoms, our method can be extended to 85 Rb atoms, for which the interatomic interaction can be adjusted with a broad, low-field Feshbach resonance [31] to realise antiferromagnetic super-exchange coupling [32,33] or antiferromagnetic coupling through negative absolute temperature [8].
"Physics"
] |
High Efficiency On-Board Hyperspectral Image Classification with Zynq SoC
Because of the downlink bandwidth bottleneck and the power limitations on satellites, demand is growing for low-power, high-performance on-board payload data processing that can reduce the volume of communication data. This paper proposes a high-efficiency architecture for on-board hyperspectral image classification on a Zynq SoC to achieve real-time performance. A Hamming-distance-based support vector machine (SVM) is adopted to obtain high accuracy and low energy consumption for multi-class classification. The sequential control and the computing data path are realized in the ARM processor and the programmable logic, respectively. The pipelined computing data path reaches a satisfying speedup and thus lowers the energy consumption. Experiments on real hyperspectral image datasets demonstrate that our architecture can achieve 97.8% overall accuracy, 2.5~330x speedup, and 11~835x energy saving compared with different state-of-the-art embedded platforms. For the AVIRIS spectrometer in a real NASA application, it can realize real-time image classification.
Introduction
The spatial, spectral, and temporal resolutions of hyperspectral images (HSI) in remote sensing have kept increasing in recent years. However, the limited downlink bandwidth between satellites and ground stations cannot keep up with the drastically increased data volume from high-resolution spectroscopy instruments [1]. One approach to resolving this downlink bottleneck is to process the image data on board and in real time to decrease the data size transferred over the downlink [2]. This requires the on-board processing system not only to achieve high performance in processing the continuous online image data, but also to meet the harsh limitations of satellites, such as power consumption, size, and weight.
In recent years, significant efforts have been made to cope with the above challenges. In [3], graphics processing units (GPUs) were used to process hyperspectral images (HSI) and gained superb performance, but it is difficult to employ them for on-board satellite applications because of their high power consumption. Owing to the suitability of FPGA devices for on-board applications, Bernabe et al. [4] developed an automatic target generation process system for HSI using FPGAs.
Classification is one of the main tasks in remote sensing image processing. The support vector machine (SVM) algorithm, which can achieve accuracy equal to or better than other classifiers, is widely used for hyperspectral classification [5]. It has a number of advantages over its counterparts, such as the asymmetry of computation between training and classification, generalization capacity, and good performance on small training datasets [6]. HSI classification is a multi-class problem, and different strategies for building a multi-class classifier from SVM binary classifiers have been developed in [7]. The Hamming-distance SVM is the most promising choice, overcoming the deficiencies of other methods [8].
Many researchers have already worked on SVM implementations on different hardware. A. H. M. Jallad et al. [9] proposed a binary SVM classifier on an SRAM-based FPGA designed to identify cloud and non-cloud pixels. However, most applications require a multi-class classifier. J. Manikandan et al. [10] designed an SVM multi-classifier with system-on-programmable-chip (SOPC) technology on a Cyclone II FPGA. An accurate and highly efficient on-board HSI multi-classifier satisfying hardware resource and power consumption requirements is still an ongoing research concern.
In this work, a high-efficiency multi-classifier using least squares SVM (LS-SVM) and a Hamming-distance judging strategy is proposed, and its implementation on a state-of-the-art Xilinx hybrid chip, ZYNQ, is presented. ZYNQ is a good option for on-board applications owing to its low power consumption and its advanced ARM processor with abundant logic resources. To achieve an optimum design, an iterative procedure is applied in which variant implementation techniques are utilized to produce the best possible efficiency. By placing the control-intensive module in the processor and the computing-intensive module in the logic cells, a high-performance, low-power computing architecture is designed. The main contributions are:
- Proposing a parallel multi-classifier based on a Hamming-distance judging strategy, combining hardware and software co-design for high performance and energy efficiency to achieve real-time hyperspectral image processing;
- Full experiments performed with a real HSI dataset on a ZYNQ platform designed for satellite data processing.
Multi-Classifier Algorithm
According to the features of SVM discussed above, it is quite suitable for HSI classification. While SVM is a binary classifier, a multi-class classifier is necessary in real processing tasks; it can be implemented by combining several binary classifiers in a certain architecture. To reduce the logic resource requirement, LS-SVM is adopted, which is the optimum option for on-board applications [11].
Multi-classifier by Hamming-distance decision
To realize a multi-classifier, a Hamming-distance decision approach is used to combine the results of several binary classifiers. The result of every binary classifier represents one bit of a code. The bit width of the code is equal to k(k−1)/2, where k is the number of classes. For a binary classifier, the outputs are assigned to 1 or 0 to represent the two labels. Therefore, after all binary classifiers have processed a test datum, a result code is generated. Likewise, for every class label, an identifying code is created when training on the sample dataset. The Hamming distance between the result code and every identifying code is computed by counting the number of differing bits, and the class label with the minimum distance is assigned to the test datum. For a new result code, the Hamming distance H to every identifying code must be calculated; however, not all of the bits in the result code are relevant for a given class label, so a pre-mask is used to select the bits. The Hamming distance is calculated as:

H = Σ_{l=1}^{n} b_l, (1)

where H is the Hamming distance, n is the width of the result code and is equal to k(k−1)/2, and b_l is the l-th bit value of T_res, which is calculated in (2).
T_res = (R_Code XOR I_code) AND mask, (2)

where R_Code is the new result code and mask is the pre-mask, which differs for each class label. I_code is the identifying code. The values of the identifying code and the pre-mask depend on the number of classes and the class label of every binary classifier.
In our research we focus on this approach owing to its higher accuracy compared with the previously described approaches [8,12].
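The masked Hamming-distance decision of Equations (1) and (2) can be sketched in a few lines; representing the codes as integer bit fields is an illustrative encoding choice, not the hardware representation:

```python
def hamming_decision(result_code, id_codes, masks):
    """Pick the class whose identifying code is closest, in masked
    Hamming distance, to the result code produced by the k(k-1)/2
    binary classifiers. All codes are integers used as bit fields."""
    def distance(code, id_code, mask):
        # Equations (2) then (1): XOR marks differing bits, the pre-mask
        # keeps only the classifiers relevant to this class, and the
        # popcount of the masked result is the Hamming distance H.
        return bin((code ^ id_code) & mask).count("1")

    dists = [distance(result_code, c, m) for c, m in zip(id_codes, masks)]
    return dists.index(min(dists))  # index of the class with minimum H
```

As a toy example with k = 3 classes (classifier order 0-vs-1, 0-vs-2, 1-vs-2, bit = 1 when the first class wins), a result code in which class 0 wins both of its classifiers decodes to class 0.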
SVM binary classifier
An improved version of SVM, LS-SVM, is adopted as the binary classifier because it has a deterministic computing process and needs fewer logic resources [11].
An LS-SVM classifier must be trained first. The training can be done on the ground before satellite launch. The on-board processing only involves classification, which is described as follows:

y(x) = sign( Σ_i α_i K(x, x_i) + b ), (3)

where x_i is the training data, K(·,·) is the kernel function, and α_i are the Lagrange multipliers (with the training labels absorbed into them). b is a real constant. The α_i and b are calculated in the training process.
The RBF kernel is the most commonly used kernel function in hyperspectral classification. It is described as:

K(x, y) = exp(−‖x − y‖² / 2σ²), (4)

where σ is the width parameter. The optimum value of σ in Equation (4) is determined by the cross-validation (CV) method together with α and b in the training process.
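Equations (3) and (4) can be sketched in software as follows. Two conventions here are ours, not the paper's: the training labels are folded into the multipliers α_i, and the kernel uses the 2σ² normalisation.

```python
import math

def rbf_kernel(x, y, sigma):
    """Equation (4): K(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    The 2*sigma^2 normalisation is one common convention (our choice)."""
    d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def ls_svm_classify(x, support, alphas, b, sigma):
    """Equation (3): binary LS-SVM decision as the sign of the kernel
    expansion over the pre-trained data. Labels are assumed to be
    folded into alphas; returns 1 or 0 to match the code-bit convention."""
    s = sum(a * rbf_kernel(x, xi, sigma)
            for a, xi in zip(alphas, support)) + b
    return 1 if s >= 0 else 0
```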
Implementation
To implement such a complicated algorithm on a heterogeneous platform, several challenges must be overcome. First, a dedicated hardware platform must be designed, taking into account the computing architecture (including SOPC), logic resources, data caching, and the data communication interface with the satellite. Second, when designing the algorithm, a fully pipelined design must be carefully implemented to obtain high performance and energy efficiency; the balance point between cycles and resources must be found. The logic part in the logic resources and the software part in the processor must work synchronously.
Comprising processors and special computing elements, a new type of heterogeneous chip, ZYNQ, with its low power cost and light weight, is used in this work for on-board classification. The Zynq chip contains two ARM processors (the PS partition) and logic resources (the PL partition).
Parallelism is exploited to obtain high performance, and its degree should be chosen with regard to the limitations of hardware resources, data cache size, bus bandwidth, the speed of the data source, and the power supply.
Implementation of a binary classifier
In this paper, each multi-classifier is composed of a binary LS-SVM classifier and a Hamming-distance judging module.
For each binary classifier, the formulas are as described in (3) and (4). Fig. 3 shows the architecture of a binary classifier and the architecture of the whole algorithm. For every test pixel, which contains several (in this paper, 9) spectral band values, the binary classifier runs k(k−1)/2 times to obtain every bit of the result code.
According to Section II, the function of a 1-vs-1 SVM binary classifier is completed in four steps. In Fig. 4, "Bd" stands for the number of spectral bands and "Dim" stands for the training dimension. In the "Two Norm Value" step shown in Fig. 4, the value of ‖x − y‖² in Equation (4) is computed, which requires the test pixel data and the training pixel data. Each pixel must be processed against all the training data, with every spectral band value participating, as shown in Fig. 3. At the end of this step there is an iteration, which is a challenge for a fully pipelined design: all the operations must be finished before the second step starts. Because this loop is the inner loop, unrolling it greatly reduces the number of processing cycles without requiring too many extra resources, so this step is implemented in parallel. The "Exponential function" step is required to calculate the value of Equation (4) and is implemented with the Xilinx Exponential IP, using DSP48Es and logic resources.
The "Multiplier Accumulator" step requires α and b. The results of the "Exponential function" step are multiplied by α and accumulated. Because this iteration is the outer loop, unrolling it would consume a great deal of logic resources without sharply reducing the cycle count, so this step is implemented as a loop.
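The three stages just described (squared two-norm, exponential, multiply-accumulate) amount to evaluating a standard RBF LS-SVM decision function in software. The sketch below mirrors them; the function and parameter names, and the exact kernel form, are assumptions for illustration rather than the paper's Equations (3) and (4) verbatim:

```python
from math import exp

def lssvm_binary_bit(pixel, train_pixels, alphas, b, sigma2):
    """One 1-vs-1 LS-SVM evaluation, mirroring the hardware stages:
    'Two Norm Value' -> 'Exponential function' -> 'Multiplier
    Accumulator'. Returns one bit of the result code."""
    acc = 0.0
    for sv, a in zip(train_pixels, alphas):
        # Stage 1: squared two-norm over all spectral bands
        # (the inner loop, unrolled in the hardware design).
        d2 = sum((p - s) ** 2 for p, s in zip(pixel, sv))
        # Stage 2: RBF exponential (Xilinx Exponential IP in hardware).
        k = exp(-d2 / sigma2)
        # Stage 3: multiply by alpha and accumulate
        # (the outer loop, kept sequential in hardware).
        acc += a * k
    return 1 if acc + b >= 0.0 else 0
```

With one hypothetical support vector at the origin, a nearby pixel yields bit 1 and a distant one bit 0, i.e. the sign of the accumulated kernel sum plus the bias decides the bit.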
These steps can be executed faster by raising the parallelism of every step, but resource consumption and data-communication bandwidth then increase sharply, so a balance point must be found between processing cycles and these factors. For example, in the first step, using eight subtractor-multiplier pairs instead of one allows eight training pixels to be processed simultaneously, reducing the cycle count of that step to 1/8 of the original design. After experiments, we found that a floating-point adder requires 2 DSP48Es (the most limited and most useful resource in a compute-intensive design), while a subtractor, a multiplier, and an exponential-function module require 2, 3, and 26 respectively. Since our platform provides 220 DSP48Es in total, once the number of classes exceeds 4 the device cannot supply enough DSP48Es for k(k-1)/2 parallel binary classifiers. To keep the design scalable across class counts, each multi-classifier therefore uses a single unparallelized binary classifier, and six multi-classifiers are instantiated in our application. For each unparallelized binary classifier, the theoretical total cycle count is given by Equation (3); higher parallelism can be obtained by unrolling the computing loop.
The architecture of the Hamming-distance multi-classifier is shown in Fig. 5. It contains three modules: Control Management, the SVM binary classifier, and the Hamming-distance decision module. The Control Management module manages the computing flow, including data input and output, and selects the training data and parameters that determine the final bit location in the result code for the Hamming-distance computation; these parameters and training data are stored in the Training Dataset & Parameters module. There are two ways to implement the SVM binary classifier module: using k(k-1)/2 classifiers in parallel to obtain all results at once, which consumes k(k-1)/2 times the resources, or using a single classifier, which requires more processing cycles. Considering the resource limits of an onboard application, this paper uses one binary classifier in each multi-classifier.
To find the nearest class, the Hamming distance between the result code and each identifying code must be computed. Exploiting the speed of logic gates, the result code is XORed with the identifying code of every class, and the outcome is then ANDed with the pre-mask code of that class. The number of 1s in each result is counted; the minimum indicates the class label most similar to the result code, and that label is assigned to the pixel. Table 1 shows the identifying and pre-mask codes for up to six classes.
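The XOR / AND / popcount / arg-min decision just described can be written compactly with integer bit operations. The 3-class codes in the usage note below are hypothetical illustrations, not Table 1 of the paper:

```python
def classify_by_hamming(result_code, identifying_codes, pre_masks):
    """Return the label of the class whose identifying code is nearest
    to the k(k-1)/2-bit result code in masked Hamming distance.

    Codes are plain ints; each pre-mask keeps only the bits of the
    pairwise classifiers in which that class participates."""
    best_label, best_dist = None, None
    for label, (ident, mask) in enumerate(zip(identifying_codes, pre_masks)):
        # XOR flags disagreeing bits, AND drops the "don't care" bits,
        # and the popcount of the result is the masked Hamming distance.
        dist = bin((result_code ^ ident) & mask).count("1")
        if best_dist is None or dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

For k = 3 classes, with one bit per pair (0,1), (0,2), (1,2) and a 1 meaning the first class of the pair wins, identifying codes 0b110, 0b001, 0b000 with pre-masks 0b110, 0b101, 0b011 make a result code of 0b110 classify as class 0.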
A scalable control program on the ARM processor
By uploading only a new training dataset and parameters, another multi-classifier can be launched on top of the existing classifiers under program control. The program, which runs on the ARM processor, controls the computing flow: it reads the HSI dataset and configures the classifier. The program flow is shown in Fig. 6. The program monitors and controls the classifier by reading and writing its registers through the AXI-Lite interface. In "Initial Peripherals", the peripherals and the program are prepared for computing; then, in "Initial Training dataset & Parameters", the training dataset and the parameter α are sent to the Storage module by the DMA controller through the AXI HP ports, which achieves a high transfer speed and relieves the processor of heavy work. In "Read Pixel data", the image pixels are transmitted to the binary classifier and the classifier's status is monitored; after computing the Hamming distance, the multi-classifier sends the class label of the input data to the ARM processor. The different classifiers launched by the program work in parallel. After processing, results can be output through the CPCI interface in the demo system; in our experiments, the processing time and classification accuracy are reported on a PC through the UART port.
Experiments & Results
To compare efficiency across different platforms, the experiments are carried out on two well-known real HSI datasets produced by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [13]. AVIRIS captures 224 bands of data for every pixel; according to its instrument specifications, its scan rate is 12 Hz and every scan produces 677 pixels, so the sampling rate is approximately 123.1 μs/pixel.
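The quoted per-pixel period follows directly from these instrument figures; a quick arithmetic check:

```python
# AVIRIS figures from the text: 12 Hz scan rate, 677 pixels per scan.
scan_rate_hz = 12
pixels_per_scan = 677
period_us = 1e6 / (scan_rate_hz * pixels_per_scan)
print(round(period_us, 1))  # 123.1 us/pixel, matching the quoted rate
```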
To show that our design achieves online real-time processing for AVIRIS, the per-pixel processing time is evaluated experimentally. Both datasets are provided by the University of the Basque Country [14]. The first image contains 145 × 145 pixels and was gathered over the Indian Pines Test Site, a mixed agriculture/forestry area in northwestern Indiana, in June 1992; it is shown in Fig. 7. The second image was collected over Salinas Valley, California, with a spatial resolution of 3.7 m/pixel, and comprises 512 × 217 pixels.
In our experiments, considering water-absorption bands and information redundancy, only 9 spectral bands and 6 classes are used for training and identification in both datasets. For each class, 50 pixels are used for training and 100 for testing.
To evaluate the performance of the proposed design, four reference designs on different hardware platforms are developed in Experiment II. The first runs on an HP xw8600 workstation with an 8-core Intel Xeon X5482 at 3.2 GHz and 64 GB of memory; the same algorithm is implemented in C in the Visual Studio 2010 development environment. The second reference design runs on an embedded system with an ARM Cortex-A9 processor at 666.7 MHz, using its vector floating-point unit and 32 KB cache to speed up the computation. The third runs on a state-of-the-art Texas Instruments TMS320C6778 DSP at 1000 MHz, which contains 8 cores to accelerate the processing. The last runs on a PowerPC 440 processor at 400 MHz with an FPU. The power consumption of these platforms is also measured; to measure the workstation's power more accurately, the difference between its running and idle power is used as the algorithm's power consumption. The same 600 pixels are tested on all platforms with the same training dataset, parameters, and data precision.
For the Indian Pines Test Site dataset, the time consumption and speedup are shown in Table 2.
ICMM 2016
In the table above, OA means overall accuracy and E stands for energy consumption. Our Hamming-distance multi-classifier on ZYNQ achieves an 8.3× speedup with about 224× energy saving compared with the HP workstation. Compared with the ARM, DSP, and PPC platforms, our heterogeneous design gains speedups of 51.2×, 2.54×, and 330× and energy savings of 43×, 11×, and 835×, respectively. These figures imply that, among the reference platforms, the eight-core DSP is the fastest, followed by the ARM, whose high clock frequency, embedded FPU, and fast cache help it outperform the PPC; with its lower frequency and the bandwidth limitation between the FPU and the CPU core, the PPC is the slowest platform.
Our design reaches about 98.3% overall accuracy. With the same training and test data and, in particular, the same classification algorithm, the overall accuracies are identical across the embedded platforms and the PC. Under onboard resource and power limitations, our heterogeneous platform and algorithm architecture is efficient for HSI classification applications.
Compared with other research on the same dataset, the overall accuracies are shown in Table 3:

Method                  OA (%)
[15]                    98.02
Wavelet Networks [16]   82.0
MLRsub [17]             92.5
HA-PSO-SVM [18]         98.2
PGNMF [19]              93.36

As the table shows, the approach proposed in this paper achieves higher accuracy than the other methods.
For the Salinas Valley dataset, the comparison of power consumption and speedup is shown in Table 4. For online AVIRIS applications, the spectrometer's sampling rate is 123.1 μs/pixel [20], while our design classifies a pixel in 27 μs, so it fully meets the real-time processing requirement.
Conclusions
In this paper, we propose a novel Hamming-distance judging strategy for a multi-LS-SVM classifier for HSI classification. By exploiting the parallel logic architecture and the software flexibility of the hybrid ZYNQ SoC, we realized the proposed multi-classifier with high performance and power efficiency for satellite onboard applications. Experimental results on two AVIRIS datasets demonstrate that the proposed multi-classifier achieves 2.5×-330× speedups with 11×-835× energy savings compared with different embedded platforms, while reaching over 97.8% overall accuracy. It can therefore realize real-time hyperspectral image classification with high overall accuracy and low power consumption.
RETRACTED ARTICLE: Some new results on the boundary behaviors of harmonic functions with integral boundary conditions
In this paper, using a generalized Carleman formula, we prove two new results on the boundary behaviors of harmonic functions with integral boundary conditions in a smooth cone, which generalize some recent results.
Introduction
Let R^n (n ≥ 2) be the n-dimensional Euclidean space. A point in R^n is denoted by V = (X, y), where X = (x_1, x_2, . . . , x_{n-1}). The boundary and the closure of a set E in R^n are denoted by ∂E and E̅, respectively. We introduce a system of spherical coordinates (l, Θ), Θ = (θ_1, θ_2, . . . , θ_{n-1}), in R^n that are related to the Cartesian coordinates (x_1, x_2, . . . , x_{n-1}, y) by y = l cos θ_1.
The unit sphere and the upper half unit sphere in R^n are denoted by S^{n-1} and S^{n-1}_+, respectively. For simplicity, a point (1, Θ) on S^{n-1} and the set {Θ; (1, Θ) ∈ Ω} for a set Ω ⊂ S^{n-1} are often identified with Θ and Ω, respectively. For two sets Ξ ⊂ R_+ and Ω ⊂ S^{n-1}, the set {(l, Θ) ∈ R^n; l ∈ Ξ, (1, Θ) ∈ Ω} in R^n is simply denoted by Ξ × Ω.
We denote the set R_+ × Ω in R^n with the domain Ω on S^{n-1} by T_n(Ω), and call it a cone. In particular, the half-space R_+ × S^{n-1}_+ is denoted by T_n(S^{n-1}_+). The sets I × Ω and I × ∂Ω with an interval I on R are denoted by T_n(Ω; I) and S_n(Ω; I), respectively. We denote T_n(Ω) ∩ S_l by S_n(Ω; l), and S_n(Ω; (1, +∞)) by S_n(Ω).
The ordinary Poisson kernel in T_n(Ω) is defined by PI_Ω(V, W) = (1/c_n) ∂G_Ω(V, W)/∂n_W, where ∂/∂n_W denotes differentiation at W along the inward normal into T_n(Ω), and G_Ω(V, W) (V, W ∈ T_n(Ω)) is the Green function of T_n(Ω). Here, c_2 = 2 and c_n = (n - 2)w_n for n ≥ 3, where w_n is the surface area of S^{n-1}. Let Λ* be the spherical part of the Laplace operator, and let Ω be a domain on S^{n-1} with smooth boundary ∂Ω.
We denote the least positive eigenvalue of this boundary problem by τ and the normalized positive eigenfunction corresponding to τ by ψ(Θ). In the sequel, for brevity, we shall write χ instead of ℵ⁺ - ℵ⁻, where ℵ^± = (-n + 2 ± √((n - 2)² + 4τ))/2.
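In this family of cone papers, the boundary problem and the quantities ℵ^± referred to above take the following standard form; this is a reconstruction under the assumption of the usual Dirichlet setting for the spherical Laplacian, not the paper's own display equations:

```latex
\begin{aligned}
&(\Lambda^* + \tau)\,\psi(\Theta) = 0 \quad \text{in } \Omega, \qquad
\psi(\Theta) = 0 \quad \text{on } \partial\Omega,\\[4pt]
&\aleph^{\pm} = \frac{-n+2 \pm \sqrt{(n-2)^2 + 4\tau}}{2}, \qquad
\chi = \aleph^{+} - \aleph^{-} = \sqrt{(n-2)^2 + 4\tau}.
\end{aligned}
```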
The estimate we deal with has a long history, tracing back to Matsaev's well-known estimate of harmonic functions from below in the half-plane (see, e.g., Levin [], p. ).
Theorem A Let A_1 be a constant, and let h(z) (|z| = R) be harmonic on T_2(S^1_+) and continuous on T_2(S^1_+)̅. Suppose that Then where z = Re^{iα} ∈ T_2(S^1_+), and A_2 is a constant independent of A_1, R, α, and the function h(z).
In , Xu and Zhou [] considered Theorem A in the half-space. Pan et al.
[], Theorems . and ., obtained similar results, slightly different from the following Theorem B.
Theorem B Let A_3 be a constant, and let h(V) (|V| = R) be harmonic on T_n(S^{n-1}_+) and continuous on T_n(S^{n-1}_+)̅. If where V ∈ T_n(S^{n-1}_+), and A_4 is a constant independent of A_3, R, θ_1, and the function h(V).
Recently, Pang and Ychussie [], Theorem , further extended Theorems A and B and proved Matsaev-type estimates for harmonic functions in a smooth cone.
Theorem C Let K be a constant, and let h(V) (V = (R, Θ)) be harmonic on T_n(Ω) and continuous on T_n(Ω)̅. If where is a sufficiently large number, and M is a constant independent of K, R, ψ(Θ), and the function h(V).
In this paper, we obtain two new results on the lower bounds of harmonic functions with integral boundary conditions in a smooth cone (Theorems 1 and 2), which further extend Theorems A, B, and C. Our proofs are essentially based on the Riesz decomposition theorem (see []) and a modified Carleman formula for harmonic functions in a smooth cone (see [], Lemma ).
To avoid complexity in our proofs, we assume that n ≥ 3; however, our results are also true for n = 2. We use the standard notations h⁺ = max{h, 0} and h⁻ = -min{h, 0}. All constants appearing in expressions below will always be denoted by M, because we do not need to specify them. We will always assume that η(t) and ρ(t) are nondecreasing real-valued functions on an interval [0, +∞) and that ρ(t) > ℵ⁺ for any t ∈ [0, +∞).
Main results
First of all, we state the following result, which further extends Theorem C under weaker boundary integral conditions.
Theorem 1 Let h(V) (V = (R, Θ)) be harmonic on T_n(Ω) and continuous on T_n(Ω)̅.
Suppose that the following conditions (I) and (II) are satisfied: Remark 1 From the proof of Theorem 1 it is easy to see that condition (I) in Theorem 1 is weaker than that in Theorem C in the case c ≡ (N + 1)/N and η(R) ≡ K, where N (≥ 1) is a sufficiently large number, and K is a constant.
Theorem 2 The conclusion of Theorem 1 remains valid if condition (I) in Theorem 1 is replaced by
Remark 2 In the case c ≡ (N + 1)/N and η(R) ≡ K, where N (≥ 1) is a sufficiently large number and K is a constant, Theorem 2 reduces to Theorem C.
We have the following estimates: from [, ] and (.). We consider the inequality We first have We shall estimate U (V ). Take a sufficiently small positive number d such that which is similar to the estimate of U (V ). We shall consider the case V = (l, ) ∈ (d). Now put Since rψ( ) ≤ Mδ(V ) (V = (l, ) ∈ T n ( )), similarly to the estimate of U (V ), we obtain Similarly to the estimate of U (V ) in Case , we have which, together with (.) and (.), gives (.).
which also gives (.). Finally, from (.) we have which is the conclusion of Theorem 1.
Proof of Theorem 2
We first apply a new type of Carleman formula for harmonic functions (see [], Lemma ) to h = h⁺ - h⁻ and obtain where dS_R denotes the (n - 1)-dimensional volume element induced by the Euclidean metric on S_R, and ∂/∂n denotes differentiation along the interior normal. It is easy to see that from (.). We remark that We have (.) and
Sensitivity Analysis of Road Freight Transportation of a Mega Non-Alcoholic Beverage Industry
Re-optimizing a problem can be very costly, since it requires gathering and obtaining more data; to curb this very expensive investment, sensitivity analysis is used in this work to determine the behaviour of the input parameters of the formulated problem. The main goal of the study is to provide, derive, observe, compare and discuss the sensitivity analysis of data that have been optimized using different solution methods. The method saving the highest percentage of transportation cost for the formulated problem is determined to be the North-West Corner method. This was carried out by arbitrarily assigning values to the available warehouses to determine better demand and supply cases than the initial ones. Thus, more cases should be supplied to FID from the Asejire plant for the optimum reduction in transportation cost. DOI: https://dx.doi.org/10.4314/jasem.v24i3.8 Copyright: Copyright © 2020 Latunde et al. This is an open access article distributed under the Creative Commons Attribution License (CCL), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Dates: Received: 16 November 2019; Revised: 11 January 2020; Accepted: 22 February 2020
The transportation problem is among the most successful real-life applications of optimization over the past century, as numerous studies suggest. Shraddha (2017) solved the transportation problem of the Millennium Herbal Company using different methods and compared the results, presenting several solution methods and evaluating their similarities and differences. Dantzig (1963) used the simplex method to solve the transportation problem as the primal simplex transportation method. He proposed that an initial basic feasible solution for the transportation problem can be determined by the Column Minima, Row Minima, Matrix Minima, Vogel's Approximation, and North-West Corner methods; for the optimal solution, he used the Modified Distribution (MODI) method. Rao (2009) explained the concepts of linear programming, the revised simplex method, and duality in linear programming, covering the duality theorems and the dual simplex method; the most interesting parts of that chapter are the decomposition principle and the sensitivity (post-optimality) analysis, especially for the transportation problem. Rao concluded that there are two major practical methods of sensitivity analysis in optimization: the sensitivity equation using the Kuhn-Tucker conditions, and the sensitivity equation using the concept of feasible directions. Pursulai and Niittymaki (2001) worked on the functionality of the simulation programs Simu+++ and Dispo+++, developed at the Institute of Transport, Railway Construction and Operation at the University of Hanover, Germany; their book, Mathematical Methods on Optimization in Transportation Systems, is divided into two distinct parts: (i) public transport models and (ii) general transport models.
In the first part, they carefully dealt with the prevention of delay in railway traffic by optimization and simulation, and with a heuristic for scheduling buses and drivers for ex-urban public transport with bus-driver dependencies. In Optimization Techniques for Transportation Problem of Three Variables, Joshi (2013) used four methods, namely the North-West Corner, Least Cost, Vogel, and MODI methods, to consider optimization techniques for a three-variable transportation problem, clearly explaining the steps of each method, how to determine the optimal solution, and the comparison between the MODI method and every other method; the aim is to find the shortest, best and cheapest route satisfying demand at any destination. Latunde et al. (2016), Latunde and Bamigbola (2018), Latunde et al. (2019) and Latunde et al. (2020) also analysed model designs by sensitizing model parameters in optimal control models for asset management and transportation problems. The objective of this paper is therefore to present the sensitivity analysis of road freight transportation of a mega non-alcoholic beverage industry in Nigeria.
MATERIALS AND METHODS
The method used in formulating the model of road freight transportation of a mega non-alcoholic beverage industry in Nigeria is linear programming (LP). LP is applied to problems concerning efficient transportation routes, that is, how products from different sources of production are transported to different destinations such that the total transportation cost is minimised.
Here, two major supply plants are considered, Asejire and Ikeja, together with 11 major demand warehouses: FID, Akin, Oniyele, MGR, Adhex, FDR, Vero, BnB, Mimx, Nuhi and Ile-Iwe. The available data are used in the formulated model, which is solved using four different methods for transportation problems: the North-West Corner method (NWC), the Least Cost method (LCM), Vogel's Approximation method (VAM), and direct computation with the software Maple. The model design is then analysed using sensitivity analysis as a post-optimality tool, to determine the behaviour of these different solution methods on the identified problem and thereby recommend possible improvements for different cases of demand and supply.
Model Formulation: Consider a condition in which nine distinct sources have to meet the requests of nine destinations. The goods available at each source and the goods requested at each destination are specified. The cost of moving goods from a source to a destination can be represented in a table of entries c_ij, where the subscripts i and j (each running from 1 to 9) indicate the cell giving the cost of moving from source (origin) i to destination j; therefore c_ij is the cost of moving goods from source i to destination j.
The linear programming method is a useful tool for dealing with such a problem as the transportation problem. Each source can supply a fixed number of units of product, usually called the capacity or availability, and each destination has a fixed demand, usually known as the requirement. A problem of this nature, involving shipments of products from several sources to several destinations, is generally called "the transportation problem".
Suppose a company has x warehouses and y retail outlets, with a single product shipped from the warehouses to the retailers. We can build a mathematical model for this transportation problem; for example, consider Table 1, in which one margin gives the supply at the warehouses and the other the demand at the retail outlets.
Let a company produce goods at m different places (factories), i = 1, 2, 3, ..., m, and supply n different distributors or warehouses, j = 1, 2, 3, ..., n. The goods from the factories must reach all the requesting places (say, wholesalers); the demand at the jth wholesaler is d_j. The company's problem is to move goods from factory i to wholesaler j at cost c_ij, and this transportation cost is linear: if we transport x_ij units of goods from factory i to wholesaler j, then the cost is c_ij x_ij.
The problem is to find the minimum cost of transporting these goods. The conditions to be satisfied are that the demand at each wholesaler must be met and the supply cannot be exceeded. The total cost of the program is therefore

Σ_i Σ_j c_ij x_ij, (1)

and the number of goods transported from factory i is

Σ_j x_ij. (2)

Recall that x_ij is the quantity of goods transported from i to j; from factory i one can transport goods to any of the wholesalers j = 1, 2, ..., n, so (2) is the sum of all goods supplied by factory i from the first wholesaler to the last. The goods shipped cannot exceed the factory's supply, so

Σ_j x_ij ≤ s_i for all i = 1, 2, ..., m. (3)

Similarly, the constraint ensuring that demand is met at every wholesaler is

Σ_i x_ij ≥ d_j for all j = 1, 2, ..., n. (4)

There would be unmet demand if the total supply were less than the total demand, since the wholesalers would still be requesting goods after all supplies had been made; to avoid this we require

Σ_j d_j ≤ Σ_i s_i, (5)

for if this does not hold, the demand cannot be met. Hence there must be enough, possibly excess, supply to be sure that demand is met. It is also fair to assume the balanced case, in which the quantities demanded are exactly equal to the quantities supplied.

When this happens, the transportation plan is perfect: the supply meets the wholesalers' needs at every point and disposes of all goods that left the factories. Therefore, with costs c_ij, m supplies s_i (i = 1, 2, 3, ..., m) and n demands d_j (j = 1, 2, 3, ..., n), the main task is to find a transportation schedule x_ij solving

min Σ_i Σ_j c_ij x_ij (7)

subject to

Σ_j x_ij = s_i for all i = 1, 2, ..., m, Σ_i x_ij = d_j for all j = 1, 2, ..., n, x_ij ≥ 0. (8)
Optimal Solutions to the Problem: This subsection presents the data gathered, the optimal feasible solutions obtained, and the sensitivity analysis of the parameters for the two major plants, namely Asejire and Ikeja. The graph and chart in Figure 1 show that the North-West Corner method produces the optimum transportation cost, which is 517,040. For the analysis, the number of cases demanded by each warehouse from both plants (Asejire and Ikeja) is increased in steps of 50 cases; every increment is run through the Maple software to determine the outcome and the optimized cost generated, and it was observed that some warehouses had higher cost values than others. In Table 3, the first entry is the number of cases demanded by FID from the Asejire plant and the second is the number demanded by Akin, also from the Asejire plant; the pattern continues along the row, as represented in Tables 2, 3 and 4, up to the number of cases demanded by Nuhi and, lastly, by Ile-Iwe.
Likewise, the corresponding entries give the number of cases demanded by FID from the Ikeja plant, then by Akin, also from the Ikeja plant, and so on along the row, as represented in Tables 2, 3 and 4, up to the number of cases demanded from Ikeja by Nuhi and, lastly, by Ile-Iwe.
RESULTS AND DISCUSSION
The problem was solved with three distinct methods, namely the North-West Corner method, the Least Cost method and Vogel's Approximation method, and the results were compared with those computed by the linear programming software MAPLE. The graph and chart in Figure 1 show that the North-West Corner method produces the optimum transportation cost, which is 517,040. Note that the Optimized Result is the cost computed by MAPLE and RCV is the Reduced Cost Value in percentage. The sensitivity analysis shows that more cases of drinks should be supplied to the FID warehouse from Asejire, as it has the largest optimum reduced cost value, almost twice that of the others. The implication is that, by priority of supply, FID will receive more cases while the cost is still minimized.
Book Review: Critical Thinking: A Concise Guide
“To believe or not to believe, that is the question” should be an automatic question we ask ourselves. Scientists' aim should thus be to provide reasons and evidence at a time when many people do not believe in science. These questions are even more important during health crises, when the general population has to follow scientists' recommendations [i.e., coronavirus disease 2019 (COVID-19)]. Indeed, multiple factors can lead people to relay misinformation or fall victim to false reasoning (Apuke and Omar, 2020). Bowell, Cowan, and Kemp's book (Bowell et al., 2020) is a great start for learning how to distinguish good arguments from false reasoning or rhetorical techniques. Synthesis and simplification of information, logical and analytical reasoning, and systematic evaluation of verbal content are taught in this book, which comes close to the very definition of critical thinking (Jacobs et al., 1997). To help the reader through the book, the authors provide a chapter summary in the introduction and at the beginning and end of each chapter. While some of the eight chapters are quite independent, a few of them are bonded together (3 and 4, 5 and 6).
EVALUATION OF THE BOOK'S CONTENT
The first chapter introduces critical thinking with many definitions. The basics of argumentation are explained, and many practical examples (i.e., Martin Luther King's "I have a dream" speech) are put forward. Open-mindedness and self-questioning are explicitly promoted and encouraged.
Chapter 2 leads to a non-exhaustive list of rhetoric methods seeking to persuade without using arguments. Many tips are provided to spot these attempts in a speech and to judge the relevance of arguments without being under the influence of rhetorical elements. Overall, it is an easy-to-read chapter that teaches how to dodge non-argumentative ploys.
Both Chapters 3 and 4 are dedicated to logical reasoning. They are the most elaborated chapters of the book and introduce a lot of principles, models, and definitions. Chapter 3 starts with the question of deductive validity, which will be discussed through the concepts of true, false, valid, or invalid concerning arguments and their components. Chapter 4 introduces probabilistic reasoning and logic. Probabilities, mathematics models, and methods to judge the relevance of an argument are at the center of this chapter.
Again, both Chapters 5 and 6 are paired, as they are, respectively, dedicated to argument reconstruction and judgment. Longer than the other ones, Chapter 5 focuses on the process of extracting an argument in order to reconstruct it in its simplest form. Chapter 6 deals with argument analysis in two parts. The first part is about methods to assess both validity and relevance of a given argument. The second part includes some practical tips and advices to provide constructive criticism of an argumentation. After reading Chapter 6, you will be able to successfully pass the Ennis-Weir Critical Thinking Test (Ennis and Weir, 1985), a critical thinking test based on a flawed arguments letter.
The last two chapters are mostly independent from the rest of the book and are easy to read, even if you do not have mathematical skills. Chapter 7 is probably the most timely chapter these days. It introduces pseudo-reasoning and fallacious, misleading arguments (i.e., use of the ad hominem fallacy, responding to someone's argument by attacking the person rather than addressing the argument itself). Beyond the concept, the authors explain a very interesting paradox: why these arguments should not be considered reliable, and why so many of us still tend to accept them.
The last chapter is a philosophical opening on epistemological and sociological questions. Concepts of truth or false, knowledge, and believing are discussed, leaving the reader to make up his own mind on the subject. The main purpose of this chapter is to add nuance to what we may consider as true, or not, even before analyzing logical structures and relevance of arguments.
DISCUSSION
Researchers in philosophy, psychology, and education agree that critical thinking covers skills of analysis, logical reasoning, judgment, and decision making (Lai et al., 2011). All these topics are explored in this book, giving the reader insight into what can be defined as critical thinking, such as mastery of language, logic, argumentation, and problem solving. Technical concepts are explained by different methods, such as the schematization of arguments into syllogisms with premise(s) and conclusion(s) and the use of extended examples to decompose and analyze a speech. In addition, this fifth edition introduces the use of Venn diagrams to illustrate categorical deductive logic. Many detailed examples have also been added, as well as discussion of current phenomena (i.e., fake news). We strongly encourage librarians and teachers to recommend this book for training the critical thinking of psychology students at university (Lacot et al., 2016) and earlier at school when possible (Hand et al., 2018). Indeed, from both practical and academic points of view, this book could be addressed to undergraduate students to enable them to develop open-mindedness and deep reflection on their own knowledge and the concepts addressed during their training and practice (i.e., therapies, models). Anyone, regardless of prior knowledge, can benefit from this book, as it offers many examples, practical exercises and definitions. Finally, this book's additional contribution compared with previous books is to provide a methodical, simple, and complete explanation of the fundamental concepts of critical thinking in a practical, playful, and concrete manner, with numerous illustrations drawn from the real world. We hope this book will be translated into different languages in the future, as flawed arguments and shortcuts are widespread in the world.
AUTHOR CONTRIBUTIONS
VG wrote the manuscript. MH drafted it. All authors contributed to the article and approved the submitted version.
Design of an Infusion Monitoring System Based on Image Processing
At present, infusion monitoring relies heavily on nursing rounds or direct supervision. In many cases, nursing staff fail to remove a patient's cannula promptly after intravenous infusion because of negligence, which over time leads to serious swelling and blood backflow at the venipuncture site, causing pain and even endangering patients' lives. This study designed a smart real-time liquid level detection system based on image processing to address this problem. By running the canny edge detection algorithm and the Hough Transform (HT) algorithm on a Raspberry Pi computer with an industrial camera, the system extracts and counts the image's pixels for judgment. When the number of pixels in the detected area reaches the alarm value, the system issues an alarm. In the experiment, a 62 mm drip chamber was selected, and the system achieved a success rate of 98% with a set detection width of 5 mm. The experimental results showed that the system accurately and effectively identifies whether the liquid level has reached the alarm level. The information can be transmitted to the receiving end promptly with a high success rate, which verifies the system's effectiveness. Given its real-time performance and high accuracy, the system has excellent application prospects.
Introduction
Infusion is the most commonly used treatment in clinical medicine [1]. Except in the ICU, current infusions basically use an infusion bottle or bag plus a disposable infusion line with a Murphy's tube, and the drug is injected into the patient's body by the liquid's own weight and atmospheric pressure. Although this method is simple and easy to implement, the status of the infusion process depends entirely on the patient's own observation and inspection by the nurse. There is no self-alarm or protection mechanism; abnormalities in the infusion process are prone to occur, and failure to discover them in time may lead to medical risks [2]. In recent years, with the rapid development of computer, wireless communication, and other technologies, a variety of infusion monitoring devices have been introduced for clinical applications. These devices usually achieve infusion-assisted monitoring using the weight method, photoelectric detection, ultrasonic echo detection, or capacitive detection [3][4][5][6][7].
In this paper, a smart real-time liquid level detection system based on image recognition was designed as follows. A drip chamber for infusion is fixed in a closed box. By lighting the box and collecting images with a camera, liquid level change data can be captured automatically in real time. The data are then processed locally and uploaded to a computer via the network for data processing and information identification, to judge whether the liquid level in the chamber has reached the alarm level, thereby achieving smart remote monitoring. One advantage of the system is that the impact of external conditions, such as lighting and drip chamber movement, on the detection results can be ignored, and the detection success rate is high. Another advantage is that remote, real-time monitoring can be carried out from the computer at the nurses' station outside the ward. When the liquid level of a drip chamber in the ward reaches the alarm value, the system automatically sends a message to remind the nursing staff, which avoids wasting human and material resources while operating in real time.
Liquid Level Image Processing
Image recognition relies on the computing power of the computer: through automatic processing of massive amounts of physical information, it completes the recognition of various target types in an image, thereby replacing human mental work. Image recognition integrates the thinking methods of multiple disciplines and is one of the most active areas in the current computer vision field [8]. In order to design the smart real-time liquid level detection system, the first difficulty that needs to be addressed is the algorithm for recognizing and processing the images returned by the camera. The liquid level image processing mainly consists of four steps: sharpness improvement, straight line detection, liquid level height calculation, and threshold value setting, among which straight line detection and liquid level height calculation are the most important [9].
Sharpness Improvement of Original Images
In general, captured images are blurred, and some details are difficult for the computer to recognize. In this image processing step, to facilitate straight-line detection, the sharpness of the images was improved so that the contrast of details was enhanced, making the images look clearer [10]. A MICRO Z301P USB camera for Linux was used here, which can improve image sharpness.
The USB camera's working principle can be described as follows: the optical picture information passing through the lens of the USB camera is first projected onto the surface of the camera's sensor and then converted into electrical signals [11]. After analog-to-digital (A/D) conversion, the signals are converted into digital image signals and sent to the Raspberry Pi for further digital signal processing. Finally, the processed image signals are transmitted to the computer for processing, and the image information captured by the USB camera can be viewed on a liquid crystal display (LCD).
The parameters are as follows:
◆ Frame rate: 30 fps
◆ USB interface with a cable of about 1.2 meters, easy to use
◆ Metal case and all-glass lens
◆ Dimensions: 36 mm × 36 mm
◆ Signal-to-noise ratio: greater than 48 dB
◆ Video data format: 8/16 bit
◆ Working temperature: 0°C to 40°C
The advantages are as follows:
• Adopts the classic and stable MICRO solution with realistic colours;
• Adopts high-quality non-deforming lenses that restore colours and render images more clearly;
• Easy-to-use USB interface;
• High-quality 1/3-inch CMOS sensor with a super-CCD-like photosensitive effect and VGA/CIF formats;
• Supports moving and still image capture and AVI video recording;
• High-quality 64-bit true colour;
• Fast compression with a compression ratio of 1:4 to 1:8.
Straight Line Detection
The flow of the conventional straight-line detection algorithm is as follows. To effectively eliminate sharp noise and smooth the image, the image was first blurred with a Gaussian function (as shown in Figure 2(b)).
3. To reduce the interference of other lines in the edge detection to some extent before calculating the gradient, the grayscale image's contrast was reduced adequately in this paper (as shown in Figure 2(d)).
4. Then, edge detection was performed using the canny edge detection algorithm, and the binary image was obtained (as shown in Figure 2(e). Figure 2(f) showed the image without proper contrast reduction).
5. To remove the small disturbing blocks in the image, a simple morphological operation, namely erosion, was applied (as shown in Figures 2(g) and 2(h)). Note: a (3, 3) kernel was chosen because the image is not large. The horizontal lines (Figure 2(g)) and vertical lines (Figure 2(h)) were eliminated by changing the operator, preparing for the subsequent straight-line detection.
6. The probabilistic Hough Transform was used for straight-line detection, i.e., the HoughLinesP function in OpenCV was used to implement it (as shown in Figures 2(i) and 2(j)).
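Since the liquid surface shows up as a horizontal line, the Hough voting scheme restricted to θ = 90° collapses to a per-row count of edge pixels. The numpy-only sketch below illustrates that voting idea on a synthetic edge map; the paper's actual pipeline uses OpenCV's GaussianBlur, Canny, erode, and HoughLinesP, and the function name and image sizes here are illustrative assumptions.

```python
import numpy as np

def detect_horizontal_line(edges, min_votes=10):
    # Degenerate Hough transform for horizontal lines: at theta = 90 deg
    # the parameter rho equals the row index, so the accumulator is just
    # the number of edge pixels in each row.
    votes = edges.sum(axis=1)
    row = int(np.argmax(votes))
    return row if votes[row] >= min_votes else None

# Synthetic 62x40 edge map: liquid-level edge at row 45 plus one noise pixel.
edges = np.zeros((62, 40), dtype=np.uint8)
edges[45, 5:35] = 1
edges[10, 20] = 1

print(detect_horizontal_line(edges))  # -> 45
```

The `min_votes` cutoff plays the same role as the accumulator threshold of HoughLinesP, discarding rows supported by only a few noise pixels.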
Height Calculation and Threshold Value Setting
The specific algorithm steps are as follows.
1. Extract the XY coordinates of the detected line. In conventional practice, the clinician considers the infusion finished when the drip chamber is one-third full; to improve image recognition accuracy, this paper uses the position of the detected line to indicate the completion of the infusion.
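As a sketch of this step, assuming HoughLinesP-style output segments [[x1, y1, x2, y2], ...], the lowest detected horizontal segment can be taken as the liquid surface and converted into a fill fraction. The helper name, chamber row positions, and the one-third criterion below are illustrative:

```python
def liquid_level_low(lines, chamber_top, chamber_bottom, frac=1/3):
    # Image rows grow downward, so the liquid surface is the segment
    # with the largest average y coordinate.
    surface = max((y1 + y2) / 2 for x1, y1, x2, y2 in lines)
    fill = (chamber_bottom - surface) / (chamber_bottom - chamber_top)
    return fill <= frac

# Chamber occupies rows 10..72 (62 px for the 62 mm chamber);
# one spurious segment near the rim plus the surface line at row 55.
segments = [[5, 12, 35, 12], [5, 55, 35, 55]]
print(liquid_level_low(segments, 10, 72))  # about 27% full -> True
```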
Optimized Liquid Level Detection Algorithm
The algorithm above places demanding requirements on the images. Some of these requirements are listed as follows.
1) The image shall have a very high resolution.
2) The pixels at the edges of the drip chamber in the image shall have varying gradients.
3) The noise interference at the liquid surface boundary must not be too severe.
However, it is difficult to obtain images satisfying the above conditions in the experiment, which results in a low success rate for the above scheme. Therefore, the following optimized algorithm for liquid level detection was proposed (as shown in Figure 3). Step 3: Count the number of pixels in the images.
Step 4: As can be seen from the pixel histogram (Figure 4), when the liquid level has not reached the specified position shown in Figure 2(k), the number of white pixels (grey value 255) in the binary images remains essentially constant. We therefore take this baseline count as (H + − S), where S can be used as a threshold value for the liquid level judgement.
Step 5: When the liquid level reaches the segmentation position, the number of white pixels in the binary images, shown in Figure 2(l), increases significantly. Denote the increment by X, as shown in Figure 5. X ≫ S indicates that the liquid level has reached the border height and the infusion is complete.
Hardware
Hardware: Raspberry Pi 3 B+ development board, 2-megapixel industrial camera, Windows 10 PC.
Software: Python 3 in PyCharm, OpenCV, the Raspberry Pi 3 B+ Linux image, the remote desktop software XRDP for the Raspberry Pi 3 B+, and the camera software webcam for the Raspberry Pi 3 B+.
Download the Raspberry Pi Linux image to an SD card, and configure the Wi-Fi connection and SSH file on the card. Insert the card into the card slot of the Raspberry Pi. Query the IP address of the Raspberry Pi, log in through it using the remote desktop software, and configure the development environment. Install webcam through the Linux terminal and connect the USB industrial camera. Use the Python compiler to create a command script, and run the host's script through remote control. The camera takes a liquid level image of the drip chamber every 3 seconds, and the images are sent to the computer for processing over Wi-Fi. The canny, Hough line, and other algorithms then binarize the images and perform straight-line detection and image segmentation. The technical flow of measuring the drip chamber's liquid level with the Raspberry Pi is shown as follows.
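The acquisition flow just described — grab a frame every 3 seconds, analyze it, and alert the nurses' station on a hit — can be sketched as a generic polling loop. The capture, analyze, and notify callables are injected (all names hypothetical), so the loop can be exercised without a real camera or Wi-Fi link:

```python
import time

def monitor(capture, analyze, notify, period_s=3.0, max_frames=None):
    # Poll the camera at a fixed period; fire the alert callback
    # whenever the analyzer flags a frame.
    n = 0
    while max_frames is None or n < max_frames:
        frame = capture()
        if analyze(frame):
            notify(frame)
        n += 1
        if max_frames is None or n < max_frames:
            time.sleep(period_s)
    return n

# Dry run with canned "white pixel counts" instead of camera frames.
frames = iter([10, 10, 200])
alerts = []
monitor(lambda: next(frames), lambda f: f > 100, alerts.append,
        period_s=0, max_frames=3)
print(alerts)  # -> [200]
```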
Experimental Data
The experimental device is shown as follows. The algorithm of the system before optimization was tested first. In this experiment, a 62 mm drip chamber was chosen. The alarm liquid level was set at 10 mm from the bottom with an error range of 3 mm. The liquid level detection system was tested, and the results are shown in Table 1. After several trials, the detection success rate was only 75%. Investigation showed that the reason for the low success rate was that the detection algorithm before optimization required high accuracy of the images collected by the camera.
Trials were then conducted on the optimized algorithm. The camera captured an image every 0.1 seconds. The white-pixel counts of the 20 images captured over two seconds were summed, and the maximum cumulative value in the process was taken as the peak value. The peak values were then averaged to eliminate jitter error. When the averaged peak number of white pixels exceeded the threshold value, the trial was considered successful. The experimental data are shown in Table 2. Over several trials, the detection success rate reached 96%, showing that the system is feasible.
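The averaging scheme described above can be sketched as follows: slide a 20-frame (2 s at 0.1 s per frame) window over the per-frame white-pixel counts, take each trial's maximum window sum as its peak, and average the peaks before comparing with the alarm threshold. The sample counts and threshold below are made up:

```python
import numpy as np

def trial_peak(counts, window=20):
    # Maximum cumulative white-pixel count over any 20 consecutive frames.
    counts = np.asarray(counts, dtype=float)
    if len(counts) < window:
        return float(counts.sum())
    return float(np.convolve(counts, np.ones(window), mode="valid").max())

def alarm(trial_counts, threshold):
    # Average the per-trial peaks to suppress jitter before judging.
    return np.mean([trial_peak(c) for c in trial_counts]) > threshold

trials = [[0] * 10 + [30] * 20 + [0] * 10,   # best window sums to 600
          [0] * 10 + [28] * 20 + [0] * 10]   # best window sums to 560
print(alarm(trials, threshold=500))  # mean peak 580 -> True
```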
Summary
This paper describes the implementation of a liquid level monitoring system. The system can be used to monitor the liquid level of medical appliances and obtain relevant data for real-time monitoring and information transmission in medical systems. It not only improves the work efficiency of medical personnel but also reduces labor costs. In this system, smart real-time liquid level monitoring is achieved with an image recognition algorithm. By automatically capturing real-time liquid level change data and uploading them to the computer locally or over the network for smart data processing and information identification, the liquid level can also be monitored remotely in a smart manner. With the advantages of small size, good stability, real-time performance, and straightforward operation, the system can benefit the medical system. If the smart liquid level monitoring system is used, the need to change a hanging bottle can be identified automatically, and an alarm can be sent out when the remaining liquid in the hanging bottle is insufficient and a new bottle is required. In other words, nurses can be notified automatically to replace the hanging bottle or perform other operations. In the future, this system can be integrated with the hospital information system to improve nursing work efficiency.
Analytical evaluation of cosmological correlation functions
Using the Schwinger-Keldysh formalism, reformulated in arXiv:2108.01695 as an effective field theory in Euclidean anti-de Sitter space, we evaluate the one-loop cosmological four-point function of a conformally coupled, interacting scalar field in de Sitter space. Recasting the Witten cosmological correlators as flat-space Feynman integrals, we evaluate the one-loop cosmological four-point functions in de Sitter space in terms of single-valued multiple polylogarithms. From these we derive anomalous dimensions and OPE coefficients of the dual conformal field theory at space-like future infinity. In particular, we find an interesting degeneracy in the anomalous dimensions relating operators of neighboring spins.
Introduction
De Sitter space-time (dS) is arguably the most relevant and, at the same time, simplest model for the early- and late-time evolution of the Universe in a cosmological setting. It is a maximally symmetric solution of the Einstein equations with a positive cosmological constant, hence experiencing accelerated expansion. Overwhelming observational evidence points to the fact that in the distant past our universe went through a phase of accelerated expansion called inflation, while the asymptotic future seems to be described by accelerated expansion as well. Both of these scenarios may be approximately described by a de Sitter space-time. Furthermore, to explain the spectrum of density fluctuations in the cosmic microwave background (CMB) [2] and structure formation in the universe, which originate from the early stage of the universe, it is important to understand quantum field theory in this background. Nevertheless, despite its relevance, this topic is much less developed than quantum field theory in anti-de Sitter space-time (AdS), let alone Minkowski space-time. This is mainly due to conceptual and technical difficulties: dS, in contrast to AdS, does not possess a globally defined time-like Killing vector, which makes the choice of a vacuum more ambiguous and the definition of an asymptotic region, relevant for scattering experiments, much more challenging.
In this paper we would like to advance the study of QFT in dS by calculating the one-loop corrections to the cosmological correlation function of a conformally coupled real scalar field with a quartic self-interaction. One of our main motivations is to make sense of the notion of holography in the cosmological context. Similar to AdS, one can define a conformal boundary for dS which, however, is given by a space-like surface at future infinity, in contrast to AdS. It is therefore not possible to fix boundary conditions in the same way as in AdS since this is incompatible with unitary time evolution. Nevertheless, one expects a CFT description of the bulk theory in dS on the boundary since the symmetry group of dS acts on the future boundary as the euclidean conformal group.
There have been many attempts to implement the concept of holography in dS, starting with [3]. Most of them focus on the calculation of the wave function of the universe [4] for the Bunch-Davies vacuum [5]. In this case, there is a straightforward relation to the situation in AdS. Calculations of the expansion coefficients of the wave function have been pushed forward recently, using direct integration, unitarity methods, Mellin space, differential representations, and polytopes [6][7][8][9][10][11][12][13][14][15][16][17][18]. As the wave function itself is not an observable, however, these results are of conceptual rather than phenomenological value. In principle one could obtain a cosmological correlation function by taking expectation values with this wave functional, but this approach is impractical since it requires nonperturbative knowledge of the wave function which, for interacting theories, is technically out of reach at the moment, at least to our knowledge.
Another approach, which we follow in this work, is to evaluate the correlation function directly by performing a path integral along a closed time contour, the so-called Schwinger-Keldysh or in-in formalism [19,20]. This approach has led to several interesting results and the development of new techniques [1,[21][22][23][24][25][26][27][28][29]. We take advantage of the progress made in [26][27][28] to express the cosmological correlation functions in the Schwinger-Keldysh formalism as a sum over euclidean AdS (EAdS) Witten diagrams, which was recast in [1] as an auxiliary EAdS action.
Here we calculate the four-point functions up to one-loop order by direct integration in position space, applying the formalism developed in [30] to evaluate EAdS Witten diagrams. Interestingly, the Witten diagrams up to this order do not contain any elliptic integrals, in contrast to EAdS, and therefore can all be expressed in terms of single-valued multiple polylogarithms. We then compare the late time cosmological correlator to the conformal block expansion which allows us to extract the data of the dual CFT.
We find that the CFT is given by a deformation of a direct product of generalized free fields. However, in contrast to the CFT corresponding to the expansion of the wave function, the cosmological CFT contains three different trajectories of double-trace operators, due to the mixing of fields with different boundary conditions pictured in figure 1. We find that the cosmological correlators obey several CFT consistency conditions at different loop orders, reflecting the fact that the boundary theory is indeed a CFT. The second-order (one-loop) anomalous dimensions for the double-trace operators :O_1 □^n ∂^l O_1:, :O_2 □^n ∂^l O_2: and :O_1 □^n ∂^l O_2:, derived in section 4 for all n and l,
\gamma^{(2)S}_{n>0,\,l>0} = -\frac{\gamma^2}{l(l+1)}\,, \qquad \gamma^{(2)A}_{n>0,\,l>0} = -\frac{\gamma^2}{(2n+l)(2n+l+1)}\,,
\gamma^{(2)}_{n,\,2l>0} = \gamma^{(2)S}_{n,\,2l>0}\,, \qquad \gamma^{(2)}_{n,\,2l+1>0} = \gamma^{(2)A}_{n,\,2l+1>0}\,,
highlight an interesting symmetry between the anomalous dimensions at different spins. From the bulk perspective, this could be a consequence of the symmetry of the EAdS action in eq. (3.3), enforced by the Schwinger-Keldysh formalism, and of the fact that we take a conformally coupled scalar field. We do not expect this symmetry to hold for general masses. The expressions for \gamma^{(2)S}_{n>0,l>0} and \gamma^{(2)}_{n,l>0} even show a degeneracy of the conformal dimensions of these operators for all twists \Delta_{n,l} - l, which seems quite remarkable.
This paper is organised as follows: In section 2 we briefly review the Schwinger-Keldysh formalism in the context of QFT in dS, define the propagators, and give the auxiliary EAdS action first derived in [1]. In section 3 we present the calculation of the cosmological correlation function in terms of EAdS Witten diagrams, and in section 4 we compare the results to a conformal block expansion on the boundary and extract anomalous dimensions. We conclude in section 5 with a short summary of the results and some suggestions for further investigation. The expressions for the Witten diagrams are collected in appendix A for the cross diagram and appendix B for the one-loop diagram. The single-valued multiple polylogarithms entering these evaluations are collected in appendix C. The OPE coefficients and conformal blocks for generalized free fields are recalled in appendix D.
Perturbative QFT in de Sitter space
The main reason why quantum field theory in dS is less straightforward than in AdS is that dS does not have a globally defined time-like Killing vector in the patches relevant for cosmology, leading to a time-dependent classical background. In this section we describe how to deal with this issue.
Schwinger-Keldysh formalism in de Sitter space
To calculate expectation values in a time-dependent background, the Schwinger-Keldysh formalism is well suited. For this one specifies the initial vacuum and calculates the expectation value of local field insertions \phi(X_1) \cdots \phi(X_n). Here we will work on the Poincaré patch parametrized by coordinates X = (\vec{x}, \eta) as in figure 2, which is given by the lower half space equipped with the metric
ds^2 = \frac{1}{(a\eta)^2}\left(-d\eta^2 + d\vec{x}^{\,2}\right).
In the interaction picture we then have
\langle \phi(X_1)\cdots\phi(X_n)\rangle = \frac{\langle 0_{BD}|\, U_I^\dagger\, \phi_I(X_1)\cdots\phi_I(X_n)\, U_I\, |0_{BD}\rangle}{\langle 0_{BD}|\, U_I^\dagger U_I\, |0_{BD}\rangle}\,. \tag{2.3}
Here |0_{BD}\rangle is the Bunch-Davies vacuum to be defined below, while U_I and U_I^\dagger are the time-ordered and anti-time-ordered evolution operators in the interaction picture,
U_I = T \exp\Big(-i \int \mathrm{d}\eta\, H_I(\eta)\Big)\,, \qquad U_I^\dagger = \bar{T} \exp\Big(i \int \mathrm{d}\eta\, H_I(\eta)\Big)\,,
where H_I is the interaction Hamiltonian and T and \bar{T} denote time- and anti-time ordering respectively. The Bunch-Davies vacuum condition is imposed at \eta \to -\infty. The denominator in equation (2.3) cancels vacuum bubble contributions, just as in flat space. There are two ways to perform this calculation. One is to expand the exponentials in U_I and U_I^\dagger and use Wick contractions to the left and right of the insertions to calculate the correlator. Denoting the fields on the time-ordered side of the integral by \phi_T(X), the anti-time-ordered fields by \phi_A(X), and the field insertions on the time slice at future infinity by \bar{\phi}(\vec{x}), the Wick contractions \Lambda_{TT}(X_1, X_2) and \Lambda_{AA}(X_1, X_2) are the time- and anti-time-ordered correlators, while \Lambda_{TA}(X_1, X_2) and \Lambda_{AT}(X_1, X_2) are the retarded and advanced Green functions respectively. Similarly, \bar{\Lambda}_{T/A}(X_1, \vec{x}_2) is obtained from \Lambda_{TT} or \Lambda_{AA} by taking X_2 \to \vec{x}_2 to the future space-like conformal boundary of dS. A free massive scalar field in dS evolves according to the Klein-Gordon equation (\Box_{dS} - m^2)\phi = 0, where the d'Alembertian is related to the quadratic Casimir of the SO(d+1,1) isometry group of dS as C_2 = -\frac{1}{a^2}\,\Box_{dS}.
Comparing this with the weight-\Delta and spin-\ell representations of the conformal group in d Euclidean dimensions, we recover the familiar relation between the mass of a scalar field and the scaling dimension on the boundary,
m^2 = a^2\, \Delta(d - \Delta)\,. \tag{2.10}
These equations are invariant under the shadow transformation \Delta \to d - \Delta, which relates two unitarily equivalent representations. We can label the irreducible representations by the spin of the SO(d) part. We have a Lorentzian field theory, and the states appearing should therefore correspond to unitary representations of the symmetry group SO(d+1,1). The scaling dimension \Delta, which can take complex values restricted by unitarity, falls into different classes. The most relevant ones for our analysis are the principal and complementary series.
The equations of motion guarantee that any free field transforms in a unitary irreducible representation of the de Sitter group. Heavy fields in dS, with mass 4m^2 > a^2 d^2, correspond to the principal series, which exists for any spin \ell. They have a complex-valued scaling dimension
\Delta = \frac{d}{2} + i\nu\,, \qquad \nu \in \mathbb{R}\,.
Light fields in dS, with mass 0 \le 4m^2 < a^2 d^2, correspond to the complementary series, given by the real-valued dimension
\Delta = \frac{d}{2} + \nu\,,
where -\frac{d}{2} < \nu < \frac{d}{2} for \ell = 0 and 1 - \frac{d}{2} < \nu < \frac{d}{2} - 1 for \ell > 0. As we will discuss later, this class of representations is the most relevant to us, since we consider a conformally coupled field. For more details on this topic we refer the interested reader to [31].
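Assuming equation (2.10) takes the standard form m^2 = a^2\,\Delta(d-\Delta), the two series follow from the quadratic formula, and the conformally coupled case used throughout this paper is a quick consistency check:

```latex
% Solving m^2 = a^2\,\Delta(d-\Delta) for the boundary dimension:
\Delta_\pm = \frac{d}{2} \pm \sqrt{\frac{d^2}{4} - \frac{m^2}{a^2}}
           = \frac{d}{2} \pm \nu \,.
% For 4m^2 > a^2 d^2 the square root is imaginary: principal series.
% For 0 \le 4m^2 < a^2 d^2 the square root is real: complementary series.
% Conformal coupling corresponds to m^2 = \tfrac{d^2-1}{4}\,a^2, so \nu = \tfrac12 and
\Delta_+ = \frac{d+1}{2}\,, \qquad \Delta_- = \frac{d-1}{2}\,,
% i.e. \Delta_+ = 2 and \Delta_- = 1 for the d = 3 boundary considered later.
```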
EAdS can be constructed from the same ambient Minkowski space as dS, with the same signature of the metric. One might therefore conclude that the Hilbert space of EAdS should be constructed from unitary irreducible representations of SO(d+1,1), but this is well known not to be the case. The scaling dimension for EAdS can be obtained by setting a \to ia in equation (2.10). The value of \Delta is therefore always real, and the fields transform under unitary irreducible representations of SO(d,2), the symmetry group of the Lorentzian version of EAdS. This is not a problem, since QFT in EAdS is a euclidean field theory. Only after Wick rotation to Lorentzian AdS should the Hilbert space be given by unitary representations, which it indeed is.
The situation for dS is different. In the four-point function that we analyse in our perturbative calculation, we will see that there are operators appearing in the spectrum with arbitrary dimensions not obeying any SO(d + 1, 1) unitarity constraints. However, since there is no operator state correspondence in dS, this does not really pose a problem, it just hints at the fact that the relation between the bulk and boundary degrees of freedom is more obscure in dS than in AdS. These points have been raised recently in the context of a proposed cosmological bootstrap in [1,25].
Here we will focus on the four-point function of a scalar field theory with a quartic interaction term. The four-point function evaluated at future infinity is given by (2.14). Let us begin with the disconnected part, where each two-point function is just given by the propagator with both legs taken to future infinity. The first-order term in the coupling constant \lambda has two contributions, from contractions with the time-ordered and the anti-time-ordered Hamiltonian. We perform this integral after a Wick rotation \eta = z\, e^{\pm i\pi/2} for \eta_T and \eta_A individually, such that we do not cross the branch cut. With this transformation we can write the cross diagram as an integral over EAdS, where now X := (\vec{x}, z).
To define the propagator we consider the euclidean version of dS, which is a sphere. Upon Wick rotating back to dS and restricting to the Poincaré patch, this fixes the vacuum as the Bunch-Davies or euclidean vacuum (see [32][33][34]). The propagator for a scalar field of mass m between two bulk points X and Y on the sphere is given by a Gauß hypergeometric function¹ of K(X, Y), where K(X, Y) is the inverse of the geodesic distance, times a normalization constant. The Green function in dS is obtained from equation (2.20) by Wick rotating back and restricting to the Poincaré patch; we denote it by \Lambda(K(X, Y)). To obtain the correct time ordering for the Feynman propagator when taking the flat limit, we have to fix the correct behaviour across the branch cut at 0 < K(X, Y) < 1, which coincides with the region of time-like separation. We therefore demand that the commutator between two fields at space-like separation vanish, while at time-like separation it should be non-vanishing.
¹ The Gauß hypergeometric function {}_2F_1(a, b; c; z) is defined by its Euler integral representation for \Re(b) > 0 and \Re(c) > 0.
Expressed in terms of two-point functions of the vacuum state defined by the analytic continuation of (2.20), this means that, for the commutator to be non-vanishing at time-like separation, we have to approach the branch cut from above or below depending on the time ordering. In the Poincaré patch this amounts to a shift by an infinitesimal, positive, real parameter \varepsilon. The two-point function with the correct behaviour across the branch cut is therefore defined with 0 \le \varepsilon \ll 1. The time-ordered Feynman two-point function can be written in a compact form by replacing K(X, Y) \to K(X, Y) + i\varepsilon in (2.20), where K(X, Y) expressed in local Poincaré coordinates is given by (2.26); this yields the time-ordered Feynman Green function in dS, while the anti-time-ordered two-point function is obtained analogously. These Green functions define the Bunch-Davies or euclidean vacuum. Let us mention that this is not the unique de Sitter invariant vacuum: there is an infinite space of de Sitter invariant vacua parametrised by two continuous parameters [35]. All these vacua have singularities at points related by the antipodal map and therefore do not provide the correct flat limit. The Bunch-Davies vacuum is therefore special from a physical perspective. Also, from a cosmological point of view, the Bunch-Davies vacuum seems to be the only reasonable choice, since it gives mode functions for the field that behave as in flat space when going to the infinite past or to wavelengths much smaller than the horizon. From now on we will only work in the Bunch-Davies vacuum. Note that, contrary to EAdS, we cannot fix the fall-off behaviour of the Green function at future infinity to be either \sim K^{\Delta_+} or \sim K^{\Delta_-}. This is due to the fact that we can always rewrite the Bunch-Davies propagator as a sum of the propagators with a definite fall-off behaviour.
By applying standard identities for the hypergeometric function {}_2F_1(a, b; c; z), we can rewrite the hypergeometric function in (2.20). With this formula we can express the time-ordered Bunch-Davies propagator (2.27) in terms of propagators with definite fall-off behaviour. Here we introduced the propagator with a definite fall-off behaviour as the Wick rotation of the propagator in EAdS, with equivalent expressions for \Lambda_{AA} and \Lambda_{TA}. Following the same conventions as in the AdS/CFT case, we introduce the bulk-to-boundary propagator as the limit of the bulk-to-bulk propagator obtained by taking one leg to future infinity,
\bar{\Lambda}_{T/A}(X_1, \vec{x}_2) := \lim_{\eta_0 := \eta_2 \to 0} \Lambda_{T/A,\,T/A}(X_1, X_2)\,. \tag{2.34}
It does not matter whether the boundary limit is taken with a time- or anti-time-ordered point, since there is no notion of time ordering at future infinity. The bulk-to-boundary propagator is then expressed through the bulk-to-boundary propagators with definite late-time fall-off behaviour. Focusing on the conformally coupled case with \Delta_\pm = \frac{d \pm 1}{2} and using equation (2.36) and the Wick rotation (2.17), we can recast the four-point function (2.18), up to the second subleading order in \eta_0 \to 0, as (2.37). The evaluation of the tree-level four-point function is therefore reduced to a calculation in EAdS, with two different boundary conditions contributing, corresponding to conformal dimensions \Delta_+ and \Delta_-. We could proceed with this calculation diagram by diagram, which is the way this relation between cosmological correlators and EAdS Witten diagrams was first written down in [26][27][28]. However, as shown in [1], there is an elegant way to rewrite the dS action with the Schwinger-Keldysh contour directly in terms of an auxiliary EAdS action, from which the cosmological correlation functions can be extracted by straightforward functional derivation. We review this formulation in the next subsection.
Auxiliary action for EAdS
In this section we review the derivation, from section 3 of [1], of the auxiliary action for computing de Sitter correlators.
The closed time evolution between two in-states from the infinite past can be expressed by a path integral with a closed time curve. A correlation function is then given by taking functional derivatives with respect to the time- and anti-time-ordered sources j_T and j_A of the partition function, with the closed time action given by (2.39). Performing the Wick rotation \eta = z\, e^{\pm i\pi/2} as described above, the action becomes (2.40). As discussed above, the classical solution of a free scalar field in de Sitter, plugged into the action, leads to the result (2.44) derived in [1]. We want to study a theory in dS with the potential V(\phi) = \frac{\lambda}{4!}\phi^4; in that case the action (2.44) becomes (2.45). In this work we consider the case of the conformally coupled scalar with \Delta_+ = \frac{d+1}{2}. This action can now be used to calculate correlation functions in dS, showing to all orders in perturbation theory that cosmological correlators can be expressed in terms of EAdS Witten diagrams. The leading contributions in the late-time expansion are given by calculating the EAdS correlators of the field \phi_-. Note, however, that this field alone will not give a consistent CFT at the boundary, since there are mixing interaction vertices between \phi_- and \phi_+. To describe the CFT on the boundary we have to take into account the subleading terms in the late-time expansion of the cosmological correlator as well. We also notice that the kinetic term in the action is not necessarily positive, leading to ghost-like behaviour of one of the fields. This would be a problem if we wanted to interpret this action as describing a bulk theory in EAdS; however, since we only use this action as a tool to describe a theory in dS, we should treat these signs only as a way to keep track of the correct relative prefactors in the expansion.
de Sitter correlation functions from EAdS Witten diagrams
In this section we focus entirely on the conformally coupled scalar field. As we noticed, perturbatively this can be treated like a theory of two interacting scalar fields with boundary scaling dimensions Δ_+ = (d+1)/2 and Δ_− = (d−1)/2 in EAdS, governed by the action (2.46) for odd boundary dimension. The propagators (2.33) then take the corresponding EAdS form. We will then be able to use the formalism of [30] for evaluating the Witten diagrams. From now on, we specialize to the case of d = 3.
To avoid unnecessary prefactors in the calculation we change the normalisation of the fields as φ_± → φ_±/√2 and the coupling constant as λ → 2λ. The L-loop Witten diagrams between sets of fields of dimensions Δ_1 and Δ_2 are denoted accordingly; the correlator with all external dimensions Δ = 2 is evaluated in section 3.2.2, and the mixed correlators with (Δ_1, Δ_2, Δ_3, Δ_4) = (2, 2, 1, 1) and permutations are evaluated in section 3.2.3. Using the normalization of the fields and the coupling constant introduced in (2.46) and the conformal mappings as described in [30], we can write a generic EAdS four-point Witten diagram with equal external dimensions Δ in terms of the corresponding Witten diagram in EAdS with standard normalization of the propagator as defined in [30]. The four-point function with mixed boundary conditions will be given by acting with the differential operator, defined in section 3.2 of [30], onto the corresponding legs of the Δ = 1 Witten diagrams. All calculations will be done in the loop-dependent dimensional regularisation scheme introduced and described in section 3.1.2 of [30].
Two-point functions
If we represent the propagators diagrammatically, then the loop corrections to the boundary two-point function up to order λ² for Δ = 1 correspond to the diagrams in (3.7). For Δ = 2 the diagrams are the same up to replacing the external lines by the Δ = 2 bulk-to-boundary propagator. Using the results from [36], it can be checked that the integrals appearing in (3.7) all reduce to a divergent piece times a mass-shift term. We can therefore use the same argument that the renormalized mass should be fixed at the value "measured" at the boundary, which in our case fixes the leading-order fall-off behaviour at future infinity to Δ = 1. As a result, we can ignore the loop corrections to the two-point function in the following calculation of the four-point function, and we will use the renormalised propagators in what follows.
Four-point functions
Recalling (2.42), the dominant contribution to the bulk scalar field φ is contained in φ_−. From this one may conclude that the four-point correlation function at future infinity is given by calculating the correlation functions of the auxiliary field φ_− at the boundary of EAdS, with action (2.46). However, considering only φ_− as a boundary field one will not be able to retrieve all the information of the dual CFT. This can also be seen from the bulk action (2.46), in which φ_− and φ_+ are coupled.
To access the full CFT information we rather have to expand the four-point function to second subleading order in η_0, as in (3.9). The contributions to the leading term of the late-time expansion of the four-point correlation function in equation (3.9) are given by (3.10).
• The disconnected part is given by the product of two-point functions. Here we follow the notation and conventions of [30] for the cross-ratios.
• The cross term is given by the Δ = 1 term in EAdS, where the norm is defined with a Euclidean signature and the radial coordinate z is expressed with the help of the normal vector to the boundary u = (0, 0, 0, 1).
• For the one-loop contributions we use an expression for the square of the propagator in terms of σ(X), the antipodal map after Wick rotation. We have used the closed form of the bulk-to-bulk propagator without the normalisation factor for Δ = 1 and Δ = 2, together with the identity (3.19). Then, by regrouping contributions from the Δ = 1 and Δ = 2 fields propagating in the loops in (3.10), one can see that for the sum over Δ of the propagators squared the cross-terms cancel, as in (3.20). After unfolding the integral to the whole space R⁴, the one-loop contribution in the s-channel for four external scalars of the same dimension Δ adds up to the expression in (3.21), with similar expressions for the other channels. Finally, performing the conformal mappings as described in [30], the integrand of equation (3.21) takes a form whose integrals were already calculated in [30]; the results are given in appendix B.1. Note that, because of (3.20), the elliptic sector, which was present in the one-loop EAdS computation for Δ = 1, cancels out. As a consequence, the loop integrals are linearly reducible [37] and can thus be expressed in terms of multiple polylogarithms using the program HyperInt [38]. The entire four-point function then follows. The integrals W^{1111,4−4ε}_0(v, Y) and L^{1,i}_0 have been evaluated in [30].
We have recalled their expressions for L^{1,i}_0 in (B.10).
The φ_+φ_+φ_+φ_+ correlator
The contributions to the φ_+φ_+φ_+φ_+ term of the late-time expansion of the four-point correlation function in equation (3.9) are given by (3.24).
• The cross term is again given by the same expression as the Δ = 2 cross diagram in EAdS, given in appendix A.
• Since the squares of the bulk-to-bulk propagators are the same, similar arguments as for the Δ = 1 case hold, i.e. the result can be written as a sum of the divergent and finite parts of the one-loop Witten diagrams with Δ = 2. The details are given in appendix B.3. The entire four-point function at this order is therefore expressed in terms of W^{2222,j}_{1,fin} and L^{2,i}_0, which have been calculated in [30] and recalled in (B.26) and (B.27) respectively. In fact, as described in [30], there is a differential relation between the correlators with φ_+ and φ_− external legs. We will make use of this in the next subsection.
Mixed correlators
Additionally, we have the correlation functions of φ_+ with φ_−, which are subleading in the late-time expansion. They are equivalent up to permutations of the operators; we discuss the other combinations at the end of section 3.3.
The diagrams we calculate are given by the disconnected, tree-level and one-loop bubble diagrams.
• The disconnected part only contains the product of two propagators.
• The tree-level contribution can be inferred from (3.13) by acting on the latter with the differential operator of [30]. The one-loop contributions can be obtained in the same way. We observe that the sum of the first two terms contains a term like equation (3.20). The same arguments therefore apply for the cancellation of the mixed terms, and we obtain the combination of diagrams in (3.33), which fixes the product of propagators appearing in the Witten diagrams. We can unfold the region of integration of the last two diagrams from (H^D_+)² to R^{2D} by using that the measure of integration is odd under the action of the antipodal map, like the product of propagators in (3.34). We then unfold the z integration to the full space R^D. Including the correct normalization, we end up with an expression in which the contribution for each channel i ∈ {s, t, u} is given in appendix B.2. The complete four-point function is therefore given by the sum over channels; the remaining mixed correlators can be obtained from this result by exchanging external points accordingly. This, however, only works after regularisation, as we will discuss in the next section.
Renormalization and finite result
To simplify the calculation in EAdS we changed the normalisation of the fields φ_± and the coupling constant λ in the auxiliary action (2.46). However, if we want to interpret our result in terms of a de Sitter calculation we have to reverse that procedure, especially if we want to compare the β function with the well-known flat-space result. At leading order they should coincide, since the leading short-distance divergence does not depend on the global geometry.
Following the same arguments as in section 4.2.1 of [30], we introduce the renormalized coupling constant λ_R through the divergent bare coupling as λ = λ_R(aμ) μ^{2ε} + δλ. Then, up to finite terms, the connected part of the four-point functions determines the counter-term, while the finite log μ contribution to λ gives rise to the Callan-Symanzik equation, which leads to the leading-order contribution to the beta function coinciding with the flat-space result. After renormalisation with a minimal subtraction scheme and restoring the canonical normalisation of the fields and coupling constant from a dS point of view, we obtain finite results for the four-point functions with equal external dimensions Δ_− = 1 or Δ_+ = 2. As discussed in section 4.2.1 of [30], this is done to restore the global AdS symmetry in the bulk, guaranteeing that the renormalized four-point function transforms homogeneously under dilatations on the boundary. As a consequence, one should be able to obtain the remaining four-point functions by simple permutation of the external points in equation (3.46), resulting in transformations of the conformal cross-ratios.
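To make the flat-space statement explicit: for a single real scalar with interaction (λ/4!)φ⁴ in four dimensions, the textbook one-loop Callan-Symanzik equation (anomalous-dimension terms omitted) and beta function read

```latex
\left( \mu \frac{\partial}{\partial \mu}
     + \beta(\lambda_R)\, \frac{\partial}{\partial \lambda_R} \right)
\mathcal{G}^{(4)} = 0 ,
\qquad
\beta(\lambda_R) = \mu \frac{\mathrm{d}\lambda_R}{\mathrm{d}\mu}
  = \frac{3 \lambda_R^2}{16 \pi^2} + \mathcal{O}\!\left(\lambda_R^3\right) .
```

This is the standard flat-space result the text refers to, not a new computation; the claim above is that the dS calculation reproduces the same leading coefficient.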
Concretely, we checked explicitly that this holds for our result, providing an additional test of the loop-dependent regularisation scheme introduced in [30], which restores the conformal symmetry on the boundary that is a priori broken by naive dimensional regularisation.
Conformal block expansion
We have seen in the last section that we can interpret the leading and subleading expansion coefficients of the field at late times as operators, O_1 and O_2, of dimension Δ = 1 and Δ = 2 respectively, living on the Euclidean R³ hypersurface at future infinity. Furthermore, since we have an auxiliary EAdS action for the correlation functions of the latter, we conclude that the theory on the boundary should, at least perturbatively, be described by a dual CFT. In total there are five different four-point functions to be considered for describing this CFT. We write the possible OPEs between the operators O_1 and O_2 schematically as in (4.1), where a^{ij}_O are OPE coefficients and "∼" means that the contributions of descendant operators are implicit.
In terms of conformal blocks [39], the general form of the five four-point functions we have to consider involves the conformal block G_{Δ̃,l} for the primary field Õ. In the following we will denote the square of the OPE coefficients by capital letters. Since, due to the quartic vertex, there are no three-point functions, none of the "single trace" operators O_1 and O_2 will appear in the OPE.
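For reference, the conformal block decomposition referred to here has the standard form; for external scalars of equal dimension Δ,

```latex
\langle O_\Delta(x_1)\, O_\Delta(x_2)\, O_\Delta(x_3)\, O_\Delta(x_4) \rangle
= \frac{1}{x_{12}^{2\Delta}\, x_{34}^{2\Delta}}
  \sum_{\tilde O} a_{\tilde O}^2 \; G_{\tilde\Delta, \ell}(u, v),
\qquad
u = \frac{x_{12}^2\, x_{34}^2}{x_{13}^2\, x_{24}^2}, \quad
v = \frac{x_{14}^2\, x_{23}^2}{x_{13}^2\, x_{24}^2} .
```

For unequal external dimensions the prefactor acquires the usual factors of (x_{24}²/x_{14}²)^a and (x_{14}²/x_{13}²)^b, matching the blocks G^{a,b} used below.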
The spectrum of "double trace" operators for the disconnected part can be read off from the corresponding four-point functions by conglomeration as described in [40]. The possible primary operators are built from pairs of O_1 and O_2, and we denote them accordingly. They have the corresponding scaling dimensions 2 + 2n + l, 4 + 2n + l and 3 + 2n + l with n, l ∈ N. Recall that in the scalar four-point function we can only distinguish operators by their scaling dimension, which may be the same for different values of n and l. Furthermore, while the dimensions of O_1 and O_2 are determined by the (renormalized) mass m, which is fixed for a conformally coupled bulk scalar, we may expect that the "double trace" operators pick up anomalous dimensions due to the bulk interaction term.
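Schematically, these double-trace primaries are the usual normal-ordered products with derivatives (traces subtracted), whose free-theory dimensions reproduce the values quoted above:

```latex
[O_i O_j]_{n,\ell} \;\sim\; O_i\, (\partial^2)^n\,
  \partial_{\mu_1} \cdots \partial_{\mu_\ell}\, O_j \;-\; (\text{traces}),
\qquad
\Delta_{n,\ell} = \Delta_i + \Delta_j + 2n + \ell ,
```

so that the [O_1 O_1]-, [O_2 O_2]- and [O_1 O_2]-type operators have dimensions 2 + 2n + ℓ, 4 + 2n + ℓ and 3 + 2n + ℓ respectively.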
Correlation functions with degenerate conformal block expansion
where the additional factor of 1/2 guarantees canonical normalization of the final result. Combining (4.1), (4.4) and (4.6) we then write the expansion whose OPE coefficients a^{Δ,Δ}_{[O_Δ O_Δ]_{n,l}} for the generalized free field are given in appendix D.
To find the OPE coefficients of the operators in the orthogonal basis we can express the four-point functions of the generalized free field in terms of conformal blocks, with the equation for the conformal blocks G^{a,b}_{Δ_{n,l}} given in appendix D. We used the fact that the conformal blocks for operators with the same dimension and spin are identical, meaning they coincide for O^S_{n,l} and O^A_{n,l}. Comparing this expansion to the generalized free field, we see immediately that the OPE coefficients in the new basis must obey the conditions (4.10); note that the second condition constrains the coefficients a^{2,2}. Equation (4.10) does not determine the zeroth-order OPE coefficients uniquely. We have to proceed to first order in λ to obtain additional conditions to fix them. We expect the operators O^S_{n,l} and O^A_{n,l} to receive anomalous dimensions from the interaction term in the bulk. In the following we will go in detail through the process of extracting the anomalous dimensions and OPE coefficients up to second order in λ. Since this part is quite technical, we highlight the main results, which are the first- and second-order anomalous dimensions.
First order calculation The first-order contributions in λ to the four-point functions (4.9) are then given below. For n > 0, equations (4.10) and (4.15) require either γ^{(1)S}_{n,l} = 0 or γ^{(1)A}_{n,l} = 0 (4.18). From the pieces without logarithmic terms we can access information about the first-order OPE coefficients. Since we chose γ^{(1)A}_{n,l} = 0, this determines only the OPE coefficients for O^S_{n,l}, for n ≥ 1.
and since the expansion of equation (4.14a) with Δ = 2 starts at O(v²) we need a corresponding condition on the coefficients. Second order calculation At second order in λ, the contributions from the conformal block expansion are given by the expressions below, where all single-trace primaries have the same weight in the first equation. Again we compare this to the results from the bulk calculation. The terms proportional to log(v)² provide us with a consistency check between the first- and second-order calculations. We find that γ^{(1)A}_{n>0,0} = 0, consistent with the first-order calculation, while for n = 0 we find an additional constraint. From condition (4.22) it then follows, and from equation (4.23) we get, further relations among the coefficients. The expansion of equation (4.24b) starts at order O(v), where the terms at that order contain log(v) terms and terms purely polynomial in v and Y. The logarithmic terms can be absorbed by imposing equation (4.23), providing an additional consistency check between the first- and second-order calculations. The polynomial parts give conditions on γ which can only be solved if we go to the next order in λ.
The expansion of the remaining equation contains terms at this order that are purely polynomial in v and Y. These terms can be absorbed by a suitable choice of OPE coefficients. The coefficients of the log(v) terms give us access to the sum of and difference between the second-order anomalous dimensions. If l = 0 we find a result involving the harmonic number H^{(1)}_n = Σ_{m=1}^{n} 1/m. Remarkably, the anomalous dimensions for O^S_{n>0,l>0} seem to be completely degenerate for all values of n, and the dimension for O^A_{n,l>0} can be brought into a general form, where Δ̄^A_{n,l} = Δ^A_{n,l}|_{λ=0} = 2 + 2n + l. For the n = 0 trajectory we can again only make a statement about the sum.
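The harmonic numbers entering these expressions are elementary to evaluate exactly; the snippet below is a quick sanity check in pure Python and makes no assumptions about the surrounding anomalous-dimension formulas.

```python
from fractions import Fraction

def harmonic(n: int) -> Fraction:
    """H_n^(1) = sum_{m=1}^{n} 1/m, computed exactly as a rational number."""
    return sum((Fraction(1, m) for m in range(1, n + 1)), Fraction(0))

# e.g. H_5 = 1 + 1/2 + 1/3 + 1/4 + 1/5
print(harmonic(5))
```

Exact rationals avoid the floating-point noise that would otherwise obscure comparisons between the first- and second-order coefficients.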
Correlation functions with non-degenerate conformal block expansion
The remaining four-point functions involve both O_1 and O_2. Since the two-point function between these operators vanishes, the OPE will be regular. The double-trace operators appearing in the free four-point function are the operators [O_1 O_2]_{n,l} with scaling dimension Δ_{n,l} = 3 + 2n + l. Since they have odd dimensions for even spin and even dimensions for odd spin, they can be distinguished from the operators O^S_{n,l} and O^A_{n,l} in the OPE, and the conformal block expansion will be non-degenerate.
The free four-point functions can be expanded in terms of conformal blocks, where the squared OPE coefficients are given in appendix D. A major difference with respect to the OPE of the correlation functions in the previous section is the fact that now operators with odd spin l also contribute. At first order in the bulk coupling λ we can determine the first-order anomalous dimensions and OPE coefficients. Comparing with the bulk calculation gives the same result for both of the above four-point functions, showing the consistency of the calculation. At second order in λ we obtain the corresponding conformal block expansion. Again the coefficient of the log(v)² term provides us with a consistency check between the first- and second-order calculations, which our results pass. From either (4.41a) or (4.41b) we can determine the second-order anomalous dimensions. As they should, both lead to identical results, given by separate formulas for even l and for l mod 2 = 1.
The whole picture
Let us summarize the results of this rather technical section: we confirmed the proposal in [1,[26][27][28] that cosmological four-point functions can be described by a CFT dual to an effective field theory in Euclidean AdS, by explicitly describing the CFT dual to a conformally coupled scalar. In particular we found the relation γ^{(2)}_{n,2l+1>0} = γ^{(2)A}_{n,2l+2}, l > 0. (4.44) This relation seems to suggest a symmetry between the operators O^A_{n,l}, O^S_{n,l} and [O_1 O_2]_{n,l}, which could have several origins. One possible explanation is the special choice for the scaling dimension of the single-trace operators, Δ_± ∈ {1, 2}. It is easily checked that for different values of Δ_± the relative coefficients between the vertices in (2.45) change, and even new vertices of the form φ_+³φ_− are generated. The cancellation of the elliptic sector, discussed in section 3.2, no longer occurs, and we expect the integrals to have a very different structure. As we do not have a simple form for the propagator for general values of Δ, the technical implementation of the explicit loop calculation necessary to check this claim is much more involved, and we leave it for future studies. For conformal coupling in odd d the propagator simplifies to a rational function of K and the auxiliary EAdS action (3.3) is always the same. We therefore expect the general structure of the results, including the apparent symmetry, to hold for those cases as well.
On the other hand, for generic scaling dimensions of the single-trace operators, the action (2.46) still displays a symmetry due to the fact that all vertices have the same coupling constant λ, which looks fine-tuned within the general class of φ⁴ theories in EAdS. Possibly, the apparent symmetry in the anomalous dimensions of the double-trace operators is related to this.
Comparing with previous work [10,30] we can draw the following picture. Starting from the theory in the bulk we can calculate either the Bunch-Davies wave function [10] or the cosmological correlation functions, as we did here. The Bunch-Davies wavefunction is defined in (4.45), where φ_0 and π_0(x) denote the value of the bulk field and its canonically conjugate momentum at the boundary respectively. From a dS point of view, Ψ[π_0] corresponds to choosing Neumann instead of Dirichlet boundary conditions at future infinity. Performing a semiclassical expansion of (4.45) one finds that the Bunch-Davies wave function has an interpretation as a generating functional for a CFT at future infinity. A conformally coupled scalar field in dS, without self-interactions, will give rise to a direct product of CFTs of two generalized free fields, where Ψ[φ_0(x)] corresponds to the external dimension Δ = 2 while Ψ[π_0(x)] corresponds to Δ = 1. Introducing interactions in the bulk theory deforms the theory on the boundary. However, no non-trivial OPEs between O_1 and O_2 are introduced. Thus, the deformations will only affect the Δ = 1 and Δ = 2 sectors separately and the theory keeps its product structure. In [10] it was shown that the deformed CFT obtained in this way is identical to that obtained from a bulk theory in EAdS considered in [30,36,42].
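In equations, the wavefunctional referred to above is the standard path integral with Bunch-Davies conditions in the far past and a Dirichlet condition at the late-time slice (our sketch; normalisation factors left implicit, η_0 denotes the late-time cutoff):

```latex
\Psi[\phi_0; \eta_0] \;=\; \int_{\phi(\eta_0,\,x)\,=\,\phi_0(x)}
  \mathcal{D}\phi \;\, e^{\, i S[\phi]} ,
\qquad
\Psi[\pi_0] \;=\; \int \mathcal{D}\phi_0 \;
  e^{- i \int \mathrm{d}^d x\,\, \pi_0(x)\, \phi_0(x)} \;\Psi[\phi_0] ,
```

with the second expression implementing the change from Dirichlet to Neumann boundary data as a functional Fourier transform.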
The cosmological correlator CFT introduces non-trivial OPEs between O_1 and O_2. Thus, the deformed CFT loses its product structure. Additionally, a new tower of double-trace operators [O_1 O_2]_{n,l} receives anomalous dimensions due to the new mixing vertex introduced by the Schwinger-Keldysh formalism. Curiously, we noticed that the anomalous dimensions generated for these new operators are the same as the ones already found for O^S_{n,l} and O^A_{n,l}. There is, however, a relation between the CFT of the Bunch-Davies wave function and that of cosmological correlators. This can be seen by expressing a cosmological correlation function as (4.46) or, equivalently, (4.47), where in the second step we used the inverse Fourier transformation of (4.45), as explained in [1]. Analogous expressions exist for π_0(x). The CFT of cosmological correlators can therefore be understood as a functional integral over the wavefunction CFTs with all possible boundary conditions, where the mixing between the two kinds of boundary conditions is contained in the Fourier exponential. This is analogous to the mixing vertex that was introduced in section 2.1, resulting from the Schwinger-Keldysh contour.
Finally, let us note that the expression (4.46) is merely of conceptual value, since it requires exact knowledge of the wavefunctionals to perform the integral. From (4.46) it is not even clear that the result of the functional integration should preserve conformal symmetry. Computationally, the way to go is through the Schwinger-Keldysh formalism and the auxiliary EAdS action, introduced in [1] and reviewed in section 2.2. The two different ways to deform the generalized free field are schematically depicted in figure 1.
Outlook
The goal of this work was to extend the technique of mapping EAdS Witten diagrams to flat-space Feynman integrals, developed in [30], to calculate cosmological correlation functions in a de Sitter background. We achieved this goal for a conformally coupled field with quartic self-interaction, by applying the Schwinger-Keldysh formalism in the form of [1], where it was shown that the calculation can be mapped to an equivalent problem for an auxiliary EAdS action.
We succeeded in extracting the anomalous dimensions of "double trace" operators appearing in the conformal block expansion of the four-point functions up to one-loop order. As suspected, we find that the cosmological correlator CFT differs from the Bunch-Davies wave function CFT. Furthermore, there is no straightforward way to obtain the conformal data of the latter from the former.
Interestingly, we find an apparent symmetry between different operators in the OPEs. We expect this to be explained by either the special choice of the field masses, the constraints coming from the Schwinger-Keldysh contour, or a combination of both. To further investigate this phenomenon, one would have to consider different masses of the fields, which, however, is very nontrivial due to the complicated structure of the propagator in those cases.
Another way to proceed would be to test whether this symmetry still holds for higher-loop contributions. The cancellations in the loop integrals discussed in section 3.2 point to some simplifications in the corresponding calculation in EAdS. In particular, the diagrams given by multiple bubbles attached one after another are expressible in terms of single-valued multiple polylogarithms at any loop order.
One can also try to make contact with the cosmological bootstrap program by expressing our results for the two-and four-point function in momentum space (with respect to the three-dimensional space-like hypersurface). This can be of use since, to our knowledge, loop corrections have not been available in that formalism so far. It would be interesting to analyze the connection with the position space results of this work.
Another interesting avenue is to make contact with inflationary cosmology, which deviates from the de Sitter geometry; for the two-point function in momentum space, however, the violation of scale invariance manifests itself only in the spectral index. Perhaps there is a similarly tractable pattern for three- and four-point functions along the lines of [24].

term for Δ = 1, before expanding in D = 4 − 4ε. We obtain the following parametric representations. For the O(1) terms we obtain (A.4). The sub-subleading term is given by the Δ = 2 result from EAdS, which in dimensional regularisation in D = 4 − 4ε is given by (A.5), with the coefficient of the order-ε term W^{2222,4}_{0,ε} = 3(ζζ̄)² − (ζ+ζ̄)³ + 2(ζ+ζ̄)²ζζ̄ + 2(ζ+ζ̄)² − 8ζζ̄(ζ+ζ̄) + 4ζ²ζ̄² + 4ζζ̄. The leading term is given by the correlation function of the Δ = 1 scalar field in EAdS, with only the divergent part contributing.
• For the t-channel with U t and F t given in (B.5).
• For the u-channel with U u and F u given in (B.7).
Integrating over the Feynman parameters and expanding in ε we find the same structure for each diagram, where the finite part W^{1,finite} of each diagram is expressed in terms of D(ζ, ζ̄), f_1, f_2 and f_3, given in appendix C.
C Recurring expressions
In this appendix we collect the recurring expressions entering the evaluation of the Witten diagrams. These expressions are single-valued multiple polylogarithms. The evaluation of the parametric form of the Witten diagrams is done using HyperInt [38]. We use the conventions of this work for the multiple polylogarithms,

Li_{s_1,…,s_k}(x_1,…,x_k) = Σ_{p_1 > p_2 > ⋯ > p_k ≥ 1} x_1^{p_1} ⋯ x_k^{p_k} / (p_1^{s_1} ⋯ p_k^{s_k}) for |x_1 ⋯ x_i| < 1, ∀ i ∈ {1, …, k}.
(C.1) The sum s 1 + s 2 + · · · + s k is referred to as the weight of the multiple polylogarithm.
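As a concrete numerical example of these objects, the depth-one case Li_s(x) (the classical polylogarithm) can be evaluated directly from the defining series. The dilogarithm value below can be checked against the known identity Li_2(1/2) = π²/12 − (ln 2)²/2; the truncation depth of 200 terms is an arbitrary choice that is more than sufficient at x = 1/2.

```python
import math

def li(s: int, x: float, terms: int = 200) -> float:
    """Classical polylogarithm Li_s(x) = sum_{p>=1} x^p / p^s, for |x| < 1."""
    return sum(x**p / p**s for p in range(1, terms + 1))

# dilogarithm at 1/2; compare with pi^2/12 - ln(2)^2/2
val = li(2, 0.5)
```

Higher-depth multiple polylogarithms require nested sums (or iterated integrals, as HyperInt uses), but the depth-one series already illustrates the weight grading mentioned above.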
Some useful definitions and identities follow. Note that we use a slightly different normalization compared to [47].
The Prediction of Pervious Concrete Compressive Strength Based on a Convolutional Neural Network
To overcome limitations inherent in existing mechanical performance prediction models for pervious concrete, including material constraints, limited applicability, and inadequate accuracy, this study employs a deep learning approach to construct a Convolutional Neural Network (CNN) model with three convolutional modules. The primary objective of the model is to precisely predict the 28-day compressive strength of pervious concrete. Eight input variables, encompassing coarse and fine aggregate content, water content, admixture content, cement content, fly ash content, and silica fume content, were selected for the model. The dataset utilized for both model training and testing consists of 111 sample sets. To ensure the model's coverage within the practical range of pervious concrete strength and to enhance its robustness in real-world applications, an additional 12 sets of experimental data were incorporated for training and testing. The research findings indicate that, in comparison to the conventional machine learning method of Backpropagation (BP) neural networks, the developed CNN prediction model in this paper demonstrates a higher coefficient of determination, reaching 0.938, on the test dataset. The mean absolute percentage error is 9.13%, signifying that the proposed prediction model exhibits notable accuracy and universality in predicting the 28-day compressive strength of pervious concrete, regardless of the materials used in its preparation.
Introduction
Pervious concrete, acknowledged as an innovative and environmentally friendly construction material, features remarkable attributes such as excellent permeability, anti-slip properties, corrosion resistance, and durability [1,2]. With its applications ranging from urban development to environmental protection, pervious concrete offers a wide range of potential uses [3]. Among its key performance indicators, compressive strength emerges as pivotal. Precise prediction of pervious concrete's compressive strength holds paramount importance in enhancing the design and construction quality of structures employing this material.
In recent years, extensive research has been conducted on the performance indicators of pervious concrete. Many studies have employed diverse experimental materials and conducted comparative experiments with varying mix proportions to investigate the influence of different materials on the compressive strength and other performance indicators of pervious concrete under different design conditions. These investigations have covered various aspects, including different fly ash substitution rates [4][5][6], various aggregate types [7][8][9], and different cement varieties [10,11], aiming to understand the impact of these factors on pervious concrete performance. With the increasing application of pervious concrete in urban construction, researchers have begun to explore the effects of novel materials on its performance. This includes investigating the influence of different types of fibers [12][13][14] and utilizing construction waste to prepare pervious concrete to meet specific performance requirements in urban construction [15,16]. Considerations have also been given to factors such as curing conditions [17] and porosity [18,19], analyzing the variations in compressive strength in pervious concrete from multiple perspectives. In addition to macroscopic studies on the strength variation patterns of pervious concrete, some researchers have employed advanced techniques such as Scanning Electron Microscopy (SEM) to investigate factors influencing its performance at a microscopic level. For example, Kelly Patrícia Torres Vieira et al. [20] observed recycled aggregate pervious concrete samples using SEM, discovering a significant decrease in compressive strength with an increase in the proportion of recycled aggregate. Conversely, Xiaoyan Zheng et al.
[21] utilized field emission SEM and X-ray diffraction to study the mechanism of alkali-activated materials on pervious concrete performance. These microscopic studies not only provide a profound analysis of the variation mechanism of compressive strength in pervious concrete through extensive experimental data but also reveal key factors at the microscopic level, offering valuable guidance for future pervious concrete design and construction.
While previous studies have derived variation patterns of compressive strength in pervious concrete [22] with specific material configurations based on experimental materials, proposing empirical formulas, Table 1 illustrates some of these formulas and their predictive effectiveness. However, the empirical formulas may not precisely predict the compressive strength of pervious concrete with different materials and mix proportions as the scope of application widens and new materials are developed. Given the rigor of previous research and the resource consumption of comparative experiments, comprehensive consideration of pervious concrete compressive strength may be limited by unique geological conditions and technological capabilities in different regions. Different regions possess distinct construction experiences and explorations of pervious concrete preparation methods, making it challenging to obtain a prediction method applicable to the compressive strength of all pervious concrete using traditional empirical formulas. Therefore, it is of practical significance to fully utilize the results and experimental data from previous research to accurately predict the 28-day compressive strength of commonly used formulations of pervious concrete. Table 1 illustrates that with the introduction of complex methods such as logarithms, the predictive accuracy of traditional empirical formulas continues to improve. However, it is crucial to note that the accuracy of traditional empirical formulas often relies on specific mix proportions. Consequently, empirical formulas that perform well in certain studies may not be applicable to research based on different mix proportions. This limitation arises from the fact that mix proportions and material types are typically not considered when constructing these formulas. Incorporating mix proportion information significantly increases the data requirements for model construction. Additionally, traditional regression methods may struggle to handle
such large datasets effectively.
In recent years, research in machine learning and deep learning has made it feasible to predict sample indicators by integrating large historical datasets with existing sample features [28,29]. Existing studies indicate that models based on machine learning and deep learning can predict the compressive strength of concrete with relatively high accuracy [30][31][32]. For instance, predictive models constructed using BP neural networks have shown good predictive performance regarding the compressive strength of different types of concrete [33][34][35][36][37]. Additionally, other machine learning methods besides BP neural networks have demonstrated a favorable trend in predicting the 28-day compressive strength of concrete [38][39][40][41][42]. With the continuous advancement of deep learning, models for predicting the compressive strength of concrete established using CNNs and improved convolutional neural networks exhibit superior predictive performance compared to traditional machine learning methods [43][44][45][46]. This provides novel insights and methods for researching the prediction of compressive strength in pervious concrete. For example, Ziyue Zeng et al. [47] analyzed the effectiveness of various deep learning and machine learning methods in predicting the compressive strength of concrete and developed a CNN-based predictive model for concrete compressive strength. This model was trained using data on concrete compressive strength from various materials with different material types and mix proportions. Testing revealed an R² of 0.967 on the test set, confirming that the CNN-based predictive method for concrete compressive strength can be applied to concrete prepared from different materials, with better applicability compared to traditional empirical formulas.
This study presents a predictive model for the 28-day compressive strength of pervious concrete utilizing a CNN. The methodology integrates various material characteristics of pervious concrete, effectively merging existing research data and practical engineering experience to yield reliable strength predictions. By employing a deep neural network and using the content of each component as input for data training and analysis, this approach is not only operationally straightforward but also adeptly characterizes key features influencing concrete strength, including the water-to-cement ratio, sand-to-aggregate ratio, and fly ash substitution ratio.
The main contributions of this study are as follows: (i) Development of a CNN model to predict the 28-day compressive strength of pervious concrete. Comparative assessments based on goodness of fit, mean absolute percentage error, root mean square error, and mean absolute error demonstrate the superior performance of the proposed model. (ii) Integration of existing mix-proportion information into the model, primarily obtained from previous studies on the mechanical performance of pervious concrete. Using component information as input simplifies the model's operation, alleviating additional workload for engineers and enhancing its practicality. (iii) The proposed model achieves a goodness of fit greater than 0.9 on the test set, indicating its effectiveness in predicting the 28-day compressive strength of pervious concrete made from different materials. The mean absolute percentage error on the test set is less than 10%, suggesting that the CNN model's prediction errors for pervious concrete strength under the influence of different materials fall within an acceptable range, affirming the applicability of the proposed method.
This research presents a novel approach, providing theoretical support for predicting the strength of pervious concrete.The findings offer valuable insights for future in-depth studies in related fields.
Data Source and Model Testing
To ensure experimental reproducibility, this section will introduce the data sources used to train the CNN, the methods employed for acquiring experimental data, as well as the structural information of the CNN and the specifics of the training and testing processes.
Data Source
To validate the universality of the predictive model established in this study, we employed experimental data on pervious concrete made from different materials, reported in the literature, for training and testing a convolutional neural network. Table 2 provides the sources and relevant component information for these 111 sets of data. The compressive strength of pervious concrete is typically lower than that of conventional concrete, varying within the range of 2-28 MPa [22]; although advances in research have raised the compressive strength of pervious concrete, it still remains below that of conventional concrete. On this basis, we searched for pervious concrete samples within the compressive strength range of 2 to 40 MPa for model training and prediction. Additionally, because the mix compositions of pervious concrete studied in different articles vary, adding samples beyond a certain range of material types may hinder model convergence during training. We therefore selected several sets of samples with conventional mix compositions to ensure better applicability of the model at the current stage. Specifically, during data collection we gathered, for each sample group, the coarse and fine aggregate types and contents, water content, admixture content, cement types and contents, fly ash content, silica fume content, and 28-day compressive strength. Furthermore, to mitigate the potential impact of minor variations in the training or testing sets on training or prediction errors, we collected 12 additional sets of data through experiments (as described in Section 3.2) and incorporated them into the dataset. Each sample in the overall dataset was then randomly assigned to the training set or the testing set, ensuring that a different partition was used each time the model was run, to enhance its robustness. Given that this study aims to establish a method for predicting the 28-day compressive strength of pervious concrete, it should be noted that existing publicly available datasets do not cover the mix-proportion information of pervious concrete prepared from different types of materials. In such a scenario, including material types as input parameters could hinder model convergence or lead to poor predictive performance. Therefore, during model development, this study disregards restrictions on aggregate and cement types and seeks pervious concrete data with similar material preparation for model training.
The dataset obtained from the literature was split into two subsets, with 70% designated as the training set and the remaining 30% as the testing set, used for training and evaluating the CNN models. Each sample in the dataset comprises nine pieces of information: coarse and fine aggregate content, water content, admixture content, cement content, fly ash content, and silica fume content, covering eight input variables, along with the corresponding compressive strength after 28 days of curing.
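As a minimal sketch of the random 70/30 partition described above (in Python for illustration; the study itself used MATLAB, and `split_dataset` is a hypothetical name, not the authors' code):

```python
import random

def split_dataset(samples, train_frac=0.70, seed=None):
    """Randomly assign samples to a training and a testing set.

    Mirrors the 70/30 split described in the text; `seed` is optional so
    that, as in the study, each run can draw a different partition.
    """
    rng = random.Random(seed)
    shuffled = samples[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

# 123 samples: 111 from the literature plus 12 supplementary experiments
data = list(range(123))
train, test = split_dataset(data, seed=42)
```

Re-running without a fixed seed reproduces the study's practice of drawing a fresh training/testing partition on each model run.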
Model Information

Background
CNN is a standard neural network structure in deep learning; it originated in computer vision and has found widespread application. It typically consists of convolutional layers, pooling layers, batch normalization layers, activation functions, and more. The convolutional layer performs convolution operations on the output of the preceding layer to extract diverse features. This process can be represented as:

Y_{i,j}^{k} = Σ_{l=1}^{C} Σ_{m=1}^{H} Σ_{n=1}^{W} X(i·S + m, j·S + n, l) · ω(m, n, l, k) + b(k)   (1)

In the equation, Y_{i,j}^{k} represents the value at position (i, j) after the original data are processed with the k-th convolutional kernel. H represents the height, W the width, and C the number of channels of the input; X(i, j, l) represents the value at position (i, j) on channel l of the input data; S denotes the stride of the convolutional kernel, set to 1 in this study; ω(m, n, l, k) signifies the value of the k-th convolutional kernel at position (m, n) on channel l; and b(k) denotes the bias of the k-th convolutional kernel. The convolutional layer extracts features from the output of the preceding layer following these rules, as illustrated in Figure 1.

The pooling layer filters redundant components from the output of the preceding layer, reducing the computed data volume, enhancing the model's noise resistance, and preventing overfitting while preserving the original data features. Common pooling methods include max pooling and average pooling; this study uses max pooling to filter the data processed by the convolutional layer. Specifically, within each data region of the pooling-kernel size, the maximum value is selected to form the new output:

Y_{i,j}^{l} = max_{0 ≤ m, n < P} X(i·S + m, j·S + n, l)   (2)

In the equation, Y_{i,j}^{l} represents the output value at position (i, j) for the l-th channel of the pooling layer, and X(i, j, l) represents the value at position (i, j) on channel l of the input data. S denotes the stride of the pooling kernel, set to 1 in this study, and P represents the size of the pooling kernel. The pooling layer's treatment of the output of the preceding layer (with a stride of 2 for the pooling kernel) is illustrated in Figure 2.
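The convolution and max-pooling operations just described can be sketched directly from their formulas. The following Python/NumPy functions are an illustrative re-implementation (the study's models were built in MATLAB), with function names chosen here for clarity:

```python
import numpy as np

def conv2d_single(X, w, b=0.0, stride=1):
    """Valid convolution of one kernel over an (H, W, C) input, following
    Y_{i,j} = sum over (m, n, l) of X(i*S+m, j*S+n, l) * w(m, n, l) + b."""
    H, W, C = X.shape
    kh, kw, _ = w.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    Y = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = X[i*stride:i*stride+kh, j*stride:j*stride+kw, :]
            Y[i, j] = np.sum(patch * w) + b
    return Y

def max_pool(X, P=2, stride=2):
    """Max pooling: the maximum within each P x P region, per channel.
    (The text's Figure 2 example uses a stride of 2, shown here.)"""
    H, W, C = X.shape
    out_h = (H - P) // stride + 1
    out_w = (W - P) // stride + 1
    Y = np.empty((out_h, out_w, C))
    for i in range(out_h):
        for j in range(out_w):
            region = X[i*stride:i*stride+P, j*stride:j*stride+P, :]
            Y[i, j, :] = region.max(axis=(0, 1))
    return Y
```

A deep-learning framework would perform the same arithmetic with vectorized kernels; the explicit loops here simply mirror the summation and max in the formulas.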
The batch normalization layer can be viewed as a preprocessing step involving data standardization and regularization. It is frequently applied in convolutional neural networks to normalize the output of each convolutional layer, ensuring that the outputs adhere to a Gaussian distribution with consistent mean and variance. This helps prevent continuous shifts in the distribution of input data across layers, thereby enhancing the stability and efficiency of the training process. Batch normalization involves the following steps. First, the mean and variance of each data batch are calculated:

µ_B = (1/m) Σ_{i=1}^{m} X_i   (3)

σ_B² = (1/m) Σ_{i=1}^{m} (X_i − µ_B)²   (4)

In the equations, m represents the size of each batch, and X_i represents the i-th sample within each batch.
Using the mean and variance of each batch, the batch data are normalized:

X̂_i = (X_i − µ_B) / √(σ_B² + ε)   (5)

Here, ε is a small positive constant introduced to prevent division by zero in the denominator.
Finally, the normalized data are scaled and shifted to accelerate the training process while ensuring the stability of the model:

Y_i = γ·X̂_i + β   (6)

Here, γ represents the scaling parameter, and β is the translation parameter.
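The three batch-normalization steps above can be sketched in a few lines. This is an illustrative NumPy version (not the study's MATLAB implementation), with the batch mean/variance taken over the rows of X:

```python
import numpy as np

def batch_norm(X, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis (rows of X):
    normalize with the batch mean and variance, then scale by
    gamma and shift by beta."""
    mu = X.mean(axis=0)                    # batch mean
    var = X.var(axis=0)                    # batch variance
    X_hat = (X - mu) / np.sqrt(var + eps)  # normalized data
    return gamma * X_hat + beta            # scaled and shifted output
```

At inference time, frameworks substitute running estimates of the mean and variance for the per-batch statistics; that detail is omitted here.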
The activation function introduces non-linearity into the model, thereby enhancing the expressive power of the neural network. Common activation functions include the Rectified Linear Unit (ReLU) function and the Sigmoid function. In this study, the ReLU function is used as the activation function for the convolutional neural network. For each input value x, the ReLU function is computed as:

f(x) = max(0, x)   (7)

The fully connected layer achieves a linear combination of data and weights, introducing non-linearity through the activation function. This allows the model to extract more complex features from the data and undergo non-linear transformations, thereby enhancing its flexibility. The computation process is as follows:

Y = f(W·X + b)   (8)

In the equation, X represents the input data vector, W is the weight matrix, b is the bias vector, and f(x) denotes the ReLU function.
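The ReLU activation and the fully connected layer are one-liners; the sketch below (illustrative Python, not the study's code) matches f(x) = max(0, x) and Y = f(WX + b):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def dense(X, W, b):
    """Fully connected layer: linear combination W @ X + b,
    followed by the ReLU non-linearity."""
    return relu(W @ X + b)
```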
CNN Structure
CNNs have been extensively explored for predicting concrete compressive strength. For instance, Deng et al. [54] developed a neural network with a convolutional layer and a hidden layer containing four neurons, utilizing four input features to predict the strength of recycled aggregate concrete. Conversely, Zeng et al. [47] argued that as the number of input indicators increases, the CNN's structure should be adjusted accordingly; they therefore expanded the number of convolutional kernels and searched for the optimal number of neurons in the fully connected layer within the range of 4 to 128.
Considering the significant variations in raw materials among the samples in this study, the convolutional structure is enhanced accordingly. Each convolutional structure comprises a convolutional layer (with a kernel size of 3 × 1 × 1), a pooling layer (with a pooling-kernel size of 1 × 1), a batch normalization layer, and a ReLU activation layer. The collected data enter the model through the input layer. After basic training with three convolutional structures, redundant data are eliminated through dropout layers. Data fusion is then accomplished through fully connected layers. Finally, the model is trained and data prediction is performed using a regression layer. The structure of the CNN is illustrated in Figure 3.
Model Training and Testing
The model is trained using a loss function, which quantifies the disparity between predicted and actual values; the CNN is iteratively optimized during training to minimize this loss. Common loss functions include mean square error, mean absolute error, and cross-entropy. In this study, the root mean square error (RMSE) is chosen as the loss function. The error after a training iteration is computed as:

RMSE = √( (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² )   (9)

Here, N represents the total number of samples, y_i denotes the actual value of the i-th sample, and ŷ_i is the predicted value for the i-th sample.
The optimizer fine-tunes the model parameters of the CNN according to predefined criteria to minimize the loss function. In this study, stochastic gradient descent (SGD) is employed as the optimizer. For each training iteration, SGD randomly selects a subset of all samples to calculate the gradient and then updates the model parameters:

θ_{t+1} = θ_t − η·∇L(θ_t; x_i, y_i)   (10)

In the equation, θ_t represents the values of the model parameters at the t-th training iteration, η is the learning rate, L(θ_t; x_i, y_i) denotes the loss function of the sample (x_i, y_i) at the t-th training iteration, and ∇L(θ_t; x_i, y_i) is the partial derivative of the loss function with respect to the model parameters.
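The SGD update in Formula (10) is a single subtraction per parameter. A minimal illustrative sketch (plain Python, not the study's MATLAB code; the gradient would come from backpropagation on the sampled mini-batch):

```python
def sgd_step(theta, grad, lr):
    """One stochastic-gradient-descent update:
    theta_{t+1} = theta_t - eta * grad L(theta_t; x_i, y_i)."""
    return [t - lr * g for t, g in zip(theta, grad)]
```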
During the initial training phase of convolutional networks, a higher learning rate aids rapid convergence. However, as the model approaches the optimum, a high learning rate may cause oscillations during training. This study therefore adopts a strategy of dynamic learning-rate adjustment: the initial learning rate is set to 0.01 with a learning-rate decay factor of 0.5, so that after 500 training iterations the learning rate is reduced to 0.005. During training, each batch consists of 30 samples, with a maximum of 2000 training iterations. The dataset is divided into 70% for training and 30% for testing. Given that this study aims to validate and test the predictive performance of CNNs for pervious concrete compressive strength, MATLAB 2021b is used for CNN model construction, training, and testing. Owing to cost considerations, a shared data platform has not been established at present.
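The schedule above (initial rate 0.01, decay factor 0.5, first drop after 500 iterations) can be written as a small function. The text specifies only the first reduction; the assumption here that the rate keeps halving every 500 iterations is illustrative, as is the Python form (the study configured this in MATLAB):

```python
def learning_rate(iteration, lr0=0.01, decay=0.5, drop_every=500):
    """Piecewise-constant learning-rate schedule: start at lr0 and
    multiply by `decay` every `drop_every` training iterations.
    (Repeated halving beyond iteration 500 is an assumption; the text
    only states the drop from 0.01 to 0.005.)"""
    return lr0 * decay ** (iteration // drop_every)
```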
CNN Model Predictive Performance

To visually demonstrate the predictive capabilities of the CNN model for the compressive strength of pervious concrete, this study selects results from a single experiment, as illustrated in Figure 4. The data points in the graph are clustered around the diagonal line, indicating a strong alignment between the model's predictions and the observed results and a high degree of concordance between predicted and actual compressive strength values. The proximity of the data points to the diagonal line shows that the CNN model provides accurate predictions of compressive strength for pervious concrete.
Enhancements for Improved Robustness of the Model
Figure 4 illustrates the favorable predictive performance of the CNN model trained in this study for various types of pervious concrete. However, there is a lack of training data in the range of 10 to 20 MPa, which could leave the model's predictions inadequate for compressive strengths within this range. To enhance the robustness of the model and ensure that subtle variations in the material composition of pervious concrete do not significantly affect the prediction results, experimental data with measured 28-day compressive strengths in the range of 10 to 20 MPa were added. These additional data augment the training set of the CNN predictive model, improving its applicability to different types of pervious concrete.
Method for Enhancing Model Robustness

(1) Experimental Materials
This study employed various materials for the pervious concrete experiments, including Ordinary Portland Cement (OPC) of grade 42.5. The OPC has a standard consistency of 27.1%, a specific surface area of 357 m²/kg, an initial setting time of 203 min, and a final setting time of 250 min. For coarse aggregates, 5-20 mm aggregates supplied by the Jinying Hardware Business Department in Jiangning District, Nanjing, were chosen. These aggregates exhibit an apparent density of 3.0149 g/cm³, a bulk density of 3.0045 g/cm³, a compacted bulk density of 2.9246 g/cm³, and a crushing value of 3.04%.
Additionally, low-calcium Class I fly ash produced by the Nanjing Thermal Power Plant was employed, featuring a density of 2.04 g/cm³, a water demand ratio of 0.95, and a fineness (remaining on the 45 µm sieve) of 6%. To enhance concrete performance, a high-performance polycarboxylate superplasticizer from Wuhan Greelan Building Material Technology Co., Ltd., located in Wuhan City, Hubei Province, China, was introduced. This superplasticizer, a gray-white powder, has a bulk density of 350 to 450 kg/m³ and achieves a 25% to 30% reduction in mortar water content. The water used in concrete mixing adhered to the standards outlined in JGJ63-2006 for concrete water usage [55].
(2) Experimental Procedure

In this study, twelve sets of pervious concrete were prepared with different mix proportions; their specific compositions are detailed in Table 3. The pervious concrete was fabricated using the slurry-coating method, following a specific procedure. Initially, the coarse aggregates were mixed with approximately 3% water for 30 s in a mixer to thoroughly pre-wet the aggregate surfaces and enhance their adhesion to cement. Subsequently, 100% of the cement, the water, and the corresponding additives were added, and the mixture was stirred for 180 s to form a highly flowable slurry, significantly reducing friction between the aggregates. This effectively prevented the coarse aggregates from being crushed under excessive resistance while facilitating uniform coating of the aggregate surfaces by the slurry, promoting the formation of a spherical structure and enhancing the porosity of the pervious concrete. The freshly mixed concrete was then poured into cubic molds measuring 100 × 100 × 100 mm³ and compacted by vibration. After standing for 24 h, the specimens were demolded and placed in a standard curing chamber. After 28 days, compressive strength tests were conducted on the specimens according to the "Standard for Test Method of Mechanical Properties of Ordinary Concrete" (GB/T 50081-2002) [56]. The porosity and compressive strength test results of the pervious concrete specimens are listed in Table 4. Since the specimens were prepared to fill the data gap in the 10-20 MPa range, the compressive strength results were relatively close, with a standard deviation of approximately 4.63 MPa.
Predictive Performance after Model Training Enhancement
To visually showcase the predictive performance of the CNN developed in this study for estimating the compressive strength of pervious concrete with different material compositions, actual data from various sources were compared with the corresponding predicted values generated by the model. The comprehensive predictive performance is illustrated in Figure 5. Notably, the additional data incorporated in this study successfully filled the data gap within the 10-20 MPa range. Following retraining, the CNN model demonstrated favorable predictive accuracy across all sample data, with data points clustered closely around the diagonal line.
After incorporating the additional training data, the prediction performance of the CNN model on both the training and test sets is illustrated in Figure 6. The model's predicted values closely match the actual values in both sets, with minimal absolute errors. This indicates that the model, retrained with the new data, exhibits excellent predictive capability without underfitting or overfitting, and consistently achieves high accuracy in predicting the 28-day compressive strength of diverse types of pervious concrete.
To comprehensively demonstrate the predictive performance of the model, this paper employs the following metrics to further evaluate the CNN model. The coefficient of determination R², characterizing the prediction effect, is calculated by Formula (11). The coefficient of determination is a commonly used index for assessing the prediction and fitting performance of a model; by definition, R² falls within [0, 1], with a value closer to 1 indicating that the model's predicted values are closer to the actual values.

R² = 1 − [ Σ_{i=1}^{N} (Predicted_i − Actual_i)² ] / [ Σ_{i=1}^{N} (Actual_i − \overline{Actual})² ]   (11)

In Formula (11), Predicted_i represents the predicted strength of the i-th sample, Actual_i represents the measured strength of the i-th sample, and \overline{Actual} denotes the average measured strength of all samples.

In addition to the coefficient of determination, this paper assesses the predictive performance of the CNN model on pervious concrete using the Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Error (MAE). Formulas (12)-(14) present the expressions for these indicators, and Table 5 provides the values of these evaluation metrics in a single model-training experiment.

RMSE = √( (1/N) Σ_{i=1}^{N} (Predicted_i − Actual_i)² )   (12)

MAPE = (1/N) Σ_{i=1}^{N} |Predicted_i − Actual_i| / Actual_i × 100%   (13)

MAE = (1/N) Σ_{i=1}^{N} |Predicted_i − Actual_i|   (14)

In addition to the aforementioned indicators, this paper visually presents the distribution of relative errors between predicted and actual values in both the training and test sets through histograms, as illustrated in Figure 7. According to the calculations, the minimum relative error reaches 0.03% in the training set and 0.08% in the test set. Moreover, over 60% of the relative errors in the training set are less than 10%, with a similar proportion in the test set. The percentage of relative errors exceeding 20% is only 9.30% in the training set and 8.11% in the test set. These findings indicate the CNN model's ability to provide reliable and accurate estimates of compressive strength for pervious concrete with varying material compositions.
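The four evaluation metrics can be computed together from paired actual/predicted values. An illustrative NumPy sketch (the study computed these in MATLAB; `evaluate` is a name chosen here), following the standard definitions of R², RMSE, MAPE, and MAE:

```python
import numpy as np

def evaluate(actual, predicted):
    """Evaluation metrics in the style of Formulas (11)-(14):
    R^2, RMSE, MAPE (in %), and MAE."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    resid = actual - predicted
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((actual - actual.mean()) ** 2)    # total sum of squares
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAPE": 100.0 * np.mean(np.abs(resid) / actual),
        "MAE": np.mean(np.abs(resid)),
    }
```

For example, `evaluate([10, 20, 30], [12, 18, 33])` yields an MAE of 7/3 MPa and a MAPE of about 13.3%, illustrating how a few moderate errors translate into the aggregate scores reported in Table 5.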
Comparative Analysis between CNN and Other Prediction Methods
To further highlight the superiority of CNN, this study conducted a comparative analysis of the predictive performance of CNN and another widely used machine learning model, the BP neural network. Both models were trained and tested on the same dataset. Figure 8 visually illustrates the predictive capability of the BP neural network for the compressive strength of pervious concrete from various sources. The data points are generally distributed in proximity to the diagonal line, with some points lying at a certain distance from the diagonal but with no clear outliers. This suggests that the BP neural network also demonstrates acceptable predictive performance for the 28-day compressive strength of pervious concrete with diverse material compositions.
work regarding the compressive strength of pervious concrete from various sources. It is evident that the data points are generally distributed in proximity to the diagonal line, with some points lying a certain distance from the diagonal but with no clear outliers. This observation suggests that the BP neural network also demonstrates acceptable predictive performance for the 28-day compressive strength of pervious concrete with diverse material compositions. To visually compare the efficacy of CNN and the BP neural network in predicting pervious concrete compressive strength, this paper generated a comparative graph illustrating their predicted values against the actual values. To provide a more explicit comparison of the predictive capabilities for the 28-day compressive strength of pervious concrete between the CNN and BP models, this study contrasts the predicted values of CNN and BP with the actual values individually, as depicted in Figure 10. The results in the figure clearly demonstrate that both BP and CNN exhibit satisfactory predictive performance across the entire dataset. However, the data points in the CNN model are more densely clustered around the diagonal, indicating closer proximity between CNN predictions and the measured compressive strength. Additionally, the overall R² for the sample predictions in the CNN model is 0.931, while for the BP neural network it is 0.893. This implies that the overall predictive performance of CNN surpasses that of the BP neural network.

To comprehensively compare the predictive performance of CNN and BP, this study evaluated the RMSE, MAE, and MAPE metrics between the two models. Figure 11 illustrates the comparative results of CNN and BP based on these metrics. The findings reveal that CNN exhibits smaller RMSE, MAE, and MAPE in comparison to BP, indicating that the average differences between predicted values and actual values are reduced in the CNN model. Moreover, the MAPE for the BP test set is 14.40%, surpassing the desirable threshold of 10%. This suggests that while the BP neural network demonstrates reasonable predictive performance for most pervious concrete samples, it may exhibit notable deviations from actual values for specific mix ratios, presenting challenges in practical applications. In contrast, CNN demonstrates smaller error metrics, with all MAPE values falling below the 10% threshold, signifying that its predictive performance is within an optimal range. Therefore, CNN provides reliable predictions for the 28-day compressive strength of pervious concrete with various material compositions.

The results of this study enable the prediction of the 28-day compressive strength of various types of pervious concrete using existing pervious concrete preparation experience, better meeting the needs of practical
construction. However, due to the limited nature of the dataset, factors such as aggregate size, aggregate type, cement grade, and curing conditions were not included as input parameters. The diversity of pervious concrete types may result in suboptimal performance of the predictive model constructed in this study. Therefore, in future work, to build CNN predictive models more suitable for different materials, it is essential to fully utilize existing experimental data, incorporate material information and preparation conditions of pervious concrete into the model's input parameters, and collect sufficient data to ensure model convergence. Additionally, with the increase in input parameter variables and the significant expansion of the dataset, determining specific values for hyperparameters such as the learning rate and learning rate decay factor will become a complex issue. It will be necessary to develop appropriate algorithms to partition a portion of the overall dataset for estimating these hyperparameter values, enabling the model to converge more quickly and thereby improve prediction accuracy and applicability.
Figure 1. Schematic Diagram of the Convolution Process.
Figure 2. Schematic Diagram of the Pooling Process.
Figure 3. The Structure of the CNN.
Figure 4. Relationship between Predicted and Actual Values in CNN Training and Testing Sets.

Buildings 2024, 14, x FOR PEER REVIEW

The vertical axis represents the 28-day compressive strength predicted by the CNN model, while the horizontal axis represents the actual compressive strength data obtained through literature review and experimental testing. This figure enables a direct visual comparison of the CNN model's predictions on the training and testing sets. The data points in the graph are primarily clustered around the diagonal line, indicating a strong alignment between the model's predictions and the observed results. This clustering pattern suggests a high degree of concordance between the predicted and actual compressive strength values. The model demonstrates notable accuracy in forecasting compressive strength for pervious concrete, as evidenced by the proximity of the data points to the diagonal line. This visual analysis underscores the CNN model's ability to provide accurate predictions for compressive strength.

Figure 4 illustrates the favorable predictive performance of the CNN model trained in this study for various types of pervious concrete. However, there is a lack of training data in the range of 10 to 20 MPa, which could lead to inadequacies in the model's predictions for the compressive strength of pervious concrete within this range. To enhance the robustness of the model, ensuring that subtle variations in the material composition of pervious concrete do not significantly impact the prediction results, experimental data on the measured 28-day compressive strength in the range of 10 to 20 MPa will be added. This additional data aims to augment the training set of the CNN predictive model.
Figure 5. Prediction Performance of Samples from Different Sources in the Model: (a) Training Set; (b) Test Set.

After incorporating additional training data, the prediction performance of the CNN model on both the training and test sets is illustrated in Figure 6. It is evident that the model's predicted values closely match the actual values in both the test and training sets, with minimal absolute errors. This observation signifies that the model, retrained with the inclusion of new data, showcases excellent predictive capabilities without encountering underfitting or overfitting issues. The model consistently achieves high accuracy in predicting the 28-day compressive strength of diverse types of pervious concrete.
Figure 6. Comparison between Predicted and Actual Values of the Improved CNN Model: (a) Training Set; (b) Test Set.
Figure 7. Histogram of Relative Errors in the Training Set and Test Set.
Figure 8. Predictive Performance of BP Neural Network for Data from Different Sources.
Figure 9 provides an intuitive representation of the difference in predictive performance between the BP neural network model and CNN for the 28-day compressive strength of pervious concrete. Notably, the predicted values of both CNN and BP cluster around the actual values. However, the predicted values of the CNN model are in closer proximity to the real values, visually indicating superior predictive performance. This visual analysis underscores that the CNN model offers greater accuracy and reliability in predicting the compressive strength of pervious concrete compared to the BP neural network.
Figure 9. Comparison between Predicted Values of CNN and BP and Actual Values.
Figure 10. Predictive Performance of BP Neural Network and CNN.

Figure 11. Comprehensive Comparison of CNN and BP Metrics.
This paper introduces a CNN model designed for predicting the 28-day compressive strength of pervious concrete, utilizing eight mix proportion parameters as input variables. The model undergoes training and testing on a dataset comprising 123 samples from literature and experiments. The key findings of this study are as follows:

(I) The proposed CNN model showcased remarkable accuracy. Through multiple experiments, the CNN model achieved an R² of 0.938 and a MAPE of 9.13% on the test set, indicating acceptable prediction errors and robust model stability. This underscores the CNN model's capability to precisely predict the 28-day compressive strength of pervious concrete, making it adaptable to diverse material compositions.

(II) The predictive model presented in this paper demonstrates enhanced stability and outperforms traditional methods. In comparison to the BP neural network trained and tested on the same dataset, the CNN model exhibits considerably lower prediction error metrics (RMSE, MAE, and MAPE) and notably higher R², signifying superior predictive performance and stability compared to traditional approaches.

(III) This study supplemented the model training with experimental data covering the 10-20 MPa compressive strength range for pervious concrete, ensuring coverage of the common compressive strength spectrum of pervious concrete. The test set results indicate that the model augmented with experimental data performs well in predicting data obtained from different literature sources as well as data acquired through experiments.
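The four metrics used throughout this comparison (R², RMSE, MAE, MAPE) can be computed in a few lines. The sketch below is a generic illustration running on hypothetical strength values, not the paper's dataset:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Evaluation metrics of the kind used to compare the CNN and BP models:
    R^2, RMSE, MAE, and MAPE (in percent)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAE": np.mean(np.abs(resid)),
        "MAPE": 100.0 * np.mean(np.abs(resid / y_true)),
    }

# Hypothetical measured vs predicted 28-day strengths (MPa), for illustration only
actual = [25.0, 30.0, 18.0, 40.0]
predicted = [24.0, 31.5, 17.5, 38.0]
m = regression_metrics(actual, predicted)
```

A MAPE below the 10% threshold mentioned in the text would correspond to `m["MAPE"] < 10.0` on the test set.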
Table 1. Partial Predictive Models for Compressive Strength of Pervious Concrete and Their Performance.
Table 2. Source and Information of the Dataset.
Table 4. Pervious Concrete Porosity and Compressive Strength Test Results.
Table 5. Values of evaluation indicators in a single model training experiment. | 10,782.8 | 2024-03-27T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Equal sums of quartics (in context with the Richmond equation)
Consider the below-mentioned equation (1). In section (A) we consider solutions with a condition on the coefficients of equation (1), namely that the product (abcd) is a square. In section (B) we consider the coefficients of equation (1) with the product of coefficients (abcd) not equal to a square. Historically, equation (1) was studied by Ajai Choudhry, A. Bremner, and M. Ulas (ref. 5) in 2014; Richmond (refs. 1 & 2) also did some groundwork in 1944 and 1948. This paper goes a step further by finding many parametric solutions and new small numerical solutions through the use of unique identities. The identities are unique because they are of mixed powers (combinations of quartic and quadratic variables), which are then converted to identities of degree four only. As an added bonus, in section (B) we give a few quartic ( ) numerical solutions for ( ) obtained by elliptical means. A table of numerical solutions for the (4-1-n) equation, arrived at by brute-force computer search, is also given (ref. 7).
We consider the equation below, where, in equation (1), (a, b, c, d) = (p, q, -p, -q). To obtain a rational solution of equation (2) for t, we have to find a rational solution of (3).
( ) Thus we obtain a parametric solution.
Furthermore, we can find other parametric solutions by using this method.
Hence by repeating this process, we can obtain infinitely many integer solutions for equation (1).
Example: if m is written as (p/q), the above can be rewritten as below: ( ) ( ) m, n are arbitrary.
By parameterizing the first equation and substituting the result into the second equation, we obtain the quartic equation below, which requires w to be a square. Since (t, z) ( ) we get ( ) ( ), so we parameterize for (t, z). We get: Hence we get: This quartic equation (4) is birationally equivalent to the elliptic curve below.
( ) This point P is of infinite order, and the multiples nP, n = 2, 3, ...give infinitely many points.
This quartic equation (4) above has infinitely many parametric solutions one of which is shown below.
Hence we can obtain infinitely many integer solutions for equation (1).
has infinitely many integer solutions. d, m are arbitrary.
So we look for integer solutions. By parameterizing the first equation and substituting the result into the second equation, we obtain quartic equation (4) below. This quartic equation (4) is birationally equivalent to the elliptic curve below.
This point P is of infinite order, and the multiples nP, n = 2,3, ... give infinitely many points.
This quartic equation has infinitely many parametric solutions below.
Hence we can obtain infinitely many integer solutions for equation (1).
We show above Diophantine equation has infinitely many integer solutions. d,m are arbitrary.
We use an identity. By parameterizing the first equation and substituting the result into the second equation, we obtain the quartic equation below.
This quartic equation is bi-rationally equivalent to an elliptic curve below.
This point P is of infinite order, and the multiples nP, n = 2,3, ...give infinitely many points.
This quartic equation has infinitely many parametric solutions below. n=2, Hence we can obtain infinitely many integer solutions for equation (1).
Example: Case A:
Substituting the above values in equation (1), we get:
Above is equivalent to: Hence we get: The above equation (2) is parameterized as: And as, & we get the numerical solution: Therefore we take, Thus we have, Note: readers may be interested in the table of results below, which were arrived at by the elliptical method. We have not attempted to parameterize the family of equations below, but others can make an attempt, since knowing a numerical solution is helpful towards parameterization. w 19 5 13 2 23 9 1 26 73 23 39 22 13 3 11 2 17 7 1 22 9 131 157 92 1 31 41 58 17 7 15 10
Hence we can obtain infinitely many integer solutions for equation (1).
Numerical example: From the above we deduce: There are more (4-1-n) numerical solutions derived by the elliptical method for (n < 50), shown below: We use an identity: By parameterizing the second equation and substituting the result into the first equation, we obtain the quartic equation below.
This quartic equation is bi-rationally equivalent to an elliptic curve below.
Hence we get 2P(X,Y), This point P is of infinite order, and the multiples mP, m = 2,3, ...give infinitely many points.
This quartic equation has infinitely many parametric solutions below.
Hence we can obtain infinitely many integer solutions for equation (1).
Example:
Numerical example: We show the Diophantine equation ( ) has infinitely many integer solutions; b is arbitrary.
This quartic equation is bi-rationally equivalent to an elliptic curve below.
The corresponding point is ( ). This point P is of infinite order, and the multiples mP, m = 2, 3, ... give infinitely many points.
This quartic equation has infinitely many parametric solutions below. .
Hence we can obtain infinitely many integer solutions for equation (1). Example: ( ) has infinitely many integer solutions; d is arbitrary.
So we look for integer solutions. This quartic equation is birationally equivalent to the elliptic curve below.
This point P is of infinite order, and the multiples nP, n = 2,3, ...give infinitely many points.
This quartic equation has infinitely many parametric solutions below. For n = 2: Hence we can obtain infinitely many integer solutions for equation (1). Example: for d = 1, this is equivalent to the (4-1-25) equation: We show the Diophantine equation above has infinitely many integer solutions; m is arbitrary.
We use an identity: So we look for integer solutions. This quartic equation is birationally equivalent to the elliptic curve below.
This quartic equation has infinitely many parametric solutions below. For n = 2, ( ) Hence we can obtain infinitely many integer solutions for equation (1). Example: For m = 2, multiplying throughout by two we get: After removing common factors we get: Also see Table 4 at the end of this paper for results arrived at by brute force for equation ( ), for (abcd) not equal to a square and k = (a+b+c+d) with (k < 30).
"Mathematics"
] |
Tight Constraints on the Excess Radio Background at $z = 9.1$ from LOFAR
The ARCADE2 and LWA1 experiments have claimed an excess over the Cosmic Microwave Background (CMB) at low radio frequencies. If the cosmological high-redshift contribution to this radio background is between 0.1% and 22% of the CMB at $1.42\,$GHz, it could explain the tentative EDGES Low-Band detection of the anomalously deep absorption in the 21-cm signal of neutral hydrogen (Fialkov & Barkana 2019). We use the upper limit on the 21-cm signal from the Epoch of Reionization ($z=9.1$) based on $141\,$hours of observations with LOFAR (Mertens et al. 2020) to evaluate the contribution of the high-redshift Universe to the detected radio background. Marginalizing over astrophysical properties of star-forming halos, we find (at 68% C.L.) that the cosmological radio background can be at most 0.7% of the CMB at $1.42\,$GHz. This limit rules out a strong contribution of the high-redshift Universe to the ARCADE2 and LWA1 measurements. Even though LOFAR places limits on the extra radio background, an excess of $0.1-0.7$% over the CMB (at $1.42\,$GHz) is still allowed and could explain the EDGES Low-Band detection. We also constrain the thermal and ionization state of the gas at $z = 9.1$, and put limits on the properties of the first star-forming objects. We find that, in agreement with the limits from EDGES High-Band data, LOFAR data disfavour scenarios with inefficient X-ray sources and cases where the Universe was ionized by massive halos only.
do not constrain properties of the first population of star-forming objects such as their star formation efficiency, feedback mechanisms that regulated primordial star formation, and the properties of the first sources of heat (e.g., X-ray binaries). These properties can be probed using low-frequency radio observations of the redshifted 21-cm signal of neutral hydrogen (e.g., Pober et al. 2014; Greig et al. 2016; Singh et al. 2017; Monsalve et al. 2018, 2019). The 21-cm signal is produced by atomic hydrogen in the IGM. The hyperfine splitting of the lowest energy level of a hydrogen atom gives rise to the rest-frame ν21 = 1.42 GHz radio signal with an equivalent wavelength of about 21 cm (see Barkana 2016, for a recent review). Owing to its dependence on the underlying astrophysics and cosmology, this signal is a powerful tool to characterise the formation and the evolution of the first populations of astrophysical sources and, potentially, properties of dark matter, across cosmic time. Because the 21-cm signal is measured against the diffuse radio background, usually assumed to be only the Cosmic Microwave Background (CMB), this signal can also be used to constrain the properties of any excess background radiation at low radio frequencies.
Recently, a detection of the global 21-cm signal from z ∼ 17 was reported by the EDGES collaboration. The reported signal significantly deviates from standard astrophysical models (e.g., Cohen et al. 2017, who show a large set of viable 21-cm global signals varying astrophysical parameters in the broadest possible range) and concerns about the signal being of cosmological origin have therefore been expressed (Hills et al. 2018; Sims & Pober 2019; Singh & Subrahmanyan 2019; Spinelli et al. 2019; Bradley et al. 2019). Despite these concerns, several theories have been proposed to explain the stronger than expected absorption. Over-cooling of hydrogen gas by dark matter (Barkana 2018) has been proposed as a possible solution. Alternatively, the existence of a new component of radio background at low radio frequencies in addition to the CMB could also lead to a deeper 21-cm absorption feature due to the stronger contrast between the temperatures of the background and the gas (e.g., Bowman et al. 2018; Feng & Holder 2018). Astrophysical sources such as supernovae or accreting supermassive black holes (Biermann et al. 2014; Ewall-Wice et al. 2018; Jana & Nath 2018; Mirocha & Furlanetto 2019) could produce such an extra radio background. However, these sources would need to be several orders of magnitude more efficient in producing synchrotron radiation than corresponding sources at low redshifts, which is not very likely. An extra radio background can also be created by other excess-background agents such as active neutrinos (Chianese et al. 2018), dark matter (Fraser et al. 2018; Pospelov et al. 2018) or superconducting cosmic strings (Brandenberger et al. 2019). Interestingly, an excess radio background at low radio frequencies was claimed by the ARCADE2 collaboration at 3 − 90 GHz (Fixsen et al. 2011) as well as by LWA1 at 40 − 80 MHz (Dowell & Taylor 2018).
Specifically, the latter measurement shows that the excess can be fitted by a power law with a spectral index of −2.58 ± 0.05 and a brightness temperature of 603^{+102}_{−92} mK at the reference frequency of 1.42 GHz. However, the nature of this excess is still debated (Subrahmanyan & Cowsik 2013).
Apart from the EDGES Low-Band, several other global signal experiments report upper limits: the Large-Aperture Experiment to Detect the Dark Ages (LEDA, Price et al. 2018) reported an upper limit of 890 mK on the amplitude of the 21-cm signal at z ∼ 20 (Bernardi et al. 2016); at lower redshifts both the EDGES High-Band collaboration (z ∼ 6.5 − 14.8, Monsalve et al. 2017) and the Shaped Antenna measurement of the background RAdio Spectrum (SARAS, z ∼ 6.1 − 11.9, Singh et al. 2017) reported non-detections. The data collected by the latter two experiments rule out scenarios with negligible X-ray heating, placing limits on the properties of the first star-forming objects and X-ray sources (Monsalve et al. 2018, 2019; Singh et al. 2017, 2018). In parallel, interferometric radio arrays are placing upper limits on the fluctuations of the 21-cm signal, including the Low-Frequency Array (LOFAR, Patil et al. 2017; Gehlot et al. 2019; Mertens et al. 2020). The recently reported LOFAR measurements (Mertens et al. 2020) are based on 141 hours of observations and are currently the tightest upper limits on the 21-cm power spectrum from z = 9.1, allowing us to rule out scenarios with a cold IGM for models with the CMB as the background radiation (Ghara et al. 2020). In this paper we use these upper limits to constrain any excess radio background. We also derive limits on astrophysical parameters and the properties of the IGM with and without the excess radio contribution. This paper is structured as follows: In Section 2, we describe the simulations used to generate the mock data sets of the 21-cm signals. In Section 3 we describe the mock data set and the ranges of parameters probed. In Section 4, we discuss the statistical analysis employed to constrain the model parameters. In Section 5, we report our constraints on the amplitude of the excess radio background and compare it to the values that could explain the EDGES Low-Band detection.
We also place limits on the thermal and ionization state of the gas at z = 9.1, and on the properties of the first star-forming objects. We provide a qualitative comparison with the results of Ghara et al. (2020) in Section 6. Finally, we conclude in Section 7.
SIMULATED 21-CM SIGNAL
The 21-cm signal of neutral hydrogen observed against a background radiation of brightness temperature T_rad (at 1.42 GHz at redshift z) depends on the processes of cosmic heating and ionization. The brightness temperature of the 21-cm signal is given by

T_21 = (T_S − T_rad) / (1 + z) × (1 − e^{−τ21}),

where T_S is the spin temperature of the transition, which at Cosmic-Dawn redshifts is coupled to the kinetic temperature of the gas, T_gas, through Ly-α photons produced by stellar sources (Wouthuysen 1952; Field 1958). The value of τ21 is the optical depth at redshift z, given by the standard expression

τ21 = 3 h_P c³ A_10 n_H / [32π k_B T_S ν21² (dv/dr)],

where A_10 is the spontaneous emission coefficient of the 21-cm transition, dv/dr = H(z)/(1 + z) is the gradient of the line-of-sight component of the comoving velocity field, H(z) is the Hubble parameter at z, and n_H is the neutral hydrogen number density at z, which depends on the ionization fraction and is driven by both ultraviolet and X-ray photons. The spin temperature encodes complex astrophysical dependencies and can be written as

T_S^{−1} = (T_rad^{−1} + x_α T_c^{−1} + x_C T_gas^{−1}) / (1 + x_α + x_C),

where x_C is the collisional coupling coefficient, x_α is the Wouthuysen-Field coupling coefficient (Wouthuysen 1952; Field 1958), and T_c is the colour temperature of the Ly-α radiation, which is closely coupled to T_gas. Both x_C and x_α depend on the value of T_rad:

x_α = 4 P_α T_* / (27 A_10 T_rad),

with P_α being the total rate (per atom) at which Ly-α photons are scattered within the gas and T_* = 0.068 K the effective temperature of the 21-cm transition. The collisional coupling coefficient is

x_C = (T_* / T_rad) Σ_i n_i κ^i_10 / A_10,

where κ^i_10 is the rate coefficient for spin de-excitation in collisions with species of type i of density n_i, and we sum over species i (see e.g., Barkana 2016, for a recent review).
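As a quick numerical illustration of the standard relation T21 = (T_S − T_rad)(1 − e^{−τ21})/(1 + z), the sketch below (with purely illustrative values, not simulation outputs) shows how a spin temperature below the background temperature produces an absorption signal:

```python
import math

def t21_mk(T_S, T_rad, tau21, z):
    """Observed 21-cm brightness temperature in mK, from the standard relation
    T21 = (T_S - T_rad) / (1 + z) * (1 - exp(-tau21))."""
    return 1000.0 * (T_S - T_rad) / (1.0 + z) * (1.0 - math.exp(-tau21))

# Illustrative values only: cold gas seen against a CMB-only background at z = 9.1
z = 9.1
T_cmb = 2.725 * (1.0 + z)             # CMB temperature at redshift z [K]
signal = t21_mk(5.0, T_cmb, 0.01, z)  # T_S < T_rad -> absorption (negative)
```

A stronger background (larger T_rad, as in the excess-radio models discussed below) deepens the absorption for a fixed gas temperature.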
Radio Background
Usually the CMB is assumed to be the sole contributor to the background radiation and T_rad = T_CMB(1 + z), where T_CMB is the present-day value of the CMB temperature, 2.725 K. However, as was mentioned in Section 1, the anomalously strong EDGES Low-Band signal has encouraged the development of alternative models in which the radio background is enhanced (e.g., Bowman et al. 2018; Feng & Holder 2018). Here we adopt a phenomenological global extra radio background with a synchrotron spectrum, in agreement with the observations by LWA1. The total radio background at redshift z is then given by

T_rad = T_CMB (1 + z) [1 + A_r (ν_obs / 78 MHz)^β],

where ν_obs is the observed frequency, A_r is the amplitude defined relative to the CMB temperature and calculated at the reference frequency of 78 MHz (which is the centre of the absorption trough reported by the EDGES collaboration) and β = −2.6 is the spectral index (in agreement with the LWA1 observation). We vary the value of A_r between 0 and 400 at 78 MHz, with the upper limit being close to the LWA1 limit and corresponding to 21% of the CMB at 1.42 GHz. All values of A_r between 1.9 (equivalent to 0.1% of the CMB at 1.42 GHz) and 400 were shown to explain the EDGES Low detection (for a tuned set of astrophysical parameters; see more details of the modelling in Fialkov & Barkana 2019).
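Because A_r is defined relative to the CMB at 78 MHz with a fixed spectral index β = −2.6, the equivalents quoted in the text at 1.42 GHz (21% of the CMB for A_r = 400, 0.1% for A_r = 1.9) follow from a one-line power-law scaling:

```python
T_CMB = 2.725    # present-day CMB temperature [K]
NU_REF = 78.0    # reference frequency at which A_r is defined [MHz]
BETA = -2.6      # synchrotron spectral index

def excess_fraction(A_r, nu_mhz):
    """Excess radio background as a fraction of the CMB at frequency nu_mhz,
    given the amplitude A_r defined relative to the CMB at 78 MHz."""
    return A_r * (nu_mhz / NU_REF) ** BETA

# The two limits quoted in the text, scaled to 1.42 GHz (1420 MHz)
upper = excess_fraction(400.0, 1420.0)  # ~0.21, i.e. 21% of the CMB
lower = excess_fraction(1.9, 1420.0)    # ~0.001, i.e. 0.1% of the CMB
```

The corresponding excess brightness temperature at 1.42 GHz, `upper * T_CMB`, is roughly 0.58 K, close to the LWA1 value of 603 mK quoted earlier, which is why A_r = 400 is described as being close to the LWA1 limit.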
Astrophysical Parameters
Astrophysical processes affect the 21-cm signal by regulating the thermal and ionization states of the gas. In our modelling, we account for the effect of radiation (Ly-α, Lyman-Werner, X-ray and ionizing radiation) produced by stars and stellar remnants on the 21-cm signal. The process of star formation is parameterized by two parameters. The first parameter is the star formation efficiency, f_*, defined as the fraction of gas in halos that is converted into stars, which we vary in the range f_* = 0.1% to 50%. The second parameter is the circular velocity of dark matter halos, V_c, which is varied between 4.2 km s^−1 (the molecular hydrogen cooling limit, corresponding to a dark matter halo mass of M_h = 1.5 × 10^6 M_⊙ at z = 9.1) and 100 km s^−1 (M_h = 2 × 10^10 M_⊙ at z = 9.1). The high values of V_c implicitly take into account various chemical and mechanical feedback effects (e.g., supernova feedback, which is expected to expel gas from small halos, thus raising the threshold mass for star formation) which we do not include explicitly. Cooling of gas via the molecular hydrogen cooling channel, and subsequent star formation, happens in small halos of circular velocity 4.2 km s^−1 < V_c < 16.5 km s^−1 (M_h ∼ 10^5 − 10^7 M_⊙). The abundance of molecular hydrogen is suppressed by Lyman-Werner (LW) radiation (Haiman et al. 1997; Fialkov et al. 2013). Additional inhomogeneous suppression is introduced by the relative velocity between dark matter and baryons, v_bc (Tseliakhovich & Hirata 2010), which imprints the pattern of Baryon Acoustic Oscillations (BAO) in the 21-cm signal (Dalal et al. 2010; Maio et al. 2011; Visbal et al. 2012). Higher-mass halos (V_c > 16.5 km s^−1) form stars owing to atomic hydrogen cooling and are sensitive to neither the LW feedback nor to the effect of v_bc, but are affected by photoheating feedback (Sobacchi & Mesinger 2013; Cohen et al. 2016; Sullivan et al. 2018).
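The quoted circular-velocity/halo-mass pairs are consistent with the standard M_h ∝ V_c³ scaling at fixed redshift. The sketch below is anchored purely to the pair quoted in the text (V_c = 4.2 km s^−1 ↔ M_h = 1.5 × 10^6 M_⊙ at z = 9.1); it is a consistency check, not the simulation's internal conversion:

```python
def halo_mass_msun(v_c_kms):
    """Halo mass [M_sun] at z = 9.1 from circular velocity [km/s], using
    M_h proportional to V_c^3, anchored to the text's quoted pair
    V_c = 4.2 km/s <-> M_h = 1.5e6 M_sun."""
    return 1.5e6 * (v_c_kms / 4.2) ** 3

m_atomic = halo_mass_msun(16.5)  # atomic-cooling threshold
m_max = halo_mass_msun(100.0)    # upper end of the V_c range explored
```

Evaluating at V_c = 100 km s^−1 reproduces the quoted M_h = 2 × 10^10 M_⊙ to within a few percent.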
X-ray sources re-heat and mildly re-ionise the gas after the period of adiabatic cooling. We assume a hard X-ray spectral energy distribution (SED) typical of a population of high-redshift X-ray binaries (a complex function of X-ray energy with a peak at ∼2 keV, adopted from Fragos et al. 2013). We vary the normalisation constant f_X, defined via the ratio of the total X-ray luminosity to the star formation rate relative to a fiducial local calibration. A value f_X = 1 yields L_X normalized to observations of X-ray binaries found in low-metallicity regions today (see Fragos et al. 2013). Here we explore the wide range f_X = 10^−6 − 100. Values of f_X ≳ 100 are unlikely, as such a population would saturate the unresolved X-ray background observed by the Chandra X-ray Observatory, while values f_X ≲ 10^−6 contribute negligible X-ray heating.
In our simulations the effects of ionizing radiation (ultraviolet radiation from stars) are defined by two parameters: the mean free path of ionizing photons, R_mfp = 10 − 70 comoving Mpc, and the ionizing efficiency of sources, ζ, which is tuned to yield a CMB optical depth τ in the range between 0.045 and 0.1. For a fixed set of astrophysical parameters, either ζ or τ can be used as a parameter (for more details on the relation between ζ and τ see Cohen et al. 2019a). Here we choose to use the latter as it is directly probed by CMB experiments. The latest values of τ measured by the Planck satellite are τ = 0.054 ± 0.007 (e.g., Planck Collaboration et al. 2018). However, because in this paper we focus on the constraints imposed by the LOFAR upper limits, we explore a broader range of values (0.045 − 0.1), including higher values of τ which can be constrained by the LOFAR data.
MOCK DATA SETS AND PARAMETER SETS
We use a hybrid computational framework (similar to, but independent of, the publicly available 21cmFAST code of Mesinger et al. 2011) to estimate the evolution of the large-scale 21-cm signal. The code takes into account the effects of inhomogeneous star formation, thermal and ionization histories. Processes on scales below the resolution scale of 3 comoving Mpc (such as star formation, LW and photoheating feedback effects, and the effects of v_bc) are implemented using subgrid prescriptions. Radiation produced by stars and stellar remnants is propagated accounting for the effects of redshift on the energy of the photons and absorption in the IGM. Reionization is implemented using an excursion set formalism (Furlanetto et al. 2004). Astrophysical parameters (f_*, V_c, f_X, τ, R_mfp, A_r, and the spectral energy distribution of X-ray photons taken from Fragos et al. 2013) are received as an input. The code generates cubes of the 21-cm signal at every redshift along with the temperature of the neutral IGM, the ionization state, and the intensity of the Ly-α and LW backgrounds (see more details in Visbal et al. 2012; Cohen et al. 2017). The comoving volume of each simulation box is 384³ Mpc³. The simulation is run from z = 60 to z = 6. Using the same set of initial conditions for the distribution and velocities of dark matter and baryons (the fiducial IC), and then varying the astrophysical and background parameters, we construct a total of 20762 models, 6515 of which have a boosted radio background with respect to the CMB and are referred to as the excess-background models, while the remainder are reference standard models with A_r = 0 (used as a separate data set).
For each simulation, we calculate the values of the spherically averaged binned 21-cm power spectra P(kc), where kc is the centre of a wave-number bin chosen by Mertens et al. (2020). The power spectrum is averaged over redshifts z = 8.7-9.6 (to account for the LOFAR bandwidth) and binned over wave-numbers in agreement with the LOFAR observational setup (see Table 1 for the details of the wave-number binning). From each simulation, we also extract: the mean temperature of the gas in neutral regions at z = 9.1, Tgas; the mean ionization fraction at z = 9.1, x̄HII; the redshift at which the ionization fraction (of volume) is 50%, zre; and the duration of reionization ∆z, defined as the redshift range between the epochs when the mean ionization fraction was 90% and 10%.

Table 1. Summary of LOFAR measurements, taken directly from Table 4 of Mertens et al. (2020). From left to right: central mode of each bin in units of h Mpc⁻¹, the extent of each k-bin, the spherically averaged power spectrum in each bin, and the 1σ error on the binned power spectrum.
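The spherically averaged, binned power spectrum of a simulated cube can be sketched as follows. This is a minimal stand-in assuming a periodic box and one common FFT normalization; the actual pipeline, bin edges and redshift averaging follow Mertens et al. (2020):

```python
import numpy as np

def binned_power_spectrum(field, box_size, bin_edges):
    """Spherically averaged power spectrum of a 3D cube, averaged in
    wave-number bins. `box_size` is the comoving side length; the
    normalization P(k) = |delta_k|^2 V / N^6 is one common convention."""
    n = field.shape[0]
    delta = field - field.mean()
    fk = np.fft.fftn(delta)
    pk3d = (np.abs(fk) ** 2) * (box_size ** 3) / n ** 6
    # Wave-number magnitude for every FFT mode.
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    pk = pk3d.ravel()
    centres, means = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (kmag >= lo) & (kmag < hi)
        centres.append(0.5 * (lo + hi))
        means.append(pk[mask].mean() if mask.any() else 0.0)
    return np.array(centres), np.array(means)
```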
Sample Variance
The lowest wave-number observed by Mertens et al. (2020) with LOFAR is kc = 0.075 h Mpc⁻¹, which corresponds to a scale of ∼125 comoving Mpc and is a significant fraction of the size of our simulation box (384 Mpc). The power spectrum in the lowest k-bin is therefore subject to strong statistical fluctuations due to sample variance, as is shown in Figure 1. For the set of initial conditions that we used to generate the entire data set (our fiducial IC), the bin-averaged power spectrum in the lowest k-bin is 1.1σ away from the mean calculated over 18 realizations. We correct for this systematic offset by introducing a bias factor. We perform a new suite of simulations to systematically estimate the effect of sample variance. For each of 360 selected combinations of astrophysical parameters, 10 simulations with different initial conditions, including the fiducial set, were performed. The bias in the binned power spectrum was subsequently calculated at every k-bin as the ratio of the binned power spectrum averaged over the 10 realizations to the one derived from the fiducial set:

b_SV(kc) = ⟨P(kc)⟩₁₀ / P_fiducial(kc).

We find that at z = 9.1 (close to the mid-point of reionization for the models that can be constrained by LOFAR in the standard case) the bias varies as a function of the reionization parameters τ and R_mfp, while it has a very weak dependence on the rest of the parameters (Vc, f*, fX and Ar). We jointly fit the bias as a second-order polynomial in τ times a linear function of R_mfp. Because the entire data set described in Section 3 was created using the fiducial IC set, we apply the corresponding parameter-dependent and kc-dependent bias factor to all the simulated results to compensate for the effect of sample variance. Multiplying by the bias factor is essentially equivalent to averaging over 10 simulations. Furthermore, we fit the variation in the simulated power in each bin (σSV,sim(kc), blue error bars in Figure 1) as a function of the astrophysical parameters.
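The joint fit of the bias — a second-order polynomial in τ multiplied by a linear function of R_mfp — can be posed as a linear least-squares problem over the six product-basis terms. This is a sketch of that fitting step; the expanded basis and function names are ours:

```python
import numpy as np

def _basis(tau, rmfp):
    """Product basis of (1, tau, tau^2) x (1, R_mfp), expanded."""
    return np.column_stack([np.ones_like(tau), tau, tau**2,
                            rmfp, tau * rmfp, tau**2 * rmfp])

def fit_bias(tau, rmfp, bias):
    """Least-squares fit of b_SV(tau, R_mfp) as a quadratic in tau
    times a linear function of R_mfp."""
    coeffs, *_ = np.linalg.lstsq(_basis(tau, rmfp), bias, rcond=None)
    return coeffs

def eval_bias(coeffs, tau, rmfp):
    """Evaluate the fitted bias at given (tau, R_mfp) points."""
    return _basis(tau, rmfp) @ coeffs
```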
We find that the fractional standard deviation, σSV,sim(kc)/P_fiducial(kc), can be fitted with a quadratic function of τ times a linear function of R_mfp, similarly to b_SV(kc). The variation due to sample variance has a very weak dependence on Vc, f*, fX and Ar. The error in the power spectrum (after it has been corrected by the bias factor) is then given by σSV,sim(kc)/√10. Finally, in order to account for theoretical uncertainty in the modelling⁷, we impose a lower limit of 10% on the relative error of the power spectrum of each individual simulation (Ghara et al. 2020). In the total error budget of the corrected power spectrum this error should also be divided by √10.
The total theoretical parameter-dependent error in the binned power spectrum is thus given by

σ_th(kc) = (kc³/2π²) max[σSV,sim(kc), 0.1 P_fiducial(kc)] / √10,

where ∆²_th(kc) = P_fiducial(kc) kc³/2π² is the calculated power spectrum in mK² units.

⁷ The values of the 21-cm signal generated by the numerical simulation are subject to uncertainty. This is because some of the effects of order ∼(1 + δ), where δ is the stochastic dimensionless perturbation of the density field, have not been taken into account. For example, at the moment we assume linear growth of structure on large scales (> 3 Mpc).
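Numerically, one reading of this error budget — the sample-variance scatter floored at the 10% modelling uncertainty, reduced by √10 for the effective 10-realization average — is the following. The function name and the use of a hard maximum for the floor are our assumptions:

```python
import math

def sigma_th(delta2_fiducial, frac_sv):
    """Theoretical error on the bias-corrected binned power spectrum
    (in the same mK^2 units as delta2_fiducial): the fractional
    sample-variance scatter frac_sv, floored at the 10% modelling
    uncertainty, divided by sqrt(10)."""
    return delta2_fiducial * max(frac_sv, 0.10) / math.sqrt(10.0)
```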
Binning over the Model Parameters
Our goal is to derive constraints on the excess radio background and also to explore the implications for the rest of the model parameters, as well as for the thermal and ionization states of the IGM. Based on the value of the power spectrum for each set of model parameters, we evaluate the likelihood of each point in the parameter space θ, as described in the next section. We therefore need to bin the parameter space θ and calculate the binned power spectra ∆²_th(kc, θ) and the corresponding theoretical error σ_th(kc, θ). To remind the reader, ∆²_th(kc, θ) and σ_th(kc, θ) are binned in redshift, wave-number and θ.
We explore two discrete parameter spaces, with θ defined as either the model parameters θ = [f*, Vc, fX, τ, R_mfp, Ar] or the derived quantities θ = [Tgas, x̄HII, zre, ∆z]. The range of each parameter is divided into 10 equally spaced bins, and each bin is tagged by the bin-averaged value of the relevant parameter. Due to the large ranges, the binning is logarithmic for f*, Vc, fX, Ar and Tgas, and linear for τ, R_mfp, x̄HII, zre and ∆z. We assume flat priors on each of the parameters across the entire allowed range (see Section 2): 0.001 ≤ f* ≤ 0.5, 4.2 km s⁻¹ ≤ Vc ≤ 100 km s⁻¹, 10⁻⁶ ≤ fX ≤ 100, 0.045 ≤ τ ≤ 0.1, 10 ≤ R_mfp ≤ 70 comoving Mpc, and zero outside these ranges. In the standard case Ar = 0, while in the excess-background case we vary 0.2 ≤ Ar ≤ 400 (thus covering the range 0.01%-21% of the CMB at 1.42 GHz). The priors on [Tgas, x̄HII, zre, ∆z] are defined based on the ranges of these parameters found in our simulations (Section 3): 2.2 K ≤ Tgas ≤ 400 K (the lower limit is the temperature of the gas in an adiabatically expanding universe at z = 9), 0.02 ≤ x̄HII ≤ 1.00, 6 ≤ zre ≤ 10 and 2 ≤ ∆z ≤ 5, and zero outside these ranges.
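The binning of a single parameter can be sketched as follows (log-spaced edges for f*, Vc, fX, Ar and Tgas, linear for the rest; the helper names are ours):

```python
import numpy as np

def make_bins(lo, hi, n=10, log=False):
    """Edges (n+1 of them) and bin-centre tags for n equally spaced
    bins, either linear or logarithmic in the parameter."""
    edges = (np.logspace(np.log10(lo), np.log10(hi), n + 1)
             if log else np.linspace(lo, hi, n + 1))
    centres = 0.5 * (edges[:-1] + edges[1:])
    return edges, centres

def bin_index(value, edges):
    """Index of the bin containing `value`; -1 outside the prior
    range (flat prior is zero there)."""
    i = int(np.digitize(value, edges)) - 1
    return i if 0 <= i <= len(edges) - 2 else -1
```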
For θ = [f*, Vc, fX, τ, R_mfp, Ar] this binning gives rise to 10⁵ bins in the standard case and 10⁶ bins in the excess-background case; for θ = [Tgas, x̄HII, zre, ∆z] there are 10⁴ bins in each case. Due to the relatively small number of models, not all bins are populated. To solve this issue, we use the model sets to train Artificial Neural Networks (ANNs; see Appendix A for details) and use them to construct an emulator (a similar approach has been taken by Cohen et al. 2019b; Kern et al. 2017), which we then use to interpolate the empty bins.
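The paper fills unpopulated bins with an ANN emulator. As a crude illustrative stand-in (not the paper's method), one can fill an empty bin from the nearest populated one along a flattened grid:

```python
import numpy as np

def fill_empty_bins(values, counts):
    """Fill unpopulated bins (count == 0) with the value of the
    nearest populated bin by flat index -- a simple stand-in for
    an interpolating emulator."""
    filled = values.copy()
    populated = np.flatnonzero(counts > 0)
    for i in np.flatnonzero(counts == 0):
        nearest = populated[np.argmin(np.abs(populated - i))]
        filled[i] = values[nearest]
    return filled
```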
STATISTICAL ANALYSIS METHODOLOGY
In general, the 21-cm signal is expected to be a non-Gaussian field (Bharadwaj & Pandey 2005; Mellema et al. 2006; Mondal et al. 2015), and the non-Gaussian effects will play a significant role in the error estimates of the 21-cm power spectrum (Mondal et al. 2016, 2017). In addition, the data in LOFAR bins are slightly correlated due to the finite station size. Therefore, the power spectrum error-covariance matrix is expected to be non-diagonal. In reality, however, the bins show very weak correlation because they are chosen to be relatively wide compared to the footprint of a LOFAR station, which acts as a spatial convolution kernel. With minimal error, we can therefore assume that the bins are uncorrelated and the covariance matrix is diagonal. The probability of a model (tagged by θ) given the data can then be written as a product of the probabilities in the individual wave-number bins ki. In addition, because of the bin-averaging and the large scales considered, we can assume that the signal is close to a Gaussian random field.
The LOFAR measurements reported by Mertens et al. (2020) are upper limits. Therefore, following Ghara et al. (2020), we can represent the probability of a model given the observed power spectrum values using the error function:

L(θ) = Π_i (1/2) {1 + erf( [∆²₂₁(ki) − ∆²_th(ki, θ)] / [√2 σ(ki, θ)] )},

where ∆²₂₁(ki) is the measured power spectrum in the i-th k-bin, with uncertainty ∆²₂₁,err(ki) listed in Table 1. The total variance in the bin is given by

σ²(ki, θ) = [∆²₂₁,err(ki)]² + [σ_th(ki, θ)]².

According to this definition, the probability of a model is close to unity when its power spectrum at z = 9.1 is less than [∆²₂₁(ki) − σ(ki, θ)] for all ki, and the probability is close to zero when ∆²_th(ki, θ) is greater than [∆²₂₁(ki) + σ(ki, θ)] for any ki.
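The per-bin probability and its product over uncorrelated bins can be computed directly with the error function (a sketch of the erf-based form described here; function names are ours):

```python
import math

def total_sigma(sigma_obs, sigma_th):
    """Total per-bin standard deviation: observational and
    theoretical errors added in quadrature."""
    return math.hypot(sigma_obs, sigma_th)

def bin_probability(d2_data, d2_model, sigma):
    """Probability of a model in one k-bin given an upper-limit
    measurement: ~1 when the model lies well below the data,
    ~0 when it lies well above."""
    x = (d2_data - d2_model) / (math.sqrt(2.0) * sigma)
    return 0.5 * (1.0 + math.erf(x))

def likelihood(data, model, sigma):
    """Product over uncorrelated k-bins (diagonal covariance)."""
    p = 1.0
    for d, m, s in zip(data, model, sigma):
        p *= bin_probability(d, m, s)
    return p
```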
As an illustration, in Figure 2 we show the complete set of excess-background power spectra (6515 models in total), colour-coded by the probability that the data are consistent with the model. For comparison, we also show the maximum power of the models in the standard case (white line). The upper limits from Mertens et al. (2020) are plotted for reference. As we see from the figure, the current observational limits from LOFAR are strong enough to rule out a significant fraction of the explored excess-background scenarios (all corresponding to a cold IGM with ∼50% ionization at z = 9.1, as we will see later). However, for the standard astrophysical scenarios, where the values of the power spectra are lower, only the most extreme models can be ruled out, and only in the lowest k-bin. A set of corresponding thermal histories is plotted in the right panel of Figure 2. The LOFAR upper limits of Mertens et al. (2020) disfavour late X-ray heating, which leaves the IGM cold for most of the EoR. Scenarios with early X-ray heating cannot be ruled out by the data because, typically, the corresponding power spectrum values are low.
RESULTS
Using the predicted values of the spherically averaged binned power spectrum in all seven k-bins, we can rule out scenarios that yield strong fluctuations at z = 9.1. A few factors need to come together to ensure maximum power. First, the spin temperature has to be fully coupled to the gas temperature, which, for realistic star formation scenarios, is guaranteed to be the case at z = 9.1 (e.g., Cohen et al. 2018). Second, the larger the contrast between Tgas and T_rad, the stronger the signal. Because we assume the evolution of T_rad to be independent of star formation, the strongest contrast between Tgas and T_rad is reached in the case of a cold IGM (for both the excess-background and standard models). In addition, owing to the higher radio background temperature, the signals are enhanced in our excess-background case compared to the standard case. Finally, fluctuations in the gas temperature and the neutral fraction play a role. Because here we have chosen a hard X-ray spectrum (Fragos et al. 2013), heating is nearly homogeneous, and the dominant source of fluctuations at z = 9.1 is the nonuniform process of reionization, with peak power at ∼50% ionization fraction. For a fixed thermal history, nearly homogeneous reionization would result in a smoother signal and, thus, lower power of the 21-cm fluctuations compared to a patchy reionization scenario.
Limits on the Excess-Radio Background
Using L(θ), we calculate the normalized probability for each of the parameters, θ = [f*, Vc, fX, τ, R_mfp, Ar], and for parameter pairs, marginalising over the rest of the parameter space. The resulting probability distributions are normalized using the criterion that the total probability (the area under the curve) is 1 within the considered prior ranges. The resulting two-dimensional and one-dimensional probabilities of all the model parameters are shown in Figure 3, where we divided each probability function by its peak value in order to show the marginalised likelihoods of all possible combinations uniformly. Using the one-dimensional probabilities we find the 68% confidence interval for each of the parameters (see Table 2). We calculate the 68% confidence levels (C.L.) by selecting parameter bins with the highest probability up to a cumulative probability of 0.68. We also find the regions where the one-dimensional probabilities are below exp(−1/2) of the peak.
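Selecting the highest-probability bins up to a cumulative probability of 0.68 can be sketched as follows (a minimal sketch of the confidence-level selection described above; names are ours):

```python
import numpy as np

def confidence_region(prob, level=0.68):
    """Indices of parameter bins inside the confidence region: rank
    bins by probability and accumulate until the cumulative
    (normalized) probability reaches `level`."""
    p = np.asarray(prob, dtype=float)
    p = p / p.sum()
    order = np.argsort(p)[::-1]          # highest-probability bins first
    cum = np.cumsum(p[order])
    n_keep = int(np.searchsorted(cum, level)) + 1
    return np.sort(order[:n_keep])
```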
Marginalising over the remaining model parameters (f*, Vc, fX, τ, R_mfp), we derive constraints on Ar, finding that the LOFAR upper limit rules out Ar > 13.2 at 68% C.L., equivalent to 0.7% of the CMB at 1.42 GHz. The likelihood, which peaks at low values of Ar, drops by a factor of exp(−1/2) by Ar = 60.9, which corresponds to 3.2% of the CMB at 1.42 GHz. In our analysis we have fixed the value of the spectral index of the radio background to β = −2.6. We have checked that the uncertainty in the spectral index, ∆β = 0.05, reported by LWA1 (Dowell & Taylor 2018), would lead to only up to ∼3 per cent variation in the intensity of the excess radio background at the frequency of 140 MHz, corresponding to z = 9.1. Previous work showed that the global signal reported by EDGES Low-Band can be produced by adding an extra radio background with 1.9 < Ar < 418 relative to the CMB at the 78 MHz reference frequency (corresponding to 0.1-22 per cent of the CMB at 1.42 GHz). Even though part of this range is now ruled out by the new LOFAR limits, models with values of Ar between 0.1% and 0.7% of the CMB at 1.42 GHz are still allowed and could fit the EDGES Low-Band detection. Such a small contribution is within the measurement error of LWA1 (Dowell & Taylor 2018, who report an excess background of 603 +102/−92 mK at the 21-cm rest-frame frequency of 1.42 GHz) and would remain a plausible explanation for the detected EDGES signal even if the excess measured by ARCADE2 and LWA1 is due to erroneous Galactic modelling (Subrahmanyan & Cowsik 2013). In Figure 4, as an illustration, we show global 21-cm signals for those excess-background models from our data set that are broadly consistent with the tentative EDGES Low-Band detection; to define this consistency we follow a simple approach requiring −min[T21(68 < ν < 88 MHz)] < 1 K (Eq. 13). The signals in Figure 4 are colour-coded with respect to the LOFAR likelihood (same as in Figure 2). All the signals consistent with EDGES Low-Band have a relatively high LOFAR likelihood, L ≥ 0.31. This is because the EDGES detection implies an early onset of the Ly-α coupling (Schauer et al. 2019) due to efficient star formation (f* > 2.8%) in lower-mass halos with circular velocity below Vc = 45 km s⁻¹ (corresponding to M_h < 7.8×10⁸ M⊙ at z = 17). In such models the IGM is heated and partially ionized by z = 9.1, resulting in relatively low-intensity 21-cm signals in the LOFAR band.

Figure 2. Excess-background models colour-coded with respect to the probability that the data are consistent with the model (Eq. 10), as indicated on the colour bar. Left: binned power spectra vs wave-number (in units of Mpc⁻¹, where we have assumed h = 0.6704 for the conversion from Table 1) at z = 9.1. The white dashed line shows the maximum power of the models in the standard case (L = 0.4898). Magenta data points correspond to the LOFAR data from Table 1 (two-sided error bars). Right: corresponding thermal histories, i.e., the evolution of the mean temperature of the neutral intergalactic gas with redshift. Each curve is shown down to the (model-dependent) redshift of the end of reionization.

Table 2. Limits on the astrophysical parameters and the derived IGM parameters. From left to right: type of model; mean temperature of neutral gas at z = 9.1 in K; ionization fraction of the IGM; duration of reionization, defined as the redshift interval between 90% neutral IGM and 10% neutral; redshift of the mid-point of reionization, zre (defined as the redshift at which the neutral fraction is 50%); star formation efficiency; minimum circular velocity of star-forming halos in km s⁻¹; X-ray heating efficiency; CMB optical depth; mean free path of ionizing photons in comoving Mpc; amplitude of the excess radio background compared to the CMB at the reference frequency of 78 MHz (as defined in Eq. 6). For the case of excess radio background (Ex. bck. in the table) we show both the 68% limits (top row) and the parameter values at which the likelihood drops to exp(−1/2) of the peak value (middle row). In the standard case we show only the 68% limits, as the 1D PDFs are rather flat.
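The conversion between the amplitude A_r at 78 MHz and the quoted fractions of the CMB at 1.42 GHz is consistent with a pure power-law scaling in frequency with β = −2.6. The function below is our reconstruction of that scaling, checked against the numbers quoted in the text:

```python
def cmb_fraction_at_1420(a_r, beta=-2.6, nu_ref_mhz=78.0):
    """Excess radio background, parameterized by its amplitude A_r
    relative to the CMB at 78 MHz, expressed as a fraction of the
    CMB at 1.42 GHz, assuming T_excess ~ nu^beta (beta = -2.6)."""
    return a_r * (1420.0 / nu_ref_mhz) ** beta
```

With this scaling, A_r = 13.2 maps to ≈0.7%, A_r = 60.9 to ≈3.2%, and A_r = 418 to ≈22% of the CMB at 1.42 GHz, matching the values quoted in the text.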
Astrophysical Limits
Next, we explore the implications of the LOFAR upper limits for the rest of the model parameters (f*, Vc, fX, τ, R_mfp). In this work we assume hierarchical structure formation with a simple prescription for the formation of stars and X-ray binaries. Therefore, the LOFAR limits at z = 9.1 can be used to constrain the properties of the first star-forming halos (appearing at z ∼ 30-60 in our simulations) and of the first sources of light at Cosmic Dawn. The resulting two-dimensional and one-dimensional probabilities are shown in Figure 3, and the limits are summarised in Table 2. In the limiting case of a negligible radio background our results converge to the standard case with the CMB as the background radiation. This trend is demonstrated in Figure 3, where the two-dimensional probabilities of models with Ar = 0 are appended below the white band.

Figure 3. 1D and 2D marginalised likelihoods of the astrophysical parameters (f*, Vc, fX, τ, R_mfp, Ar) obtained using the excess-background models. In addition, we append the normalized likelihood values for our standard models below the white band of the bottom row to highlight the consistency with the excess-background case. The standard-case normalized likelihood was calculated using a joint set of the excess-background models and standard models. Regions of the 2D marginalised likelihoods on the darker (red, purple and black) side of the solid lines are disfavoured with more than 68% confidence, and regions on the darker side of the dashed lines are below exp(−1) of the peak probability (similar to the 2D Gaussian 1-sigma definition). The grey regions in the 1D likelihood distributions are also disfavoured at the 68% confidence level, and the black regions are below exp(−1/2) of the peak probability. Threshold parameter values at which the likelihood drops by a factor of exp(−1/2), as well as the 68% limits, are listed in Table 2.
For completeness we also explore the set of standard models separately, showing their two-dimensional and one-dimensional probabilities in Figure 5 and listing the corresponding 68% limits in Table 2. All disfavoured models feature efficient star formation, with f* ≳ 5% at 68% C.L. (Table 2); however, the corresponding 1D marginalised likelihood is rather flat and never drops below a factor of exp(−1/2) relative to its peak value. Higher values of f* result in stronger fluctuations, which are easier to rule out. Higher values of f* also imply a stronger Ly-α background and, thus, an earlier onset of Ly-α coupling, which yields signals with larger amplitudes (e.g., Cohen et al. 2019a).
Another model parameter related to star formation in the first halos is Vc. A higher Vc is equivalent to a larger minimum mass of star-forming halos, which are more strongly clustered, thus yielding stronger fluctuations. In the hierarchical model of star formation that we adopt here, a higher Vc also implies a later onset of star formation and X-ray heating. In such models the fluctuations (e.g., in heating and Ly-α) are likely not yet saturated by z = 9.1, resulting in stronger 21-cm signals that can be ruled out by LOFAR. We find that values of Vc above 28 km s⁻¹ (corresponding to 4.5×10⁸ M⊙ at z = 9.1) are disfavoured by the data at 68% (the corresponding 1D marginalised likelihood is rather flat and never drops below the threshold value of exp(−1/2) relative to its peak value). The standard-physics limit is 36 km s⁻¹, or 9.5×10⁸ M⊙ at z = 9.1. Even though the limits on Vc are weak at the moment, the LOFAR data favour the existence of low-mass halos (in agreement with the EDGES High-Band results, Monsalve et al. 2019). In our models the gas temperature is regulated by the interplay between several cooling and heating mechanisms, with the major roles played by adiabatic cooling due to the expansion of the universe and X-ray heating by X-ray binaries, although the latter is partially degenerate with f* and Vc, which regulate the number of X-ray binaries⁸. Therefore, the X-ray efficiency of the first X-ray binaries is directly constrained by LOFAR, with values fX < 1×10⁻² disfavoured at 68% C.L., implying a lower limit on the total X-ray luminosity per star formation rate (Eq. 7) of 3×10³⁸ erg s⁻¹ M⊙⁻¹ yr. The 1D likelihood, which peaks at high fX values, is steep enough that it drops below the threshold exp(−1/2) of its peak value at fX = 8×10⁻⁴ (corresponding to 2.4×10³⁷ erg s⁻¹ M⊙⁻¹ yr). In the standard case only the 68% limit can be calculated, and it is fX < 5×10⁻³ (1.5×10³⁸ erg s⁻¹ M⊙⁻¹ yr, respectively).
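The mapping between fX and the quoted X-ray luminosities per unit star formation rate is linear, with a normalization of 3×10⁴⁰ erg s⁻¹ per M⊙ yr⁻¹ at fX = 1. This normalization is our inference from the quoted pairs (Eq. 7 itself is not reproduced in this excerpt):

```python
def lx_per_sfr(f_x, norm=3.0e40):
    """Total X-ray luminosity per unit star formation rate, in
    erg s^-1 per (Msun yr^-1), for heating efficiency f_x. The
    normalization 3e40 reproduces the quoted pairs:
    fX = 1e-2 -> 3e38, 8e-4 -> 2.4e37, 5e-3 -> 1.5e38."""
    return f_x * norm
```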
The current LOFAR data also disfavour models with the mid-point of reionization at z ∼ 9. In such models the peak power from ionizing fluctuations falls within the currently analysed LOFAR band, and, consequently, such models are relatively easy to exclude. This constraint can be mapped onto limits on τ: scenarios with τ > 0.076 (excess background) or τ > 0.080 (standard models) are disfavoured at 68%. In both cases, the 1D likelihood curves of τ peak at low values of τ but do not drop below the threshold value of exp(−1/2) within the prior ranges. Finally, we find that the constraints on the model parameter R_mfp are very weak, with the 1D marginalised likelihood being very flat. This means that our model power spectrum is not sensitive to changes in the value of R_mfp at z ∼ 9.
Comparison with EDGES
Focusing on the standard models, we can compare the LOFAR limits reported above to the limits extracted from the data of the global 21-cm instrument EDGES High-Band (90-190 MHz). Using a similar set of standard models and similar prior ranges of parameters to those explored here, Monsalve et al. (2019) found that the EDGES High-Band data favour (at 68% confidence) the following parameter ranges, assuming a fixed X-ray SED (softer than what we use here; however, the global signal constraints prove to be nearly insensitive to the X-ray SED, Monsalve et al. 2019): R_mfp < 36.1 Mpc, Vc < 21.5 km s⁻¹ (equivalent to 2×10⁸ M⊙ at z = 9.1), fX > 2.5×10⁻³, f* < 0.4% or f* > 3.6% (signals with both lower and higher values of f* are likely to be outside of the band of EDGES High-Band), and τ < 0.072 or 0.074 < τ < 0.079 (where the second band is most likely due to an instrumental systematic). Overall, LOFAR and EDGES High-Band are in agreement, ruling out scenarios with inefficient X-ray heating and models in which the Universe was ionized by massive halos only (of mass a few ×10⁸ M⊙ or higher at z ∼ 9.1). Similar trends were found with the SARAS2 data (although only ∼300 models were examined in that case; Singh et al. 2017).
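The (Vc, halo mass) pairs quoted in this and the previous subsection are mutually consistent with the standard scaling M ∝ Vc³ (1+z)^(−3/2). A sketch calibrated on one quoted pair (the calibration values come from the text; the functional form is our assumption) reproduces the others to within a few per cent:

```python
def halo_mass(v_c, z, m_ref=4.5e8, v_ref=28.0, z_ref=9.1):
    """Minimum halo mass (Msun) for circular velocity v_c (km/s) at
    redshift z, using M ~ Vc^3 (1+z)^(-3/2), calibrated on the
    quoted pair Vc = 28 km/s <-> 4.5e8 Msun at z = 9.1."""
    return m_ref * (v_c / v_ref) ** 3 * ((1.0 + z) / (1.0 + z_ref)) ** -1.5
```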
Limits on the Thermal and Reionization Histories
We use the LOFAR upper limits on the 21-cm power spectrum to put limits on the thermal and ionization state of the IGM at z = 9.1. We repeat the likelihood calculation, applying it to the IGM parameters θ = [Tgas, x̄HII, zre, ∆z]. The resulting two-dimensional and one-dimensional probabilities of Tgas, x̄HII, zre and ∆z are shown in Figure 6 (the left panel shows the case of the extra radio background, while the standard case is shown on the right for comparison). Our results are also summarised in Table 2, where the 68% limits are listed. We have also tabulated the limits obtained from the regions where the one-dimensional probabilities are below exp(−1/2) of the peak (similar to the Gaussian 1-sigma definition).
As we see from the figure and the table, the LOFAR data indeed disfavour scenarios with a cold IGM. The lower limit on the temperature of neutral gas at z = 9.1 is 19.2 K at 68% C.L. (while it is only 10.1 K in the standard case). The likelihood, which peaks at high values of Tgas, drops by a factor of exp(−1/2) at Tgas = 6 K in the excess-background case. As expected, there is some degree of degeneracy between the constraints on the thermal and reionization histories, with the strongest limits on temperature coming from the cases with the mid-point of reionization occurring at z ∼ 9.

Figure 5. 1D and 2D marginalised likelihoods of the astrophysical parameters (f*, Vc, fX, τ, R_mfp) obtained using the standard models (Ar = 0). Regions of the 2D marginalised likelihoods on the darker side of the solid lines are disfavoured with more than 68% confidence. The grey regions in the 1D likelihood distributions are also disfavoured at the 68% confidence level (also listed in Table 2).
By marginalising over the thermal histories we can put limits on the process of reionization (Figure 6 and Table 2). We find that the LOFAR limits disfavour fast reionization scenarios (with ∆z ≲ 3) with ionized fractions between ∼38% and ∼72% at z = 9.1. The high end of the allowed x̄HII values (x̄HII > 72% at z = 9.1) is inconsistent with other probes of reionization and would be ruled out if joint constraints were considered: e.g., Ly-α damping-wing absorption in the spectrum of a quasar at z = 7.54 suggests that the Universe is ∼60% neutral at that redshift (ionization fraction less than 40%; Bañados et al. 2018; Davies et al. 2018). A quantitative joint analysis, however, is beyond the scope of this paper.

Figure 6. The grey regions in the 1D likelihood distributions are disfavoured at the 68% confidence level, and (for the excess-background models) the black regions are below exp(−1/2) of the peak probability. Threshold parameter values at which the likelihood drops by a factor of exp(−1/2), as well as the 68% limits, are listed in Table 2.
Ghara et al. (2020) used the same LOFAR upper limits to derive astrophysical constraints on the IGM properties assuming standard-physics models (with the CMB as the background radiation). We verify the consistency of our conclusions with Ghara et al. (2020) by qualitatively comparing our standard-case results for the thermal and ionization states of the IGM. A quantitative comparison between the two works is not possible at this stage because of the different choices of modelling, parameterization and priors. Moreover, because the 21-cm signal is sensitive to the thermal and ionization histories, the values of the gas temperature and ionization fraction can be directly constrained using the data. However, the mapping between these quantities and the astrophysical properties of the UV and X-ray sources (in our case f*, Vc, fX, τ and R_mfp) is model-dependent. Therefore, in this paper we refrain from comparing the astrophysical constraints, leaving this for future work.
In their work, Ghara et al. (2020) explored two scenarios: (1) a homogeneous spin temperature, which implies a saturated Ly-α background and homogeneous X-ray heating; the parameters varied in this case include the gas temperature (or, equivalently, the spin temperature), the minimum halo mass and the ionizing efficiency. (2) Inhomogeneous heating by soft X-ray sources with a power-law SED, where Mmin and the spectral index of the X-ray sources were kept fixed; the parameters varied are the ionizing efficiency, the X-ray efficiency (defined differently than in our work) and the minimum mass of X-ray emitting halos. In all cases the value of the star formation efficiency was kept fixed at f* = 2%. In comparison, we explore the popular case of heating by a realistic population of X-ray binaries with a hard SED. In this case heating is inefficient and fluctuations are smoothed out. Therefore, we expect our results to be closer to case (1) of Ghara et al. (2020). Moreover, in our work all the parameters (except for the X-ray SED) are allowed to vary over a wide range; e.g., f* is varied between 0.1% and 50%.
Despite these differences in modelling, our work is qualitatively consistent with Ghara et al. (2020). Both works rule out a cold IGM with an ionization fraction close to 50%. Namely, in their case (1) x̄HII ∼ 0.2-0.75 and Tgas ≲ 3 K are disfavoured, while we find that x̄HII ∼ 0.38-0.72 and Tgas ≲ 10.1 K are disfavoured (at 68%).
CONCLUSIONS
In this paper we have used the upper limit on the 21-cm signal from z = 9.1, based on 141 hours of observations with LOFAR (Mertens et al. 2020), to evaluate the contribution of the high-redshift Universe to the excess radio background over the CMB detected by ARCADE2 (Fixsen et al. 2011) and LWA1 (Dowell & Taylor 2018). Assuming a synchrotron spectrum for the radio background with spectral index β = −2.6 and marginalising over the astrophysical properties of the first star-forming sources, we find (at 68% C.L.) the contribution above the CMB level to be less than a factor of 13.2 at the reference frequency of 78 MHz, equivalent to 0.7% of the CMB at 1.42 GHz. This limit, for the first time, rules out a strong contribution of the high-redshift Universe to the excess detected by ARCADE2 and LWA1. At a level below 0.7% of the CMB, the extra radio background could, on one hand, be strong enough to explain the tentative EDGES Low-Band detection, which requires an excess of at least 0.1% of the CMB. On the other hand, such a small contribution would be within the measurement error of the radio telescopes. Hence, it would remain a plausible explanation for the detected EDGES signal even if the excess radio background measured by ARCADE2 and LWA1 is due to erroneous Galactic modelling (Subrahmanyan & Cowsik 2013).
We also use the LOFAR data to constrain the thermal and ionization state of the IGM at z = 9.1 in models with and without the extra radio background over the CMB. If such an extra radio background is present at z = 9.1, the fluctuations in the 21-cm signal are boosted compared to the standard case, which gives LOFAR greater leverage to reject models. Therefore, for the models with the excess radio background, the constraints on the astrophysical properties and on the properties of the IGM are tighter than in the standard case. In particular, IGM scenarios with a mean neutral gas temperature of up to 19.2 K are disfavoured (at 68% C.L.), compared to only up to 10.1 K in the standard case.
Using the LOFAR data we have also derived 68% C.L. limits on the astrophysical parameters of Cosmic Dawn and the EoR. The data disfavour very efficient star formation (with efficiency above 5%), imply the existence of small halos at early times (of masses below a few ×10⁸ M⊙ at z = 9.1), require the presence of X-ray sources, and disfavour a CMB optical depth above τ ∼ 0.076. For the suite of standard models, we point out that the LOFAR data rule out a similar type of models to those rejected by the global signal experiments, namely EDGES High-Band (Monsalve et al. 2019) and SARAS2 (Singh et al. 2018). Finally, we note that our constraints on the standard-physics parameters are in qualitative agreement with the results reported by Ghara et al. (2020). A detailed comparison between these two works is beyond the scope of this paper.
Although other high-redshift probes (e.g., the Planck measurement of the CMB optical depth, high-redshift quasars and galaxies) allow tighter constraints to be put on the ionization history, the 21-cm observations provide a unique way to probe the thermal history of the Universe, constrain the properties of the first star-forming halos, and test the nature of the radio background.

| 11,284 | 2020-04-01T00:00:00.000 | [ "Physics" ] |
Manganese stimulation and stereospecificity of the Dopa (3,4-dihydroxyphenylalanine)/tyrosine-sulfating activity of human monoamine-form phenol sulfotransferase. Kinetic studies of the mechanism using wild-type and mutant enzymes.
Kinetic studies were performed to dissect the mechanism underlying the remarkable Mn(2+) stimulation of the Dopa/tyrosine-sulfating activity of the human monoamine (M)-form phenol sulfotransferase (PST). The activities and the stimulation by Mn(2+) are highly stereospecific for the d-form enantiomers of tyrosine and Dopa. Analysis of the kinetic results strongly suggests that tyrosine-Mn(2+) and tyrosine-Mn(2+)-tyrosine complexes are obligatory substrates, whereas Dopa-Mn(2+) complexes may be better substrates than Dopa alone. This activation of the Dopa/tyrosine-sulfating activity of M-form PST by Mn(2+) via complex formation between Mn(2+) and the substrate is the first reported case of a regulatory mechanism in this important class of enzymes. Our previous studies using point-mutated M-form PSTs established that the Mn(2+) (in the substrate-Mn(2+) complex) exerts its stimulatory effect by binding predominantly to the Asp-86 residue at the active site. We present here further studies using dopamine as substrate to bolster this conclusion. The possible physiological implications of this rather unusual specificity for the d-amino acid and its derivatives and the stimulation by Mn(2+) are discussed in the context of protective and detoxification mechanisms that may operate in neurodegenerative processes in the brain. The Mn(2+) stimulation of the activity of M-form PST toward d-enantiomers of Dopa/tyrosine may have implications for other substrates (including chiral drugs) and for the other cytosolic sulfotransferases that are involved in the regulation of endogenous metabolites as well as in detoxification.
Sulfotransferases (STs) are enzymes ubiquitous in both plants and animals that catalyze the sulfation of a variety of compounds containing hydroxyl or amino groups, using 3′-phosphoadenosine-5′-phosphosulfate (PAPS) as the sulfonyl group donor (1-3). Although the membrane-bound STs use proteins, glycolipids, and other macromolecules as substrates, the cytosolic STs sulfate smaller molecules and are part of the Phase II detoxification pathway for the biotransformation/excretion of drugs and xenobiotics. This serves both to detoxify dietary, therapeutic, and environmental xenobiotics and to regulate the levels and activities of endogenous molecules such as thyroid and steroid hormones, catecholamine hormones/neurotransmitters, and bile acids (4,5). Except during the early stage of development, cytosolic STs in general have been shown to be constitutive enzymes, with little known about the regulation of their enzymatic activities (6,7). In the past several years, however, studies performed in our laboratory have revealed that Mn2+ exerts a stimulatory effect on the sulfation of some substrates by the human monoamine (M)-form phenol sulfotransferase (PST) (8,9).
The human M-form PST is the only sulfotransferase that sulfates the catecholamines, in particular the neurotransmitter dopamine, with high activity (4). This enzyme is found in the upper gastrointestinal tract, brain, platelets, and lung (10). In the gastrointestinal tract it may detoxify potentially lethal dietary catecholamines. In the brain it may play a role in regulating the levels of dopamine. We had previously demonstrated that, besides its activity toward catecholamines, M-form PST could uniquely sulfate the free amino acid forms of tyrosine and 3,4-dihydroxyphenylalanine (Dopa) (8,9,11). Interestingly, it showed higher activities toward the D-enantiomers (as compared with the L-enantiomers) of these compounds and a remarkable stimulation (by more than 100-fold) of the activities by sub-millimolar and millimolar levels of Mn2+, especially with the D-enantiomers. Mn2+ also stimulates the activity with dopamine, although only 2-3-fold (9). Mn2+ is known to be present at higher levels in human neuronal tissue (12) and is sequestered intracellularly in mitochondria (13). Oxidative stress or damage, which has been implicated in the neuronal apoptosis that occurs in neurodegenerative diseases, generally results in mitochondrial dysfunction (14). The consequent release of Mn2+ into the cytosol may activate the M-form PST and, in particular, its Dopa/tyrosine-sulfating activity. It has also been observed that D-amino acids accumulate in aging tissues, especially if the levels of D-amino acid oxidases are low (15). Attempts have been made to link the amounts of specific D-amino acids to oxidative damage and to neurodegenerative disorders such as Alzheimer's and Parkinson's diseases (16,17). A clear picture is yet to emerge. However, the removal of D-amino acids by sulfation may be viewed as a detoxification process.
From a different perspective, the sulfation of D-tyrosine could also serve as a useful model in the study of the stereospecific action of the PST enzymes on chiral drugs (18-21). It should also be emphasized that the dramatic stimulation by Mn2+ of the sulfation of these substrates may be part of a more general mechanism to increase the promiscuity of M-form PST toward unusual or xenobiotic substrates in the presence of a molecular trigger such as increased Mn2+ concentrations.
In this paper, we report kinetic studies on the sulfation of dopamine and the D- and L-enantiomers of tyrosine and Dopa by the wild-type M-form PST and its Asp-86 point mutant, and on the stimulation by Mn2+. M-form PST is known to exist as a homodimer in its native state (22), and the reported x-ray structure of the protein (23) revealed that residues 84-92 of one subunit form a "mobile" loop that may intercalate into the active site of the other subunit. It was suggested that the presence of this mobile loop might hinder the proper positioning of some substrates (23). Our previous studies (11) have established the importance of two regions in the sequence of M-form PST, designated Region I (spanning residues 84-89) and Region II (residues 143-148), to its dopamine-sulfating activity as well as its Dopa/tyrosine-sulfating activity and the Mn2+ stimulation. These are the regions that vary between M-form PST and the P-form PST (which does not possess Mn2+-stimulated Dopa/tyrosine-sulfating activity); the two enzymes are otherwise more than 93% identical in their amino acid sequences (11). That Region I is part of the above-mentioned mobile loop intercalating into the active site allows for the formulation of an attractive model to explain our kinetic results. Our previous studies with point mutants in Regions I and II have also underlined the importance of residues Asp-86 and Glu-89 in Region I and of residue Glu-146 in Region II in the dopamine-sulfating activity of M-form PST (24). Further studies with such point mutants and two deletional mutants (lacking residues 84-90 and 84-86, respectively, of the purported loop intercalating into the active site in the wild-type M-form PST) have revealed that both the loop as a whole (rather than the individual residues comprising it) and residue Glu-146 in Region II are important to the stereospecificity of M-form PST for the D-enantiomers of Dopa and tyrosine.
Residue Asp-86 in Region I, on the other hand, is the one most important to the Mn2+ dependence of this activity. 2 We also present in this paper studies with the D86A point mutant to further dissect the structural basis for these activities and their activation by Mn2+.
Bacterial Expression and Purification of the Recombinant Human Wild-type and D86A Point-mutated M-form PSTs-Competent E. coli BL21 cells transformed with the pGEX-2TK vector harboring the wild-type or D86A point-mutated M-form PST cDNA were grown to A600 = 0.8 in 1 liter of LB medium supplemented with 50 µg/ml ampicillin. After induction with 0.1 mM isopropyl β-D-thiogalactopyranoside overnight at room temperature, the cells were collected by centrifugation and homogenized in 25 ml of an ice-cold lysis buffer containing 20 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EDTA, and 10% glycerol using an Amicon French press. Ten µl of a 10 mg/ml aprotinin solution was added to the homogenate, which was then centrifuged at 10,000 × g for 15 min at 4°C. The supernatant collected was fractionated by equilibrating with 1.5 ml of glutathione-Sepharose for 20 min at 4°C, and the supernatant and the washings with the lysis buffer were discarded. The bound fusion protein was treated with 3 ml of a thrombin digestion buffer (containing 5 units/ml bovine thrombin, 50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 2.5 mM CaCl2, and 10% glycerol). After a 30-min incubation at room temperature, the preparation was centrifuged. The recombinant enzyme present in the supernatant was collected and analyzed by SDS-PAGE, and the protein concentration was determined for use in the enzymatic assays. Bovine serum albumin (10 mg/ml) was added as a stabilizing agent to this preparation, which was then stored in small aliquots at −70°C until use.
Enzymatic Assay-The sulfotransferase assays were performed using [35S]PAPS as the sulfate group donor. The enzyme dilutions were prepared in 50 mM TAPS, pH 8.25, or 50 mM HEPES, pH 7.0, containing 10% glycerol. The MnCl2 and enzyme solutions were added last to the reaction mixture, which was immediately incubated for 3 min at 37°C. The reaction was stopped by the addition of 5 µl of 2.5 M acetic acid, vortexed, and centrifuged to clear any precipitates (26). The amount of enzyme chosen was such as to ensure that no more than 5% of the substrate reacted, so that the reaction remained linear with time and amount of enzyme. The final reaction mixture was analyzed for 35S-sulfated product by spotting a 3-µl aliquot onto a cellulose TLC plate, which was then subjected to ascending TLC using a solvent system containing n-butanol, isopropanol, 88% formic acid, and H2O in a 3:1:1:1 ratio by volume. In the case of Dopa and dopamine, where the sulfated product migrated too close to unused [35S]PAPS for efficient separation, or whenever the background was too strong, a two-dimensional separation was performed on the samples spotted on the TLC plate by first running high-voltage (1000 V) thin-layer electrophoresis in the first dimension followed by the above-mentioned ascending TLC in the second dimension (27). Afterward, the plates were air-dried and autoradiographed. The radioactive spots on the TLC plates due to 35S-sulfated products were cut out and eluted by shaking in 0.5 ml of H2O in glass vials. Four ml of scintillation fluid was then added to each vial and thoroughly mixed, and the radioactivity was counted using a liquid scintillation counter. The counts obtained were used to calculate the specific activity of the enzyme under the particular reaction conditions in units of nmol of sulfated product formed/min/mg of enzyme. Assays were performed in triplicate, and a control without enzyme was included to correct for any background counts.
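The conversion from scintillation counts to specific activity described above is a simple unit calculation. The sketch below illustrates it in Python with entirely hypothetical numbers; the counts, the [35S]PAPS specific radioactivity, and the enzyme amount are all assumed for illustration, not taken from the paper.

```python
# Hypothetical illustration of the specific-activity calculation described
# in the assay section. All numeric values below are assumed examples.

def specific_activity(cpm_product, cpm_per_nmol, minutes, mg_enzyme,
                      cpm_background=0.0):
    """Specific activity in nmol of sulfated product formed/min/mg enzyme."""
    nmol = (cpm_product - cpm_background) / cpm_per_nmol  # counts -> nmol
    return nmol / (minutes * mg_enzyme)

# Example: 12,200 cpm total with a 200 cpm no-enzyme background, a [35S]PAPS
# specific radioactivity of 4,000 cpm/nmol, a 3-min incubation, 1e-4 mg enzyme.
v = specific_activity(12_200, 4_000.0, 3.0, 1e-4, cpm_background=200.0)
print(round(v, 1))  # -> 10000.0 (nmol/min/mg)
```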
Miscellaneous Methods-[35S]PAPS (carrier-free) was synthesized from ATP and carrier-free [35S]sulfate using the human bifunctional ATP sulfurylase/adenosine 5′-phosphosulfate kinase, and its purity was determined as described previously (28). The [35S]PAPS synthesized was then adjusted to the required concentration and specific activity by the addition of cold PAPS. The concentration of PAPS was confirmed by measuring its absorbance at 260 nm (29). SDS-PAGE was performed on 12% polyacrylamide gels using the method of Laemmli (30). Protein determination was based on the method of Bradford with bovine serum albumin as the standard (31).
RESULTS
Our previous studies demonstrated that M-form PST, besides sulfating dopamine, could also sulfate tyrosine and Dopa (8,9,11). Some interesting features of these latter activities were (i) that they can be dramatically stimulated by Mn2+ and (ii) that the enzyme shows higher activities toward the D- rather than the L-enantiomers of tyrosine and Dopa. By taking a kinetic approach along with studies using mutated M-form PSTs, the current study was aimed at investigating the mechanism underlying the Mn2+ stimulation and stereospecificity.
Kinetics of Sulfation of Dopamine by M-form PST in the Presence of Varying Concentrations of Mn2+-
We first studied the modest stimulation by Mn2+ of the sulfation of dopamine by M-form PST (8,9). Dopamine is believed to be the physiological substrate for M-form PST. Because it has no optical isomers and contains no carboxyl group as in Dopa and tyrosine, it was interesting to investigate the extent and mechanism of the stimulation by Mn2+ of dopamine sulfation by M-form PST. The kinetics of the sulfation of dopamine by M-form PST was studied using varying concentrations (ranging from 0.5 µM to 50 µM) of dopamine, in the presence of different concentrations (0, 0.1, 0.5, 1.0, 2.5, and 5.0 mM) of Mn2+ or in the presence of 1 mM EDTA (as a control). It had been demonstrated that the dopamine-sulfating activity of M-form PST was maximal at pH 7.0 (22). HEPES buffer at pH 7.0 was therefore used in this study along with a saturating PAPS concentration of 15 µM. Fig. 1 shows the plots of the velocity (v) versus substrate (dopamine) concentration ([S]) in the presence of different concentrations of Mn2+. It is clear from these plots that, regardless of the Mn2+ concentration, Vmax was reached at 10-20 µM dopamine. Mn2+ appeared to increase the Vmax while changing the Km for dopamine only slightly. In the presence of 1 mM EDTA, the Vmax was 370 nmol/min/mg and the Km was 2.4 µM, whereas with 5 mM Mn2+, the Vmax was 1000 nmol/min/mg and the Km was 4.5 µM. The kinetics appeared to be of the Michaelis-Menten type.
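As a quick check of these parameters, the Michaelis-Menten equation v = Vmax[S]/(Km + [S]) with the Vmax and Km values quoted above predicts a roughly 2.5-fold higher velocity at 20 µM dopamine in the presence of 5 mM Mn2+ than with EDTA. A minimal Python sketch:

```python
def mm_velocity(s_um, vmax, km_um):
    """Michaelis-Menten velocity (nmol/min/mg) at substrate concentration s_um (in uM)."""
    return vmax * s_um / (km_um + s_um)

# Parameters reported above for dopamine sulfation by M-form PST.
v_edta = mm_velocity(20.0, 370.0, 2.4)   # 1 mM EDTA: Vmax 370, Km 2.4 uM
v_mn   = mm_velocity(20.0, 1000.0, 4.5)  # 5 mM Mn2+: Vmax 1000, Km 4.5 uM
print(round(v_edta), round(v_mn))  # -> 330 816, i.e. ~2.5-fold stimulation
```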
Effects of Mn2+ on the Sulfation of Dopamine by the D86A Point Mutant of M-form PST-Our recent studies showed that, although Mn2+ has a remarkable stimulatory effect on the sulfation, especially of the D-enantiomers of Dopa and tyrosine (Refs. 8 and 10), by M-form PST, the D86A point mutant of this enzyme showed no or only marginal Mn2+ stimulation of the sulfation of these substrates. 2 This was interpreted to imply that the stimulatory effect of Mn2+ is exerted predominantly via its binding to the negatively charged residue Asp-86, which from the x-ray diffraction studies is believed to be part of the mobile loop intercalating into the active site of M-form PST (23). To find out whether residue Asp-86 also mediates the modest Mn2+ stimulation of the sulfation of dopamine, the sulfation of dopamine (at 20 µM) by the wild-type M-form PST and the D86A point mutant in the absence or presence of 5 mM Mn2+ was studied. The results tabulated in Table I show clearly that not only were the activity levels of the D86A point mutant lower, but the stimulatory effect of Mn2+ was lost.
Kinetics of Sulfation of D-Tyrosine by M-form PST in the Presence of Varying Concentrations of Mn 2ϩ -
The kinetics of the sulfation of D-tyrosine by M-form PST was studied using varying concentrations (ranging from 0.1 to 10 mM) of D-tyrosine in the presence of different concentrations (0, 0.1, 0.5, 1.0, 2.5, 5.0, and 10 mM) of Mn2+ or in the presence of 1 mM EDTA (as a control). We had previously demonstrated that this activity was maximal between pH 8.0 and 9.0 (8). TAPS buffer at pH 8.25 was therefore used in this study. It was first established that the PAPS concentration used (15 µM) was saturating, since there was no appreciable increase in the velocity of the reaction even when a 10-fold higher concentration of PAPS was used at several different D-tyrosine and Mn2+ concentrations (such checks were also repeated with the other substrates subsequently used). Fig. 2 shows the plots of the velocity (v) versus total substrate (D-tyrosine) concentration ([S]t) in the presence of different concentrations of Mn2+ (the [S]t profiles). It is evident from these plots that at 10 mM D-tyrosine, although saturation with substrate was reached in the presence of 1, 2.5, or 5 mM Mn2+, the curves had still not reached maximal velocity in the presence of lower concentrations of Mn2+ (0, 0.1, and 0.5 mM). However, D-tyrosine has a limited solubility in water, and it was difficult to prepare stable solutions containing greater than 10 mM D-tyrosine under the assay conditions to extend the curves at the lower concentrations of Mn2+. On the other hand, concentrations of Mn2+ higher than 5 mM resulted in precipitation and, consequently, inconsistent results at higher concentrations of D-tyrosine. Therefore, with 10 mM Mn2+, only the data obtained at D-tyrosine concentrations of 5 mM or less were analyzed. This is clear from the plots of velocity (v) versus total Mn2+ concentration ([A]t) in the presence of different concentrations of the substrate D-tyrosine shown in Fig. 3 (the [A]t profiles). The plots shown in Fig.
2 demonstrated clearly that Mn2+ had a remarkable stimulatory effect on the sulfation of D-tyrosine by M-form PST. Fig. 2 also shows that at 5 mM Mn2+ (and at 10 mM Mn2+; data not shown), some inhibition started occurring in the presence of higher concentrations of D-tyrosine.
Stereospecificity of M-form PST; Lower Activity, Affinity, and Mn2+ Stimulation with L-Tyrosine Relative to D-Tyrosine as
Substrate-The kinetics of sulfation of L-tyrosine at various concentrations ranging from 0.5 to 10 mM by M-form PST in the presence of either 1 mM EDTA or 0 or 5 mM Mn2+ was studied. Fig. 4 shows the corresponding velocity versus [S] plots. From these plots, it is clear that saturation with substrate was not reached even at 10 mM L-tyrosine in the presence of 5 mM Mn2+. As in the case of D-tyrosine, solubility and precipitation problems made it unfeasible to extend the studies to higher concentrations of L-tyrosine or Mn2+. However, it is clear from Fig. 4 that the affinity of M-form PST for L-tyrosine is very much lower than for D-tyrosine. With D-tyrosine as substrate, saturation was reached at ~7 mM with 2.5 mM Mn2+ and at 5 mM with 5 mM Mn2+, whereas with L-tyrosine as substrate, saturation was not reached even at 10 mM with 5 mM Mn2+. The specific activities at different substrate and Mn2+ concentrations were also found to be much lower with L-tyrosine than with D-tyrosine as substrate (e.g. 750 nmol/min/mg for 5 mM D-tyrosine at 5 mM Mn2+ versus 6 nmol/min/mg for 5 mM L-tyrosine at 5 mM Mn2+). Moreover, the stimulatory effect of Mn2+ on the sulfation of L-tyrosine was much less dramatic than with D-tyrosine as substrate.
Kinetics of Sulfation of D-Dopa by M-form PST-It is interesting to point out that the affinity of M-form PST for D-Dopa appeared to be much higher than for D-tyrosine. With D-tyrosine, saturation was reached at about 7 mM with 2.5 mM Mn2+ and at 5 mM with 5 mM Mn2+, whereas with D-Dopa, saturation was reached at 0.5 mM with 2.5 mM Mn2+. The specific activities at different substrate and Mn2+ concentrations were also much higher with D-Dopa than with D-tyrosine as substrate (e.g. 750 nmol/min/mg for 5 mM D-tyrosine at 5 mM Mn2+ versus 1200 nmol/min/mg for 0.5 mM D-Dopa at 2.5 mM Mn2+). However, the stimulatory effect of Mn2+ on the sulfation of D-Dopa is less dramatic than with D-tyrosine as substrate.
Kinetics of Sulfation of L-Dopa by M-form PST; Lower Affinity and Mn2+ Stimulation Compared with the Sulfation of D-Dopa-
The kinetics of sulfation of L-Dopa at various concentrations ranging from 25 to 2500 µM by M-form PST in the presence of 0 mM Mn2+ or 2.5 mM Mn2+ was studied. Fig. 6 shows the corresponding v versus [S] plots. With or without Mn2+, it appeared that there was no saturation with substrate even at 2500 µM L-Dopa, which approached the solubility limit for L-Dopa under the assay conditions. However, as in the case of the tyrosine enantiomers, the affinity of M-form PST for L-Dopa seemed to be very much lower than for D-Dopa, and the stimulatory effect of Mn2+ on the sulfation of L-Dopa was much less dramatic than for D-Dopa.
Model to Explain the Stimulatory Effect of Mn2+ on Dopamine Sulfation by M-form PST-Our
recent studies have established that the stimulation by Mn2+ of the Dopa/tyrosine-sulfating activities of M-form PST is primarily mediated by residue Asp-86 in variable Region I of the molecule (11). 2 This region has been shown by x-ray crystallography (23) to be part of a mobile loop formed by residues 84-92 of one subunit that intercalates into the active site of the other subunit of this dimeric enzyme (22). Results from our current study on the sulfation of dopamine by the wild-type M-form PST (cf. Fig. 1) and its D86A point mutant (cf. Table I) indicated that the smaller stimulatory effect of Mn2+ on the dopamine-sulfating activity of the wild-type enzyme was also mediated by the binding of Mn2+ to the Asp-86 residue in the molecule. It is at present unclear how the binding of Mn2+ to the Asp-86 residue at the active site stimulates dopamine sulfation. Nevertheless, the stimulation was apparently due to an increase in Vmax, without significant effect on Km (cf. Fig. 1). The activity toward dopamine (with or without Mn2+) basically followed Michaelis-Menten kinetics. This suggests that the binding of Mn2+ to the Asp-86 residue may increase the catalytic efficiency of the enzyme while not affecting (or only marginally hindering) the binding of dopamine at the active site. The kinetics do not suggest any involvement of a dopamine-Mn2+ complex, as is to be expected because dopamine contains no negatively charged group to co-ordinate to the Mn2+. The dissociation constant for the binding of Mn2+ to the enzyme at the Asp-86 residue appears to be in the mM range, based on the data presented. M-form PST sulfates Dopa and tyrosine with a marked preference for the D-enantiomers. Its activity toward these compounds was dramatically stimulated by Mn2+ in the range of Mn2+ concentrations up to 5 mM. The stimulation was found to be much more dramatic with the D-enantiomers and was evident at Mn2+ levels as low as 0.1 mM. The kinetic plots in Figs.
2-6 indicated that the interaction of M-form PST with Dopa or tyrosine in the presence of Mn2+ was co-operative in nature. This could possibly be modeled, for the behavior with tyrosine, by a kinetic scheme in which a tyrosine-Mn2+ or a tyrosine-Mn2+-tyrosine complex is the real substrate (32,33). Because the activity with tyrosine alone was found to be negligible and that with tyrosine plus Mn2+ considerably higher, the tyrosine-Mn2+ and tyrosine-Mn2+-tyrosine complexes would likely be the obligatory substrates. In the case of Dopa, there was considerable activity even without Mn2+, indicating that the Mn2+-Dopa complexes are not obligatory but rather better substrates compared with Dopa alone. Mn2+ may bind to the negatively charged carboxyl group of one or two tyrosine/Dopa molecules to form the complexes. Our recent studies using point mutants suggest that the positively charged amino group of the tyrosine/Dopa-Mn2+ complexes may interact mainly with the negatively charged Glu-146 residue at the active site of the enzyme. 2 Moreover, the positively charged Mn2+ moiety of the complex may co-ordinate predominantly with the negatively charged Asp-86 residue of the mobile loop of the enzyme at the active site.
A search of the literature gave the log K for formation of the tyrosine-Mn2+ complex (at 25°C and an ionic strength of 0.1; pH not specified) as 1.5 (molar concentrations used), whereas that for the formation of the tyrosine-Mn2+-tyrosine complex was 5.0 (34,35). We used these values as a first approximation to calculate the concentrations of the two complexes in the assay mixture at various total concentrations of Mn2+ and tyrosine. The calculations were done using the equations for the two equilibria mentioned above together with the mass-balance conditions [tyrosine]free = [tyrosine]total − 2[tyrosine-Mn2+-tyrosine] − [tyrosine-Mn2+] and [Mn2+]free = [Mn2+]total − [tyrosine-Mn2+-tyrosine] − [tyrosine-Mn2+], along with the requirement that the solutions be real and positive. An iterative numerical procedure, implemented in a computer program written in Visual Basic, was used to solve these equations for the concentrations of the two complexes present when different total amounts of Mn2+ and tyrosine are taken. It should be pointed out that the above equation relating free and bound Mn2+ concentrations does not take into consideration the possible binding of Mn2+ to DTT, TAPS, and PAPS. An exhaustive literature search failed to find any association constants for the binding of these latter compounds to Mn2+. In the case of DTT, the pKa values for the two sulfhydryl groups had been determined to be 9.2 and 10.1, respectively (36). Because the assays with tyrosine as substrate in the present study were performed at pH 8.25, both sulfhydryl groups of DTT would be predominantly uncharged and would not be expected to co-ordinate strongly to Mn2+ under these conditions.
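The speciation calculation described above can be sketched as follows. This is not the authors' Visual Basic program but a minimal Python reconstruction under stated assumptions: log K = 1.5 is treated as the stepwise formation constant for tyrosine-Mn2+, log K = 5.0 as the overall formation constant for tyrosine-Mn2+-tyrosine, Mn2+ binding to DTT, TAPS, and PAPS is ignored, and the two mass balances are enforced by bisection on the free tyrosine concentration (a robust alternative to simple fixed-point iteration).

```python
# Sketch of the two-equilibrium speciation calculation (assumptions as in the
# lead-in; molar units throughout). Not the original program.

def complex_concentrations(tyr_total, mn_total, log_k1=1.5, log_beta2=5.0):
    """Return ([tyrosine-Mn2+], [tyrosine-Mn2+-tyrosine]) in mol/liter."""
    k1, b2 = 10.0 ** log_k1, 10.0 ** log_beta2

    def residual(tyr_free):
        # For a given free tyrosine, free Mn2+ follows from the Mn balance.
        mn_free = mn_total / (1.0 + k1 * tyr_free + b2 * tyr_free ** 2)
        c1 = k1 * tyr_free * mn_free            # [tyr-Mn]
        c2 = b2 * tyr_free ** 2 * mn_free       # [tyr-Mn-tyr]
        return tyr_free + c1 + 2.0 * c2 - tyr_total  # tyrosine mass balance

    lo, hi = 0.0, tyr_total  # residual(0) < 0, residual(tyr_total) >= 0
    for _ in range(100):     # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    tyr_free = 0.5 * (lo + hi)
    mn_free = mn_total / (1.0 + k1 * tyr_free + b2 * tyr_free ** 2)
    return k1 * tyr_free * mn_free, b2 * tyr_free ** 2 * mn_free

# Example: 5 mM total D-tyrosine and 5 mM total Mn2+.
c1, c2 = complex_concentrations(5e-3, 5e-3)
```

Both returned concentrations are positive and respect the mass balances (c1 + 2·c2 cannot exceed total tyrosine, and c1 + c2 cannot exceed total Mn2+); with no Mn2+ present, both complexes are zero.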
In Fig. 7, the velocity (in nmol of product formed/min/mg of protein) for the sulfation of D-tyrosine by M-form PST is plotted versus the total concentration of the D-tyrosine-Mn2+ and D-tyrosine-Mn2+-D-tyrosine complexes (in mol/liter) in the assay mixture, calculated as indicated above. The data used are the same as presented in Fig. 2. It is clear that the data show Michaelis-Menten behavior in the interaction of the enzyme with these complexes. This is not the case if the concentration of only one of the two complexes is plotted on the x axis, indicating that the true substrates for the reaction include both the D-tyrosine-Mn2+ and the D-tyrosine-Mn2+-D-tyrosine complexes. The Km and Vmax values of the enzyme for the two complexes, however, must differ, because the residual charge on the Mn2+ moiety in the D-tyrosine-Mn2+ complex will be higher and, consequently, its interaction with the Asp-86 residue on the enzyme is expected to be stronger than in the case of the D-tyrosine-Mn2+-D-tyrosine complex. However, because we are dealing with a dynamic equilibrium, it is likely that, once the D-tyrosine-Mn2+-D-tyrosine complex enters and binds to the substrate pocket, the Asp-86 residue of the enzyme may bind to the Mn2+ moiety, exchanging with the outer D-tyrosine residue and replacing it. Thus, this complex may also behave effectively like the D-tyrosine-Mn2+ complex. The less than perfect fit to Michaelis-Menten behavior is to be expected considering the approximations made in using the values of the association constants for the formation of the complexes. The Km of the enzyme for these Mn2+-D-tyrosine complexes appears to be in the range of 0.75-0.85 mM, whereas the Vmax is in the range of 750-850 nmol/min/mg.
Our studies on dopamine sulfation by M-form PST, as discussed above, also indicated that Mn2+ may bind to the enzyme, and it is likely that the inhibition observed at higher (mM) levels of Mn2+ and tyrosine was due to their interference with the binding of the Mn2+-D-tyrosine complexes.
It is likely that similar calculations could be done to show that D-Dopa-Mn2+ and D-Dopa-Mn2+-D-Dopa complexes are also responsible for the Mn2+ stimulation of the D-Dopa-sulfating activity of M-form PST. However, no values of the association constants for the relevant complexes could be found in the literature, and the analysis with D-Dopa is also complicated by the fact that Dopa has substantial activity by itself. Similar scenarios can be postulated for the interaction with the L-isomers of tyrosine and Dopa.
The involvement of a Mn2+-PAPS complex as an obligatory or additional substrate appeared unlikely, since saturation with PAPS was ensured in these experiments. Increasing the PAPS concentration (which was 15 µM in the standard assay) 10-fold at several Mn2+ concentrations did not result in any increase in reaction velocity. Moreover, our results clearly showed that the stimulatory effect of Mn2+ on the sulfation activity of M-form PST is a function of the particular acceptor substrate. Another argument against a Mn2+-PAPS complex is the fact that no Mn2+ requirement or stimulation has been reported for the activity of any of the other cytosolic STs (at least with their commonly used substrates), all of which use PAPS as a co-substrate. Moreover, a comprehensive study performed in our laboratory on the effect of a variety of divalent metal ions on the activity of 10 known human cytosolic STs toward their commonly used substrates did not reveal any universal metal ion requirement or stimulation. 2 Stereospecificity of M-form PST and Its Relative Activity toward Dopa and Tyrosine-In this study, we found that in the presence of 5 mM Mn2+, the [S]0.5 for D-tyrosine was around 1.9 mM and the Vmax was around 750 nmol/min/mg, whereas for D-Dopa the [S]0.5 was around 100 µM and the Vmax was around 1200 nmol/min/mg. Part of the difference may be due to the relative values of the dissociation constants for the two substrate-Mn2+ complexes. Additionally, D-tyrosine can only be sulfated at the 4-OH group, whereas for D-Dopa it has been demonstrated that sulfation occurs exclusively at the 3-OH group (37). The much greater affinity for D-Dopa relative to D-tyrosine can probably be explained by QSAR (quantitative structure-activity relationship) analysis (23).
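To put the two substrate comparisons above on one scale, dividing each Vmax by its [S]0.5 gives a rough apparent catalytic efficiency. This simple ratio is our illustration of the arithmetic, not a calculation from the paper:

```python
# Values quoted above (5 mM Mn2+): [S]0.5 and Vmax (nmol/min/mg).
eff_d_tyr = 750.0 / 1.9     # D-tyrosine: [S]0.5 ~1.9 mM -> ~395 per mM
eff_d_dopa = 1200.0 / 0.1   # D-Dopa: [S]0.5 ~0.1 mM (100 uM) -> 12000 per mM
print(round(eff_d_dopa / eff_d_tyr))  # -> 30, ~30-fold higher for D-Dopa
```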
The Mn2+-stimulated activity of M-form PST with, and its affinity for, the L-enantiomers of Dopa and tyrosine was far lower than with the D-enantiomers, which could probably also be explained by similar arguments and modeling studies (23). In our previous studies we had shown that the stereospecificity of M-form PST for the D-enantiomers of Dopa and tyrosine was mediated primarily by residue Glu-146 at the active site, which probably binds the positively charged amino groups of these substrates. 2 Point mutations of selected residues among 84-89 that are part of the putative loop at the active site did not significantly affect this stereoselective behavior. However, a residues-84-90 deletional mutant of M-form PST showed no stereoselectivity in its sulfation activity toward tyrosine and Dopa. This suggested that although individual residues in the 84-92 loop may not be critical for the stereoselectivity, the presence of the loop as such is essential, probably as an additional steric selector. The residues-84-86 deletional mutant showed no activity toward any stereoisomer of either tyrosine or Dopa, possibly because the truncated loop may interfere with access of these substrates to the active site.
Physiological Relevance of the Stimulation of the Sulfating Activity of M-form PST by Mn2+-In this study, the maximum stimulatory effect of Mn2+ was observed at concentrations of around 5 mM. However, it was evident that significant stimulatory effects already occurred at levels as low as 0.1 mM. Mn2+ is an important element biologically and has been shown to be essential to the activity of a number of enzymes in a variety of organisms (12,38,39). For example, it is central to the function of superoxide dismutase, an enzyme that protects against oxidative damage in tissues (12,8). The Mn2+ concentrations in neuronal and brain tissue have been reported to be higher than in other tissues (12). Within the cell, Mn2+ may be preferentially sequestered in mitochondria and the endoplasmic reticulum (13). As stated previously, oxidative stress or damage, which has been implicated in the neuronal apoptosis that occurs in neurodegenerative diseases, generally results in mitochondrial dysfunction (14) that may lead to the release of Mn2+ into the cytosol. Elevated Mn2+ concentrations in the cytosol will stimulate M-form PST in its sulfating activity with dopamine and especially in its Dopa/tyrosine-sulfating activity. This may represent a detoxifying mechanism, as discussed later. One such neurodegenerative disease, parkinsonism, which is believed to arise from the destruction of dopaminergic neurons, greatly lowering brain dopamine levels (40), may involve mitochondrial dysfunction (41). If this indeed results in a rise in the cytosolic levels of Mn2+ in such cells, the activation of M-form PST may help to detoxify the dopamine and possibly other toxic substances (as discussed below) that could be released by such dying cells. In this connection, it may be pertinent to note that manganese poisoning (or manganism) is known to result in symptoms resembling Parkinson's disease (42,43).
One reason could be that the activation of M-form PST in dopaminergic cells by the excess Mn2+ results in misguided "detoxification" of dopamine in these cells and, consequently, parkinsonian symptoms.
Physiological Relevance of the Sulfation of D-Tyrosine by M-form PST-Our study has demonstrated M-form PST to be more active toward the D-enantiomers of tyrosine and Dopa. The stimulatory effect of Mn2+ was also much more dramatic with these D-enantiomers. Although the L-enantiomers of amino acids are used in protein synthesis, a small percentage of the amino acid pool is present in the D-form. D-Amino acids may be formed by spontaneous racemization in proteins with low turnover rates, such as human lens protein (44), and accumulate in aging tissues lacking D-amino acid oxidases (13). Attempts have been made to link the amounts of specific D-amino acids to oxidative damage and to neurodegenerative disorders such as Alzheimer's and Parkinson's diseases (14,15). Although a clear picture has yet to emerge, the removal of D-amino acids, which cannot participate in protein synthesis or in most metabolic reactions, may be viewed as a detoxification process. Incidentally, detoxification of D-amino acids through sulfation is likely to be less deleterious than by D-amino acid oxidase, which causes oxidative stress (45). From a different perspective, the Mn2+-stimulated activity of M-form PST toward D-Dopa and D-tyrosine may provide clues to the understanding of its stereoselective action on chiral drugs (18-21).

[FIG. 7 legend: Plots of velocity (v) versus the total concentration of the D-tyrosine-Mn2+ plus D-tyrosine-Mn2+-D-tyrosine complexes derived from the data presented in Fig. 2. The concentrations (mol/liter) of the complexes are calculated from the concentrations of total Mn2+ and total D-tyrosine in the assay mixture based on the procedure described under "Discussion." v, in units of nmol of product formed/min/mg of protein, is the velocity of the sulfation of D-tyrosine catalyzed by M-form PST. Each data point represents the mean value of three determinations (error bars are shown).]
Possibility of a Dual Role of M-form PST with Mn2+ Serving as a Molecular Switch-It is possible that M-form PST under normal circumstances acts on its physiological substrate, dopamine (for which the pH optimum is ~7.0 and the Km is ~2 μM) (22, 46), thereby regulating the levels of this endogenous metabolite. In the presence of elevated Mn2+ (possibly under conditions of oxidative stress, as discussed previously), the detoxifying action of M-form PST is activated. Mn2+ may complex with various substrates, with varying affinities, and these complexes may serve as substrates for M-form PST. Mn2+ may thus serve as a molecular switch that increases the substrate promiscuity of M-form PST. The affinity of the enzyme for these xenobiotic substrates will depend on the dissociation constant of the substrate-metal complex, the metal ion concentration, and the affinity of the enzyme for the complex. Incidentally, the different pH optimum (~8-9 for the Dopa/tyrosine-sulfating activity of M-form PST (8)) compared with the pH optimum of around 7 for dopamine sulfation may reflect the pH dependence of the formation of the substrate-metal complex and offer another level of regulation. Because sulfation is an energetically expensive process (it consumes PAPS, synthesis of one molecule of which requires the expenditure of three high-energy phosphate bonds of ATP (1)), this proposed dual function of M-form PST may make sense from the viewpoint of cellular economy.
In conclusion, our findings on the stimulatory effect of Mn2+ on the sulfation of D-Dopa and D-tyrosine by M-form PST through the formation of a substrate-Mn2+ complex represent the first report of a regulatory mechanism operating in the ST enzymes. It is possible that other xenobiotic substrates may also be acted on by this enzyme in a similar fashion, in concert with Mn2+ or other metal ions, although the affinities and concentrations involved may be quite different depending on the dissociation constant of the metal-substrate complex and other parameters. It would be interesting to see if other examples of such regulatory mechanisms, possibly involving other molecular signals, also operate among other members of this important family of enzymes.
"Chemistry"
] |
Addition of multiple rare SNPs to known common variants improves the association between disease and gene in the Genetic Analysis Workshop 17 data
The upcoming release of new whole-genome genotyping technologies will shed new light on whether there is an associative effect of previously immeasurable rare variants on incidence of disease. For Genetic Analysis Workshop 17, our team focused on a statistical method to detect associations between gene-based multiple rare variants and disease status. We added a combination of rare SNPs to a common variant shown to have an influence on disease status. This method provides us with an enhanced ability to detect the effect of these rare variants, which, modeled alone, would normally be undetectable. Adjusting for significant clinical parameters, several genes were found to have multiple rare variants that were significantly associated with disease outcome.
Background
Recent technological advances have made it possible to query the influence of genetic factors on the occurrence and severity of disease. Hundreds of published studies have reported associations between particular genes and various medical conditions. Newer advances in genotyping technology allow researchers to determine even more precisely which genetic base pair may mark the mutation responsible for a disease by looking at single-nucleotide polymorphisms (SNPs). SNPs are DNA sequence variations that occur when a single nucleotide (A, T, C, or G) in the genome is altered. Each individual has many SNPs that together create a unique human DNA pattern [1]. These base differences usually have a minor allele frequency (MAF) of 1% or more; SNPs with MAFs less than 1% are known as rare [2]. Previously, because of the popular common disease/common variant hypothesis, which assumes that common diseases are caused by common variants with small to modest effects [3], and because of the lack of technology to genotype rare variants accurately, most association studies focused on common variants. The near-complete 1000 Genomes Project will allow for more accurate genotyping of these so-called rare variants and, as a result, for consideration of rare variants as possible causes of disease [4].
Thinking has shifted toward a greater role for rare variants in disease susceptibility [5]. Although several common SNPs have shown significant associations with diseases, the effect sizes have generally been small, supporting the idea that some causal factor lies in previously undiscovered rare variants [5]. Several genetic diseases, such as schizophrenia and type 2 diabetes, have yielded only a few links in the form of common variants, and it is now thought that common variants could be picking up a diluted signal that is instead caused by neighboring rare variants [5]. Few statistical methods exist for analyzing the role of rare variants, and most have low power [3], so it is imperative to develop new methods to analyze these data. Because the Genetic Analysis Workshop 17 (GAW17) data set is dominated by rare variants (about 74%), the goal of this study is to investigate the potential for combinations of rare variants to strengthen the association between common variants and disease.
Methods
The GAW17 data set consists of 24,487 SNPs on 22 chromosomes for 697 unrelated individuals. Thirty percent of the individuals are known to be affected with the disease, and individual quantitative and binary disease traits, Age, Sex, and Smoking status were simulated 200 times. The underlying simulation model is presented by Almasy et al. [6]. We had no knowledge of the genes simulated to be associated with disease outcome when developing and testing our method.
We chose significant clinical parameters by fitting a multivariate logistic regression model with all the possible covariates (Age, Sex, Smoking status, and Ethnicity) and performing backwards selection. Significance was determined by calculating the 95% percentile intervals based on the 200 replicates and choosing only those covariates for which the percentile interval did not include 0.
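The percentile-interval selection rule can be sketched as follows. The coefficient draws below are hypothetical stand-ins for the 200 replicate estimates; the covariate names follow the paper, but the effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficient estimates for each covariate over 200 replicates.
# Age and Smoking are given a real effect here; Sex and Ethnicity are null.
coefs = {
    "Age":       rng.normal(0.03, 0.005, 200),
    "Sex":       rng.normal(0.00, 0.10, 200),
    "Smoking":   rng.normal(0.50, 0.10, 200),
    "Ethnicity": rng.normal(0.00, 0.15, 200),
}

def keep_covariate(estimates, level=0.95):
    """Keep a covariate if its 95% percentile interval excludes 0."""
    lo, hi = np.percentile(estimates, [100 * (1 - level) / 2,
                                       100 * (1 + level) / 2])
    return not (lo <= 0.0 <= hi)

selected = [name for name, est in coefs.items() if keep_covariate(est)]
```

With these simulated draws, only the covariates given a real effect survive the rule, mirroring the selection of Age and Smoking status reported in the Results.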
We first tested for Hardy-Weinberg equilibrium (HWE) in both the affected and unaffected populations over all 200 phenotype replicates [7]. A significance threshold of 10^-5 was used to correct for multiple testing, in light of the fact that many of the SNPs are correlated. SNPs that failed the HWE test in both subpopulations in at least 95% of the replicates were eliminated from further analysis because they were thought to be prone to genotyping errors.
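The HWE filter can be illustrated with the standard one-degree-of-freedom chi-square goodness-of-fit test. The paper cites [7] for its test, so the exact procedure used there may differ; the genotype counts below are invented:

```python
import numpy as np
from scipy.stats import chi2

def hwe_pvalue(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium.

    n_aa, n_ab, n_bb: observed counts of the three genotypes at one SNP.
    Assumes both alleles are observed at least once.
    """
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)            # frequency of allele A
    q = 1.0 - p
    expected = np.array([n * p * p, 2 * n * p * q, n * q * q])
    observed = np.array([n_aa, n_ab, n_bb], dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    # df = 1: three genotype classes, minus 1, minus 1 estimated allele frequency
    return chi2.sf(stat, df=1)

p_in_hwe = hwe_pvalue(360, 480, 160)   # counts exactly at HWE proportions
p_off_hwe = hwe_pvalue(500, 0, 500)    # no heterozygotes: strong departure
```

A SNP would be flagged when such a p-value falls below the 10^-5 threshold in both subpopulations.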
Because the frequency of each of the rare variants in this data set is so low (40% of the SNPs have only a single copy of the minor allele among the 697 observations), attempting to model the relationship between each rare SNP and the disease outcome is not feasible. Even combining all the rare SNPs within a gene would not suffice, because few genes have a large number of rare SNPs (Table 1); under these conditions the models would fail to converge for many of the phenotype replicates. We therefore decided to test combinations of multiple rare variants with one common variant in a gene. Our interest lies in identifying groups of rare SNPs that, when added to the common SNP, predict the disease better than the common SNP alone.
For each gene g, we consider common SNPs c_j (j = 1, …, n_g^c) and rare SNPs r_s (s = 1, …, n_g^r), where n_g^c and n_g^r are the numbers of common and rare SNPs in gene g, respectively. For all SNPs, we assume a dominant model in which a SNP is coded 1 when a minor allele is present and 0 otherwise. Because of the low frequency of rare SNPs, we reasoned that the dominant model would provide the best power.
For individual i, i = 1, …, 697, we define disease status as:

Y_i = 1 if individual i is affected, and Y_i = 0 otherwise. (1)

For each c_j on gene g, we fit the following multivariate logistic regression model on phenotype replicate k (k = 1, …, 200):

logit(P) = b_{0,k} + b_{1,k} 1_{c_j>0} + b_{2,k} Age + b_{3,k} Smoking status, (2)

where P = P(Y = 1) and 1_{c_j>0} is a binary indicator variable representing the presence of the minor allele at common SNP c_j. The 200 coefficients b_{1,1}, …, b_{1,200} are recorded.
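One replicate's model fit can be sketched with a small Newton-Raphson logistic regression on simulated stand-in data. The sample size matches the paper, but the allele frequency, covariate distributions, and effect sizes are invented:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Newton-Raphson (IRLS) fit of a logistic regression; returns coefficients.

    X: (n, p) design matrix including an intercept column; y: 0/1 outcomes.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))     # fitted probabilities
        W = mu * (1.0 - mu)                        # IRLS weights
        # Newton step: beta += (X' W X)^-1 X'(y - mu)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(1)
n = 697                                            # cohort size as in GAW17
age = rng.normal(50, 10, n)
smoke = rng.integers(0, 2, n).astype(float)
snp = (rng.random(n) < 0.3).astype(float)          # dominant coding: minor allele present
logit = -4.0 + 0.9 * snp + 0.04 * age + 0.5 * smoke
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), snp, age, smoke])
b = fit_logistic(X, y)                             # b[1] plays the role of b_{1,k}
```

Repeating this fit over the 200 phenotype replicates yields the series b_{1,1}, …, b_{1,200} described in the text.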
We create a new indicator variable z to measure the presence of rare variants within a gene:

z_i = 1 if individual i carries a minor allele at the common SNP c_j or at any of the rare SNPs in gene g, and z_i = 0 otherwise. (3)

Narrowing the search to only those common variants that show reproducibility over the 200 replicates at the 0.1 significance level (which implies more stable coefficient estimates), we then fit a new multivariate logistic regression model with this binary indicator of the presence or absence of any minor allele within the gene:

logit(P) = g_{0,k} + g_{1,k} z + g_{2,k} Age + g_{3,k} Smoking status. (4)
We use this binary approach to increase the power to detect an association despite the low frequency of the minor alleles. We then compare the 200 coefficients g_{1,1}, …, g_{1,200} with b_{1,1}, …, b_{1,200} by means of a one-sided paired t test, to ascertain whether there is a consistent departure from the null hypothesis H_0: E(g_1 - b_1) = 0 in favor of H_1: E(g_1 - b_1) > 0. If p < 0.05, then adding the rare variants to the common variant significantly increases the signal of the effect of the gene on disease, so these rare variants must be associated with the disease. If no association is found, we remove one rare SNP from the definition of Eq. (3) and recalculate the coefficients from Eq. (4) as before. This step determines whether an association went undetected because of noise introduced by adding too many rare variants. The procedure can be generalized as an iterative process, removing one rare SNP at a time until only a single rare SNP remains.
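The comparison of the two coefficient series can be sketched as follows. The coefficient draws are hypothetical; since scipy's paired t test is two-sided by default, the one-sided p-value is derived from it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical coefficients over the 200 phenotype replicates:
# b: common SNP alone; g: common SNP plus the rare-variant indicator.
b = rng.normal(0.60, 0.15, 200)
g = b + rng.normal(0.20, 0.10, 200)   # adding rare variants boosts the signal

# One-sided paired t test of H0: E(g - b) = 0 versus H1: E(g - b) > 0
t_stat, p_two = stats.ttest_rel(g, b)
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
```

With a mean boost of 0.2 across 200 replicates, the one-sided p-value falls well below the 0.001 threshold used in the Results.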
Results
The initial set of 24,487 SNPs was reduced to 24,211 because 276 SNPs failed the HWE assumption. Ethnicity was categorized into three dummy variables representing individuals of African, Asian, and European descent. The covariates Age and Smoking status were established as the only clinical parameters for this data set (Figure 1). Any associations between SNPs and disease status were adjusted for these two covariates.
After HWE elimination, we were left with 3,167 genes over the 22 chromosomes. Of these 3,167 genes, 1,718 had no common variants or had fewer than two rare variants and so were excluded, because genes with both a common variant and multiple rare variants are our primary interest. To limit the noise introduced by combining many rare SNPs, we further restricted the analysis to genes containing fewer than 16 rare SNPs. This left 829 genes to explore.
Our results show that adding multiple rare variants to common SNPs already associated with disease at the 0.1 significance level can greatly improve the ability to detect causes of disease (Table 2). We calculated p-values from a one-sided paired t test to compare the 200 b coefficients to the 200 g coefficients and used a p-value of 0.001 to determine significance [8]. For several of the genes, the signal of association became even stronger with the removal of one or two rare SNPs (Figure 2). In some cases, we discovered that larger combinations of rare SNPs were actually more significant, indicating that an optimal combination of rare and common SNPs could be found with this method (Table 2).
When one or two rare SNPs were removed from the definition of Eq. (3), some genes that had not been identified by our first pass displayed an increased effect on disease ( Table 2, last three rows). This suggests that adding the combination of rare SNPs to a common SNP adds information to the model and helps to better explain the relationship between gene and disease.
Discussion and conclusions
By taking advantage of all 200 phenotype replicates, we were able to simulate a posterior distribution for the underlying true relationship between genes and disease status, thereby inherently validating our method. When working with real data, investigators will not have replications with which to calculate p-values. Instead, the sample randomization technique outlined by Guo et al. [9] can be applied. This method has the following steps: (1) calculate the coefficient for each common variant in each gene from a logistic regression model; (2) shuffle the common SNPs across the genome to generate a permuted data set; (3) calculate the coefficient from a logistic regression between common variant and disease; (4) repeat steps 2 and 3 1,000 times to obtain a null distribution of coefficients; and (5) determine which common variants are significant (at α = 0.1) by calculating the percentage of coefficients from the null distribution that are greater than the observed coefficient; this percentage is the p-value. Finally, adding all the rare SNPs to the common variant, we repeat steps 1-5 to determine which rare SNPs significantly improve the association over the common variant alone.
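The permutation steps can be sketched as follows; for brevity, a simple difference in disease rates between carriers and non-carriers stands in for the logistic regression coefficient of steps 1 and 3, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

def assoc_score(snp, y):
    """Stand-in for the per-SNP logistic coefficient of steps 1 and 3.

    A full analysis would refit the logistic model; here we use the
    difference in disease rate between carriers and non-carriers.
    """
    return y[snp == 1].mean() - y[snp == 0].mean()

n = 697
y = (rng.random(n) < 0.3).astype(float)       # ~30% affected, as in GAW17
snp = (rng.random(n) < 0.4).astype(float)     # observed common SNP (dominant coding)
observed = assoc_score(snp, y)

# Steps 2-4: permute the SNP labels 1,000 times to build a null distribution.
null = np.array([assoc_score(rng.permutation(snp), y) for _ in range(1000)])

# Step 5: p-value = share of null scores at least as large as the observed one.
p_value = (null >= observed).mean()
```

The same permutation scheme is then rerun with the rare-variant indicator added to obtain the second-stage p-values.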
Although our study focused on a binary disease outcome, our method can also be applied to continuous or time-to-event outcomes. The dominant model assumption for the SNPs could also be replaced with additive or recessive models. Our method improves on the collapsing method introduced by Li and Leal [3] by modelling the common variant separately from the collapsed rare variants. In the interest of time and computational resources, we limited our analysis to genes with fewer than 16 rare SNPs; important associations may occur in genes with more. In the future it may also be of interest to consider synonymous and nonsynonymous SNPs separately, or to include rare SNPs that fall just outside a gene in a larger genomic region in, say, a pathway-based analysis. (Table 2 note: removing one or two rare SNPs can make this association even stronger; NS, not significant at the 0.001 significance level.) Our analysis was stopped before considering the removal of more than two rare SNPs from the combination of all rare SNPs in one gene; a more exhaustive search could uncover new relationships. Our intention was to conduct a proof-of-principle analysis to exhibit the merits of this method in finding rare SNPs associated with disease.
After the GAW17 conference, we compared the performance of our method with the simulated answers. For the correctly identified gene PTK2B, removal of simulated SNP C8S900 actually improved the disease association; this could be a result of high correlation with the other simulated SNPs for that gene. Table 3 shows that our method detected a large number of false positives and yielded a sensitivity of only 8.3%. However, our method had quite a high specificity of 98.5%. It must be noted that underlying correlation could create hidden relationships not specified in the simulated model.
"Biology",
"Medicine"
] |
A Review: Modelling Approach and Numerical Analysis of Additive Manufacturing
Additive manufacturing creates three-dimensional objects by depositing materials layer by layer. Different applications of additive manufacturing were examined to determine future growth possibilities. The current research seeks to identify existing additive manufacturing techniques based on their process mechanisms, evaluate modelling approaches based on modelling methodologies, and identify required studies. A significant number of numerical simulations are conducted to evaluate the thermal FE structure in terms of the thermo-physical properties of solid and powder materials and admissible boundary conditions. The transient heat conduction is investigated using thermal analysis with a moving heat source.
Introduction
Nowadays, industrial 3D printing is a slow and costly trial-and-error process. Additive manufacturing, on the other hand, is a group of manufacturing technologies that use a solid digital model to produce an item additively. The interaction between solid models and additive manufacturing, as well as material deposition, is entirely computer-controlled. The method, which is based on CAD models, can manufacture reliable yet complicated parts. Moreover, it pushes towards a tool-less manufacturing environment, which in many circumstances means better quality and efficiency. The terms 3D printing, freeform fabrication, and additive fabrication are all used to refer to additive manufacturing. Low-volume components with complicated forms and geometric characteristics have traditionally been produced using traditional additive manufacturing methods. However, SLM technology allows the creation of geometries with complex features that are impossible to achieve using conventional production processes such as casting, powder metallurgy, forging, and extrusion.
Additive Manufacturing
Additive manufacturing (AM) is defined as the process of joining materials to make objects from three-dimensional model data, usually layer upon layer, as opposed to subtractive manufacturing methodologies, i.e. traditional machining [1]. AM manufactures components by layer-by-layer deposition, often using a laser. Using computer-aided design (CAD), additive manufacturing has the freedom to create objects with complex geometric shapes. Each process differs depending on the material and machine technology used. Many additive manufacturing processes are available; they differ in the manner in which layers are deposited to form a component, in the working principle, and in the materials used. The additive manufacturing method emerged as the primary tool in rapid prototyping. Fused deposition modelling (FDM) uses a continuous filament of material [2]. For Inconel 718 processed by the DED method, unique mechanical properties and energy storage capacity are improved, with applications in nuclear energy and oil and gas. For 316L stainless steel processed by powder bed fusion, the improved properties include better radiation tolerance and lower IASCC susceptibility, again for nuclear energy and oil and gas applications [3]. Because materials for AM systems are limited, research and development are continuing to broaden the materials and the application of present metal AM processes to a wider spectrum of materials. Polymers, ceramics, and metals are among the materials that can be used in AM technology [4]. Among these, metallic materials are attracting increasing interest from researchers and companies.
Applications
• Aerospace - Laser sintering meets commercial and military aircraft demands for air ducts, fittings, and mountings that carry special aeronautical equipment.
• Manufacturing - Laser sintering is a cost-effective way to serve specialized markets with low volumes. As economies of scale do not affect laser sintering, manufacturers may focus on batch size optimization instead.
• Medical - Medical devices are high-value, complicated goods that must precisely fulfil the needs of their users. These criteria must be met not only because of the operator's personal preferences but also because of regionally differing legal requirements or conventions. As a result, there are many variants, each produced in small quantities.
• Prototyping - Laser sintering allows the creation of both design and functional prototypes. As a result, functional testing may begin quickly and with greater flexibility. Simultaneously, these prototypes may be used to assess consumer acceptance.
• Tooling - This approach removes the need for tool-path generation as well as various machining techniques such as EDM. Tool inserts can be made overnight or in a matter of hours. Furthermore, design flexibility may be used to improve tool performance, for example by including conformal cooling channels within the tool.
Modelling approach and Numerical analysis
A model is an abstract description of a process that establishes a relationship between input and output values. Models that seek to predict the performance of the real system simulate it. Modelling involves creating 3D model data. The methods used for modelling and numerical analysis are:
Process modelling
Most technology process models fall into one of three categories: white-box (or simulation) models, which are built using physical relations and engineering expertise to describe the process at the required level; black-box models, which describe a process using data or knowledge collected from experiments; and grey-box models, which, while still reasonably easy to solve, include more detail than an equivalent black-box model [5]. The interaction between the laser beam and the powder material is perhaps the most commonly modelled element in laser-based powder bed technology. Models at the powder scale are frequently used to determine the required laser input energy and can also be used to explore specific phenomena such as melt pool temperature histories or microstructures [5]. Figure 2 shows an overview of the approach. There are two primary additive manufacturing technologies: powder bed and directed energy deposition. In directed energy technology, whether powder fed or wire fed, material deposition is confined and happens concurrently with laser heat deposition. Computational fluid dynamics (CFD) is used to solve non-linear partial differential equations for macroscale simulation of the solidification process [6]. In directed energy deposition, a continuum thermo-mechanical modelling tool is used to simulate the melt pool shape. The melt pool model evaluates the stability of the process based on the powder layer thickness, scanning velocity, and the optical and thermal characteristics of the material. The methods used are coupled radiation, heat transfer, consolidation kinetics, and the conventional radiative transfer equation [7]. Melt pool geometry is simulated using the finite element method, the discrete element method, and fluid flow simulation coupled with 3-D transient heat transfer. During processing, finite-element analysis (FEA) software is used to solve the coupled governing equations for solid deformation and heat transfer [6].
The proposed integrated process-structure-properties-performance modelling and simulation methodology is presented in Fig. 3 [6]. The temperature distribution in a single powder process is defined using a finite element-based heat transfer model, and the geometrical characteristics of a single-track configuration have been quantitatively studied [8]. The resulting simulation model determines how process factors affect geometric features. Investigators have recently been drawn to wire-arc additive manufacturing (WAAM) processes as a way to build metal parts at a higher deposition rate. A framework for component certification based on international standards has been developed using a computationally efficient mathematical modelling method for the metal-AM process [9]. Manufacturing companies have embraced directed energy deposition technologies because of their capacity to build significantly bigger mechanical and structural components directly from a CAD model. Process planning comprises generating the 3D CAD model, slicing it to acquire a set of 2D geometries, producing a tool path for each of these 2D geometries, selecting deposition parameters for every layer, and selecting welding parameters such as travel speed, voltage, feed rate, and so on [10]. Finally, using the WAAM process with the supplied parameters and tool path, the product can be deposited. To build 2D geometries from a 3D model, there are two methods: unidirectional slicing and multidirectional slicing. Owing to their ability to produce large and moderately complex geometric components, WAAM methods are gaining ground in industry; the WAAM process is highly recommended for the manufacture of components of low and medium geometric complexity. Numerical modelling methods fall into two types: mesh-based approaches and mesh-free methods [9]. To avoid wasting time and money on hit-and-trial testing, the use of FEM has been extended to models of WAAM processes and their parametric optimization.
At the macro-scale level, WAAM processes are modelled with a multi-physics continuum approach applicable on an industrial scale, i.e., a thermo-mechanical model used to estimate residual stress and distortion in WAAM-produced components. Numerical modelling aids the understanding of the physics involved in the WAAM process, allowing it to be improved and adapted in the field. The thermo-mechanical model comprises two sub-models, thermal and mechanical; based on the interconnection between the two, it is further characterized as coupled, weakly coupled, or uncoupled [9].
Microstructure evolution modelling
Quantitative predictions of additively manufactured microstructures, and therefore of their performance and properties, will necessitate collaborative work in solidification modelling at many length and time scales. For modelling microstructure evolution, the Potts kinetic Monte Carlo technique is used in the simulation of melt pool geometry. EBSD data and the open-source tool DREAM3D are used to generate the microstructure representation [6].
Performance modelling
Physically based macro-scale continuum models are created to record accurate physical quantities and to track the evolution of metallic materials. A variety of physical factors may be at issue, any of which could contribute to this variation in behaviour. Finite-element analysis (FEA) software, such as Diablo and ABAQUS, is used to solve the governing equations that represent the physical mechanisms in performance modelling [6].
Topology optimization
Different tools are used for topology optimization. A recently used tool for the detection of shapes is the PLATO tool. However, the development of optimization software has focused chiefly on geometric optimization, which can be simulated in ANSYS. In the future, the software will include numerous constraints on processing parameters so as to achieve optimal material performance as well as an optimized topology [6]. The optimization of rapid prototyping is estimated using various approaches, such as virtual prototyping and virtual reality, virtual simulation, virtual fabrication, and modelling, simulation, and evaluation modules. The figure shows the virtual system representation of rapid prototyping. In addition, process parameters such as nuisance, constant, and control parameters, hatch space, orientation, layer thickness, overcure depth, build time, and hatch style are considered [11].
Multiphysical modelling
For simulation of the selective laser sintering process, as shown in Fig. 3, for a single layer of particles, a multiphysical modelling technique is used. Mechanical and thermal particle-to-wall and particle-to-particle interactions are studied using a discrete element method with the Beer-Lambert law. In a discrete element model, the powder particles to be sintered are represented by individual spheres of various sizes. The surface-tracking approach used in this class of CFD models allows explicit depiction of individual powder particles, which is particularly useful for the L-PBF process. In this case, unlike in earlier models, the material properties of the powder layer are similar to those of the bulk material. Some CFD simulations apply the lattice Boltzmann method (LBM). ALE3D (an arbitrary Lagrangian-Eulerian code) has been used to create an FEM-based CFD model for the L-PBF process of 316L stainless steel [13]. A CFD model based on the finite volume method (FVM) has also been developed for the L-PBF processing of IN718 and aluminium. The influence of linear energy density on the creation of porosity in IN718 during L-PBF is studied using the FVM model, with the Marangoni effect and recoil also implemented [13]. There is also a subset of multiphysics models that do not use a surface-tracking method and instead use a Lagrangian description of the melt pool surface layer. These models can forecast the creation of porosity and trace the metallurgical evolution throughout the process. Recent multi-physics models, on the other hand, still have a number of drawbacks.
Multi-scale modelling approach
In the L-PBF process, build-up models, single laser track models, multi-scale approaches, and single layer models are used to study the various process parameters, melt pool dimensions, temperatures through layer solidification, and applied heat input load parameters [14]. In addition, the approach is used to calculate thermally induced distortions. To accomplish quick distortion prediction for MAM parts, a multi-scale simulation framework was created, comprising three models at different scales. First, a micro-scale heat source model is used to calibrate the heat input. The heat input is then used to determine the intrinsic stresses using a meso-scale hatching model. The inherent strains are then applied in a macro-scale layer model to anticipate the part's distortion. Multi-scaling methods include AG (agglomerated heat source), MS (multi-step method), FH and SFH (flash heating and sequential flash heating), and AMR (adaptive mesh refinement) [13]. Flash heating (FH) is a multi-scaling approach for parts that is based on the layer-lumping principle: a part is split into a particular number of chunks along the build direction. Each chunk is a collection of real layers that have been lumped together, referred to as a meta-layer [13]. The meta-layer size is a model input variable that can be changed.
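The layer-lumping input behind flash heating can be sketched as a simple grouping of real layers into meta-layers; the layer count and meta-layer size below are arbitrary illustrative values:

```python
def make_meta_layers(n_layers, meta_size):
    """Group real build layers into meta-layers for a flash-heating analysis.

    Returns a list of (first_layer, last_layer) index ranges, inclusive;
    the last meta-layer absorbs any remainder.
    """
    return [(i, min(i + meta_size - 1, n_layers - 1))
            for i in range(0, n_layers, meta_size)]

# 100 real layers lumped 8 at a time -> 13 meta-layers, the last one smaller
meta = make_meta_layers(n_layers=100, meta_size=8)
```

Varying `meta_size` trades accuracy against computation time, which is exactly the role of the meta-layer size input variable mentioned above.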
Hole-drilling method
In the L-PBF process, the integral method determines the residual stresses through the sample thickness. The criteria used to choose the dimensions and geometries of the specimens are: traditional and easily measurable geometry; closed and open sections; thin-walled sections; flat surfaces; alignment along the normal or the building direction; and surfaces big enough to accommodate the strain gauges without suffering from edge effects induced by sharp edges [15]. MATLAB is then used to estimate the residual stresses from the measurements. The evolution of the specimen's stress profile is shown as a function of specimen geometry, in terms of open/closed sections and orientations.
X-ray (EDX) mapping
The SLM technique involves scanning a powder bed with a laser beam to manufacture a part layer by layer. The microstructure and nano-mechanical properties of the material are studied, and changes in scan speed are found to influence the development of fusion lines and single tracks. Energy-dispersive X-ray (EDX) mapping is used to compare the chemical composition distribution of the SLM material [16]. To describe the mechanical characteristics of SLM-processed materials and assess the impact of process-induced defects, a heat treatment method is used. The material is processed on an SLM machine with AutoFab software. With increasing laser scan speed, the diameters of SLM-formed lines and tracks decreased linearly. Furthermore, abnormalities were identified at high scan rates [16].
CFD simulation
High-speed imaging and CFD simulation are used in a systematic parametric study to investigate the influence of laser scanning speed and powder layer thickness on porosity development, and to correlate porosity development with the top sample surface and with melt pool and flow behaviour. High-speed imaging and computational fluid dynamics (CFD) calculations were used to investigate the interaction between the powder particles and the laser beam. Using the open-source C++ CFD toolbox OpenFOAM (Open Field Operation and Manipulation), a simulation of the interaction between the laser heat source and the powder material is carried out [17].
Finite element modelling
Statistical analysis and machine learning both require a large amount of data. Machine learning methods for process parameters include artificial neural networks, genetic algorithms, ensembles, and support vector regression. Statistical analysis methods include Taguchi, ANOVA and regression modelling, and two-level factorial design of experiments in Minitab software [18]. The mechanical performance of components manufactured by selective laser melting is essential. The yield stress is the key characteristic and the main factor for an SLM-manufactured component. Processing parameters of an SLM process such as laser power, scanning speed, and hatch space can be investigated [18].
During the additive manufacturing (AM) of metal parts, part distortion is a major concern, and finite element modelling is used for powder bed fusion manufacturing. The inherent strain technique is a quick and accurate approach for predicting residual stresses and deformation. The method's origins can be traced back to computational welding mechanics, and it has been widely adopted. It comprises an FE quasi-static analysis in which user-defined inherent strains drive the deformation [19]. To simulate the deformation of a twin-cantilever beam with different scanning strategies, a simple 3D mechanical model is created in the commercial software ABAQUS. The modelling methodology, namely layer lumping, allows a rather coarse FE mesh to be used. Two classes of methods are applied for the deformation calculation: reduced-order methods and empirical methods [19]. An empirical methodology can be specified for determining characteristic inherent strains for a specific scanning strategy; these are then used as input to a linear-elastic analysis to derive the distortion and residual stress fields generated by LPBF processes.
Mathematical modelling
In the RAM process, the deposition principle and the establishment of a temperature-field equation are required for the numerical simulation. The spatial accumulation distribution of the mass flow is analogous to a Gaussian distribution [20]. By adding an external mass-supply term to the mass conservation equation, the mass growth process can be modelled. The temperature field of the three-dimensional model is generated from the Fourier law of heat conduction and the principle of conservation of energy, and the governing equation of the heat transfer problem is established. A single-channel melting-layer model is built and meshed in ANSYS. The temperature change of the molten layer under various currents is studied using the RAM simulation model, and the operating parameter range of the molten wire is determined preliminarily [20]. The physical events that occur during the AM process must be simplified to make the process tractable for numerical simulation: the Newtonian fluid in the melt pool is laminar and incompressible; the powder size follows a sphere-shaped Gaussian distribution; the flow at the solid-liquid interface (the mushy zone) is treated as a porous medium with isotropic permeability; and a Boussinesq approximation is used for the buoyancy term to evaluate the density variation of the molten pool in the momentum equation. The model covers: (a) Gaussian packing, (b) laser source and laser absorptivity, (c) governing equations, (d) boundary conditions, and (e) material properties and the numerical simulation process.
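The Gaussian distribution assumed for the energy (and mass) input can be sketched as a surface heat-flux profile. The power, absorptivity, and beam radius below are illustrative placeholders, not values from [20]:

```python
import numpy as np

# Hypothetical beam parameters (illustrative only)
P = 200.0      # laser power, W
eta = 0.4      # absorptivity
r0 = 50e-6     # effective beam radius, m

def gaussian_flux(x, y):
    """Surface heat flux of a Gaussian beam centred at the origin:
    q(x, y) = 2*eta*P/(pi*r0^2) * exp(-2*(x^2 + y^2)/r0^2)."""
    return 2.0 * eta * P / (np.pi * r0**2) * np.exp(-2.0 * (x**2 + y**2) / r0**2)

# Sanity check: integrating the flux over the plane should recover eta*P
r = np.linspace(0.0, 5.0 * r0, 2000)
q_r = gaussian_flux(r, 0.0)
integrand = 2.0 * np.pi * r * q_r          # radial (polar) integration
dr = r[1] - r[0]
absorbed = float(np.sum(integrand[:-1] + integrand[1:]) * 0.5 * dr)  # trapezoid rule
print(f"peak flux: {gaussian_flux(0.0, 0.0):.3e} W/m^2")
print(f"integrated power: {absorbed:.2f} W (target {eta * P:.2f} W)")
```

Recovering the absorbed power eta*P from the integrated flux is a useful check when implementing such a source term in a solver.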
Heat transfer analysis
In the numerical modelling of heat transfer in powder-bed AM operations, an FE framework is utilized. To handle the sintering process that converts the metal powder into new solid material, the formulation is reinforced with an appropriate FE activation approach. A good balance of computational effort and accuracy is achieved with simplified hatch-by-hatch patterns. The numerical model accounts for power input and absorption; heat dissipation across boundaries via conduction, convection, and radiation; and the temperature dependence of material properties.
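As a hedged one-dimensional sketch of the transient conduction such an FE framework resolves (omitting element activation, convection, and radiation), an explicit finite-difference march with assumed material constants looks like:

```python
import numpy as np

# Hypothetical material data (order-of-magnitude values for a metal)
k, rho, cp = 20.0, 7800.0, 500.0        # conductivity W/(m K), density kg/m^3, specific heat J/(kg K)
alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s

n, L = 51, 0.01                         # nodes, domain length (m)
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha                # explicit stability: dt <= dx^2 / (2*alpha)

T = np.full(n, 300.0)                   # initial temperature, K
T[0] = 1800.0                           # face held at a melt-like temperature

for _ in range(2000):
    # explicit update: T_i += alpha*dt/dx^2 * (T_{i+1} - 2*T_i + T_{i-1})
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0] = 1800.0                       # Dirichlet condition at the heated face
    T[-1] = T[-2]                       # insulated far end

print(f"temperature 1 mm from the heated face: {T[5]:.1f} K")
```

A real powder-bed model replaces the fixed-temperature face with a moving laser flux and activates new layers of elements as material is deposited, but the diffusion update is the same.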
Thermo-mechanical model
The SLM manufacturing process is numerically analysed using a 3D thermo-mechanical modelling method based on weak coupling. During the SLM process, this method allows assessment of both the mechanical stress distribution and the transient temperature.
• Heat transfer modelling: The first law of thermodynamics is used to develop the differential equation controlling transient heat conduction inside a continuous medium of arbitrary volume [22].
The heat equation gives the temperature as a function of time. The decoupled (weakly coupled) approach is commonly employed for AM process modelling since it requires less computational effort and time than the fully coupled method; however, the mechanical (structural) analysis stage then dominates the analysis time. Since the thermal gradient near the heat source is extremely strong, the mesh must be fine enough to capture the highly graded residual stress and distortion in the deposited layer and in the many layers beneath it, i.e. in the heat-affected zone (HAZ) [23]. As a result, when a fine mesh is used throughout the model, the computational time increases. To reduce the computational time of DMD simulations, an FE-based mesh-coarsening technique is developed for both the thermal and the mechanical analyses. Compared with a traditional analysis that uses only fine meshes with no coarsening, the computing time of the coarsening method is reduced by around a factor of three, with comparable results. The ABAQUS solution mapping technique is used for meshing. The adaptive meshing technique is applied in two steps, fine mapping and coarsening, which utilize the concept of inherent strain. By separating the model into three scales (micro, meso, and macro), a mechanical layer equivalent can be achieved [23]. With a realistic thermal heat model and thermal boundary conditions, a pure thermal analysis is performed on a very small structure connected to a substrate that replicates the lower layers of the real-scale part. Many AM factors that strongly affect the mechanical properties of the produced parts demand more inventive FE-based modelling frameworks. For the associated flow rule, the plastic strain increment is
∆εp = λ ∂f/∂σ
where λ is the plastic multiplier calculated through the consistency condition and f is the yield function. The total thermal strain is calculated as

ε_th = [α_T(T)(T − T_ref) − α_T(T_ini)(T_ini − T_ref)] I

where α_T is the volumetric thermal expansion coefficient evaluated at the indicated temperature, T is the current temperature, T_ini is the initial temperature, T_ref is the reference temperature for the thermal expansion coefficients, and I is the second-order identity tensor. Numerical simulation can efficiently determine the thermal evolution, molten pool shape, residual stress, and deformation. The model combines heat transfer and fluid dynamics to track the boundaries of melt pools.
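The reference-temperature form of the thermal strain can be evaluated directly. The temperature-dependent expansion coefficient below is a hypothetical linear fit, not a material datum:

```python
# Thermal strain in the reference-temperature form:
# eps_th = alpha_T(T)*(T - T_ref) - alpha_T(T_ini)*(T_ini - T_ref)

def alpha_T(T):
    """Hypothetical temperature-dependent volumetric expansion coefficient, 1/K."""
    return 1.2e-5 + 4.0e-9 * (T - 293.0)

def thermal_strain(T, T_ini=293.0, T_ref=293.0):
    """Scalar magnitude of the (isotropic) thermal strain tensor eps_th * I."""
    return alpha_T(T) * (T - T_ref) - alpha_T(T_ini) * (T_ini - T_ref)

eps = thermal_strain(800.0)
print(f"thermal strain at 800 K: {eps:.4e}")
```

Note that the strain vanishes identically when the current temperature equals the initial/reference temperature, as required.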
The precise calculation of nodal temperatures and of the distribution of residual stresses and deformation of components manufactured by AM, together with the large computational effort, are the main issues in AM modelling. Innovative solutions are proposed and applied to address these challenges in metal-alloy AM techniques, together with modelling methodologies that increase the efficiency and accuracy of residual stress and deformation evaluation in direct metal deposition [23]. As the deposition progresses and multiple layers are created, the thermal expansion of the layers under the heat source applies compressive loads to the lower layers, resulting in compressive plastic strains. Because of the nature of the operation, the induction of residual stress is unavoidable in AM procedures. To analyse material deposition onto the substrate and the generation of residual stresses and geometric distortions in AM parts, the aforementioned models often use simplifying assumptions or employ particular methodologies.
The modelling of the AM process may be separated into two stages: (1) a pure thermal (heat transfer) analysis to obtain the nodal temperatures of the FE model, and (2) a structural configuration to evaluate the mechanical behavior of the FE model under the applied nodal temperature gradients, finally yielding the deformation and residual stresses of the produced component [23]. When the thermal analysis is followed by a structural assessment at each increment, the approach is known as coupled thermomechanical analysis.
The heat transfer analysis is built on the principle of conservation of energy in the body. The residual stresses and distortions are estimated by applying boundary conditions to the mechanical FE model. Material properties should be treated as temperature dependent in both the thermal and the mechanical analyses to provide a more realistic representation of the process. The application of incremental material to the substrate necessitates dedicated numerical techniques; the best-known strategies are (a) silent element activation, (b) inactive element activation, and (c) hybrid element activation [23].
Thermal modelling
In the FE-based decoupled approach, the precision of the temperature history produced by the thermal analysis is critical for an accurate estimate of the residual stress distribution and deformation of an AM part. The thermal analysis of AM processes has been the subject of numerous studies, and various features and improvements of thermal-analysis modelling for AM processes have been reported. The heat flux model, which represents the thermal source, is an important part of the thermal analysis; the laser/electron-beam characteristics, such as power, speed, orientation, shape, and efficiency, must be included in the thermal-source model. The 3D super-Gaussian, 3D Gaussian, and 3D inverse-Gaussian beams are three common forms of volumetric heat-flux distribution [23]. Neglecting the detailed heat transfer phenomena within the melt pool is one of the simplifying assumptions in the thermal evaluation of the AM process, and it can lead to an overestimation of the nodal temperature history and, as a result, of the residual stresses. During an AM process, the shape and parameters of the melt pool have always been a critical factor in determining the nodal temperatures, as well as in the evaluation of deformation and residual stress [23].
Interactive CAD modelling
Generative design (GD) enables the production of 3D patterns that may be tailored to the CAD model using generative algorithms in parametric modelling. GD has spread to a variety of industries, including architecture, jewellery, and industrial design. GD's parametric modelling enables the automated production of any project piece based on parameters: certain algorithm-generated rule sets control the creation and modification of pieces within a project [24]. Elements are drawn automatically based on user-defined algorithms, and parameters inside the algorithm can be changed. Using generative algorithms, it is possible to solve difficulties that occur with traditional CAD systems. In addition to lightening the structure, generative algorithms are used to model non-structural aspects. Two algorithms have been implemented for the production of 3D patterns and Voronoi tessellations [24]: the first creates patterns on complex surfaces, while the second creates a Voronoi tessellation. These methodologies enable the modelling and modification of non-structural components, allowing an interactive aesthetic assessment of the generated geometries [24]. Only the structural parts are subjected to FEM analysis. To overcome non-convergence issues caused by large displacements during flexion and extension movements, explicit dynamic simulations are conducted [24].
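A minimal sketch of a Voronoi tessellation, rasterised here by nearest-site labelling on a 2D grid rather than via a CAD kernel (the site count and grid resolution are arbitrary choices, not values from [24]):

```python
import numpy as np

rng = np.random.default_rng(0)
seeds = rng.random((12, 2))              # 12 random Voronoi sites in the unit square

# Label each grid point with its nearest seed -> a discrete Voronoi tessellation
n = 100
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - seeds[None, :, :], axis=2)  # (n*n, 12) distances
labels = d.argmin(axis=1).reshape(n, n)

cells = np.unique(labels).size
print(f"{cells} Voronoi cells rasterised on a {n}x{n} grid")
```

In a generative-design workflow the cell boundaries would then be extruded or mapped onto the target surface to produce the non-structural pattern.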
Conclusion
A survey and evaluation of modelling techniques is presented in this work. The classification of AM methods is based on their working principle rather than on the materials used. Modelling approaches in the field of additive manufacturing were then presented and classified, not only by the operating principle but also by the simulated process characteristic and the modelling technique. The most commonly used way of handling the challenge of dimensional accuracy is through empirical models and statistical approaches (ANOVA etc.). Mechanical characteristics and dimensional stability are often modelled using numerical heat transfer models, which focus on the melt pool and material phase transition, whereas build time has been studied both analytically and numerically. Finally, new features in the CAD modelling industry were discussed that can help solve challenges related to modelling in additive manufacturing.
Strong coupling between excitons and magnetic dipole quasi-bound states in the continuum in WS$_2$-TiO$_2$ hybrid metasurfaces
Enhancing light-matter interactions in two-dimensional materials via optical metasurfaces has attracted much attention due to its potential to enable breakthroughs in compact photonic and quantum-information devices. Here, we theoretically investigate the strong coupling between excitons in monolayer WS$_2$ and quasi-bound states in the continuum (quasi-BICs). In a hybrid structure composed of WS$_2$ coupled with asymmetric titanium dioxide nanobars, a remarkable spectral splitting and the typical anticrossing behavior of Rabi splitting can be observed, and this strong coupling can be modulated by tuning the thickness and asymmetry parameter of the proposed metasurfaces. It is found that a balance between the linewidth of the quasi-BIC mode and the local electric-field enhancement must be considered, since both affect the strong coupling; this is crucial to the design and optimization of metasurface devices. This work provides a promising way of controlling light-matter interactions in the strong-coupling regime and opens the door to novel low-energy quantum nanodevices enabled by advanced meta-optical engineering.
I. INTRODUCTION
Strong coupling of excitons to optical microcavities has received tremendous interest for its fundamental importance in quantum electrodynamics at the nanoscale and for practical applications in quantum information processing [1][2][3]. When the coherent exchange rate between the exciton and the optical microcavity is greater than each decay rate, the interaction enters the strong-coupling regime, forming an exciton-polariton and leading to Rabi splitting and anticrossing behavior in the optical spectra [4][5][6]. Exciton-polaritons, quasi-particles in a hybrid light-matter state, have attracted a great deal of research activity over the past decade for their promise as designable, low-energy-consumption elements in quantum computing and quantum emitters [7][8][9]. The ability to manipulate strong coupling is elementary to the design of photonic devices. The most basic description of the light-matter interaction is given by the coupling strength g. In the dipole approximation, g = µ·E ∝ 1/√V, where µ represents the transition dipole moment, E the local electric field, and V the mode volume [10][11][12]. In this sense, transition-metal dichalcogenides (TMDCs) have garnered much attention owing to their direct band gaps, large exciton transition dipole moments, and robust exciton response even at room temperature due to quantum confinement in the atomic layer [13][14][15].
Over the past decade, strong coupling between TMDC excitons and optical microcavities was mostly realized with metallic nanocavities supporting surface plasmon polaritons, which can strongly confine the electric field into ultrasmall mode volumes [16][17][18]. However, metals are thermally unstable in the visible region due to large ohmic loss. A Fabry-Perot (F-P) cavity constructed with Bragg reflectors can realize strong coupling, but integration is difficult, and the volume of whispering-gallery modes is large; both are difficult to apply in practice [19][20][21][22][23]. Recently, guided resonances coupled with WS2 [24] and a two-dimensional dielectric photonic-crystal slab with WS2 [25] have been successfully reported to achieve strong coupling between dielectric modes and excitons. However, traditional microcavities are difficult to compress further in volume because of the diffraction limit, which greatly restricts the local electric-field intensity. Dielectric metasurfaces offer a route to further minimize the volume and enhance the strong coupling. To the best of our knowledge, there are few studies on the strong coupling between TMDC excitons and resonances in emerging optical metasurface structures. In fact, metasurfaces can support a very high diversity of resonance modes, confine the incident light into deep-subwavelength volumes, and enhance light-matter interaction at the nanoscale, thus providing a versatile platform for controlling exciton coupling [26][27][28][29].
In this paper, for the first time, we investigate metasurface-enhanced strong coupling between excitons in TMDCs and a bound-state-in-the-continuum (BIC) resonance. In a hybrid structure consisting of WS2 and titanium dioxide (TiO2) nanobars, the magnetic dipole (MD) resonance governed by the quasi-BIC is obtained by breaking the C2 symmetry and analyzed using the finite element method (FEM), which provides an ideal number of photons to interact with the exciton. A remarkable spectral splitting of 46.86 meV and the typical anticrossing behavior of Rabi splitting are observed in the absorption spectrum, which can be well described by coupled-mode theory (CMT). By further changing the asymmetry parameter and varying the thickness of the TiO2 metasurface, it is found that a balance between the linewidth of the quasi-BIC mode and the local electric-field enhancement should be reached to obtain a large Rabi splitting. Our work sets an example for strong coupling in TMDC/metasurface hybrid systems and shows great flexibility with diverse geometric configurations and different 2D TMDC materials, opening an avenue for the smart design of novel integrated quantum devices.
II. STRUCTURE AND MODEL
The proposed hybrid construction, as illustrated in Fig. 1(a), is composed of a monolayer WS2 lying on the titanium dioxide (TiO2) metasurfaces. In the absence of WS2, the bare metasurfaces consist of a pair of parallel, geometrically asymmetric nanobars, as depicted in Fig. 1(b); the period of the unit cell is p = 450 nm in both the x and y directions, the width of the nanobars is w = 100 nm, and the fixed separation between any two neighboring bars is wa = 125 nm. The length of the long nanobar is L1 = 400 nm, while the length of the short nanobar L2 is variable, which generates the quasi-BIC mode. The thickness H of the nanobars is also adjustable to match the exciton wavelength. Such metasurfaces open a radiation channel by introducing an in-plane perturbation of the nanobar length with an asymmetry parameter defined as δ = ∆L/L1. Further, we consider a homogeneous background with permittivity 1, and choose TiO2 as the constituent material, which has a high refractive index and negligible absorption loss in the visible range. For simplicity, the refractive index of TiO2 is assumed to be n = 2.6. The permittivity of WS2 is modeled by a Lorentz oscillator model with a thickness of 0.618 nm, adopted from the experimental parameters of Li et al. [30], as shown in Fig. 2(a). The imaginary part shows a sharp peak (red line) around 2.014 eV (616 nm), which is the exciton of WS2 shown in Fig. 2(b), indicating that WS2 has a large linewidth at 2.014 eV and is a suitable material for strong coupling. In theory, strong coupling is likely to be reached when the resonance wavelength of the quasi-BIC approaches the exciton wavelength (616 nm) of the monolayer WS2, and the FEM is used to verify this prediction. The thickness of the nanobars and the length of the short nanobar are initially set to H = 85 nm and L2 = 280 nm, respectively.
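A single-resonance Lorentz-oscillator sketch of the WS2 permittivity around the 2.014 eV exciton can be written as follows. The background permittivity, oscillator strength, and damping are illustrative placeholders, not the fitted parameters of Li et al. [30]:

```python
import numpy as np

# Lorentz oscillator: eps(E) = eps_B + f / (E_exc^2 - E^2 - i*gamma*E)
# eps_B, f and gamma are illustrative assumptions (energies in eV, f in eV^2).
eps_B, f, E_exc, gamma = 16.0, 1.9, 2.014, 0.03

def eps_ws2(E):
    return eps_B + f / (E_exc**2 - E**2 - 1j * gamma * E)

E = np.linspace(1.8, 2.2, 401)
eps = eps_ws2(E)
peak_E = E[np.argmax(eps.imag)]      # Im(eps) peaks at the exciton resonance
print(f"Im(eps) peaks at {peak_E:.3f} eV (exciton at {E_exc} eV)")
```

The sharp peak of Im(eps) at the exciton energy is what produces the absorption dip and splitting once the layer is placed on the metasurface.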
In the numerical simulations, the transverse electric (TE) polarized plane wave is normally incident along the z direction, and the periodic boundary conditions are utilized in the x and y direction, and the perfectly matched layers are employed in the z direction.
III. RESULTS AND DISCUSSIONS
A. The magnetic dipole quasi-BIC resonance in the TiO2 metasurfaces
To obtain a clearer insight into the physics of the magnetic dipole quasi-BIC resonance in the metasurfaces, we analyze the transmission spectra of the TiO2 metasurfaces with different asymmetry parameters, shown in Fig. 3(a), which manifest a Fano-lineshape resonance as a result of the in-plane symmetry breaking of the unit cell. In our work, when a perturbation is introduced into a structure with in-plane inversion symmetry (x, y) → (−x, −y), the BIC transforms into a quasi-BIC and builds a radiation channel between the nonradiative bound state and the free-space continuum, while still confining part of the electromagnetic field inside the structure, as shown in Figs. 3(b) and 3(c). Taking the asymmetry parameter δ = 0.15 and H = 85 nm at the resonance wavelength of 616 nm as an example, an inverse phase with almost equal amplitude of the electric field can be observed in Fig. 3(b), and the circular displacement current in the nanobars generates an out-of-plane magnetic field, as shown in Fig. 3(c); these reveal the properties of a magnetic dipole with a strongly localized electric field inside the nanobars. A Fano line then appears in the transmission spectrum due to the interference between the magnetic dipole and the free-space continuum. This trapping behavior provides a platform for enhancing light-matter interaction in the near field [31][32][33].
We then fit the transmission spectrum T(ω) with the Fano formula [34-36]

T(ω) = |a1 + i·a2 + b/(ω − ω0 + iγ)|²    (1)

where a1, a2, and b are constant real numbers, ω0 is the resonant frequency, and γ is the dissipation rate of the quasi-BIC, as depicted in Fig. 3(b).
B. The quasi-BIC resonance and exciton coupling
Fig. 4(a) shows the absorption spectrum of the hybrid structure of the TiO2 metasurface with monolayer WS2 on top. Two peaks are located at 612.14 nm and 626.97 nm, respectively. The dip located at 620.06 nm shows that the original resonance wavelength (616.2 nm) disappears; the small red shift of the resonance location is due to the large real part of the permittivity of the WS2 monolayer, shown by the blue line in Fig. 2(a). The obvious spectral splitting, with two peaks and one dip, results from the strong coupling between the bare monolayer WS2 and the quasi-BIC, indicating that coherent energy exchange takes place between the excitons and the quasi-BIC. This finding can be explained by a two-level coupled-oscillator model [17,36,37], as depicted in Fig. 4(b). The incident light can be regarded as the ground state with energy E0. When the metasurface has proper geometrical parameters, a magnetic dipole with energy EMD can be excited by the incident light; this process can be thought of as a photon transition from the ground state to one excited state. In the same way, the interaction between a photon and an exciton in the monolayer WS2 is considered as an energy transition from E0 to Eexc. Furthermore, coherent energy exchange will occur between the magnetic dipole and the exciton when they share the same energy. When the energy exchange rate is greater than each decay rate, strong coupling occurs, and the two originally independent energy levels hybridize to form a new hybrid state, named a polariton, with two new energy levels. The electric-field distributions of the new hybrid states are shown in Figs. 4(c), 4(d) and 4(e).
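The Fano-formula fit introduced above can be sketched numerically. All parameter values below are illustrative placeholders, not the fitted values from the paper:

```python
import numpy as np

# Fano lineshape: T(w) = | a1 + i*a2 + b / (w - w0 + i*gamma) |^2
# Illustrative parameters (energies in eV).
a1, a2, b = 0.95, 0.05, 0.01
w0, gamma = 2.012, 0.012          # resonance energy and dissipation rate

def T_fano(w):
    return np.abs(a1 + 1j * a2 + b / (w - w0 + 1j * gamma)) ** 2

w = np.linspace(1.95, 2.07, 1201)
T = T_fano(w)
print(f"min T = {T.min():.3f} at {w[T.argmin()]:.4f} eV")
print(f"max T = {T.max():.3f} at {w[T.argmax()]:.4f} eV")
```

The characteristic asymmetric dip-peak pair appears within roughly one linewidth of w0, which is the signature used to extract ω0 and γ from the simulated spectra.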
Comparing the electric-field distributions at the absorption peaks and the dip, it is found that the local electric field at the peaks is much higher than at the dip, which further proves that the original energy state around 2.014 eV (616 nm) disappears and forms two new states at 612.14 nm and 626.97 nm, respectively. Thus, the evident spectral splitting makes clear that strong coupling between the MD mode and the exciton is obtained in our design. It can be seen from Fig. 5(a) (red line) that the resonant position of WS2 does not change with the thickness H, while the resonant wavelength of the quasi-BIC grows linearly with H. As the resonant wavelength increases, it shifts across the exciton resonant wavelength, so two branches with anticrossing behavior can be captured, named the lower branch (LB) and the upper branch (UB), as depicted in Fig. 5(b), the absorption spectrum of the hybrid structure for different thicknesses H. This can be explained using coupled-mode theory (CMT). The eigenstates can be described by [1,7]

| Eq−BIC + iγq−BIC        g        | |α|            |α|
|       g           Eexc + iγexc  | |β| = E_LB,UB  |β|    (2)

where Eq−BIC and γq−BIC represent the quasi-BIC energy and dissipation, respectively; Eexc and γexc represent the energy and nonradiative decay rate of the uncoupled exciton, respectively; and g is the coupling strength. α and β are the Hopfield coefficients that describe the weighting of the quasi-BIC and the exciton in the LB and UB, which satisfy |α|² + |β|² = 1. E_LB,UB are the eigenvalues, which can be obtained from Eq. (2):

E_LB,UB = (Eq−BIC + Eexc)/2 + i(γq−BIC + γexc)/2 ± sqrt( g² + [∆ + i(γq−BIC − γexc)]²/4 )    (3)

When the detuning ∆ = Eq−BIC − Eexc = 0, Eq. (3) becomes

E_LB,UB = Eexc + i(γq−BIC + γexc)/2 ± sqrt( g² − (γq−BIC − γexc)²/4 )    (4)

Then we obtain the Rabi splitting energy

Ω = E_UB − E_LB = 2 sqrt( g² − (γq−BIC − γexc)²/4 )    (5)

which is owed to the strong coupling between the quasi-BIC and the exciton. Here, we also calculate γq−BIC = 14.93 meV and γexc = 15 meV from Fig. 3(a); the Rabi splitting energy Ω = 46 meV can be extracted from the FEM simulation results shown in Fig. 5(a) (dashed line), satisfying the condition of strong coupling, Ω > (γq−BIC + γexc)/2. We then compare the dissipation rates with the coupling strength g. From Eq. (5), we obtain g = 23 meV, which satisfies g > |γexc − γq−BIC|/2 and g > sqrt((γexc² + γq−BIC²)/2). These results are further proof that we are indeed in the strong-coupling regime. Fig. 6 shows the absorption spectra of the quasi-BIC and exciton coupling with different asymmetry parameters but at the same resonant wavelength, obtained by tuning the thickness H; the coupling strength g reduces as the asymmetry parameter decreases. CMT indicates that when the dissipation loss of the quasi-BIC resonance approaches the nonradiative decay rate of the exciton, the Rabi splitting reaches its maximum [7,38,39]. Smaller asymmetry parameters give a larger local electric field but a narrower linewidth and lower dissipation loss of the quasi-BIC mode, which limits the total number of photons participating in the interaction with the excitons. Therefore, it is important to find a balance between the local electric field and the spectral linewidth. Finally, we also study the absorption curves of the two new hybrid states for a variable thickness H but a fixed asymmetry parameter, shown in Fig. 7(a). It is found that the absorption peak of the LB increases while that of the UB decreases as the thickness H decreases, which can be explained by the relative weightings of the exciton and the quasi-BIC in the new hybrid states.
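The two-level coupled-oscillator model of Eq. (2) can be checked numerically by diagonalising the non-Hermitian 2×2 Hamiltonian with the decay rates and coupling strength quoted above:

```python
import numpy as np

# Coupled-oscillator model; values follow the text:
# gamma_qBIC = 14.93 meV, gamma_exc = 15 meV, g = 23 meV, zero detuning.
E_exc = 2014.0                     # exciton energy, meV
g, g_qbic, g_x = 23.0, 14.93, 15.0

def branches(E_qbic):
    """Real parts of the two polariton eigenenergies, sorted (LB, UB)."""
    H = np.array([[E_qbic + 1j * g_qbic, g],
                  [g, E_exc + 1j * g_x]])
    return np.sort(np.linalg.eigvals(H).real)

lb, ub = branches(E_qbic=E_exc)    # zero detuning
omega = ub - lb                    # Rabi splitting, meV
print(f"Rabi splitting at zero detuning: {omega:.2f} meV")
print(f"strong-coupling criterion met: {omega > (g_qbic + g_x) / 2}")
```

Sweeping E_qbic through E_exc in `branches` reproduces the two anticrossing branches, and at zero detuning the splitting matches the 2*sqrt(g^2 - (Δγ)^2/4) value quoted in the text (about 46 meV).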
The weighting of the quasi-BIC and exciton constituents in the LB and UB can be derived from Eq. (2), and the fractions for the LB/UB polaritons are

|α|² = (1/2) (1 ± ∆/sqrt(∆² + 4g²)),  |β|² = 1 − |α|²

where the upper and lower signs correspond to the two branches; these fractions are shown in Fig. 7(b). We find that as the thickness H increases, the exciton (quasi-BIC) fraction increases in the UB (LB) and decreases in the LB (UB). In other words, as the thickness decreases in the LB, the weight of the excitons decreases, meaning that fewer excitons participate in the coupling and the absorption peak of the lower branch decreases.
IV. CONCLUSION
In conclusion, we have theoretically investigated the strong coupling between WS2 excitons and the quasi-BIC mode supported by TiO2 metasurfaces. A Rabi splitting energy of up to 46.86 meV is observed in the absorption spectrum of the hybrid structure. Furthermore, anticrossing behavior, a typical feature of strong coupling, can be achieved by tuning the asymmetry parameter and the thickness of the TiO2 metasurface. More importantly, it is found that the linewidth of the quasi-BIC mode and the local electric-field enhancement should be balanced, since both affect the strong coupling. Beyond this work, the proposed configuration can in principle be extended to diverse kinds of strong-coupling systems, with various dielectric metasurface designs and different TMDCs (MoS2, WSe2, etc.). Therefore, this paper provides a strategically important method for metasurface-enhanced strong coupling, and offers a designable, low-energy-consumption, practical platform for future research on quantum phenomena and nanophotonic devices.
Trustability for Resilient Internet of Things Services on 5G Multiple Access Edge Cloud Computing
Billions of Internet of Things (IoT) devices and sensors are expected to be supported by fifth-generation (5G) wireless cellular networks. This highly connected structure is predicted to attract different and previously unseen types of attacks on devices, sensors, and networks, requiring advanced mitigation strategies and active monitoring of system components. Therefore, a paradigm shift is needed from traditional prevention and detection approaches toward resilience. This study proposes a trust-based defense framework to ensure resilient IoT services on 5G multi-access edge computing (MEC) systems. The framework is based on the trustability metric, an extension of the concept of reliability, which measures how much a system can be trusted to keep a given level of performance under a specific successful attack vector. Furthermore, trustability is traded off against system cost to measure the net utility of the system. Systems using multiple sensors with different levels of redundancy were tested, and the framework was shown to measure the trustability of the entire system. Different types of attacks were then simulated on an edge cloud with multiple nodes, and trustability was evaluated against the capability to dynamically add nodes for redundancy and to remove untrusted nodes. Finally, the defense framework measured the net utility of the service, comparing edge clouds with and without the node-deactivation capability. Overall, the proposed trustability-based defense framework ensures a satisfactory level of resilience for IoT on 5G MEC systems, traded off against an accepted cost of redundant resources under various attacks.
Introduction
Future fifth-generation (5G) wireless cellular networks will support billions of Internet of Things (IoT) sensors and devices, including static and mobile endpoints, various robots, and self-driving cars, as illustrated in Figure 1. These devices and their applications will attract attackers and amplify the risk of vulnerabilities. The 5G wireless communication technologies under development promise tremendous improvements in many areas, including speed, connectivity, and reduced latency, and can enable the movement of massive amounts of data to connect distant sensors across a critical environment. IoT on 5G will function as an integrated system with multi-access edge computing (MEC) as an extension of cloud services. Both IoT and MEC systems will be under various attacks; therefore, we propose a defense framework to ensure the resilience of the entire system under attack.
A variety of customers, from individuals using 5G devices for personal purposes to large corporations and institutions, widely use cloud computing platforms. A wide range of applications has therefore migrated into the cloud, including e-commerce, data storage, healthcare, gaming, and various web applications [1,2]. This allows customers to deploy and scale their services with much less effort, in particular without having to purchase hardware [3]. However, it creates new concerns, such as security and privacy [4,5], which customers weigh before selecting the most appropriate cloud provider. Recognizing these issues, cloud providers develop their services to address customers' concerns in order to attract them [1]. Similarly, devices and sensors connected via the internet or other types of connections are being widely adopted, and the speed of this adoption is expected to keep increasing as new-generation networks such as 5G spread [6,7]. These 5G networks build on many advances in wireless networking [8][9][10][11].
There are a variety of sensors in such devices, whether it be an autonomous vehicle, a car with an active safety system, a robotic vacuum, or some other IoT device [12]. For such systems, the security of the communication between the decision makers and the sensors is critical. An attack on one of the sensors could cause undesirable outcomes specific to the task [13] or simply unauthorized access to sensitive information such as healthcare data [14].
Vendors have been implementing security measures in both cloud and systems with communicating parts [5]; however, it is difficult to completely protect the whole system from attackers [15]. Faced with such new challenges, the old security model of defending the system's perimeter is no longer valid. We must assume that whatever defense mechanisms we deploy in the system will sooner or later be breached by attackers. Therefore, it is advisable to implement new techniques, including trustworthiness assessments [16], which would help the service to survive the attacks despite having to face the cost of these techniques, such as active tracking, dynamic resource allocation, and purchase of new resources. According to the Cybersecurity and Infrastructure Security Agency (CISA), the current cyberspace shifts the attention from detection and perimeter defenses to strengthening security with resilience [17,18]. A robust mechanism that ensures resilience is the deployment of redundant resources based on the assessment of the trustworthiness of the system services. Current methods may not adequately assess the trustworthiness of the systems and their components due to disproportionate heterogeneity and multi-level hierarchies.
Various studies [19][20][21] and surveys [22,23] have been carried out on trust management frameworks and their applications. Ruan et al. [24] proposed a measurement theory-based trust management framework to provide improved flexibility to context-dependent applications by supporting multiple formulations and a new metric: the confidence of trust. Applications of this framework include stock market prediction using Twitter data [25], trust management in environmental decision making [26][27][28], and the detection of crime [29], fake users [30], and damaging users [31]. These applications show the potential of utilizing a trust management framework to facilitate decision making in various fields by measuring and assessing trust.
Trust frameworks have also been proposed to be implemented in both cloud and IoT scenarios and other network-related ones, such as the scenarios that include 5G [32,33]. Ruan et al. also proposed a trust management framework for IoT [34], multi-access edge computing [35], and cloud computing platforms [36]. Furthermore, Kaur et al. [37] proposed the use of a geo-location and trust-based framework to filter out attackers in 5G social networks. These applications serve as stepping stones toward trustworthy artificial intelligence (AI) and decision making, which have been consistently promoted by researchers [38][39][40][41][42][43][44], governments, institutions, and organizations such as the European Union [45] and the International Organization for Standardization [46].
Park et al. [13] highlighted the significance of the security and privacy of communication and connectivity functions and proposed machine learning approaches to detect anomalies in in-vehicle networks. Furthermore, Cao et al. [47] surveyed the emerging threats in deep learning-based autonomous driving and listed different types of attacks on sensors, such as jamming and spoofing. In addition to 5G, research has been carried out and a framework has been proposed for sixth-generation (6G) networks, specifically investigating the technology's applicability and the privacy concerns in relation to unmanned aerial vehicles (UAV) [48][49][50]. Ullo et al. [12] also highlighted the importance of intelligent environment monitoring systems that use IoT and sensors; however, to take the necessary actions on time, vendors and providers need precise metrics for assessing the trustworthiness of such systems.
To address the concerns about measuring the different aspects of trustworthiness, the metrics of acceptance [51] and fairness [52] were proposed to facilitate environmental decision making, an explainability metric [53] was proposed to interpret AI medical diagnosis systems, and a trustability metric [54] was proposed to assess trust in cloud computing. This paper presents an extended version of the trust management framework that includes the trustability metric, which helps to take action when an external attack or an internal event occurs in an autonomous device equipped with sensors or in a service running on the cloud. First, a sampling subsystem is explained as part of an autonomous system consisting of one decision maker and two sensors; an attack on one of the sensors is simulated, and the change in the trustability of the sensor and the entire system is shown. Then, the simulation is repeated with increased redundancy by adding another sensor. Finally, another scenario is simulated, where the extra sensor can be activated later, for instance, when the sensor lifetime is essential.
The findings illustrate the utilization of the trust management framework and the trustability metric in multiple incident scenarios within a sample cloud structure. The sample cloud consisted of three nodes, where the trust of one of the nodes declined relatively, continually, or sharply for both a short and extended period of time. It was shown that the trustability metric captures the decline in trust in the entire service. Then, additional scenarios were explored, where extra nodes could be added to each task in order to keep the service trustability high with an increased expense. Furthermore, results were shown for the cloud that had the option to remove nodes, specifically the ones with low trust. Finally, the net utility metric was illustrated to compare these two scenarios with and without the node removal option. The main contributions of this paper are as follows.
• The trustability metric was demonstrated using a sampling subsystem with a sensor activation option, where an external attack occurs on a sensor;
• Different possible outcomes of internal incident scenarios were presented in a sample cloud environment, where the trustability of the service is tracked by the framework for each scenario;
• The trustability metric captured the trustability of the service whenever the cloud architecture allowed for the addition and removal of extra nodes for each task;
• The net utility function captured the need for additional nodes and helped to decide when to remove nodes in order to optimize the utility of the service;
• Overall, this paper proposes the use of a trust management framework with a trustability metric and a net utility function on a variety of external and internal incident scenarios in order to help take timely actions to keep the service alive and optimize the utility.
The rest of the paper is organized as follows. In Section 2, the trust management framework is introduced, which is tailored to measure the trust of sensors and nodes to capture overall trustability. In Section 3, the results of utilizing the framework and how it captures trustability are presented and discussed in (i) a subsystem with sensors, where an external attack occurs to a sensor, and (ii) a sample cloud, where internal incidents happen to a node. In Section 4, findings and contributions are summarized, and future work is discussed.
Materials and Methods
This section presents the trust management framework and how it is adapted to capture the trustability of a system or service while considering its cost and utility.
Trust Management Framework
In [24], a measurement theory-based trust management framework was proposed for online social communities. This framework has since been proven to facilitate decision-making in multiple areas such as online social networks [25], the food-energy-water nexus [44], crime detection [29], and cancer diagnosis [53]. It is a very flexible yet robust framework that can be adapted to different scenarios to capture trust.
The framework has two main components: impression, represented by m, and confidence, represented by c. The impression is the level of trust one party shows the other, and confidence is the degree of certainty of the impression. Although different formulations are possible [24], we selected the intuitive ones, as shown in Equations (1) and (2), in order to focus on the framework. In these equations, m_{A:B}, c_{A:B}, and r_{A:B}^{i} represent the impression, the confidence, and the i-th measurement from A to B.
Trust measurements are context-dependent, which means that each measurement needs to be precisely defined and specific to the context. In this study, the alternative ways of obtaining measurements [35] were combined, and predefined trust measurements were used to reflect the incidents better and to allow us to concentrate on the framework and decisions. Furthermore, measurements were always normalized to lie in [0, 1], so that the impression also remains in the interval [0, 1] (in arbitrary units).
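Equations (1) and (2) themselves are not reproduced in this excerpt. As an illustration, the sketch below assumes a common measurement-theory choice: the impression as the mean of the normalized measurements, and an agreement-based confidence. Both concrete formulas are our assumptions, not necessarily the paper's exact ones.

```python
from statistics import mean, pstdev

def impression(measurements):
    # Impression m_{A:B}: assumed here to be the mean of the
    # normalized measurements r_{A:B}^i, each in [0, 1].
    return mean(measurements)

def confidence(measurements):
    # Confidence c_{A:B}: assumed to grow as the measurements agree;
    # one simple choice is 1 minus their population standard deviation.
    return max(0.0, 1.0 - pstdev(measurements))

# Ten identical high measurements, as in the paper's baseline scenario:
r = [0.9] * 10
m, c = impression(r), confidence(r)
print(m, c)  # impression 0.9 with full confidence 1.0
```

Because the measurements are normalized, both m and c stay in [0, 1], matching the interval used throughout the paper.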
Trust Management in Systems and Cloud
The proposed trust management framework can be adapted to scenarios where trust is assessed for parts of an autonomous system or nodes of a cloud. There are several proposed approaches [54] to measure trust, such as by measuring node flows, dividing them into incoming and outgoing, assigning different weights to such flows, and considering the trust of tasks inside of a node. Equation (3) shows that the trust of a node, m node , can be measured as the weighted average of trust of the flow, m f low , and tasks running on it, m task . As shown in Equation (4), just as the trust of the tasks running on a node can affect its trust, the node itself can affect the trust in those tasks.
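Equations (3) and (4) are not shown in this excerpt; the weights in the sketch below are illustrative assumptions, but the structure follows the description above (node trust as a weighted average of flow trust and task trust, and the node's trust feeding back into its tasks).

```python
def node_trust(m_flow, m_task, w_flow=0.5, w_task=0.5):
    # Eq. (3) (weights assumed): trust of a node as the weighted
    # average of the trust of its flow and of the tasks running on it.
    return (w_flow * m_flow + w_task * m_task) / (w_flow + w_task)

def task_trust(m_task_own, m_node, w_own=0.5, w_node=0.5):
    # Eq. (4) (weights assumed): symmetrically, the hosting node's
    # trust affects the trust in the tasks running on it.
    return (w_own * m_task_own + w_node * m_node) / (w_own + w_node)

print(node_trust(0.9, 0.7))  # equal weights give approximately 0.8
```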
In this study, trust measurements were simulated for different scenarios. In the normal condition, the trust stays the same, with identical high measurements of around 0.9 over 10 time intervals (time is in arbitrary units), reflecting the absence of complications. Then, scenarios where the node received lower measurements were explored, reflecting an anomaly in either the flow or the task activity; these come in two types: a short-term and a continuous decline in trust. Subsequently, additional scenarios where the node received very low measurements were explored. The measurement series used in these scenarios are introduced in Equation (5).
Redundancy, Cost, and Utility
The trust of the entire system was also explored, whether it be an autonomous system already deployed in the field or a cloud system that could be managed later on. The overall trustability of the system was measured by considering the individual trust of the nodes and their hierarchy, such that the nodes that were connected in series in logical representation were all required to have high trust, whereas the nodes connected in parallel compensated for each other's abnormalities.
As discussed in [35], individual trustability was calculated using an exponential formula with two different lambdas, λ 1 and λ 2 . This is because impression, m, and confidence, c, needed to be merged in order to reach a high trustability only when m and c were both high. Moreover, a threshold, φ, was provided for trustability to be adjusted for the application. In this study, the sample threshold used for demonstration was 0.5. In other words, the scenarios where m went below 0.5 had much lower trustability by using λ 2 , whereas λ 1 was used otherwise. First, m and c were normalized and merged, as shown in Equation (6), where φ is the threshold. Then, trustability, τ, was calculated using the formula given in Equation (7), with the appropriate λ, which were assigned as λ 1 = 4 and λ 2 = 8. Trustability calculation is also shown in Algorithm 1.
Algorithm 1: Trustability, τ, is calculated as an exponential function, where λ is decided by comparing the impression, m, with the threshold, φ.
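Since Equations (6) and (7) are not reproduced here, the sketch below implements Algorithm 1 under assumed forms: the merge of m and c is taken as their product, and trustability as a decaying exponential, so that λ2 = 8 penalizes impressions below the threshold φ = 0.5 much harder than λ1 = 4. The threshold and λ values come from the paper; the exponential shape and the product merge are our assumptions.

```python
import math

PHI = 0.5                      # threshold from the paper
LAMBDA_1, LAMBDA_2 = 4.0, 8.0  # lambda values from the paper

def trustability(m, c):
    # Merge impression and confidence (Eq. (6); the product is an
    # assumed merge that stays in [0, 1] when m and c do).
    x = m * c
    # Eq. (7) (assumed exponential form): pick lambda by comparing
    # the impression with the threshold, as in Algorithm 1.
    lam = LAMBDA_1 if m >= PHI else LAMBDA_2
    return math.exp(-lam * (1.0 - x))

print(trustability(0.9, 1.0))  # high m and c keep trustability up
print(trustability(0.1, 1.0))  # m below phi: trustability near zero
```

With this shape, a node whose impression drops to 0.1 has trustability close to zero, which is consistent with the near-zero system trustability reported for the attack scenarios in Section 3.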
After calculating individual trustability, the entire system's trustability was calculated. First, the trustability of the nodes that were connected in parallel was aggregated, as shown in Equation (8). Then, the transitive trustability was calculated using the formula given in Equation (9). The final value reflects the entire system's trustability, whether an active cloud system or an autonomous system that had been deployed in the field.
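Equations (8) and (9) are not reproduced in this excerpt; the sketch below uses the standard reliability-style aggregation, which is an assumption consistent with the description above: parallel nodes compensate for each other, while every series node must be trustworthy.

```python
from functools import reduce

def parallel(taus):
    # Eq. (8) (assumed form): a parallel group is untrustworthy only
    # if every member is, so the complements multiply.
    return 1.0 - reduce(lambda acc, t: acc * (1.0 - t), taus, 1.0)

def series(taus):
    # Eq. (9) (assumed form): transitive trustability of a chain of
    # nodes; a single weak node drags the whole service down.
    return reduce(lambda acc, t: acc * t, taus, 1.0)

# The sample cloud: three tasks in series, one node losing trust...
print(series([0.9, 0.9, 0.1]))                   # about 0.08
# ...and the same cloud after adding a parallel backup for that node:
print(series([0.9, 0.9, parallel([0.1, 0.9])]))  # about 0.74
```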
Moreover, a formula to calculate the net utility of the service, including the cost of resources and the probabilities of success and failure, was developed using the trustability of the service. As shown in Equation (10), trustability, τ, was used for success, while 1 − τ was for failure; G represents the gain, and L denotes the loss. Then, the cost of resources was deducted, representing the sum of the cost of all nodes for the cloud scenario.
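With the gain, loss, and per-node cost values given in Section 3 (G = 500, L = 100, and 50 units per active node), this reading of Equation (10) reproduces the reported net utilities, so the sketch below should be close to the paper's formula:

```python
def net_utility(tau, gain, loss, node_costs):
    # Eq. (10): gain weighted by trustability (success), loss weighted
    # by 1 - trustability (failure), minus the summed cost of all
    # active nodes.
    return tau * gain - (1.0 - tau) * loss - sum(node_costs)

# Cloud-1 keeps all six nodes active at a final trustability of 0.64:
print(net_utility(0.64, 500, 100, [50] * 6))  # about -16, as reported
# Cloud-2 deactivates the low-trust nodes, leaving three, at 0.55:
print(net_utility(0.55, 500, 100, [50] * 3))  # about 80 (paper: 79)
```

The small difference for Cloud-2 (80 here versus the reported 79) presumably comes from the exact, unrounded final trustability value.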
In Section 3, trustability results are presented for the scenarios wherein such systems with different configurations experience different incidents, thus causing node and systemlevel trustability declines.
Results and Discussion
This section presents the results for two types of attacks on systems. First, the effects of external attacks on systems already deployed in the field are presented as well as the possible actions to take afterward. For instance, a sensor could lose its trustability by providing either inadequate or unreliable data. Furthermore, results and potential cautions are discussed after a decline in the trustworthiness of a service running on the cloud due to internal anomalies. In such a scenario, a service runs on multiple nodes deployed on the cloud, and a subset of the nodes loses trustability due to various internal factors such as high usage of processing power, memory, or bandwidth or a compromised task running on a node.
External Attacks
This section explores the external attacks and related factors on the sensors and subsystems of a system running in the field. In addition, the trust management system is demonstrated by capturing such conditions and facilitating decision making. In Figure 3, a sample diagram of an active safety system is shown in a vehicle that has multiple sensors and two decision-making mechanisms. In real systems, higher levels of hierarchies among decision makers and sensors are predicted [13]. For example, a vehicle with an active safety system is expected to make decisions based on the information it receives from the sensors. However, such information may not always be reliable due to compromised sensors or altered sensor data. In such cases, the decision-making mechanism should be able to take the necessary actions for the safety and reliability of the decision. For instance, a dead battery in a tire pressure sensor could cause an incorrect tire condition measurement that would then affect the braking decision. Similarly, malware in a camera system that measures the distance to the vehicles in front could cause an erroneous distance measurement, which is crucial for emergency braking and adaptive cruise control.
The systems of sensors and decision-making mechanisms could quickly get complicated with multiple layers and hierarchies. We illustrate our trustworthy approach by using a simple system that can be considered a subsystem of the entire mechanism. In the first system, System-1, one decision-making element (DM) relies on two sensors, S1 and S2. It receives information from these sensors and makes a decision. This system can also be represented as a logical system, where the sensors are connected in series. Figure 4 shows the sample system and its logic representation. We explored a scenario wherein the trust of the first sensor, S1, declines. In the beginning, both sensors were assumed to have high trust, 0.90 and 0.95, respectively, because of consecutive measurements with the same values, as shown in Equation (11). However, due to a hypothetical external attack, the value of S1 became 0.1, which caused trust to decline, as shown in Figure 5. This also caused a decline in the trustability of the system, which was calculated using Equation (7).
One option could be to include another sensor for the same task in order to overcome the adverse effects of losing the ability to make trustworthy decisions as a result of one sensor failure. The system was updated such that the DM could rely on two sensors, S1-A and S1-B, for the information that previously required its reliance only on S1. Figure 6 shows the diagram of System-2 and its logic representation, where S1-A and S1-B are connected in parallel.
When the previous scenario occurred and S1-A's trust declined, the overall trustability of System-2 did not decrease as it did in System-1. The initial trustability of System-2 was also higher than that of System-1 due to the fact that it had the additional sensor, S1-B, in the system from the beginning. Figure 7 shows the change in the overall trustability of System-2 and the trust of the sensors over time.
Another scenario is to have the additional sensor, S1-B, in the system but to activate it only when needed. When the trustability of the system falls below a specific threshold, the extra sensor is activated to bring the overall trustability back to an acceptable level, as shown in Figure 8. A grace period for sensor activation was added to reflect a more realistic scenario. This scenario could arise when a sensor has a short lifetime and is only activated once the other sensor no longer satisfies the requirements. One drawback of such an approach is the lower initial trustability compared to when all sensors are activated, which can also be observed when Figures 5 and 7 are compared.
Figure 6 caption: Sample system (System-2) with one decision maker (DM) and three sensors, S1-A, S1-B, and S2. (a) System diagram, as the DM relies on three sensors, where S1-A and S1-B are used for the same information. (b) Logic representation, where S1-A and S1-B are connected in parallel and together are connected to S2 in series.
Figure 8 caption: The overall trustability of System-3 and the sensors. S1-B is only activated after distrust of S1-A decreases the system's trustability below a threshold. Activation takes time.
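The activation policy just described (a threshold plus a grace period before the standby sensor comes online) can be sketched as follows; the concrete rule and the numbers are illustrative assumptions, not the paper's exact logic:

```python
def activation_time(taus, threshold=0.5, grace=1):
    # Return the time step at which the standby sensor (S1-B) is
    # switched on: the first step after the system trustability has
    # stayed below `threshold` for `grace` extra intervals, modeling
    # the fact that activation takes time.  None if never triggered.
    below_since = None
    for t, tau in enumerate(taus):
        if tau < threshold:
            if below_since is None:
                below_since = t
            elif t - below_since >= grace:
                return t
        else:
            below_since = None
    return None

# Trustability collapses at t = 3 after the attack on S1-A; with a
# grace period of one interval, S1-B comes online at t = 4:
print(activation_time([0.8, 0.8, 0.8, 0.2, 0.2, 0.2]))  # 4
```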
Internal Incidents
This section explores the incidents affecting some cloud nodes that run a service. The trust management framework captured the decline in the trustability of the service and nodes and assisted in taking timely actions to keep the service reliable. Although today's cloud infrastructures can be very complicated and highly hierarchical, the scenarios were built on a sample cloud that had three nodes, with each running a different task.
Internal incident scenarios on cloud services differ from the system scenarios with sensors explained in Section 3.1. While the number of sensors should be decided before the device's production, cloud scenarios have more flexibility. A task can be migrated to another node, or alternative nodes can be launched. This also brings dynamic cost optimization into the picture since the nodes can be dynamically added and removed.
The sample cloud architecture consisted of three nodes connected in series, which means that the service needed three tasks deployed on different nodes. Since the reliability of the service depends on all three nodes, those nodes can be considered connected in series, as shown in Figure 9. Four different scenarios were explored, as introduced in Equation (5), each of which caused a different decline in the trustability of the service. For each scenario, only the first node, N1, was affected. In the first scenario, N1's trust measurements declined slowly until time 5 and then stayed the same, as shown in Equation (12). This caused a slow decline in the trust of N1 until time 5, which subsequently started settling gradually. However, since the nodes were connected in series, even a slight decrease in trust for one of the nodes caused a considerable decline in the trustability of the service deployed on the cloud. Figure 10 shows the change in the trustability of the service and the nodes.

N1 = {0.9, 0.8, 0.7, 0.6, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5} (12)

When the decline in trust of N1 continued, as shown in Equation (13), it placed more significant stress on the overall trustability of the service. It even decreased the overall trustability to almost zero, as shown in Figure 11, which is a sign that the system is no longer trustworthy due to the low trust of N1.
Figure 11 caption: When the trust of N1 continuously declines, its effect on the overall trustability is more severe.
Furthermore, a sudden decline in the trust measurements of N1 was explored, which returns to normal after some time, as shown in Equation (14). In this case, while the regular trust measurements were at 0.9, the measurement dropped to 0.1 at times 2 through 4 due to an internal anomaly, such as high processing power, memory, or bandwidth usage. As shown in Figure 12, the trust of N1 gradually recovered; however, the overall trustability dropped close to zero at time 4 and recovered slowly due to the historic trust measurements.

N1 = {0.9, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9} (14)

Figure 12 caption: If the trust declines rapidly for a short period of time, the overall trustability also declines but does not recover quickly.
In the final scenario, the sudden decline in the trust of N1 was set to be permanent. When the trust of N1 did not recover, as shown in Equation (15), the trustability metric did not recover either, as shown in Figure 13. As in the previous scenario, it declined close to zero, indicating the service's low trustability.

N1 = {0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1} (15)

In response to the decline in the trustability of the service, some measures can be taken. One example is to use alternative nodes for each task, which allows the service to utilize both nodes for the specific task. However, this option comes with a cost, since the service provider would be charged for each node operated. As mentioned previously, a more sophisticated approach would be to use the extra nodes only when needed.
As shown in Figure 14, Cloud-2 can add a node for each task and remove them on demand. A new scenario was explored where the trust of each primary node, N1, N2, and N3, declined gradually, starting at different times. After a grace period of 1, a new node was launched for each task to increase the overall trustability.
Once N1's trust started to decline at time 2, the overall trustability of the service also decreased. After a grace period, N4 was activated at time 4, which increased the trustability of the service again. The new trustability was higher than the initial trustability, since N1 was still active despite its lower trust. At the same time, N2's trust started to decrease, which was compensated for by launching N5 at time 6. Finally, N3's trust started to decline, and N6 was launched. As seen in Figure 15, the overall trustability surpassed the initial value, since the old nodes were still active and had nonzero trust values. Another conclusion is that the system became more resilient to individual trust declines once the nodes had alternatives, i.e., additional nodes connected in parallel.

Since the continuous decline in trust of the initial nodes keeps decreasing the overall trustability, deactivating those nodes can be considered. Deactivating a parallel-connected node with nonzero trust would also cause a decrease in trustability; however, it is worth exploring the degree of that decline against the cost of the nodes. Figure 16 shows the overall trustability and the trust of each node when a node is deactivated after a decline in trust. Compared to Figure 15, where the initial nodes are not deactivated, the overall trustability is lower. However, the advantage appears when the cost of the service is considered.
Figure 16 caption: Deactivating a node that is losing trust is one way to keep costs low with the least sacrifice in the overall trustability of the service.
The net utility of the system was calculated using Equation (10). The gain of a running service was assumed to be 500 units, the loss of a down system was considered to be 100 units, and the cost of an active node was assumed to be 50 units. Figure 17 shows the trustability of services running on Cloud-1 and Cloud-2 and their net utilities. Cloud-1 has higher trustability starting at time 4, when the nodes' deactivation started in Cloud-2. The final trustability of Cloud-1 and Cloud-2 are 0.64 and 0.55, respectively, showing a 14% decrease. However, looking at the net utility, which also considers trustability, Cloud-1 has a negative net utility of -16, whereas Cloud-2 has 79.
The result of the net utility function is highly dependent on the choice of gain, loss, and node cost values. In this paper, we identified the importance of employing a trust management framework to actively observe the trustability of the service and each node in order to take the necessary actions that would proactively keep the service alive.
Conclusions
This paper explored the external and internal attacks and incidents occurring in systems with connected sensors and a service deployed on multiple nodes on a cloud. First, two systems consisting of a decision maker and sensors were compared, where the second system had an extra sensor for redundancy. The proposed trust management framework successfully captured the trustability of the sensors and the entire system in both scenarios. An alternative scenario was explored wherein the extra sensor could be activated when necessary, and the results were compared with the prior scenarios.
Furthermore, a sample cloud was simulated with three nodes, where the trust of a node decreased due to some internal incident. The trustability metric captured the overall trustability decline for different incident types. Then, the cloud was updated to add and remove nodes on demand. Each task was supported by an extra node when the individual trust declined in order to keep the overall trustability of the service high. Finally, the cloud systems with and without node deactivation were compared in terms of their trustability measurements and the net utility of the comprehensive service.
The results showed that the utilization of our trust management framework helps in deciding how to mitigate severe consequences of external attacks on sensors or internal incidents on a cloud while considering the net utility of the system. One difficulty in this line of work is the measurement of the trust of the system parts, such as a sensor or a node, which requires field knowledge for the different scenarios; however, this is beyond the scope of this paper.
This work can be further extended by considering more realistic and complex scenarios, which include multi-level hierarchies of sensors or nodes. Moreover, a decision maker may rely on other decision makers in addition to sensors. Similarly, a service may depend on other services in addition to its tasks. These scenarios may require the trustability metric formula to be adjusted to specific situations. Future work could also include the cost of trustability measurement operations in the utility function since the historic computation of trust could surge as the number of individual measurements and components increases. Consequently, shifting the defense mechanism from classical, perimeter-based approaches to system resilience is considered to be a long-term objective.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
Charge-state-dependent collisional energy-loss straggling of swift ions in a degenerate electron gas
In order to characterize the statistical aspect of the energy loss in particle penetration, Bohr developed a kinetic theory and applied it to a beam of fast α particles interacting with free electrons. The present study rests on this classical theory of collisional straggling, and it is implemented by using a partially screened Coulomb potential to model the electron-projectile interaction. The deflection angle of electron scattering in this long-ranged field is calculated analytically within the framework of classical mechanics. The transport fluctuation cross section, which is the basic quantity for the collisional straggling in Bohr's modeling, is determined numerically. By varying the number of bound electrons around the swift He ions, the effect of prefixed charge states on the collisional energy-loss straggling is quantified. An incoherent weighted summation of different fixed charge-state channels is discussed as well, by using normalized probabilities.
The statistical nature of the energy dissipation of heavy projectiles interacting with different targets requires the consideration of the energy-loss straggling, which characterizes the broadening of the beam in the penetration phenomenon, i.e., it describes [1] the fluctuation in the energy-loss process. Apart from its intrinsic theoretical interest, the straggling is an important quantity in ion-beam-based techniques for material structuring, and in hadron therapy as well. Clearly, the quantitative role of different charge states is a fundamental issue [2-8] in a proper description of the energy-loss phenomenon. The present theoretical study on the collisional energy-loss straggling is devoted to this problem.
Based on Bohr's classical theory, which seems to constitute one of the most lasting results of the theory of particle penetration, the mean square fluctuation in the energy loss [Ω² = ⟨(ΔE − ⟨ΔE⟩)²⟩, where ΔE is the energy loss in a two-body collision] is as follows:

Ω² = n₀ Δx (T₀²/4) σ_trf (1)

for an electron gas target [9]. Here Δx is the penetration length, p = mv the momentum of electrons in relative motion, and n₀ the density of the electron gas. The maximal energy transfer is T₀ = m_e(2v)²/2 in an elastic two-body collision between the swift heavy projectile moving with velocity v and an electron. Atomic units, ħ = m_e = e² = 1, are used below. The transport fluctuation (trf) cross section entering Eq. (1) is defined as follows:
σ_trf = ∫ (1 − cos θ)² dσ (2)
in terms of the differential cross section (dσ) and the scattering angle (θ). This integrated cross section contains those details which are determined by the form of the potential in an elastic two-body collision. Bohr's theory rests on a series of such statistically independent events. Notice that, since (1 − cos θ)² = 4 sin⁴(θ/2), the straggling depends on the square of (ΔE/T₀) ∼ sin²(θ/2), which is prescribed by standard conservation laws.
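As a numerical illustration of Eq. (2) in the bare Coulomb case, the sketch below uses the classical Rutherford relation tan(θ/2) = Z/(bv²) (atomic units, electron mass 1), whose small-b expansion θ ≃ π − 2bv²/Z matches the limit quoted later in the text. The closed form 4πZ²/v⁴ that the quadrature is checked against is our own evaluation of the integral, not a value quoted from the paper.

```python
import math

def theta_coulomb(b, Z, v):
    # Classical Rutherford deflection for V(r) = -Z/r:
    # tan(theta/2) = Z / (b v^2) in atomic units.
    return 2.0 * math.atan(Z / (b * v * v))

def sigma_trf_coulomb(Z, v, b_max=200.0, n=200_000):
    # Eq. (2): integrate (1 - cos theta)^2 over 2 pi b db by the
    # midpoint rule; the integrand falls off like b^-3 at large b,
    # so a finite b_max suffices.
    db = b_max / n
    total = 0.0
    for i in range(n):
        b = (i + 0.5) * db
        w = 1.0 - math.cos(theta_coulomb(b, Z, v))
        total += w * w * 2.0 * math.pi * b * db
    return total

Z, v = 2.0, 3.0  # a bare He nucleus at a typical swift velocity
numeric = sigma_trf_coulomb(Z, v)
analytic = 4.0 * math.pi * Z**2 / v**4
print(numeric, analytic)  # both about 0.62 atomic units
```

Combined with Eq. (1) and T₀ = 2v², this value reproduces Bohr's classic free-electron straggling result Ω² = 4πZ² n₀ Δx.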
Recently, the above cross section was determined [10] within the framework of classical mechanics for a well-known [11,12] finite-range attractive (Z > 0) potential form with the V_s(r) = 0 constraint for r ≥ R_s. The analytic result obtained is the following, where A = [v²(R_s/Z) − 1]² is a convenient shorthand. The Coulombic case, treated by Bohr in his pioneering work on straggling, corresponds to the R_s → ∞ mathematical limit. It was pointed out [10] that the ratio [σ_trf^s(R_s, v) / σ_trf^s(R_s = ∞, v)] could reach a maximal value of 4/3 under the (Z/R_s)/v² = 1/2 condition, which is, in fact, also the condition [12] behind the so-called giant Coulomb glory effect [see the discussion at Eq. (9) below].
As the most natural extension of the previous results, which also allows a physical transition to the Coulombic limit treated by Bohr, we shall use a partially screened form for r < R_ps and take V_ps(r) = −(Z − N)/r for r ≥ R_ps. The parameter R_ps will be specified below. With N = Z, we recover the finite-range (R_s = R_ps) potential discussed above. Note that the bare Coulomb case is defined (when Z and N are fixed) by the R_ps → ∞ limit, at which we have the V(r) = −Z/r form applied [1,9] earlier to the collisional energy-loss straggling. In the present work the different channels, which are due to electron capture and loss processes, are modeled via the N and R_ps(N) input parameters.
In the determination of the scattering angle based on classical mechanics with our potential in Eq. (5), there are impact parameters (b) with b ≥ b* where only the V_ps(r) = −(Z − N)/r potential term governs the trajectory contributing to the final rotation angle. This important b* value is determined from the r_min ≥ R_ps constraint, where r_min is the (b-dependent) closest approach in our potential. The resulting expression for b* [Eq. (6)] shows the role of the interplay between kinetic and potential energies. After straightforward but long algebra, the impact-parameter-dependent scattering angle θ(b) is obtained [Eq. (7)], where Θ is the Heaviside (generalized) function.
In the completely screened case the corresponding scattering angle [Eq. (8)] holds for the allowed b ∈ [0, R_s] values [13,14]. Finally, the Coulomb case is defined, as we mentioned above, by the R_ps → ∞ limit in Eq. (7) with Eq. (6), which results in the Coulombic θ(b) [Eq. (9)]. The strongest restriction to get single-valued θ(b) functions from Eqs. (7) and (8) is (R_s v²/Z) ≥ 1. This constraint is in perfect harmony with a recent statement [12]. When it is satisfied, one has only one classical trajectory which contributes to the scattering at a given θ. It is very useful to discuss the above expressions at small values of the impact parameter. A two-term Taylor expansion gives the θ ≃ π − (2bv²/Z) limit for the Coulomb case. The expansion of Eq. (8) gives a larger small-b angle. Clearly, the effect of screening (complete or partial) extends the domain of effective backscattering, i.e., at small impact parameters the scattering angle is larger when there is screening. This is the phenomenological explanation behind the above-mentioned Coulomb glory effect.
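The Coulomb small-b limit quoted above can be verified numerically. The sketch below assumes an attractive −Z/r potential, atomic units, and unit reduced mass, for which the classical Rutherford deflection obeys tan(θ/2) = Z/(bv²):

```python
import math

def theta_coulomb(b, v, Z):
    """Exact classical deflection angle for an attractive Coulomb
    potential -Z/r (atomic units, unit reduced mass assumed):
    tan(θ/2) = Z/(b v^2), hence θ = π - 2 arctan(b v^2 / Z)."""
    return math.pi - 2.0 * math.atan(b * v * v / Z)

Z, v = 2.0, 3.0
for b in (1e-3, 1e-2):
    exact = theta_coulomb(b, v, Z)
    taylor = math.pi - 2.0 * b * v * v / Z   # two-term expansion from the text
    assert abs(exact - taylor) < 1e-4        # agreement at small b
```

As b → 0 the angle tends to π (backscattering), and the two-term expansion θ ≃ π − 2bv²/Z reproduces the exact deflection to high accuracy.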
We note at this point that the special dependence of the integrand function in Eq. (2) on the scattering angle makes our description with a classical differential cross section a well-balanced one. It is well known [15] that the classical scattering angle decreases faster at sufficiently large impact parameters than does the wave-mechanical diffraction angle. The special weighting removes, to a large extent, this range of the b variable. In addition, by decreasing the screening one approaches the Coulombic values for the scattering angle θ(b), but in an interesting manner: from above at small impact parameters, and from below (N ≠ Z) at large impact parameters. This remarkable behavior in the screened case, modulated by its intrinsic velocity dependence, makes the deviation from Bohr's straggling a challenging theoretical problem. Qualitatively, there is a nontrivial interplay between a decreasing glory effect and a growing impact-parameter range in integration.
Using Eqs. (7)-(9), and standard addition and transformation rules in trigonometry, one can easily derive the important sin⁴(θ/2) weighting factor in Eq. (2) in terms of the input parameters {Z, N, v, R_ps} and the b variable. For the b ∈ [0, b*] range we derive the screened form [Eq. (10)]. For the so-called Coulombic part, b ∈ [b*, ∞), the result is simpler [Eq. (11)]. We stress the point (see above) that in the present treatment the bare Coulomb limit is defined from Eq. (10) by taking R_ps → ∞ [Eq. (12)]. Finally, in the completely screened (s) case one gets from Eq. (10) a simple expression [Eq. (13)], since K₂ = 0 and b* = R_s when N = Z.
After the above details with prefixed charge states for intruders, we turn to the question of weighting. Of course, only some statistical concept [2-5] for a weighted summation of channel terms obtained above (at integer N values) for straggling can provide a reasonable approximation. On the other hand, an accurate treatment of higher-order effects, which cause the jumps between various charge states in a statistical energy-loss process, is a complicated issue. In order to quantify the equilibrium charge-state fractions we need an experimental input or a separate quantum treatment. Our weighting in this work rests on equilibrium charge-state fractions applied earlier [5] in stopping-power calculations for He ions. In the rest of the paper this particular case of projectiles will be considered.
The practical implementation of our classical theory requires an estimation for the screening parameter R_ps. We will use for it the radius at which the radial density of a 1s state around the α particle has its maximum; R_ps = 1/2. With dσ = 2πb db in Eq. (2), we performed the integration for the partially screened (N ≠ Z) cases numerically. In the R_s → ∞ limit we recover from Eq. (4) via expansion [or, at R_ps → ∞, from our numerics which rest on Eqs. (10) and (11) together with the definition in Eq. (2)] the well-known simple form σ_trf^C(v) = 4πZ²/v⁴ based on the bare Coulomb potential [1,9]. The corresponding energy-loss straggling, Bohr's Ω_B², is used below as a natural measure to discuss deviations from it.
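The bare-Coulomb value of the trf cross section can be checked by brute-force quadrature. This sketch assumes the (1 − cos θ)² definition of Eq. (2) and the Rutherford differential cross section with unit reduced mass (both assumptions of this sketch); under those conventions the integral evaluates to 4πZ²/v⁴:

```python
import math

def trf_cross_section_coulomb(Z, v, n=20000):
    """Midpoint-rule integration of σ_trf = ∫ (1-cosθ)^2 (dσ/dΩ) dΩ with the
    Rutherford cross section dσ/dΩ = (Z/(2 v^2))^2 / sin^4(θ/2)
    (attractive ion-electron scattering, atomic units, unit reduced mass)."""
    total, dtheta = 0.0, math.pi / n
    for i in range(n):
        th = (i + 0.5) * dtheta
        dsdo = (Z / (2.0 * v * v)) ** 2 / math.sin(th / 2.0) ** 4
        total += (1.0 - math.cos(th)) ** 2 * dsdo * 2.0 * math.pi * math.sin(th) * dtheta
    return total

Z, v = 2.0, 3.0
sigma = trf_cross_section_coulomb(Z, v)
assert abs(sigma - 4.0 * math.pi * Z**2 / v**4) < 1e-4 * sigma
```

Note that the sin⁴(θ/2) weighting exactly cancels the sin⁻⁴(θ/2) Rutherford divergence, so the integral is finite without any cutoff.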
In Fig. 1, different ratios [Ω²/Ω_B²] = [σ_trf(v)/σ_trf^C(v)] are plotted as a function of projectile energy given in MeV units. The conversion to ion velocity in atomic units (a.u.) is given by E(MeV) = 0.1 [v(a.u.)]² for the case (Z = 2) investigated in the present work. The dotted and dash-dotted curves are based on the potential form given by Eq. (3), i.e., on a completely screened two-body potential energy. They are calculated via Eq. (4) for the E ∈ [0.2, 3] MeV ion-energy range, with R_s = 1/2 and R_s = 1, respectively. The value of R_s = 1/2 refers to a neutral-atom, unscreened-electron picture.
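The quoted conversion E(MeV) ≈ 0.1[v(a.u.)]² for Z = 2 follows from E = ½Mv². The check below assumes an alpha-particle mass of about 7294 electron masses and 1 hartree = 27.2114 eV:

```python
# Energy-velocity conversion for He projectiles: E(MeV) ≈ 0.1 [v(a.u.)]^2.
M_ALPHA = 7294.3        # alpha-particle mass in electron masses (assumed value)
HARTREE_EV = 27.2114    # 1 hartree in eV

def mev_from_v(v_au, mass_au=M_ALPHA):
    e_hartree = 0.5 * mass_au * v_au ** 2   # kinetic energy in hartree
    return e_hartree * HARTREE_EV * 1e-6    # convert eV -> MeV

coeff = mev_from_v(1.0)        # the prefactor of v^2
assert abs(coeff - 0.1) < 0.005
```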
The other curves are based on the potential form given by Eq. (5), i.e., on a partially screened two-body interaction energy. The long-dashed curve refers to the singly ionized (He⁺) case. It is calculated under the prefixed N = 1 and R_ps = 1/2 conditions. Finally, the solid curve is based on an incoherent weighted averaging (av) of results obtained (separately) for the different (He, He⁺, and He²⁺) charge-state channels [Eq. (14)], where P(He²⁺), P(He⁺), and P(He) are [5] normalized probabilities for the corresponding charge states, and they satisfy the P(He²⁺) + P(He⁺) + P(He) = 1 constraint. Clearly, in our incoherent modeling the coefficient of the first product on the right-hand side is just P(He²⁺).
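The incoherent averaging over charge-state channels reduces to a probability-weighted sum. A minimal sketch follows; the fractions and per-channel ratios below are illustrative placeholders, not the tabulated equilibrium fractions of [5]:

```python
# Incoherent charge-state weighting: the averaged trf quantity is a
# probability-weighted sum over the He2+, He+ and He channels.
def averaged_trf(p_he2, p_he1, p_he0, s_he2, s_he1, s_he0):
    # the charge-state fractions must be normalized to unity
    assert abs(p_he2 + p_he1 + p_he0 - 1.0) < 1e-12
    return p_he2 * s_he2 + p_he1 * s_he1 + p_he0 * s_he0

# purely illustrative numbers (not the values of [5])
avg = averaged_trf(0.6, 0.3, 0.1, 1.0, 1.4, 1.2)
assert abs(avg - 1.14) < 1e-12
```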
The curves calculated with a completely screened potential energy for the two-body interaction show maxima whose positions depend [see the discussion at Eq. (4)] on R_s. Of course, all these curves (Z = N = 2) tend [10] to the Bohr result at high velocities. In the singly ionized (N = 1) case, which is one of the usual charge states of incoming projectiles, the straggling is still higher than the Bohr result in an appreciable range of energies; asymptotically we have (Ω/Ω_B)²_{He⁺} ≃ 1 + 2[N/(R_ps v²)].
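The quoted He⁺ asymptote makes the approach to the Bohr value from above explicit; a small numerical illustration (the velocities are arbitrary sample points):

```python
# High-velocity asymptote for the singly ionized channel:
# (Ω/Ω_B)^2 ≈ 1 + 2 N / (R_ps v^2); with N = 1, R_ps = 1/2 the
# enhancement decays as 4/v^2, so the Bohr value is approached from above.
def straggling_ratio_asymptotic(N, R_ps, v):
    return 1.0 + 2.0 * N / (R_ps * v * v)

ratios = [straggling_ratio_asymptotic(1, 0.5, v) for v in (2.0, 4.0, 8.0)]
assert ratios[0] > ratios[1] > ratios[2] > 1.0   # monotone decay toward 1
assert abs(ratios[0] - 2.0) < 1e-12              # v = 2: 1 + 4/4 = 2
```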
Finally, the solid curve suggests that after an incoherent weighting of allowed charge states, the averaged straggling differs from Bohr's prediction with bare α particles only a little over a surprisingly broad energy range. This statement is important from a general point of view on the energy-loss phenomenon, since it is this energy range in which Rutherford performed his famous experiments with α particles and established his atomic model of condensed matter. At this point of presenting numerical results, we would like to mention another possibility for an average characterization of straggling. Such a treatment would rest still on the screened form in Eq. (3) but with the Z → Z_eff and R_s → R_s(Z_eff) parametrization. This could mimic, in a mean-field picture, the variation of an effective charge (and thus of the screening) with ion velocity. A way of fixing the parameters could be to apply a constraint via (solely) that contribution to the stopping power which is due to single-particle scattering in a dielectric medium, including [16] the so-called Z₁³ Barkas term. The closed expression derived earlier [10] for the transport (tr) cross section σ_tr(Z, R_s, v) could allow [A is given at Eq. (4)] a future extension along this line, since it contains (in a limiting case, where [(Z/R_s)/v²] ≪ 1) the mentioned (perturbative) Z₁³ charge-sign effect.
In conclusion, the charge-state dependence of the collisional energy-loss straggling has been investigated in this work by implementing Bohr's classical method using a partially screened two-body interaction potential. We solved the classical scattering problem exactly for such an interaction, thus considerably extending the proper phenomenology beyond the classic work of Bohr. We quantified the nontrivial interplay between the Coulomb glory effect and the effective impact-parameter range by varying the velocity of the swift attractive ion. The statistical weighting of contributions, calculated for different charge-state channels, is briefly discussed. As a future direction, we propose to investigate in more detail the space-time aspects [17,18] of an effective two-body interaction by combining quantum-statistical and semiclassical (mean-field) charge-polarization [19] pictures. The present result obtained for He⁺ suggests [see the discussion at Eq. (15)] that a further (dynamical) screening may act to enhance the Barkas effect, in agreement with an earlier [20] forecast.

FIG. 1 (caption fragment): … Eq. (3) with R_s = 1/2 and R_s = 1, respectively. The long-dashed curve rests on Eq. (5) for a partially screened (Z = 2, N = 1) potential, with R_ps = 1/2 to model an He⁺ ion. The solid curve is calculated via Eq. (14). It describes a weighted average of channel contributions.
Regulation of Mitochondrial NADP+-dependent Isocitrate Dehydrogenase Activity by Glutathionylation*
Recently, we demonstrated that the control of mitochondrial redox balance and oxidative damage is one of the primary functions of mitochondrial NADP+-dependent isocitrate dehydrogenase (IDPm). Because cysteine residue(s) in IDPm are susceptible to inactivation by a number of thiol-modifying reagents, we hypothesized that IDPm is likely a target for regulation by an oxidative mechanism, specifically glutathionylation. Oxidized glutathione led to enzyme inactivation with simultaneous formation of a mixed disulfide between glutathione and the cysteine residue(s) in IDPm, which was detected by immunoblotting with anti-GSH IgG. The inactivated IDPm was reactivated enzymatically by glutaredoxin2 in the presence of GSH, indicating that the inactivated form of IDPm is a glutathionyl mixed disulfide. Mass spectrometry and site-directed mutagenesis further confirmed that glutathionylation occurs at Cys269 of IDPm. The glutathionylated IDPm appeared to be significantly less susceptible than native protein to peptide fragmentation by reactive oxygen species and proteolytic digestion, suggesting that glutathionylation plays a protective role presumably through structural alterations. HEK293 cells and intact respiring mitochondria treated with oxidants inducing GSH oxidation such as H2O2 or diamide showed a decrease in IDPm activity and the accumulation of glutathionylated enzyme. Using immunoprecipitation with anti-IDPm IgG and immunoblotting with anti-GSH IgG, we were also able to purify and positively identify glutathionylated IDPm from 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-treated mice, a model for Parkinson's disease. The results of the current study indicate that IDPm activity appears to be modulated through enzymatic glutathionylation and deglutathionylation during oxidative stress.
The initial cellular response to oxidative stress is often a reduction in the level of GSH, which represents the major low molecular weight antioxidant in mammalian cells, and a corresponding increase of GSSG, the oxidized form of GSH (1)(2)(3). It is well established that GSH plays a central role in the cellular defense against oxidative damage (4). Because the GSH pool greatly exceeds the GSSG pool in resting cells, the oxidation of even a limited amount of GSH to GSSG can dramatically change the GSH/GSSG ratio and affect the redox status within the cell.
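The leverage of a small amount of GSH oxidation on the GSH/GSSG ratio follows from simple stoichiometry (2 GSH → GSSG); the pool sizes below are illustrative, not measured values:

```python
# Oxidizing a modest fraction of the GSH pool collapses the GSH/GSSG ratio,
# since every two oxidized GSH molecules yield one GSSG.
def ratio_after_oxidation(gsh0, gssg0, fraction_oxidized):
    gsh = gsh0 * (1.0 - fraction_oxidized)
    gssg = gssg0 + gsh0 * fraction_oxidized / 2.0
    return gsh / gssg

r0 = ratio_after_oxidation(100.0, 1.0, 0.0)    # resting cell: 100:1
r20 = ratio_after_oxidation(100.0, 1.0, 0.2)   # oxidize only 20% of the GSH
assert abs(r0 - 100.0) < 1e-9
assert r20 < 8.0    # the ratio falls from 100 to ~7.3
```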
Under these conditions of moderate oxidative stress, thiol groups of intracellular proteins can be modified by the reversible formation of mixed disulfides between protein thiols and low molecular mass thiols such as GSH, a process known as S-glutathionylation (5). Glutathionylation, which is reversible by the actions of the enzyme glutaredoxin (thioltransferase) (6,7), may serve as a means of protection by preventing the irreversible oxidation of cysteine to cysteine sulfinic and sulfonic acid.
One proposed mechanism leading to protein glutathionylation in vivo is the thiol/disulfide exchange mechanism (8), which occurs when an oxidative insult changes the GSSG/GSH ratio and induces GSSG to bind to protein thiols. The GSSG/GSH ratio is an indicator of the redox status of the cell, and the extent of protein glutathionylation will vary accordingly; a higher ratio will promote glutathionylation, and a lower ratio will result in deglutathionylation (9). Therefore, the regulated formation of mixed disulfides between protein thiols and glutathione in response to redox changes has the potential to act as a reversible switch in much the same way as phosphorylation (10). A growing list of enzymes, including glyceraldehyde-3-phosphate dehydrogenase (11), protein kinase C (12), guanylate cyclase (13), and glucocorticoid receptors (14), is potentially influenced by the formation of protein adducts with glutathione. Also, transcription factors such as c-Jun appear to be redox-regulated by mechanisms that include protein S-thiolation (10,15), and ubiquitin-activating enzymes become glutathionylated, with a concomitant decrease in the ubiquitination pathway, when cells are exposed to oxidants (16).
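If glutathionylation proceeds purely by thiol/disulfide exchange at equilibrium (P-SH + GSSG ⇌ P-SSG + GSH), the glutathionylated fraction of a protein thiol depends only on the exchange constant and the GSH/GSSG ratio. The sketch below uses an illustrative constant, not a measured value for IDPm:

```python
# Thiol/disulfide exchange: P-SH + GSSG <-> P-SSG + GSH with constant K_mix.
# At equilibrium [P-SSG]/[P-SH] = K_mix/R with R = [GSH]/[GSSG], so the
# glutathionylated fraction is f = K_mix / (K_mix + R).
def glutathionylated_fraction(K_mix, R):
    return K_mix / (K_mix + R)

# K_mix = 1 is a purely illustrative value.
assert glutathionylated_fraction(1.0, 100.0) < 0.01   # resting cell: ~1%
assert glutathionylated_fraction(1.0, 1.0) == 0.5     # oxidative stress, R ~ 1
```

This reproduces the behavior described in the text: a high GSH/GSSG ratio keeps the protein almost fully reduced, while a ratio near 1 drives substantial glutathionylation.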
The isocitrate dehydrogenases (ICDHs; EC 1.1.1.41 and EC 1.1.1.42) catalyze oxidative decarboxylation of isocitrate to α-ketoglutarate and require either NAD⁺ or NADP⁺, producing NADH and NADPH, respectively (17). NADPH is an essential reducing equivalent for the regeneration of GSH by glutathione reductase and for the activity of the NADPH-dependent thioredoxin system (18,19), and both are important in the protection of cells from oxidative damage. Therefore, ICDH may play an antioxidant role during oxidative stress. In mammals, the following three classes of ICDH isoenzymes exist: mitochondrial NAD⁺-dependent ICDH, mitochondrial NADP⁺-dependent ICDH (IDPm), and cytosolic NADP⁺-dependent ICDH (IDPc) (17). We reported recently (20) that ICDH is involved in the supply of NADPH needed for GSH production against cytosolic and mitochondrial oxidative damage. Hence, damage to IDPm may result in the perturbation of the balance between oxidants and antioxidants and subsequently lead to a pro-oxidant condition. Because cysteine residues play an essential role in the catalytic function of IDPm (21,22), the highly reactive sulfhydryl groups in IDPm could be potential targets of nitric oxide, peroxynitrite, 4-hydroxynonenal (HNE), H₂O₂, and diamide. Based on the reactive nature of cysteine residue(s) in IDPm at physiological pH and the fact that sulfhydryl modification results in inactivation of the enzyme, we hypothesized that IDPm is a candidate for glutathionylation-mediated regulation.
In this study, we report that IDPm is inactivated by the formation of a mixed disulfide between Cys269, the active-site cysteine, and GSH, and that this inactivation is reversed not only by dithiothreitol (DTT) but also, more importantly, by thioltransferase, a thiol:disulfide oxidoreductase that is specific for glutathionyl mixed disulfide substrates and specifically utilizes GSH as cosubstrate. This mechanism suggests an alternative modification for the redox regulation of cysteine in IDPm and a possible in vivo mechanism for the regulation of IDPm activity.
Cell Culture-HEK293, a human embryonic kidney cell line, was purchased from the American Type Culture Collection and maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 100 units/ml penicillin, and 100 µg/ml streptomycin sulfate. Cells were incubated in a humidified atmosphere of 5% CO₂ and 95% air at 37°C.
Measurement of IDPm Activity-IDPm (6.5 µg) was added to 1 ml of 40 mM Tris buffer, pH 7.4, containing NADP⁺ (2 mM), MgCl₂ (2 mM), and isocitrate (5 mM). Activity of IDPm was measured by the production of NADPH at 340 nm at 25°C (21). One unit of IDPm activity is defined as the amount of enzyme catalyzing the production of 1 µmol of NADPH/min.
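The unit of activity is defined spectrophotometrically. Assuming Beer-Lambert behavior with the standard NADPH extinction coefficient of 6220 M⁻¹ cm⁻¹ and a 1 cm path (assumptions of this sketch, not stated in the text), the A340 slope converts to µmol/min as follows; the slope value used is illustrative:

```python
EPS_NADPH = 6220.0   # M^-1 cm^-1, extinction coefficient of NADPH at 340 nm

def activity_umol_per_min(dA340_per_min, volume_ml, path_cm=1.0):
    """Convert an absorbance slope into µmol NADPH produced per minute."""
    conc_rate_M = dA340_per_min / (EPS_NADPH * path_cm)   # mol/L per min
    return conc_rate_M * 1e6 * volume_ml / 1000.0         # µmol/min in the cuvette

u = activity_umol_per_min(0.0622, 1.0)   # ΔA340 = 0.0622/min in a 1 ml assay
assert abs(u - 0.01) < 1e-6              # 0.01 unit under the stated definition
```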
Immunoblot Analysis-Proteins were separated on 10% SDS-polyacrylamide gels, transferred to nitrocellulose membranes, and subsequently subjected to immunoblot analysis using appropriate antibodies. Immunoreactive antigen was then recognized using horseradish peroxidase-labeled anti-rabbit IgG and an enhanced chemiluminescence detection kit (Amersham Biosciences). For glutathionylation detection by immunoblot, IDPm was mixed with 5× SDS sample buffer, without reducing reagents, supplemented with 5 mM NEM to block unreacted thiol groups, and then subjected to SDS-PAGE followed by electroblotting onto nitrocellulose membranes. Polyclonal anti-GSH antibody was a kind gift of Dr. Ole P. Ottersen (University of Oslo, Norway).
Immunoprecipitation-Mitochondrial fractions were precleared with protein A-Sepharose (Amersham Biosciences) and preimmune serum for 1 h at 4°C. Supernatants were then incubated with rabbit polyclonal anti-IDPm (5 µg) for 12 h at 4°C, followed by protein A-Sepharose incubation for 1 h at 4°C. Immunoprecipitated proteins were washed, separated by SDS-PAGE, and visualized by Western blotting with anti-GSH antibody.
Mass Spectrometry-Positive ion electrospray ionization-mass spectrometry (ESI-MS) was performed on an HP1100 Series LC/MSD triple quadrupole mass spectrometer (Hewlett-Packard, Palo Alto, CA) equipped with an atmospheric pressure ion source. Control and GSSG (5 mM, 1 h)-treated IDPm samples were subjected to gel filtration and mixed with 0.1% trifluoroacetic acid. Aliquots of IDPm samples (5 µg of protein) were directly infused into the ESI source of the mass spectrometer. For the tryptic digestion and peptide mapping, IDPm samples were digested with trypsin for 24 h at 37°C, at an enzyme/substrate ratio of 1:10 (w/w). Mass analysis was performed with a MALDI-TOF MS (Voyager DE-STR, Biospectrometry W/S) using a nitrogen laser (337 nm) with delayed extraction. The extraction voltage was 20 kV.
Samples were prepared for mass analysis in a matrix of α-cyano-4-hydroxycinnamic acid. Saturated matrix solution was prepared in a 50% (v/v) solution of acetonitrile, 0.1% trifluoroacetic acid.
Structural Analysis-For CD spectroscopy, samples of IDPm were desalted on an Econo-Pac 10 DG column (Bio-Rad) equilibrated in 20 mM Tris buffer, pH 7.4, and fractions containing the protein were pooled. CD spectra were recorded on a temperature-controlled spectropolarimeter (Jasco J-810). Spectra were recorded at 25°C in 0.05-cm quartz cells from 190 to 250 nm with protein concentrations of 0.05 mg/ml at a digital resolution of 0.5 nm, with a scan speed of 5 nm/min for wavelengths above and below 190 nm, respectively. Multiple spectra were recorded for duplicated samples. These spectra were averaged and corrected for base-line contribution from the buffer. For intrinsic fluorescence, steady-state fluorescence measurements were performed on a Shimadzu RF-5301 PC spectrofluorophotometer with the sample compartment maintained at 22°C. A 150-watt xenon source was used. The slit width was fixed at 5 nm for excitation and emission. Unless otherwise stated, samples were excited at 278 nm, and the emission was monitored between 300 and 400 nm. ANSA (100 µM) was incubated with the various enzyme forms in 25 mM potassium phosphate buffer, pH 7.0, 50 mM KCl. The fluorescence emission spectra (excitation, 370 nm) of the different mixtures were monitored on the spectrofluorometer. Binding of ANSA to the protein was evidenced by subtracting the emission spectrum of ANSA alone from that of ANSA in the presence of the enzyme.
Vector Construction-For the construction of the His-tagged IDPm purification vector pET14b-IDPm, a 1.3-kb DNA encoding the IDPm gene was amplified from LNCX containing the cDNA insert for mouse IDPm (a gift from Dr. T. L. Huh, Kyungpook National University, Korea) by PCR techniques. Briefly, the 5′-primer oligonucleotide (5′-GGAATTCCATATGGCTGAGAAGAGGA-3′), which annealed to the 5′ end of the IDPm gene and introduced an NdeI site, and the 3′-primer oligonucleotide (5′-CAGGATCCCTACTGCTTGCCCA-3′), complementary to the 3′ terminus of the IDPm gene but inserting a BamHI site, were used as primers.
Site-directed Mutagenesis and Preparation of Recombinant Proteins-Site-directed mutagenesis was performed using the QuikChange™ site-directed mutagenesis kit (Stratagene). The following mutagenic primer was used: 5′-GCTTTGTGTGGGCTTCCAAGAACTATGATG-3′ for C269S, in which the underlined bases indicate the substituted serine codon. In order to prepare recombinant proteins, Escherichia coli transformed with pET14b containing the cDNA insert for mouse IDPm or the mutant IDPm (K212T) construct was grown and lysed, and His-tagged proteins were purified on nickel-nitrilotriacetic acid-agarose as described elsewhere (23). The mouse glutaredoxin2 (Grx2) expression construct (pET21-Grx2) was a kind gift of Dr. Vadim N. Gladyshev (University of Nebraska), and recombinant Grx2 was prepared as a His-tagged protein.
Isolation of Mitochondria from Rabbit Heart-Rabbits obtained from Hoychang Science (Taegu, Korea) were anesthetized with sodium pentobarbital and decapitated. Hearts were removed and immediately immersed and rinsed in ice-cold homogenization buffer containing 180 mM KCl, 5.0 mM MOPS, and 2.0 mM EDTA at pH 7.4. Hearts were then minced and homogenized in homogenization buffer with a Polytron homogenizer (low setting, 2 s). The homogenate was centrifuged at 500 × g for 5 min at 4°C, and the supernatant was filtered through cheesecloth. The mitochondrial pellet was obtained upon centrifugation of the supernatant at 5,000 × g for 10 min at 4°C. After two rinses with ice-cold buffer, the mitochondria were resuspended in homogenization buffer to a final concentration of ~35 mg/ml.
Incubation of Intact Mitochondria with H₂O₂ and Diamide-Mitochondria were diluted to a concentration of 0.25 mg/ml in buffer composed of 125 mM KCl and 5.0 mM KH₂PO₄ at pH 7.25. Respiration was initiated by the addition of 15 mM α-ketoglutarate and allowed to proceed for 1 min. State 3 respiration was then induced by addition of 2.0 mM ADP. One minute after initiation of state 3 respiration, H₂O₂ (50 µM) or diamide (20 µM) was added. All incubations were performed at room temperature.
Mice and MPTP Injection-ICR mice obtained from Hyochang Science (Taegu, Korea) were allowed free access to standard rodent chow (Harlan Teklad 7001) and tap water. Male mice aged 6-8 weeks were used in this study. Mice were injected intraperitoneally with MPTP dissolved in saline (15-30 mg/kg, 5 times every 2 h). Control animals received the same volume of vehicle alone. Mice were sacrificed 1 week after the last injection (n = 6 per group), and brains were rapidly removed.
Control Experiments-In order to evaluate artifactual intrapreparative glutathionylation, IDPm was purified from tissue/cell homogenates treated with 3 µCi of [³H]GSH (45-50 Ci/mmol, PerkinElmer Life Science) by using immunoprecipitation and was subsequently resolved by SDS-PAGE. The gels were soaked in Amplify (Amersham Biosciences) and then dried, and finally the dried gels were placed in direct contact with x-ray film. For more accurate quantitation, excised gel slices were dissolved with the NCS solubilizer (Amersham Biosciences). The resulting solutions were assayed by liquid scintillation counting.
Replicates-Unless otherwise indicated, each result described in the paper is representative of at least three separate experiments.
RESULTS
Inactivation of IDPm by GSSG-Incubation of IDPm with GSSG at pH 7.4 at 37°C resulted in a time- and concentration-dependent loss of enzyme activity, as shown in Fig. 1, A and B. At 10 mM GSSG, activity was inhibited completely, with half-maximal inhibition occurring at ~4 mM. On the other hand, 5 mM GSH did not cause any noticeable inhibition of IDPm activity. In resting cells, the ratio of GSH to GSSG exceeds 100, whereas in various models of oxidative stress this ratio was reported to decrease to values between 10 and 1 (8). When IDPm was incubated with mixtures of GSH and GSSG at ratios between 10 and 1, 15-25% inhibition was achieved. It has been shown that a polyclonal anti-GSH antibody is very useful for the detection of glutathionylation. When IDPm was incubated with various concentrations of GSSG and subjected to Western blot analysis with a polyclonal anti-GSH antibody, the intensity of the immunoreactive 45-kDa band increased in a concentration-dependent manner (Fig. 1C). It has been proposed that cysteine residue(s) in IDPm could be potential targets of sulfhydryl-modifying agents (21,22). To determine whether glutathionylated cysteine(s) in IDPm are susceptible to sulfhydryl-modifying agents, IDPm was allowed to react simultaneously with GSSG and various concentrations of NEM, diamide, and HNE, a lipid peroxidation product. As shown in Fig. 2, a dose-dependent decrease of glutathionylated IDPm was observed. As shown in Fig. 3A, the addition of 5 mM DTT completely reversed the inhibition, suggesting that GSSG modifies susceptible cysteine(s) on the protein through the formation of a mixed disulfide. Grx2, a thioltransferase, is a 12-kDa mitochondrial protein that has been shown to specifically reverse protein-glutathione mixed disulfides by utilizing GSH as an electron donor. Grx2 is therefore a key component of the cellular machinery in maintaining and reversing glutathionylation of susceptible protein thiols (7).
Thus it can be hypothesized that glutathionylated IDPm could be reactivated by Grx2 in the presence of GSH. As depicted in Fig. 3A, more than 80% of the original IDPm activity was recovered by the enzyme-catalyzed disulfide exchange with Grx2 (5 µg) in the presence of 0.5 mM GSH. However, this concentration of GSH alone had no effect on the recovery of IDPm activity. A correlation between the recovery of IDPm activity and the reduction of the level of glutathionylated IDPm, evaluated by Western blotting with anti-GSH antibody, was observed (Fig. 3B). To confirm further how many cysteine residues are targeted for glutathionylation, IDPm was treated with 5 mM GSSG for 1 h at 37°C and subjected to ESI-MS. Molecular masses of unmodified and GSSG-modified IDPm samples were 47,622 and 47,927 Da, respectively (Fig. 4A), a shift consistent with the addition of a single glutathionyl group. Upon tryptic digestion, the peptide containing Cys269 from native IDPm displayed the expected molecular mass of 1042 Da, and GSSG treatment induced the formation of a peak with a molecular mass of 1349 Da, corresponding to the addition of one glutathione molecule. The other fragments did not contain the glutathione addition. The result indicates that Cys269 of IDPm is a target site for glutathionylation. The C269S mutant, containing a cysteine-to-serine mutation at position 269, lost 40% of its catalytic activity. As shown in Fig. 4, B and C, the remaining activity of the C269S mutant was not affected by GSSG, and no glutathionylated IDPm was observed with 10 mM GSSG, further confirming that Cys269 is a target of glutathionylation of IDPm. The total number of cysteine residues in mammalian IDPm has been reported to be 8.2-8.8 mol/mol subunit (24). Among these cysteines, Cys269 and Cys379 are modified by NEM, and the main site of modification (21), Cys269, has a catalytic role, most likely in the strengthened binding of Mn²⁺ in the presence of isocitrate, whereas Cys379 is essential for neither catalysis nor the NADP⁺-binding site.
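The reported mass shifts are internally consistent with a single glutathionyl adduct, which adds the mass of GSH minus two hydrogens (about 305.3 Da) on mixed-disulfide formation; a quick arithmetic check:

```python
# A protein-S-S-G mixed disulfide adds GSH (307.32 Da) minus 2 H to the chain.
GSH_ADDUCT = 307.32 - 2 * 1.008   # ~305.3 Da

protein_shift = 47927 - 47622     # intact IDPm, ESI-MS (305 Da)
peptide_shift = 1349 - 1042       # tryptic peptide carrying Cys269 (307 Da)
for shift in (protein_shift, peptide_shift):
    # both shifts match one glutathione adduct within typical mass accuracy
    assert abs(shift - GSH_ADDUCT) < 3.0
```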
To determine the protective effect of substrates on the glutathionylation of IDPm, IDPm was incubated with GSSG in the presence of each substrate for 1 h at 37°C. As shown in Fig. 5A, although the incubation of IDPm with 10 mM GSSG completely inhibited IDPm activity, the addition of 4 mM isocitrate significantly protected IDPm from inactivation. The exclusive protective effect of isocitrate compared with NADP⁺ suggests that GSSG reacts more readily with the isocitrate-binding site than with the NADP⁺-binding site. Immunoblotting with polyclonal anti-GSH antibody to detect IDPm also indicated that only isocitrate, but not NADP⁺ or MnCl₂, exhibited a protective effect against glutathionylation. Under conditions of oxidative stress to cells, reactive oxygen species (ROS) can cause oxidative damage to proteins, including fragmentation of peptides. The fragmentation of IDPm by oxidative damage was measured by the disappearance of the native IDPm band at 45 kDa in denaturing electrophoresis gels. As shown in Fig. 6A, glutathionylated IDPm was protected from peptide fragmentation caused by rose bengal/light, which generates singlet oxygen, and by diamide. We also examined whether or not glutathionylated IDPm becomes less susceptible to proteolytic digestion. The results indicated that glutathionylated IDPm appeared to be significantly less susceptible than the native protein to proteolysis by trypsin, chymotrypsin, or Pronase (Fig. 6B). It can be proposed that glutathionylation plays an important protective role against the degradation of IDPm by ROS or proteases, presumably through structural changes that may cause IDPm to be less susceptible to attack by ROS or proteases.
Structural Changes in Modified IDPm-To examine the secondary structure of the IDPm species after modification with GSSG, far-UV CD spectra of nontreated and GSSG-treated IDPm were recorded and analyzed for specific elements of secondary structure. The CD spectrum of IDPm is very similar to that of the protein after modification with 5 mM GSSG, suggesting that glutathionylation does not appreciably change the secondary structure of the protein (Fig. 7A). To reveal increases in flexibility or a partial unfolding of glutathionylated IDPm, the binding of the fluorescent probe ANSA was used to detect the accessibility of hydrophobic regions on the protein. When IDPm was exposed to GSSG for 1 h, it bound the hydrophobic probe ANSA more efficiently than the native protein did. The representative result with GSSG is shown in Fig. 7B. In an attempt to determine the effects of glutathionylation on the conformation of IDPm, the intrinsic fluorescence of the aromatic amino acids in each of the various forms of the enzyme was determined. Native IDPm exhibited a fluorescence emission spectrum typical for tryptophan residues in proteins. Upon excitation of native IDPm at 278 nm, an emission spectrum with a maximum at 333 nm was observed. The fluorescence spectra of native and GSSG-treated IDPm, normalized to the protein content, showed that modified IDPm displays a dose-dependent decrease in the quantum yield of the emission spectra and a blue shift of the maximum emission wavelength (Fig. 7C).
Glutathionylation of IDPm in Intact Cells and Mitochondria-Because GSSG readily glutathionylates IDPm in vitro, we examined IDPm activity and glutathionylation in HEK293 cells, an embryonic kidney cell line, after treatment with H2O2 or diamide. It has been reported that chemical oxidants such as H2O2 or diamide can catalyze the formation of protein-mixed disulfides with glutathione (26). As shown in Fig. 8A, a concentration-dependent decrease of IDPm activity was observed in H2O2- or diamide-treated cells. Mitochondrial fractions from control and H2O2- or diamide-treated cells were subjected to immunoprecipitation with anti-IDPm antibody followed by separation by SDS-PAGE. Western blot analysis of the purified IDPm with anti-GSH IgG revealed a concentration-dependent increase of immunoreactive bands in H2O2- or diamide-treated cells, whereas no increase was found in control cells (Fig. 8B). A control incubation of cell homogenates with [3H]GSH did not yield significant amounts of protein-bound radiolabeled GSH. Furthermore, purified IDPm from cell homogenates treated with GSH was analyzed by mass spectrometry, and no glutathionylated products were observed. These results confirm that artifactual intrapreparative glutathionylation did not occur. To evaluate the reversibility of glutathionylation in mitochondria, intact respiring mitochondria were treated with 50 µM H2O2 or 20 µM diamide for 10 min, which caused a 40-50% decline in IDPm activity. Mitochondria were then solubilized and treated with DTT or Grx2/GSH. Both DTT and the Grx2 system were capable of reactivating IDPm inactivated by treatment of mitochondria with H2O2 or diamide; GSH at 0.5 mM had no effect when added in the absence of Grx2 (Fig. 9, A and B). When HEK293 cells were exposed to 2 mM H2O2 or 1 mM diamide for 30 min and subsequently washed with PBS, IDPm activities in treated cells were significantly reduced and gradually recovered to near control levels during further incubation (Fig. 9, C and D).

FIG. 6. Effect of glutathionylation on the oxidative fragmentation and proteolytic digestion of IDPm. A, effect of glutathionylation on the fragmentation of IDPm caused by oxidants. IDPm treated with 10 mM GSSG for 1 h at 37°C and untreated IDPm were further incubated with 10 µM rose bengal/light for the indicated times or with various concentrations of diamide for 1 h. Samples were subjected to SDS-PAGE for immunoblotting with anti-GSH IgG. B, effect of glutathionylation on the proteolytic digestion of IDPm. Native IDPm and IDPm treated with 10 mM GSSG for 1 h at 37°C were treated with various concentrations of trypsin, chymotrypsin, or Pronase for 15 min at 4°C. Samples were separated by SDS-PAGE, and protein bands were visualized by Coomassie staining.
Glutathionylation of IDPm in Vivo-Parkinson disease (PD) is a progressive neurodegenerative disorder that primarily affects the dopamine neurons projecting from the substantia nigra pars compacta to the putamen and caudate regions of the brain (30). Exposure to MPTP induces PD-like symptoms in humans and causes degeneration of dopaminergic neurons in several animal species (28). MPTP is known to generate oxidative stress that leads to formation of GSSG, which forms disulfide linkages (Pr-SSG) with cysteine residues of proteins in mitochondria (29). As shown in Fig. 10, Western blot analysis with anti-GSH IgG of IDPm purified by immunoprecipitation with anti-IDPm antibody from brains of MPTP-treated mice showed a pronounced increase of glutathionylated IDPm. Control experiments confirmed that artifactual intrapreparative glutathionylation did not occur.

DISCUSSION

NADPH is an essential cofactor for the regeneration of GSH, the most abundant low-molecular-mass thiol in most organisms, by glutathione reductase, in addition to its critical role in the activity of the NADPH-dependent thioredoxin system (18,19). IDPm is a key enzyme in cellular defense against oxidative damage: it supplies the mitochondrial NADPH needed for the regeneration of mitochondrial GSH or thioredoxin. Elevation of mitochondrial NADPH and GSH by IDPm in turn suppresses oxidative stress and concomitant ROS-mediated damage. It is well established that mitochondrial dysfunction is directly and indirectly involved in a variety of pathological states caused by genetic mutations as well as exogenous compounds or agents (30). Mitochondrial GSH is critically important against ROS-mediated damage because it not only functions as a potent antioxidant but is also required for the activities of mitochondrial glutathione peroxidase and mitochondrial phospholipid hydroperoxide glutathione peroxidase (31), which remove mitochondrial peroxides.
NADPH is a major source of reducing equivalents and a cofactor for the mitochondrial thioredoxin peroxidase/peroxiredoxin family, including peroxiredoxin III/protein SP-22 (32)(33)(34) and peroxiredoxin V/AOEB166 (35). Therefore, any mitochondrial NADPH producer becomes critically important for cellular defense against ROS-mediated damage. In this regard, the inactivation of IDPm may disrupt regulation of the mitochondrial redox balance by impairing the supply of NADPH.
It has been reported that IDPm contains reduced cysteinyl residues that play an essential role in enzyme activity (21,22). The sulfhydryl groups of IDPm are susceptible to modification by ROS, reactive nitrogen species, various oxidants, and lipid peroxidation products (36-39). Moreover, cysteine-containing proteins are susceptible to protein S-glutathionylation, the reversible covalent addition of glutathione to cysteine residues on target proteins. In this study, we present evidence indicating that IDPm can be inhibited by reversible glutathionylation. Glutathionylation of IDPm competed with sulfhydryl-modifying agents such as NEM, diamide, and HNE. Because IDPm inactivation was prevented by adding its substrate isocitrate, we conclude that the GSSG-binding sites are likely to include a cysteine residue near the active site. Treatment of glutathionylated IDPm with DTT or with Grx2 in the presence of GSH resulted in the recovery of IDPm activity, indicating the formation of the protein-SSG species. Grx2, which contains a mitochondrial leader sequence, has been identified in human and mouse tissue (40,41). Unlike other glutaredoxin isoforms, Grx2 is relatively insensitive to oxidative inactivation, making it an effective enzyme for an oxidatively dynamic environment like the mitochondria (41). Mass spectrometry and site-directed mutagenesis revealed that Cys269, a residue that presumably resides in the isocitrate-binding site, is a target for glutathionylation.
Several lines of evidence obtained in the present study indicate that glutathionylation of IDPm results in structural alterations. These findings are reflected in the changes in intrinsic tryptophan fluorescence and in the binding of ANSA. However, the CD spectrum, and therefore the secondary structure content, of IDPm was not altered by glutathionylation, which suggests that only subtle, not drastic, conformational changes occur in the modified protein. The lower fluorescence quantum yield documents the alteration of conformational integrity in glutathionylated IDPm. Modification of IDPm by glutathionylation may lead to a slight disruption of protein structure, which is presumably responsible, at least in part, for the inactivation of the enzyme. Among the techniques aimed at following conformational changes of proteins, binding of the fluorescent probe ANSA has been used to detect the accessibility of hydrophobic regions on a protein upon increases in flexibility or partial unfolding. Binding can be easily monitored because it is accompanied by an increase in fluorescence associated with the transfer of ANSA from a hydrophilic to a hydrophobic environment (42). The change in ANSA fluorescence at 490 nm in IDPm modified by glutathionylation thus indicates conformational changes of the protein.
Mitochondria are responsible for generating the ATP required for all cellular functions, for detoxifying ROS produced via mitochondrial respiration, for controlling the cellular redox state, and for regulating cytoplasmic calcium levels by acting as the major intracellular sink for this ion. Oxidative damage to the mitochondria might interfere with all of these functions (43). To maintain mitochondrial and cellular viability, the mitochondria must respond to a dynamic redox environment. Regulation of biological activity by the reversible modification of protein thiols, such as glutathionylation, is a growing concept in cellular defense (44). The duration and extent of IDPm glutathionylation may be regulated by Grx2 and by the redox state of the mitochondrial glutathione pool. Inhibition of IDPm activity by glutathionylation would likely occur under conditions where oxidant production exceeds antioxidant capacity and the mitochondrial glutathione content declines. The cysteine sulfenic (RSOH) derivative can easily oxidize to its irreversible sulfinic (RSO2H) and sulfonic (RSO3H) forms and hinder regulatory efficiency if it is not converted to a more stable and reversible end product such as a glutathione derivative (6). Glutathionylation of the cysteine sulfenic derivative will prevent IDPm from further oxidation to its irreversible forms. Furthermore, glutathionylated IDPm is less susceptible to fragmentation and protease attack, presumably through a slight conformational change. Together, these constitute an efficient regulatory mechanism. Therefore, it is tempting to speculate that glutathionylation of IDPm is, at least in part, responsible for the maintenance of cellular redox status under oxidative stress, avoiding the irreversible inactivation of an important antioxidant enzyme.
The ready formation of a glutathione mixed disulfide on IDPm likely has biological and medicinal significance. There are numerous oxidative stress-induced pathophysiologic conditions during which redox status and, in particular, the GSH/GSSG ratio are perturbed (43,(45)(46)(47). An important role for glutathione has been proposed in the pathogenesis of PD, where a decrease in GSH concentration in the substantia nigra was observed in preclinical stages of the disease (48). Furthermore, mitochondrial dysfunction appears to play a major role in the neurodegeneration associated with the pathology of PD (25). We observed the presence of the GSS-protein adduct of IDPm, purified by immunoprecipitation, in brain samples from the PD mouse model. The possibility that IDPm is regulated by glutathionylation in the many diseases related to oxidative stress is worthy of further consideration.
In conclusion, under conditions favoring protein glutathionylation, such as oxidative stress, the inactivation of key antioxidant enzymes would further deteriorate cell homeostasis. Therefore, glutathionylation of IDPm could be considered an adaptation of the cell to severe oxidative stress.
A Repeated Median Filtering Method for Denoising Mammogram Images
In the medical field, mammogram analysis is one of the most important procedures for breast cancer detection and early diagnosis. During image acquisition, mammograms may be contaminated by noise due to changes in illumination and sensor error. Hence, it is necessary to remove this noise without affecting edges and fine details, to achieve an effective diagnosis of breast images. In this work, a repeated median filtering method is proposed for denoising digital mammogram images. A number of experiments are conducted on a dataset of mammogram images to evaluate the proposed method using a set of image quality metrics. Experimental results are reported by computing the image quality metrics between the original clean images and denoised images corrupted by different levels of simulated speckle noise as well as salt-and-pepper noise. The quality metrics show that the repeated median filter method achieves higher results than the traditional median filter method. Keywords—Mammogram images; image denoising; median filter; repeated median filtering; speckle noise; salt-and-pepper noise
I. INTRODUCTION
Nowadays, image processing methods are applied for diagnosis in several medical applications, such as liver image analysis [1,2], brain tumor classification [3,4], breast image enhancement and cancer diagnosis [5][6][7], and so on. The image denoising process is used to eliminate noise from noisy images and improve their quality. However, it is difficult to distinguish between noise and other important image components, such as edges and textures, because they have approximately the same high frequencies, which might lead to the loss of some image details [8]. Therefore, image denoising without losing significant information from a noisy image is still a vital problem in the image processing field [8]. In recent years, great progress has been made in the field of image denoising [9][10][11][12].
In medical imaging systems and applications, image denoising plays an important role as a pre-processing step to enhance the quality of digital images and improve the process of medical diagnosis [13]. Even though medical image denoising has been studied in many works for a long time, it is still a challenging issue and an open task. One of the key reasons is that medical image denoising is, from a mathematical perspective, an inverse problem: its solution is neither unique nor flexible.
The rest of the paper is organized as follows: Section II presents a literature review about the methods used for medical and breast image denoising. Section III describes the applied research methods. Section IV presents the experimental results and discussion. Finally, Section V concludes and discusses the research work.
II. LITERATURE REVIEW
Currently, there are many approaches to image denoising for medical imaging systems. Some of the common approaches are median filtering, Wiener filtering, morphological filtering, wavelet-based filtering, and the curvelet transform, among others. The median filter [14,15] is a statistical approach that reduces noise in images while limiting edge blur. The Wiener filter is another statistical approach that estimates an unknown signal using a related known signal as input [16]. Morphological filtering is a local non-linear transformation of geometric features; its fundamental operations are closing, opening, erosion, and dilation, and it has been applied in different areas, especially image denoising [17]. Wavelet-based filtering has also been used for denoising images of all kinds, specifically in medical imaging systems; it is a mathematical transform that is able to perceive local features of the image.
Additionally, it is used to decompose 2D signals into diverse resolution levels [18]. For effective image denoising, an adaptive procedure for image discontinuities is applied; accordingly, a multi-resolution approach is adopted. Here, the curvelet transform can be used to improve image resolution [19]. Breast cancer is currently the most common type of cancer and the leading cause of cancer mortality among women in the world. The number of deaths from breast cancer has doubled in 22 years, affecting both industrialized and less developed countries. Its main known risk factors are associated with prolonged exposure to estrogens and with lifestyle and reproductive patterns, and are therefore difficult to modify; reducing mortality thus requires improving early detection and treatment strategies. Among screening procedures, which also include self-examination and clinical examination, mammography is the only technique that can offer sufficiently timely detection: low-energy X-rays are used to screen the breast in order to assist detection. To ensure an accurate diagnosis, breast X-ray images should be of high quality. In this direction, multiple approaches are used, and denoising the mammogram is one approach to improve quality. In [20,21], convolutional neural networks (CNNs) are applied to minimize the noise in mammograms. Recently, Total Variation (TV) and Non-Local Mean (NLM) algorithms have been developed to mitigate some shortcomings of repeatable noise elimination in medical images [22].
In summary, the solutions in the related studies are still not flexible and need to be improved: a method is needed that removes noise from a noisy image at different levels, producing a high-quality image depending on the selected level. Thus, this paper proposes a repeated median filtering (RMF) method that applies a median filter for a chosen number of iterations with different filter sizes, making it more flexible for the user.
A. Median Filter (MF)
In image processing, before further processing such as edge detection, it is usually necessary to first perform a certain degree of noise reduction. Filtering with the MF is a common step in image processing. It is especially useful against speckle noise and salt-and-pepper noise, and its edge-preserving property makes it useful in situations where edge blur is not desired.
The median filter (MF) is a non-linear digital filtering technique often used to remove noise from images or other signals. The design idea is to examine the samples in the input signal and decide whether each one is representative of the signal, using an observation window composed of an odd number of samples. The values in the observation window are sorted, and the middle (median) value is used as the output; then the oldest value is discarded, a new sample is acquired, and the calculation is repeated. The main idea of the MF is thus to traverse the signal entry by entry, replacing each entry with the median of its neighboring entries. The pattern of neighbors is called a "window", which slides over the entire signal. For one-dimensional signals, the most obvious window contains just the preceding and following entries, while 2D (or higher-dimensional) signals such as images may use more complex window shapes (such as "box" or "cross" patterns). Note that if there is an odd number of entries in the window, the median is easy to define: after all entries in the window are sorted numerically, it is simply the middle value. For an even number of entries, there is more than one possible median.
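The sliding-window procedure just described can be sketched in a few lines of numpy. This is an illustrative implementation, assuming reflection padding at the image borders (the paper does not specify a border policy):

```python
import numpy as np

def median_filter(image, size=3):
    """Naive 2D median filter: replace each pixel with the median of
    its size x size neighborhood (borders handled by reflection padding)."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.median(window)
    return out

# A single salt-and-pepper spike is replaced by its neighborhood median.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # impulse noise
print(median_filter(img)[2, 2])  # 10.0
```

Because the window contains an odd number of samples (9 for a 3×3 window), the median is always one of the original pixel values, which is why edges survive better than under Gaussian averaging.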
Median filtering is a smoothing technique, like linear Gaussian filtering. All smoothing techniques can effectively remove noise in smooth regions of the image signal but have an adverse effect on edges. In general, it is essential to preserve edges while reducing noise in the signal; edges are critical to the visual appearance of an image, for example. For small to medium levels of Gaussian noise, the MF is significantly better at removing noise than Gaussian blur while preserving edges, for a given fixed window size. For high noise levels, however, its performance is no better than Gaussian blur, although it remains particularly effective against speckle noise and salt-and-pepper (impulse) noise. Therefore, the MF is widely used in digital image processing [3].
The naive implementation described above sorts every entry in the window to find the median; however, since only the middle value of the list is needed, a selection algorithm can be more efficient. In addition, some types of signals (typically images) are represented using integers: in these cases a histogram-based approach is simple, because updating the histogram from window to window and locating its median is inexpensive, which makes the filter much more efficient. Fig. 1 illustrates an example of a median filter calculation.
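The histogram-based variant mentioned above can be sketched as follows. For 8-bit images, moving the window one column to the right only requires removing one column of values from a 256-bin histogram and adding another, instead of re-sorting the whole window. This is an illustrative implementation (edge padding assumed), not the paper's code:

```python
import numpy as np

def histogram_median(hist, count):
    """Median of the pixel values summarized by a 256-bin histogram."""
    target = (count + 1) // 2
    cum = 0
    for v in range(256):
        cum += hist[v]
        if cum >= target:
            return v

def median_filter_hist(image, size=3):
    """Median filter for 8-bit images using a per-window histogram that
    is updated incrementally as the window slides along each row."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge").astype(np.int64)
    h, w = image.shape
    out = np.empty_like(image)
    for i in range(h):
        hist = np.zeros(256, dtype=np.int64)
        # Build the histogram for the first window of the row.
        for di in range(size):
            for dj in range(size):
                hist[padded[i + di, dj]] += 1
        out[i, 0] = histogram_median(hist, size * size)
        for j in range(1, w):
            for di in range(size):
                hist[padded[i + di, j - 1]] -= 1         # column leaving the window
                hist[padded[i + di, j - 1 + size]] += 1  # column entering the window
            out[i, j] = histogram_median(hist, size * size)
    return out

img = np.array([[10, 10, 10], [10, 255, 10], [10, 10, 10]], dtype=np.uint8)
print(median_filter_hist(img)[1, 1])  # 10
```

Constant-time-per-pixel refinements of this idea exist, but even this row-wise update avoids the per-pixel sort of the naive version.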
B. Proposed Repeated Median Filtering (RMF) Method
The repeated median filtering (RMF) method is a non-linear, median-based processing approach that applies an MF to an image N times to remove noise at different levels, producing a high-quality image depending on the selected level. The main idea of the RMF method is simple but effective. In this method, the number of iterations and the filter size must be defined, which makes it flexible for the user. Algorithm 1 describes the steps of the RMF process. When applying the RMF method, the user needs to initialize the method's parameters, namely the number of iterations and the filter size. A large MF size is not suitable, because the large set of pixels makes the median value deviate from the local pixel values. The number of iterations makes the method more flexible, allowing the filtering process to be performed multiple times. The filter size defines a 2D, centrally symmetric window; the pixel at its center is replaced by the median of the pixel values inside that window.
IV. EXPERIMENTS AND DISCUSSION
In this section, a set of experiments are conducted on a number of breast mammogram images taken from the database of mini-Mammogram Image Analysis Society (MIAS) to evaluate the proposed method. In addition, the results of the proposed method will be compared with the results of MF on the same images. The proposed method is implemented by using the MATLAB R2016b programming tool.
The implementation was performed on a laptop that has an Intel CPU I7 2.2 GHz with 16 GB of RAM and a Windows 10 operating system. With these experiments' configurations, the evaluation results on the test images are assessed using the peak signal-to-noise ratio (PSNR) and mean squared error (MSE) performance metrics. The experimental results and comparisons will be introduced in the following subsections.
A. Dataset Mammogram Images
The dataset consists of five sample mammogram images selected from the mini Mammogram Image Analysis Society (MIAS) database, shown in Fig. 2. All test images are in PGM format. In the experiments, these images are converted into PNG format and resized to 256×256 pixels.
B. Image Quality Evaluation Measures
The evaluation measures used to assess the proposed image denoising method are two quantitative image quality measures: Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). These measures are computed from the original and denoised images. MSE is the cumulative squared error between an original image O and a denoised image D, both 2D matrices with m rows and n columns; MSE is small if the method performs well and can be computed as [23]:

MSE = (1/(m*n)) * Σ_{i=1}^{m} Σ_{j=1}^{n} [O(i,j) - D(i,j)]^2

The second measure is the PSNR, which gives a good indication of the method's ability to remove noise; a small PSNR value for the denoised image means it has poor quality [23]. PSNR can be calculated as:

PSNR = 10 * log10(R^2 / MSE)

The variable R is the maximum fluctuation of the image's pixels: if the image has a double floating-point data type, R is 1, and if it has an 8-bit unsigned integer data type, R is 255.
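Both metrics are straightforward to compute; a minimal numpy sketch (with R = 255 for 8-bit images, as in the text):

```python
import numpy as np

def mse(original, denoised):
    """Mean squared error between two images of the same shape."""
    o = original.astype(np.float64)
    d = denoised.astype(np.float64)
    return np.mean((o - d) ** 2)

def psnr(original, denoised, peak=255.0):
    """PSNR in dB; peak is 255 for uint8 images, 1.0 for doubles in [0, 1]."""
    e = mse(original, denoised)
    if e == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / e)

o = np.zeros((4, 4), dtype=np.uint8)
d = o.copy()
d[0, 0] = 4  # one pixel off by 4  ->  MSE = 16 / 16 = 1
print(mse(o, d))              # 1.0
print(round(psnr(o, d), 2))   # 48.13
```

Note that PSNR is a monotone transform of MSE, so the two metrics always rank denoisers identically on a given image pair; reporting both, as the paper does, mainly aids readability.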
C. Results and Discussion
To validate the proposed denoising method, all evaluation images are degraded artificially using speckle noise with ratios of 0.5% and 1%, and using salt-and-pepper noise with ratios of 10% and 20%, respectively. The experimental results of the proposed filtering method on the noised images are assessed with the adopted image quality measures. During the experiments, the number of iterations and the filter size are initialized. The number of iterations is critical to the quality of the denoised image, so a set of experiments was conducted to select its best value; Tables I and II list these results. From Tables I and II, the best quantitative results of MSE and PSNR are obtained when N = 2, so this value is selected for removing the images' noise. Tables III and IV exhibit the MSE and PSNR results for the proposed method at different noise levels on test images degraded with speckle noise. As seen in Tables III and IV, the proposed method achieves high PSNR values and low MSE values, validating its effectiveness in removing speckle noise from the test images. Fig. 3 visualizes an example of noised and denoised images, showing how the noisy images are improved by the RMF method when removing speckle noise at a ratio of 1%. Tables V and VI present the quantitative MSE and PSNR results of the proposed method on test images degraded with salt-and-pepper noise at levels of 10% and 20%. Fig. 4 visualizes an example of noised and denoised images.
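The degradation step can be simulated directly. The sketch below is a common salt-and-pepper model (my own minimal implementation, not the paper's code): a fraction `ratio` of pixels is corrupted, half to the maximum value (salt) and half to the minimum (pepper):

```python
import numpy as np

def add_salt_and_pepper(image, ratio, seed=None):
    """Corrupt approximately `ratio` of the pixels of an 8-bit image:
    corrupted pixels become 255 (salt) or 0 (pepper) with equal chance."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    mask = rng.random(image.shape) < ratio   # which pixels to corrupt
    salt = rng.random(image.shape) < 0.5     # salt vs pepper per pixel
    out[mask & salt] = 255
    out[mask & ~salt] = 0
    return out

img = np.full((100, 100), 128, dtype=np.uint8)
noisy = add_salt_and_pepper(img, 0.10, seed=0)
print(np.mean(noisy != img))  # close to 0.10
```

Pairing this with the MSE/PSNR measures before and after filtering reproduces the shape of the paper's evaluation protocol at the 10% and 20% noise levels.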
To compare the proposed RMF method with the MF method, Tables V and VI show the PSNR of images denoised using the RMF and MF methods. From the comparison results in Tables V and VI, as well as Figs. 4 and 5, it is clear that the proposed RMF method outperforms the MF method in terms of PSNR for all test images. For speckle noise, the improvement in the denoised images is greater than 1%. Furthermore, for salt-and-pepper noise, the proposed method achieves a significantly higher PSNR.
V. CONCLUSIONS AND FUTURE WORK
In the medical field, mammogram analysis is one of the most important procedures for breast cancer detection and early diagnosis. During image acquisition, mammograms may be contaminated by noise due to changes in illumination and sensor error. Hence, it is necessary to remove this noise without affecting edges and fine details, to achieve an effective diagnosis of breast images. Therefore, this paper proposes a repeated median filtering (RMF) method for denoising mammogram images. The method enhances digital mammogram images in the spatial domain while preserving the useful information of the images. To evaluate the proposed method, a number of experiments were conducted on a dataset of mammogram images using a set of image quality metrics, computed between the original clean images and denoised images corrupted by different levels of simulated speckle noise as well as salt-and-pepper noise. The quality metrics show that the repeated median filter method achieves higher results than the traditional median filter method.
Direct observation of mono-layer, bi-layer, and tri-layer charge density waves in 1T-TaS2 by transmission electron microscopy without a substrate
Charge density waves, which occur mainly in low-dimensional systems, have a macroscopic wave function similar to superfluids and superconductors. The Kosterlitz-Thouless transition is observed in superfluids and superconductors, but the presence of a Kosterlitz-Thouless transition in ultra-thin charge-density-wave systems has been an open problem. We report the direct real-space observation of charge density waves with new ordered states in mono-layer, bi-layer, and tri-layer 1T-TaS2 crystals using low-voltage scanning transmission electron microscopy without a substrate. This method is ideal for observing local atomic structures and possible defects. We clearly observed that the mono-layer crystal has a new triclinic stripe charge-density-wave order that does not satisfy the triple-q condition q1 + q2 + q3 = 0. A strong electron-phonon interaction gives rise to new crevasse (line) type defects instead of the disclination (point) type defects expected from the Kosterlitz-Thouless transition. These results reaffirm the importance of the electron-phonon interaction in mono-layer nanophysics.
INTRODUCTION
Dimensionality and topology are the most important parameters characterizing physical systems. For example, the integer and fractional quantum Hall effects (QHE) 1, 2 are observed only in two-dimensional systems such as metal-oxide-semiconductor field-effect transistors, GaAs/AlGaAs interfaces, and graphene. 3 Moreover, the conductivity in the QHE is characterized by topological numbers. Dimensionality and topology are also of importance in the Kosterlitz-Thouless (KT) transitions. 4, 5 The KT transition is a phase transition that occurs only in the two-dimensional XY model, in which vortices or vortex pairs (topological defects) are known to play an important role. KT transitions are observed in systems that have macroscopic wave functions (MWF), such as two-dimensional superconductors 6 and ultrathin-film superfluids.
Charge density wave (CDW) 7, 8 is a phenomenon in which dimensionality and topology are of particular importance. A CDW is a modulation of electric charge in low-dimensional conductors caused by electron-phonon coupling, and it has a MWF 9 like superconductors, superfluids, and quantum Hall liquids. Additionally, the properties of a CDW change depending on its topology. [10][11][12] Tantalum disulfide with hexagonally packed TaS6 units, 1T-TaS2 (Fig. 1(a)), is a typical CDW material. 1T-TaS2 is a layered compound belonging to the transition metal dichalcogenides (MX2), and it hosts a two-dimensional CDW with wave-number vectors qi (i = 1, 2, 3) termed triple q (Fig. 1(b)). MX2 compounds belong to the group of van der Waals materials characterized by their layered crystalline structures. The CDW in this material takes four different states depending on temperature (Fig. 1(c)).
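The triple-q condition is a simple vector constraint on the three CDW wave vectors; a small numpy sketch (illustrative values only, not measured data) checks it for an arbitrary triple:

```python
import numpy as np

def satisfies_triple_q(q1, q2, q3, tol=1e-9):
    """Check the triple-q condition q1 + q2 + q3 = 0 for three 2D
    CDW wave vectors (within a numerical tolerance)."""
    total = np.asarray(q1, float) + np.asarray(q2, float) + np.asarray(q3, float)
    return bool(np.linalg.norm(total) < tol)

# Three equal-length vectors at 120 degrees to one another sum to zero,
# as in the threefold-symmetric bulk CDW:
qs = [np.array([np.cos(t), np.sin(t)]) for t in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
print(satisfies_triple_q(*qs))  # True
```

The same check is what distinguishes the bulk-like tri-layer satellites (condition preserved) from the mono-layer stripe order reported later in the paper (condition broken).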
How do the CDW properties change when we use thin samples? There is a possibility that they differ from those in thicker layers (bulk). [13][14][15] There has been a recent debate on whether the commensurate CDW phase exists in ultrathin films of 1T-TaS2. [16][17][18][19][20] However, the samples considered in these studies either contain a substrate or interact with the surrounding environment, which inevitably alters the properties of the material. It is thus possible that different CDW phases emerge in free-standing ultrathin 1T-TaS2 as the thickness decreases down to a single layer. For example, the KT transition may occur in CDW systems.
In this paper, we report the properties of 1T-TaS2 ultra-thin films, including a mono-layer. We obtained images of 1T-TaS2 using scanning transmission electron microscopy (STEM). Figure 2 clearly shows an image of the crystal structure. STEM is a measurement method that does not require a substrate and causes little or no damage to the samples; it is therefore well suited to the measurement of ultrathin samples. We were able to make mono-, bi-, and tri-layer 1T-TaS2 samples using the exfoliation method (cf. Methods section) for the first time, and we used this technique to observe the formation of different CDW phases in tri-layer, bi-layer, and mono-layer samples. Figure 3(a) shows a STEM image of 1T-TaS2 at room temperature. Figure 3(b) is a three-dimensional intensity plot of the area shown by the yellow frame in Fig. 3(a). As seen from the figure, the brown circles form hexagonal lattices; it is a feature of the commensurate (C-) CDW that its super-lattices form such large hexagonal lattices (Figs. 3(d) and 3(e)). Figure 3(c) shows a Fourier-transformed image (FTI) of Fig. 3(a), in which clear satellite peaks are present; the appearance of strong satellites is a feature of the C-CDW state. The q vectors calculated from Fig. 3(c) have |q| = (0.280(8) ± 0.003)|a*| and are rotated by ψ = 13.3(7) ± 1.2° with respect to the Bragg spots. The C-CDW satellites were confirmed in at least 10 other STEM images of 1T-TaS2 samples (Supplementary material 1). These results are in good agreement with the reported values for the C-CDW state: |q| = (0.277 ± 0.003)|a*|, ψ = 13.90°. 15 From the above evidence, we surprisingly discovered that C-CDW occurs in tri-layer 1T-TaS2 at room temperature. Figure 3(f) shows the satellite patterns of the tri-layer sample; the triple-q condition is clearly preserved.
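The quoted q-vector length ratio |q|/|a*| and rotation angle ψ can be extracted from peak positions in the FTI. A minimal numpy sketch; the peak coordinates in the demo are hypothetical, chosen to reproduce the reported bulk C-CDW values, not measured data:

```python
import numpy as np

def cdw_satellite_geometry(bragg, satellite):
    """Given reciprocal-space coordinates of a Bragg spot a* and a CDW
    satellite q (both measured from the FTI origin), return the length
    ratio |q|/|a*| and the angle psi between them in degrees."""
    a = np.asarray(bragg, dtype=float)
    q = np.asarray(satellite, dtype=float)
    ratio = np.linalg.norm(q) / np.linalg.norm(a)
    cos_psi = np.dot(a, q) / (np.linalg.norm(a) * np.linalg.norm(q))
    psi = np.degrees(np.arccos(np.clip(cos_psi, -1.0, 1.0)))
    return ratio, psi

# Hypothetical peak positions reproducing the reported C-CDW values:
sat = 0.28 * np.array([np.cos(np.radians(13.9)), np.sin(np.radians(13.9))])
ratio, psi = cdw_satellite_geometry((1.0, 0.0), sat)
print(round(ratio, 3), round(psi, 1))  # 0.28 13.9
```

In practice the peak coordinates would come from locating intensity maxima in the 2D FFT of the STEM image; the geometry step afterwards is exactly this.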
Bi-layer CDW

Figure 4(a) shows an STEM image of the bi-layer 1T-TaS 2 at room temperature. The CDW super-lattices in the bi-layer are different from those in the tri-layer. Figure 4(b) shows an FTI of Fig. 4(a). The CDW satellites are largely diffused. Thus, a super-lattice is also present in the bi-layer (Figs. 4(c) and 4(d)), but it is not as clearly defined as in the tri-layer. However, it is obvious that there are triple q vectors (Table 1(b)). This CDW has anisotropic triple q vectors, as in the T-phase 14, 15, 21 and the stripe phase. 22 In these phases, the CDW forms domains whose domain walls correspond to discommensurations. 23 The system thereby minimizes its energy by making super-lattices within the domains similar to those in the C-phase. For this reason, it is reasonable to conclude that domains are formed in this sample. Figure 4(c) shows an enlarged view of the yellow frame in Fig. 4(a) with inverted contrast. The field of view of Fig. 4(c) is 100 Å square, similar to the C-CDW domain size in the NC or T-phase. 14,15,24 However, C-type super-lattices such as those in the tri-layer crystal are not observed in Fig. 4(c). Figure 4(d) shows the FTI of Fig. 4(c), and the result is similar to Fig. 4(b). We performed Fourier transforms (FT) at different locations in Fig. 4(a) and confirmed similar satellite patterns. The domains in the bi-layer crystal therefore appear to be smaller than those in the bulk material. Unlike in the tri-layer samples, clear super-lattices and honeycomb lattices are not seen. In view of the length of the q vectors, the CDW state in the bi-layer is similar to the stripe phase in 2H-TaSe 2 . 22 Figure 4(e) shows the satellite patterns of the bi-layer samples, in which the triple q condition starts to be broken.

Mono-layer CDW

Figure 5(a) shows an image of mono-layer 1T-TaS 2 at room temperature. The mono-layer sample is different from the tri-layer and bi-layer samples.
There are shade lines (crevasses) visible in the figure, and super-lattices are present in the yellow square marked D. These crevasses were not found in the bi-layer and tri-layer crystals. Figure 5(b) shows the intensity profile along the red line in D. The ordered structures do not have long-range order. [Fig. 1 caption, continued: b The nesting vectors (green) are mutually equivalent; the sum of the q vectors is 0 (triple q). 12 c CDW states as a function of temperature in bulk 1T-TaS 2 . The blue arrow represents the cooling cycle and the red arrow the heating cycle. C: commensurate state, 12 T: triclinic state, 14, 15, 21 which has stretched honeycomb lattices; between 220 and 280 K the T-satellites are observable, but not the NC satellites or the C super-lattice reflections, even after thermal cycling. NC: nearly commensurate state, 12 IC: incommensurate state, 12 N: normal metal state.] The q vectors of the mono-layer are listed in Table 1(c). The triple q vectors in the mono-layer break the threefold symmetry, similar to the T-phase (Fig. 5(e)). 14, 15
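The FTIs discussed above are discrete Fourier transforms of real-space images. A minimal sketch of how a weak super-lattice modulation appears as a satellite peak in the FFT, using a synthetic image (all frequencies and amplitudes are illustrative, not fitted to the data):

```python
# Synthetic real-space image with a strong lattice modulation at wave
# vector G and a weaker CDW modulation at q = 0.28*G (|q| ~ 0.28 |a*|,
# as in the text). Its 2D FFT shows both a Bragg peak and a satellite.
import numpy as np

N = 256
x = np.arange(N)
X, Y = np.meshgrid(x, x)

G = 32 / N          # "Bragg" spatial frequency (cycles per pixel)
q = 0.28 * G        # CDW satellite frequency

image = (1.0 * np.cos(2 * np.pi * G * X)      # atomic lattice fringes
         + 0.3 * np.cos(2 * np.pi * q * X))   # weaker CDW super-lattice

fti = np.abs(np.fft.fft2(image))
freqs = np.fft.fftfreq(N)

# Two strongest peaks along kx (ky = 0 row), positive frequencies only.
row = fti[0, : N // 2]
peaks = sorted(np.argsort(row)[-2:])
print([round(freqs[p], 4) for p in peaks])    # → [0.0352, 0.125]: q and G
```

The weaker modulation produces the smaller peak near 0.035 cycles/pixel, the analogue of the CDW satellites in the measured FTIs.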
DISCUSSION AND CONCLUSION
Here, we summarize the results presented in the previous sections. In the tri-layer samples, C-CDW clearly occurs even at room temperature. This result is surprising because the C-CDW phase does not appear in the bulk samples above 220 K (Fig. 1(c)).
In the bi-layer samples, satellites similar to the stripe phase in 2H-TaSe 2 are observed. Moreover, satellites similar to the T-phase of 1T-TaS 2 are present in the mono-layer samples. However, these phases were not observed at room temperature in the bulk samples; in particular, the latter two have never been observed in bulk samples without the cooling-heating cycle (Fig. 1(c)). To explain these results, we consider the following mechanisms. First, the CDW in the tri-layer sample is C-type. This can be explained by the negative pressure effect: the CDW transition temperature (T CDW ) of MX 2 tends to increase with increasing interlayer distance. 26 In fact, T CDW is increased by mechanical exfoliation in TiSe 2 . 27 The C-phase was not observed in refs [16][17][18] probably because their samples were not free-standing and the negative pressure was lost. However, evidence of the C-phase was reported in a mono-layer at low temperature (not free-standing) 19 and at the surface of a thick 1T-TaS 2 sample, 20 where the negative pressure effect plays a dominant role.
Second, we consider the bi-layer and mono-layer results. It is difficult to apply the above discussion to bi-layers and mono-layers, because the observed phases should not exist under the conditions of this measurement (Fig. 1(c)). Previous studies have argued that the stabilization of structures with anisotropic charge configurations, such as the stripe phase (bi-layer) and the T-phase (mono-layer), can be explained by the inter-layer Coulomb interaction. The conventional Ginzburg-Landau approaches 23, 24 support this idea. However, the stabilization of a T-phase-like structure in a mono-layer cannot be explained by an inter-layer Coulomb interaction. Therefore, a new mechanism is required.
Rotational-symmetry-breaking point defects such as vortices or disclinations were not discovered during a detailed survey of Fig. 5(a). There is a fundamental difference between defects in CDWs and defects in superconductivity. In superconducting systems, point defects such as Abrikosov vortices tend to form. In CDW systems, on the other hand, in-plane line defects such as domain walls tend to form. In fact, domains are inevitably generated in the T-phase and the stripe phase. Two such domains differ in the phase θ of the CDW macroscopic wave function Ψ = |Ψ| exp(iθ). Therefore, the entropy increases because the degree of freedom of the phase θ becomes large. In other words, a CDW does not need to generate vortices because it possesses topological defects equivalent to vortices in the KT phase from the start. Accordingly, we conclude that there are no vortices or vortex pairs in 1T-TaS 2 . [Fig. 3 caption: Tri-layer 1T-TaS 2 . a STEM image of tri-layer 1T-TaS 2 at room temperature. Surprisingly, super-lattices of the C-CDW are seen throughout the entire sample and are particularly obvious in the yellow frame in the lower left of this image. The super-lattices form hexagonal structures, as seen in Fig. 3(e). b Top view of the three-dimensional intensity plot of the area in the yellow frame in Fig. 3(a). Higher scattering magnitudes are shown in red. Brown circles show super-lattices; purple vectors are super-lattice vectors (A, B). c Fourier-transformed image (FTI) of Fig. 3(a). q 1 , q 2 , q 3 are C-CDW satellites; satellite peaks are clearly present. This is due to an overlap of different diffraction orders. 25 The super-lattices form a new hexagonal lattice with basis vectors A and B, called a √13 × √13 structure. f Satellite patterns of the tri-layer sample. The orange points are Bragg peaks of the lattice, the blue arrows are nesting vectors, and the green arrow is -q 1 -q 2 , which overlaps with q 3 , so the sum of the q vectors is 0.]
To explain the structures in the bi-layer and mono-layer samples, it is necessary to reconsider the triple q condition, which underpins the three-dimensional order of CDWs. 28 If triple q is broken, the discrepancy between conventional studies and the bi- and mono-layer results is resolved, because traditional theories are based on triple q and three-dimensional crystals (bulk). An examination of Fig. 3(f), Fig. 4(e), and Fig. 5(e) reveals that triple q is progressively broken with decreasing dimensionality. Moreover, the stripe structure in the mono-layer emerges because triple q is broken (Fig. 5(f)).
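The triple q criterion can be made quantitative: the residual |q1 + q2 + q3|, relative to the mean |q|, vanishes in the closed (bulk-like) case and is finite when the condition is broken. The vectors below are illustrative stand-ins, not the measured values from Table 1.

```python
# Triple-q closure test: three equivalent q vectors at 120° sum to zero
# (tri-layer-like case); stretching and rotating one of them breaks the
# condition (mono-layer-like case). Vectors are illustrative only.
import numpy as np

def triple_q_residual(q1, q2, q3):
    """Norm of q1 + q2 + q3, relative to the mean |q|."""
    qs = np.array([q1, q2, q3], dtype=float)
    return np.linalg.norm(qs.sum(axis=0)) / np.mean(np.linalg.norm(qs, axis=1))

def rot(deg):
    t = np.radians(deg)
    return np.array([np.cos(t), np.sin(t)])

# Closed case: three unit q vectors at 120° -> residual is zero.
closed = triple_q_residual(rot(0), rot(120), rot(240))

# Broken case: one q vector stretched and rotated -> finite residual.
broken = triple_q_residual(rot(0), rot(120), 1.2 * rot(250))

print(round(closed, 6), round(broken, 3))
```

A nonzero residual of this kind is what distinguishes the mono-layer triclinic stripe state from the bulk-like triple-q phases.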
In conclusion, CDWs occur in tri-layer, bi-layer, and mono-layer crystals. In the mono-layer sample, the CDW does not exhibit the KT phase that would be accompanied by disclination-type defects. Instead, we found a CDW with domain-wall-type defect structures. This structure is a new triclinic stripe state that does not satisfy the condition q 1 + q 2 + q 3 = 0. It is not necessary to maintain triple q in purely two-dimensional CDW systems (Table 1(c), Table 1(d)). Consequently, new states are created in the mono-layer and bi-layer samples. The stripe structure formed by breaking the triple q condition in the mono-layer may be useful for understanding other stripe structures, such as those in copper oxides 29 and iron-based superconductors, 30 in terms of anisotropic charge order. Moreover, a strong electron-phonon interaction forms the shadow crevasses (Figs. 5(a) and 5(f)). This shows that the electron-phonon interaction is of central importance in thinned samples. The same idea may apply to systems in which the electron-electron interaction plays the most important role, such as charge order in organic conductors. 31,32 New equations and models are needed if we are to describe purely two-dimensional systems. In addition, our results suggest that the breaking of three-dimensional order causes new structures to form.
Sample preparation
The single crystals of 1T-TaS 2 were grown in excess sulphur by the usual iodine vapour transport method. Prereacted 1T-TaS 2 powder and a certain amount of excess sulphur were placed at one end of a quartz tube, and the tube was sealed in vacuum. The ampoule was heat-treated such that the mixture at one end of the quartz tube was at 950-830 °C and the temperature of the other end was 70-80 °C lower. Single crystals were found to grow not only at the lower-temperature end but also at the hotter one. The quartz tube was rapidly quenched into water to ensure retention of the 1T phase. [Fig. 4 caption, continued: b FTI of Fig. 4(a). The CDW satellites shown by the yellow arrows are largely diffused. c Enlarged view of the yellow frame in Fig. 4(a) with inverted contrast. The brightest circle in Fig. 4(c) corresponds to the darkest circle in Fig. 4(a). Black hexagons correspond to super-lattices. Unlike in the tri-layer sample, the super-lattices do not have clear hexagonal structures in the bi-layer. Local ordered structures, shown by red circles, are present in the bi-layer. d FTI of Fig. 4(c). The CDW satellites shown by the yellow arrows are largely diffused, similar to those in Fig. 4(b). e Satellite patterns of the bi-layer sample. The orange points are Bragg peaks of the lattice, the blue arrows are nesting vectors, and the green arrow is -q 1 -q 2 , which slightly deviates from q 3 , so the sum of the q vectors is approximately 0.]
Direct observation of mono-layer, bi-layer, and tri-layer
D Sakabe et al.
[Fig. 5 caption: b Intensity profile along the red line; see Fig. 5(c). c An enlarged image of D in Fig. 5(a) with inverted contrast. The black hexagons indicated by the blue ellipses are super-lattices; the super-lattices do not form honeycomb lattices. d FTI of D in Fig. 5(a). The transformed region is tiny, so the peaks are broad; however, we confirm the presence of triple q vectors. e Satellite patterns of the mono-layer sample. The orange points are Bragg peaks of the lattice, and the blue arrows are nesting vectors.]
[Fig. 5 caption, continued: The green arrow is -q 1 -q 2 , which clearly differs from q 3 , so the triple q condition q 1 + q 2 + q 3 = 0 is broken. f The charge configuration reproduced from the satellites in Fig. 5(e). The structures correspond to the shadow crevasses (red frame) and the stripe structure (blue arrow).]
Exfoliation
The thin samples were made using the exfoliation method: the presence of the van der Waals gap, with only weak interlayer bonding, makes it possible to exfoliate films of MX 2 with various thicknesses from the bulk, in a manner similar to graphite. Some of the flakes were randomly chosen, cleaved using Scotch tape, and then transferred to transmission electron microscope microgrids following the method developed by Meyer and co-workers. 33

Scanning transmission electron microscope

We used a low-voltage scanning transmission electron microscope (STEM). This method does not require a substrate and causes little or no damage to the samples. 34 Therefore, STEM is well suited to the measurement of ultrathin samples. We used this technique to observe the formation of different CDWs in tri-layer, bi-layer, and mono-layer samples.
"Physics"
] |
Infrared spectroscopic study of hydrogen bonding topologies in the water octamer: The smallest ice cube
The water octamer, with its cubic structure consisting of six four-membered rings, presents an excellent system in which to unravel the cooperative interactions driven by subtle changes in the hydrogen-bonding topology. Although many distinct structures are calculated to exist, it has not been possible to extract the structural information encoded in their vibrational spectra, because this requires size-selectivity of the neutral clusters with sufficient resolution to identify the contributions of the different isomeric forms. Here we report the size-specific infrared spectra of the isolated cold, neutral water octamer using a scheme based on threshold photoionization with a tunable vacuum-ultraviolet free-electron laser. A plethora of sharp vibrational features is observed for the first time. Theoretical analysis of these patterns reveals the coexistence of five cubic isomers, including two with chirality. The relative energies of these structures are found to reflect topology-dependent, delocalized multi-center hydrogen-bonding interactions. These results demonstrate that even with a common structural motif, the degree of cooperativity among the hydrogen-bonding network creates a hierarchy of distinct species. The implications of these results for possible metastable forms of ice are considered.
Water and its interactions with other substances are essential to life on our planet. Understanding the structure of bulk water and its hydrogen-bonding networks, however, remains a grand challenge 1,2 . Spectroscopic investigation of water clusters provides a quantitative description of the hydrogen-bond motions that occur in ice and liquid water 3,4 . To date, cationic and anionic water clusters have been extensively investigated because of the relative ease of their size-selection and detection [5][6][7][8][9] . These studies have provided essential knowledge on the structure and dynamics of ionic water clusters.
Inasmuch as hydrogen-bonding networks in neutral water clusters are substantially different from those in ionic ones, investigating neutral water clusters is a prerequisite for gaining fundamental insights into the structures and properties of ice and liquid water. Previous experimental and theoretical studies demonstrated that the water trimer, tetramer, and pentamer all have cyclic minimum-energy structures with all oxygen atoms in a two-dimensional plane, while the hexamer and heptamer have rather complex three-dimensional noncyclic structures [10][11][12][13][14][15][16][17][18] . Of particular interest is the water octamer, which was proposed to represent the transition to the cubic structures that dominate in larger systems and to display behavior characteristic of a solid-liquid phase transition [19][20][21][22] . Experiments strongly suggest the presence of ice nanocrystals [23][24][25][26] .
The hydrogen bonds within the mostly crystalline subsurface layer are found to be stretched by the interaction with the diverse component 25 . The water octamer has thus become a superb benchmark for accurate quantification of the hydrogen-bonding interactions that govern the surface and bulk properties of ice.
Experimental characterization of the water octamer has been difficult owing to the challenge of size-selecting and detecting neutral water clusters in general. Only a few gas-phase studies have been achieved [27][28][29][30] , and two nearly isoenergetic structures with D2d and S4 symmetry have been found. Here we report well-resolved infrared (IR) spectra of the confinement-free, neutral water octamer based on threshold photoionization using a tunable vacuum-ultraviolet free-electron laser (VUV-FEL). Distinct new features observed in the spectra identify additional cubic isomers with C2 and Ci symmetry, which coexist with the global-minimum D2d and S4 isomers at the finite temperature of the experiment. Analysis of the electronic structure reveals a remarkable stability of these cubic water octamers arising from extensively delocalized multi-center hydrogen-bonding interactions. The multiple coexisting cubic octamers provide a coherent picture of the structural diversity of bulk water and a cluster-scale precursor to the phase transition between solid and liquid water.
The vibrational spectra were obtained using a VUV-FEL-based IR spectroscopy apparatus described in detail in the supplementary information (SI) 31 . In the experiment, neutral water clusters were generated by supersonic expansions of water vapor seeded in helium using a high-pressure pulsed valve (Even-Lavie valve, EL-7-2011-HT-HRR) that is capable of producing very cold molecular beam conditions 32 . For the IR excitation of neutral water clusters, we used a tunable IR optical parametric oscillator/optical parametric amplifier system (details in Table S1). A comparison of the present and previously measured spectra is given in Fig. S1. The present spectrum displays three distinct new absorptions at 2980, 3002, and 3378 cm −1 ; the 3460 cm −1 band is now observed with high intensity, whereas it was not observed in the helium-scattering IR spectrum 27 and appeared only with low intensity in the IR-UV spectra of benzene-tagged (H2O)8 28 .
Strikingly, the OH stretch spectra in the 3516−3628 cm −1 region include many absorptions spanning multiple vibrational bands, considerably more complex than the spectra contributed by the high-symmetry D2d and S4 cubic octamers 27,28 , suggesting the presence of additional low-symmetry minima of the water octamer. As pointed out previously 27,30 , direct comparison between theory and experiment for the relative intensities of vibrational bands is very difficult, owing to the complexity of the experiment (infrared absorption combined with dissociation, saturation effects) as well as the limitations of the theoretical calculations (implicit description of intermolecular zero-point motions).
Here, stick spectra of the calculated harmonic vibrational frequencies are used for comparison with the experimental data. Fig. 1 shows the comparison of the experimental spectrum of the water octamer and the calculated spectra of isomers I−V. The harmonic OH stretch vibrational frequencies of isomers I−V are listed in Tables S2−S5, and an animation of the vibrational modes responsible for the experimental bands is given in the Additional information. The cubic octamers contain three types of OH group: single H-donor, double H-donor, and H-donor-free OH groups. As noted previously 11,24,[27][28][29] , the AAD → ADD hydrogen bonds are remarkably shorter than the ADD → AAD hydrogen bonds, and the frequency of the single H-donor OH stretch is typically lower than that of the double H-donor OH stretch (vide infra). Owing to the high symmetry of the cubic structures, the OH stretch normal modes of a given type of OH group are distinct from those of the other types. As a result, the vibrational frequencies of the single H-donor OH, double H-donor OH, and H-donor-free OH groups are well separated in the OH stretch spectra (Fig. 1 and Tables S2−S5).
The calculated spectrum of isomer I (D2d) (Fig. 1, trace I; Table S3) contains a feature that might be responsible for the broad band observed at 3106 cm −1 . The calculated IR spectra of isomers I and II are much too simple to explain the newly observed absorptions at 2980, 3002, and 3378 cm −1 , but these features match rather well with those of the energetically low-lying isomers III, IV, and V (Fig. 1). Moreover, isomers III, IV, and V yield various double H-donor OH stretch fundamentals covering the spectral range 3487−3599 cm −1 (Tables S4 and S5), consistent with the experimentally congested bands in the 3516−3628 cm −1 region. The agreement of the calculated spectra with experiment is sufficiently good to assign isomers I−V as responsible for the experimental spectra.
In addition, the two well-separated free OH bands at 3698 cm −1 (labeled F) and 3726 cm −1 (marked with an asterisk) can be related to two distinct AD and AAD sites, because the H-donor-free OH groups of the AAD sites generally appear at ~3700 cm −1 and those of the AD sites in a higher-frequency range 6,9 . The asterisk-labeled band likely originates from a noncubic, water-solvated heptamer isomer (Fig. S3). Under the pulsed supersonic expansion conditions of the present work, the presence of all five cubic isomers is quite surprising, indicating that our VUV-FEL spectroscopic technique is well suited to exploring previously unknown low-lying neutral isomers.
The five isomers I−V all have cubic structures. The fact that they lie within 3 kcal/mol of one another indicates that they can coexist according to a Boltzmann distribution. The interconversion barriers among them are larger than 4 kcal/mol at the MP2/AVDZ level (Fig. S4). For instance, the interconversion between the two enantiomeric isomers III and IV needs to go through four transition states and three intermediates, with the largest barrier of about 5 kcal/mol. Such interconversion barriers might be sufficiently large that the ultra-high-pressure supersonic expansion cooling is capable of kinetically quenching the nonequilibrium octamer system prior to its rearrangement to the global-minimum energy structure 22 . To evaluate the temperature effect on the distribution of the isomers, the Gibbs free energies G of isomers I−V were calculated for temperatures from 0 K to 1000 K (Fig. S5).
Clearly, the free-energy differences ΔG(II−I), ΔG(III−I), ΔG(IV−I), and ΔG(V−I) do not change significantly below room temperature, indicating that the relative populations of the five isomers change little at low temperature.
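For intuition, a rough Boltzmann estimate shows that several isomers retain sizable populations near room temperature when their energies lie within a few kcal/mol. The relative energies below are placeholders consistent only with the stated ≤ 3 kcal/mol spread (they are not the paper's computed values), and equal degeneracies are assumed.

```python
# Boltzmann populations for a set of isomers given relative energies in
# kcal/mol. Energies below are hypothetical placeholders for isomers I-V.
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)

def boltzmann_populations(rel_energies_kcal, T=300.0):
    weights = [math.exp(-e / (R_KCAL * T)) for e in rel_energies_kcal]
    z = sum(weights)
    return [w / z for w in weights]

energies = [0.0, 0.3, 0.8, 0.8, 1.5]   # hypothetical, within 3 kcal/mol
pops = boltzmann_populations(energies, T=300.0)
print([round(p, 3) for p in pops])      # ≈ [0.453, 0.274, 0.118, 0.118, 0.037]
```

Even the highest-lying placeholder isomer keeps a few percent of the population at 300 K, illustrating why coexistence of several low-lying isomers is plausible once interconversion is kinetically quenched.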
To understand the electronic structure of the water octamer, we have analyzed the hydrogen-bond (HB) network of the cubic isomers using delocalized and localized molecular orbital theory. The theoretical approaches applied were natural bond orbital (NBO) 34 , adaptive natural density partitioning (AdNDP) 35 , energy decomposition analysis-natural orbitals for chemical valence (EDA-NOCV) 36 , and principal interacting orbital (PIO) analysis 37 . Hydrogen bonding between an O-H antibonding orbital (denoted σ*(O−H)) and an adjacent oxygen lone-pair (LP) donor can be viewed as a three-center two-electron (3c-2e) interaction, which features the O lone pair delocalizing into the H−O antibonding region (Fig. S6) 15,38 . As exemplified by the water dimer, the contribution of the 3c-2e HB energy to the intrinsic total binding energy (EHB/Etotal) is about 81.4% from the EDA-NOCV analysis, whereas the PIO contribution from the interaction between the lone pair and the σ*(O−H) antibond is about 88.7% for each 3c-2e HB (Fig. S6).
The bond distances (Tables S7−S11), bond orders, and hybrid orbitals (Table S12) support this picture. For the water octamer, the EHB/Etotal values of isomers I−V are all around 89% (Table S6), considerably larger than in the water dimer (81%). This enhanced HB interaction can be partially attributed to the extensively delocalized HB network (vide infra).
In isomer I (D2d), the AAD → ADD hydrogen bonds (1.698 Å) are much shorter than the ADD → AAD hydrogen bonds (1.904 Å) (Table S7). The NBO second-order perturbation energy (E2) analysis of the D2d isomer I (Table S7) reflects this asymmetry, and similar interactions are found for the other isomers (Tables S8−S11). These interactions benefit the formation of water cubes as well as the stacking of cubic and hexagonal layers that occurs in the condensed phase 23,25 .
Since the 3c-2e interaction dominates in each HB, we have constructed a secular equation using Hückel molecular orbital theory. It follows that the 3c-2e HBs are not isolated but highly correlated through delocalized interactions, which leads to extra stabilization compared with isolated HBs. This is reminiscent of the aromatic electron delocalization between the single and double bonds in the Kekulé structures of benzene. The delocalized HB network of the D2d isomer I is calculated to be more stable than the corresponding isolated HB network by 4.06 kcal/mol (Fig. S7), indicating that this unexpected aromatic-like delocalization of the HB network plays a non-negligible role in stabilizing water clusters.
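The flavor of this secular-equation argument can be reproduced with a toy model. Below, each hydrogen bond is represented as a filled lone-pair level coupled to an empty σ*(O−H) level, and neighbouring donor-acceptor pairs are weakly coupled around a ring; diagonalizing the Hückel-style matrix and filling the lowest levels gives a small extra stabilization relative to isolated pairs. This is not the paper's actual model: all energies and couplings (e_lp, e_sig, f, g) are arbitrary illustrative parameters.

```python
# Toy Hückel/donor-acceptor sketch: n_hb hydrogen bonds on a ring, each a
# filled lone pair (e_lp) coupled to an empty sigma* orbital (e_sig) with
# intra-pair element f; neighbouring HBs are coupled by g. Delocalization
# lowers the occupied levels relative to isolated 2x2 donor-acceptor pairs.
import numpy as np

def stabilization(n_hb, e_lp=-0.6, e_sig=0.1, f=0.08, g=0.02):
    """Energy gain (arbitrary units) of n_hb coupled donor-acceptor pairs
    versus the same pairs treated independently."""
    m = 2 * n_hb                         # alternating LP / sigma* sites
    h = np.zeros((m, m))
    for i in range(n_hb):
        h[2 * i, 2 * i] = e_lp           # lone-pair on-site energy
        h[2 * i + 1, 2 * i + 1] = e_sig  # sigma* on-site energy
    for i in range(m):                   # alternating ring couplings f, g
        j = (i + 1) % m
        h[i, j] = h[j, i] = f if i % 2 == 0 else g
    levels = np.sort(np.linalg.eigvalsh(h))
    coupled = 2.0 * levels[:n_hb].sum()  # fill the n_hb lowest levels
    # Isolated 2x2 pair: lower eigenvalue, doubly occupied.
    pair_low = 0.5 * (e_lp + e_sig) - np.hypot(0.5 * (e_lp - e_sig), f)
    isolated = 2.0 * n_hb * pair_low
    return isolated - coupled            # positive -> delocalization stabilizes

print(round(stabilization(12), 4))       # small positive number
```

With the inter-pair coupling g switched off, the matrix decouples into independent 2x2 blocks and the gain vanishes, mirroring the paper's comparison between the delocalized and isolated HB networks.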
That all five water octamer isomers adopt a pseudo-cubic structure is highly remarkable.
As each O−H···O HB is dominated by the 3c-2e interaction of an O lone pair delocalizing into the H−O antibonding region, the pseudo-cubic structure can be viewed as consisting of one electron pair between every two apex oxygen atoms. Interestingly, this bonding pattern is akin to that in the famous cubane (Oh-C8H8) 39 , where each C−C bond contains two localized electrons, as shown in Fig. 3. While the cubane structure lies much higher in energy than its ring isomer, the D2d cubic isomer of (H2O)8 lies much lower in energy than the ring isomer, by 11.64 kcal/mol at the ab initio DLPNO-CCSD(T)/AVTZ level. Consistent with the extensively delocalized HB interaction, the cubic isomer of water has remarkable thermodynamic stability.
It is interesting to note that phase transitions between solid and liquid water have been observed in simulations of water clusters as small as the octamer, supported by the calculated free energy as a function of temperature [19][20][21][22] . The present study has identified the unexpected coexistence of five water octamer cubes that are stabilized by extensively delocalized HB interactions. These findings provide crucial information for understanding the processes of cloud, aerosol, and ice formation, especially under rapid cooling [41][42][43] . It is hoped that the present results will both provide a benchmark for the accurate description of water intermolecular potentials, toward understanding the macroscopic properties of water, and stimulate further study of intermediate-ice structures formed in the crystallization of ice.
Data availability
The authors declare that all data supporting the findings of this study are available within the paper and its Supplementary information files.
"Physics"
] |
The effects of TPACK and facility condition on preservice teachers' acceptance of virtual reality in science education course
INTRODUCTION
Research Article
Thohir et al., Contemporary Educational Technology, 15(2), ep407

Virtual reality (VR) is rapidly developing as a future interactive medium with various advantages in the educational field (Fussell & Truong, 2021; Tsivitanidou et al., 2021). Aligning with the organizational change of Facebook to Meta regarding its investment in the Metaverse (Kraus et al., 2022), VR is reportedly becoming more popular in classroom learning integration. This follows from the metaverse being a fully or partially virtual medium, including systems in VR or augmented reality (AR) (Hwang & Chien, 2022). The medium also provides realistic 3D experiences (Xiong et al., 2021), real-time activities (Mahalil et al., 2020), and social communication (Hwang & Chien, 2022). Moreover, VR influences self-efficacy, knowledge (Meyer et al., 2019), motivation, learning outcomes, and cognitive processes (Kemp et al., 2022). These advantages show that prospective teachers need to determine and understand the patterns by which the medium is used for learning. The challenge also concerns teacher candidates' readiness to accept new technologies (Kaushik & Agrawal, 2021; Lin et al., 2007), which allows them to adapt by integrating VR into inquiry-supported learning. Irrespective of these merits, a challenge remains in understanding how this technology is accepted by prospective teachers when designing future learning.
The technology acceptance model (TAM) has become a popular measuring tool for modelling human acceptance or rejection of new technologies (Barrera-Algarín et al., 2021; Granić & Marangunić, 2019). The seminal work on this model was introduced by Davis (1989), who found that usefulness and ease of use have a strong relationship with usage. This suggested that designers should attend to the user-friendliness and usefulness of a new technology in pursuit of goal achievement (Davis et al., 1989). According to Granić and Marangunić (2019), teachers' acceptance of more specific technologies, such as VR, should be explored. The acceptance of VR by preservice teachers was also examined in several previous reports, such as Altarteer and Charissis (2019), Fussell and Truong (2021b, p. 1), Jang et al. (2021a), and Lee et al. (2019). For example, Fussell and Truong (2021) related external variables such as expectation, self-efficacy, and enjoyment to students' acceptance of VR in training. Kemp et al. (2022) also emphasized acceptance with respect to cognitive involvement, social influence, system attributes, and facility conditions (FCs). However, how will preservice teachers' acceptance of VR adoption be adapted to future learning contexts? Technology integration also requires adequate facilities, such as hardware and software infrastructure. Therefore, the integration of VR into learning remains unstructured unless it is appropriately transformed into learning content.
Irrespective of these results, few external factors of VR-TAM have been prioritized with preservice teachers as designers in mind (Alalwan et al., 2020; Jang et al., 2021). To describe the acceptance of appropriate technology, TAM is capable of informing the curriculum and assessment of digital competencies, teachers' virtual adoption, and technological external variables (Scherer et al., 2019). In this case, teachers should specifically accept TAM to integrate VR into learning (Jang et al., 2021). This subsequently leads to the utilization of technological pedagogical content knowledge (TPACK), a concept often used to measure preservice teachers' integration of digital learning technology (Schmid et al., 2021; Thohir et al., 2021; Valtonen et al., 2019). This framework has reportedly been cited in more than 14,000 works since its proposal by Mishra and Koehler (2006). Therefore, this study aims to determine the effects of FC, TAM, and TPACK on preservice teachers' use of VR in Indonesian science education courses, emphasizing a description of these teachers' readiness to design VR for learning and teaching integration.
THEORETICAL REVIEW

Virtual Reality
VR is not a new technology: it has existed at least since 1994, and has been defined as "anywhere a user is effectively immersed in a responsive digital world" (Brooks, 1999). Based on previous reports, VR was originally implemented as a flight training simulator with large and expensive equipment (Page, 2000). In this context, the Simulator was considered the first immersive VR capable of combining display, sound, and motion (Araiza-Alba et al., 2022). It provided an immersive and interactive experience of the real environment and the digital world (Sukotjo et al., 2021). It was also experienced using 3D goggles and data gloves, leading to its consideration as a "second life" (Rospigliosi, 2022). VR reportedly improves students' cognitive development, procedures, and affective domains, especially in science learning (Hamilton et al., 2021; Liu et al., 2020; Parong & Mayer, 2018, 2021).
Technology Acceptance Model and Facilities Condition
TAM is one of the psychological theories of human behavior that is widely applied and empirically tested to show people's acceptance of ICT (Rahimi et al., 2018). The model has been used to predict patterns of technology integration (Joo et al., 2018; Scherer et al., 2019; Sukendro et al., 2020). It was initially proposed by Davis (1985) as a development of the theory of reasoned action (TRA), which contained three variables, namely behavioral intention (BI), attitude (AT), and subjective norm (SN) (Davis et al., 1989). Firstly, BI focuses on a person's intensity in performing a specific activity. Secondly, attitude is a person's positivity towards the target behavior. Thirdly, SN is the perception of an individual or of most people, which motivates expected and unexpected performance. In TAM, the SN variable was replaced with perceived usefulness (PU) and perceived ease of use (PEOU), which have a strong relationship with BI. According to Davis (1989), PU was also affected by PEOU. In addition, AT has reportedly been dispensed with for more complex analysis in other studies (Venkatesh & Davis, 2000), although some retained SN and AT (Alshurafat et al., 2021; Ibili et al., 2019; To & Tang, 2019). The present report emphasizes the essential TAM, exploring the relationships among PU, PEOU, and BI, and providing external variables (Fagan et al., 2012; Jang et al., 2021a; Kemp et al., 2022b). This leads to the proposition of the following hypotheses:
1. H1: PU significantly affects BI.
H3: PEOU significantly affects PU.
TAM is a widely used model for examining teachers' acceptance of new technology and its external variables (Eraslan Yalcin & Kutlu, 2019; Hsu & Lu, 2004; Mailizar et al., 2021). In this context, a Google Scholar search returns over 7,000 citations of the original article (Davis, 1989), with more reports retaining the original model than a modified TAM (Granić & Marangunić, 2019). Some studies also evaluated the acceptance of VR as a technology requiring learning application (Fagan et al., 2012; Fussell & Truong, 2021; Jang et al., 2021; Kemp et al., 2022; Manis & Choi, 2019; Sagnier et al., 2020; Vallade et al., 2021). However, reports on preservice teachers' acceptance of VR as a learning technology remain limited. According to Jang et al. (2021), the relationship between TPACK and TAM was identified due to the different conditions of VR adoption in various countries. External variables are also important components to be explored, for example, disability conditions (Ranellucci et al., 2020), immersion, imagination (Barrett et al., 2020), presence, experience value, and object costuming (Altarteer & Charissis, 2019). Among the various external TAM variables, the conditional factor needs to be considered when assessing readiness for technology adoption (Kamal et al., 2020; Pal & Vanijja, 2020; Salloum et al., 2019; Sukendro et al., 2020).
FC (facilities condition) is often included as an important external variable in extended TAM (Beldad & Hegner, 2018; Kamal et al., 2020; Natasia et al., 2022; Sukendro et al., 2020). It is specifically an important variable for the acceptance of technology, through PU and PEOU. According to Sukendro et al. (2020), FC refers to appropriate, usable, and easy-to-access facilities in a good environment. However, several studies only found significance between FC and PU (Natasia et al., 2022). These conditions lead to the proposition of the following hypotheses:

H4: FC significantly affects PU.

H5: FC significantly affects PEOU.
Virtual Reality-Technological Pedagogical Content Knowledge
As future educators, preservice teachers need technological, pedagogical, and content knowledge (TK, PK, and CK) competencies, which were proposed by Mishra and Koehler (2006) and integrated into TPACK. This concept was developed from the pedagogical content knowledge (PCK) of Shulman (1987), which subsequently produced additional frameworks, namely TCK (technological content knowledge) and TPK (technological pedagogical knowledge). TPACK is also commonly used to adopt or design the planning and implementation of learning technology (Dong et al., 2020; Murgu, 2021; Ozgur, 2020; Thohir et al., 2022). This framework informs how VR is adopted through pedagogical and content concepts, according to the learning context. For example, Marks and Thomas (2022) explored students' VR design for laboratory learning. The VR acceptance of preservice teachers also prioritizes the integration of this technology into lesson planning and implementation (Eutsler & Long, 2022; Farrell et al., 2022).
Irrespective of these descriptions, a remaining challenge concerns how TAM associates with preservice teachers' TPACK in VR learning adoption. In this context, some previous studies showed that TPACK was associated with PU and PEOU, although only a few were observed (Jang et al., 2021a; Yang et al., 2021). Furthermore, prospective teachers with strong acceptance adopted the integration of VR into learning. This indicates that different contexts enabled VR learning adoption through various acceptance outcomes. Regarding these results, the difficulty in designing VR affected usage acceptability or ease of acceptance. This hinders the patterns by which VR is integrated into planning, implementation, and evaluation (Hayes et al., 2021). The conditions for designing this technology also require highly recommended facilities, such as 3D and game software applications (Solmaz & Van Gerven, 2022). This confirms the existence of an influence of facilities on TPACK. Based on this theory, the following hypotheses are proposed:

H6: TPACK significantly affects PU.

H7: TPACK significantly affects PEOU.

H8: FC significantly affects TPACK.
METHOD Procedure and Participants
Based on Figure 1, this analysis was conducted by distributing an online survey through Google Forms, to specifically identify individual beliefs and attitudes (Creswell, 2020). The data collection process lasted 28 weeks, after obtaining permission from the university lecturers to distribute the survey.
Table 1 shows the demographics of the participants, selected from departments of elementary school teacher education in 12 Indonesian universities, between semesters 1-7. A total of 406 participants were invited to fill out the survey, of whom 14.3% (n=58) were male and 85.7% (n=348) were female. All had been taking educational technology courses for elementary school science. For example, prospective teachers took technology development courses in the first semester. They were introduced to VR with the eventual goal of adopting it in the next semester's lesson plans, especially for science content. The table also shows that 74.6% (n=303) of prospective teachers knew about VR before the course, while the rest did not. This indicates that the majority of preservice teachers already had knowledge about VR in the metaverse. However, most of these preservice teachers were laymen in using VR, identifying them as novices yet to design and implement the technology.
Instrument
An instrument with the following two parts was used:

1. The demographics describing the participants' characteristics and VR knowledge. This includes the email, gender, age, university origin, and VR knowledge of the participants.
2. The variables of VR and TPACK acceptance obtained from the literature review. This includes PU (five items), PEOU (five items), BI (four items), TPACK (four items), and FC (four items).
The item statements were modified from multiple literature sources (Davis, 1989; Fussell & Truong, 2021; Jang et al., 2021; Park, 2009). Besides being modified from the original Davis (1989) TAM, the PU and PEOU questionnaires were also developed from other reports (Granić & Marangunić, 2019; Jang et al., 2021). These were accompanied by instruments for BI (developed from Davis, 1989; Fussell & Truong, 2021; Park, 2009), TPACK (Schmidt et al., 2009; Jang et al., 2021), and FC (Kemp et al., 2022; Park, 2009). Each item had a response scale of 1 (strongly disagree) to 5 (strongly agree). The instrument was subsequently reviewed by four lecturers and three preservice teachers to validate the content and language. Several revisions were made, such as replacing inappropriate sentences and instructions and correcting typographical errors. After these revisions, the survey instrument was distributed to obtain data, with validity and reliability derived empirically from the results. For completeness, the translated instrument is attached as Appendix A.
Data Analysis
Data analysis was conducted to determine validity, reliability, and structural model fit. In this process, EFA (exploratory factor analysis) and CFA (confirmatory factor analysis) were initially used to explore possible variables, as well as to determine the number of factors and items, retaining items with a loading factor (LF) greater than 0.5 (Beauducel & Herzberg, 2006). These analyses were run in IBM SPSS 25, which was also used to calculate Cronbach's alpha and the correlations between variables. In addition, the data were analyzed using SmartPLS 4 with PLS-SEM to determine the structural fit of the model, using indices such as the standardized root mean square residual (SRMR) and the normed fit index (NFI).
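The internal-consistency step above (Cronbach's alpha over a set of Likert items) can be sketched outside SPSS as well. The following Python snippet applies the standard alpha formula to hypothetical 5-point responses; the generated data are illustrative only, not the study's dataset:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a 4-item scale
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # shared construct
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(200, 4))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

Values above .8, as reported in Table 3, are conventionally read as high reliability (Taber, 2018).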
Descriptive Data
Based on the results, the acceptance of preservice teachers in integrating VR into learning yielded average values greater than three, as shown in Table 2. PU (M=4.10, SD=.69) and FC (M=3.37, SD=.72) had the highest and lowest average scores, respectively.
Contemporary Educational Technology, 15(2), ep407

TPACK (M=3.70, SD=.65) had the second-lowest value after FC. This suggested a relationship between infrastructure and knowledge of VR adaptation in learning activities.
Validity and Reliability
Using the EFA principal components rotation method, the data were explored to obtain the preservice teachers' acceptance factors for VR, retaining components with eigenvalues greater than one. Bartlett's sphericity test also showed a value of .92, with χ²/df=16.92 and p=.000. This means that the variances were equal between the samples, with a cumulative explained variance of 70.73%. However, only four components were observed, with AT and PU loading on one factor. From these results, CFA was applied by establishing six possible factors through the elimination of AT, in line with previous studies. In this case, the loading factors were greater than .6, with the totals shown in Table 3. For construct reliability, all items were subsequently tested using Cronbach's alpha, CR (composite reliability), and average variance extracted (AVE), as presented in Table 3. The table shows that all components reached values over .8, indicating fairly high reliability (Taber, 2018).
Table 4 shows the correlations between TPACK and the acceptance components of VR technology, indicating that all variables were significantly and positively associated, with values greater than r=.3. The strongest relationship was observed between PU and AT (r=.85, p=.001), with the lowest associations found between FC & PU (r=.33, p=.001) and FC & AT (r=.33, p=.001). This confirmed that teachers' perceived usefulness of VR was closely related to positive attitudes. Based on the results, FC was only weakly associated with the usefulness of, and positive attitudes towards, VR. This was due to the lack of facilities for designing VR, weakening the link between availability and perceived benefits. In addition, discriminant validity was assessed to distinguish the different factors. This was done by comparing the square root of the AVE against the correlations with the other constructs (Fornell & Larcker, 1981). For example, PEOU had a higher AVE root value (0.84) than its correlations with BI (r=.76, p=.001), FC (r=.45, p=.001), PU (r=.78, p=.001), and TPACK (r=.54, p=.001). This showed that each TPACK component was empirically distinguishable from the TAM components.
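The AVE, CR, and Fornell-Larcker quantities used above can all be derived from standardized loadings. The snippet below is a minimal sketch with hypothetical loadings for a four-item construct (the study's actual loadings live in Table 3):

```python
import numpy as np

def ave_cr(loadings):
    """Average variance extracted and composite reliability from standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    ave = (lam ** 2).mean()                                   # mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    return ave, cr

# Hypothetical loadings for a 4-item PEOU-like construct
ave, cr = ave_cr([0.84, 0.80, 0.78, 0.82])
print(round(ave, 2), round(cr, 2))  # → 0.66 0.88

# Fornell-Larcker criterion: sqrt(AVE) must exceed the construct's
# correlations with other constructs, e.g. r(PEOU, PU) = .78 from Table 4
assert np.sqrt(ave) > 0.78
```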
Structural Equation Model
Using SmartPLS software, SEM was carried out to determine the impact of VR-TPACK on the acceptance of VR usage, as shown in Figure 2. The fit of this model was confirmed by an SRMR of 0.06, below the required cutoff (0.06 < 0.08) (Hu & Bentler, 1998). Moreover, the NFI reached 0.9, which is close to one. Figure 2 also shows the R-squared measures (R²), which exhibit the contributions to each latent variable. For example, the BI variable had an R² of 66%, contributed by PU and PEOU.
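The SRMR criterion cited from Hu and Bentler (1998) is the root mean square of the residuals between the sample and model-implied correlation matrices. A minimal sketch with hypothetical 2×2 matrices (not the study's data):

```python
import numpy as np

def srmr(sample_corr, implied_corr):
    """SRMR: RMS of residuals over the lower triangle (incl. diagonal)."""
    S, I = np.asarray(sample_corr), np.asarray(implied_corr)
    r, c = np.tril_indices_from(S)
    resid = S[r, c] - I[r, c]
    return float(np.sqrt((resid ** 2).mean()))

# Hypothetical sample vs. model-implied correlation matrices
fit = srmr([[1.0, 0.50], [0.50, 1.0]],
           [[1.0, 0.44], [0.44, 1.0]])
print(round(fit, 3))   # → 0.035
assert fit < 0.08      # the cutoff used in the text
```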
Based on these relationships, the need for VR integration was mapped onto preservice teachers' acceptance. Using PLS-SEM, the analytical results showed the relationships between TPACK, facility support, and VR acceptance as a learning technology. From Figure 2 and Table 5, careful identification of the model yielded the following important outcomes: • H1, H2, H3, H5, H6, H7, and H8 were accepted; only H4 was rejected. This indicates that the adoption of VR and facilities was related to VR ease of use in science learning. However, the facilities condition was not related to the acceptance of VR usefulness.
• The TAM-VR relationship was expressed as a TAM-extended development.
• The strongest relationships were PEOU->PU (β=.74) and FC->TPACK (β=.55). The relationship between PEOU and PU influenced VR acceptance, due to the technology's ease of use. For FC and TPACK, the strong relationship indicates that FC encouraged prospective teachers' VR adaptability.
DISCUSSION
This study aimed to determine the relationship between prospective teachers' VR adoption, TPACK, and acceptance. Based on the results, the survey instrument was valid and reliable for measuring the relationship between TPACK and TAM. It was a modification of several previous instruments (Davis, 1989a; Fussell & Truong, 2021b; Jang et al., 2021a; Park, 2009), with the statement items adjusted for the acceptance of VR and VR-TPACK. Changes were also made for the context of FCs and the characteristics of preservice teachers. This supported its use in determining the readiness of VR adoption before implementation (Iqbal & Ahmed Bhatti, 2015; Lin et al., 2007). The results also showed a relationship between the acceptance of VR and TPACK. Furthermore, most hypotheses represented a significantly positive relationship between TAM and TPACK, proving that H1 (PU->BI), H2 (PEOU->BI), and H3 (PEOU->PU) had significant effects. This was in line with Davis (1989), which emphasized the acceptance of VR in preservice teachers (Legris et al., 2003; Yang & Wang, 2019). Irrespective of this condition, several reports still maintained their perceptions about SN and AT (Alshurafat et al., 2021; Ibili et al., 2019; To & Tang, 2019). From these findings, the preservice teachers perceived that technology adoption needs to consider the perceived usefulness and ease of use of VR. In this case, PEOU is likely to take precedence over PU in integrating VR into learning and teaching. This was in line with Jang et al.
(2021), where TPACK affected TAM through the PEOU of multimedia applications. However, Mayer and Girwidz (2019) greatly emphasized PU in the relationship between TPACK and TAM, indicating the need for future exploration. The findings imply that instructors should provide motivation on the utility of VR adoption and on how novice preservice teachers can easily adopt VR in science learning courses.
FCs also significantly and positively influenced PEOU and TPACK, although not PU. This indicates that the ease of use of VR was affected by various supporting facilities, such as 3D modelling applications, game engines, and HMDs (Safikhani et al., 2022; Sukendro et al., 2020). However, prospective teachers likely assumed that these tools did not impact the usability of VR (Gurer, 2021). These results were not in line with Natasia et al. (2022), where the FC->PEOU and FC->PU hypotheses were rejected and accepted in e-learning applications, respectively. The prospective teachers also perceived that design facilities affected the integration of VR into learning. This allows VR to develop as a technology alongside the number and ease of learning designs. Developing a strategy is also recommended for VR integration, using simple stages and supporting facilities. VR design templates might be used by implementing TPACK for affordable outcomes.
Several limitations concerning the survey participants and methods were also observed, irrespective of the results obtained. Firstly, only preservice teachers from Indonesian universities were selected for this study. Subsequent analyses should involve participants from various countries, for broader and more generalizable results. They could also be carried out at international universities, to facilitate the identification of participants. Secondly, a quarter of the participants were not familiar with VR. Although an introductory video on this technology was provided, these teachers were still considered novice users. In this context, involving participants in VR design training/courses is highly recommended. The PLS-SEM method should also be compared with CB-SEM, using AMOS, LISREL, or Mplus software. In addition, qualitative analysis should be considered as an alternative in future reports, through interviews, observations, and documentary evaluation of prospective teachers designing VR.
CONCLUSION
The relationship between TPACK, TAM, and FC was assessed regarding Indonesian preservice teachers' utilization of VR. The acceptance of VR was in line with Davis (1989), with a significant positive relationship observed between PU, PEOU, and BI. Based on the results, TPACK also affected PEOU. Another contribution was the relationship between FC and TPACK/PEOU. These results have various implications for supporting VR adoption in teacher education.
Table 1 .
Descriptive of participants in the VR acceptance survey
Table 2 .
Descriptive statistics of TPACK, FC, and TAM
Table 3 .
Loading factor and reliability of TPACK, FC, and TAM components
Table 4 .
Correlation between VR acceptance items with TPACK and Fornell-Larcker criterion
Table A1 .
Item survey instrument to identify connection between TAM, TPACK, and FC
Intelligent grid load forecasting based on BERT network model in low-carbon economy
In recent years, the reduction of high carbon emissions has become a paramount objective for industries worldwide. In response, enterprises and industries are actively pursuing low-carbon transformations. Within this context, power systems have a pivotal role, as they are the primary drivers of national development. Efficient energy scheduling and utilization have therefore become critical concerns. The convergence of smart grid technology and artificial intelligence has propelled transformer load forecasting to the forefront of enterprise power demand management. Traditional forecasting methods relying on regression analysis and support vector machines are ill-equipped to handle the growing complexity and diversity of load forecasting requirements. This paper presents a BERT-based power load forecasting method that leverages natural language processing and image processing techniques to enhance the accuracy and efficiency of transformer load forecasting in smart grids. The proposed approach involves using BERT for data preprocessing, analysis, and feature extraction on long-term historical load data from power grid transformers. Multiple rounds of training and fine-tuning are then conducted on the BERT architecture using the preprocessed training datasets. Finally, the trained BERT model is used to predict the transformer load, and the predicted results are compared with those obtained based on long short-term memory (LSTM) and actual composite values. The experimental results show that compared with LSTM method, the BERT-based model has higher short-term power load prediction accuracy and feature extraction capability. Moreover, the proposed scheme enables high levels of accuracy, thereby providing valuable support for resource management in power dispatching departments and offering theoretical guidance for carbon reduction initiatives.
Peng Tao, Hao Ma*, Chong Li and Linqing Liu

State Grid Hebei Marketing Service Center, Shijiazhuang, China
Introduction
With the continuous development of the global economy, energy consumption continues to increase, leading to rising greenhouse gas emissions, which intensifies the global climate crisis and accelerates climate change. Therefore, achieving carbon neutrality and reducing carbon emissions has become a common goal for all countries. As an important part of this effort, power system management is particularly important. Power system scheduling can manage and utilize energy more efficiently, both renewable and non-renewable. Moreover, power system resource management can also encourage and promote the development and application of renewable energy.
Through technological innovation, cooperation, and sharing, power companies can achieve more intelligent, efficient, and sustainable power system resource management, and make greater contributions to carbon neutrality and carbon reduction actions Yuan et al. (2023). As shown in Figure 1, distribution transformers, which are spread across urban and rural areas, are the most important terminal equipment in the power system, large in number and complex in structure. Their safe and economic operation is an essential condition for the high-quality development of the power grid. With the deep integration of big data, artificial intelligence technology, and power grid business, distribution transformer load prediction has become an important basis for supporting power grid production and operation, and its accuracy is of great significance for transformer monitoring, active repair dispatch, bearing capacity analysis, and other businesses.
The current technologies for load forecasting mainly include artificial neural networks, support vector regression, decision trees, linear regression, and fuzzy sets Jahan et al. (2020). These methods are all based on mining the correlation between historical load data and forecast data. In other words, the regression analysis method predicts the corresponding data based on a causality model, while the time series forecasting method completes the load forecasting through curve fitting and parameter estimation on historical load data. Due to their simple structure and poor flexibility, these methods find it difficult to meet the prediction accuracy Hou et al. (2022) required in practice. The above methods have the following problems: 1) the historical load data series are usually short, posing the challenge of insufficient data samples Zhao et al. (2021); 2) the prediction accuracy of traditional forecasting methods is not high, making them difficult to apply in production practice; 3) the performance and universality of the prediction methods are poor.
Motivation
Recently, due to the advantages of artificial intelligence methods in data forecasting and intelligent analysis, power system load forecasting based on machine learning (ML) Liao et al. (2021); Yuan et al. (2023) has gradually emerged. For example, time-series-based load forecasting can be extended to a multiclass regression model that predicts the power load by modeling grid power over time, while methods based on the support vector machine (SVM) can exploit its strong binary classification characteristics.
However, these traditional ML prediction schemes, such as LSTM, convolutional neural networks (CNN), and recurrent neural networks (RNN), all suffer from limited prediction accuracy and low efficiency. The bidirectional encoder representations from transformers (BERT) model innovatively encodes the sequence using self-attention and position embeddings Qu et al. (2023), which are independent of sequence order and can be computed in parallel, thereby achieving higher load prediction accuracy. In contrast, traditional forecasting algorithms are structural improvements of RNNs, which are inherently sequential and cannot be computed in parallel. Second, traditional model structures (such as LSTM, CNN, and RNN) are not interpretable Wang et al. (2021), while the self-attention mechanism of BERT's transformers can produce more interpretable models. This is very beneficial to the smart grid system. In view of BERT's powerful ability to process and understand high-dimensional data, it can achieve a high load forecasting effect. This paper proposes a BERT-based power system load forecasting architecture. Through reasonable load forecasting, the utilization rate of renewable energy can be maximized, and the proportion of renewable energy in the power system can be increased.
Related work
Load forecasting is a crucial component of power system operation and plays an indispensable role in achieving carbon neutrality and reducing emissions. It enables power system managers to accurately predict future power demand, facilitating optimal planning of power production and distribution Ahmad et al. (2022). By employing scientific load forecasting, excessive or insufficient power production can be avoided, resulting in reduced power waste and carbon emissions. For instance, to address the challenges of power system emergency control and uncertainty, Huang et al. (2019) proposed an adaptive emergency control scheme based on deep reinforcement learning (DRL). They also developed an open-source platform called reinforcement learning grid control (RLGC), which provides various power system control algorithms and benchmark algorithms, supporting and enhancing the field. In Gasparin et al. (2022), deep learning techniques are applied to power load forecasting. The study evaluates the impact of different architectures, such as feedforward and recurrent neural networks, sequence-to-sequence models, and temporal convolutional neural networks, on load forecasting performance. Architecture variant experiments are conducted to compare the robustness and effectiveness of these models for load forecasting. From a power system network security perspective, Liu et al. (2020) propose a network security assessment method based on deep Q-networks (DQN). This approach approximates the optimal attack migration strategy by determining the required number of migrations, leading to improved power system security. The authors of Biagioni et al. (2022) introduce a flexible modular extension framework that serves as a simulation environment and experimental platform for various agent algorithms in power systems. They validate the framework's performance using the multi-agent deep deterministic policy gradient algorithm, addressing a gap in power system agent training.
In Tan and Yue (2022), a BERT-based time series forecasting model is utilized to predict the wind power generation load in the power grid. This method effectively forecasts future load patterns.
In the realm of user electricity consumption behavior, various studies have been conducted. For instance, in Barman and Choudhury (2020), the authors analyze the demand-compliant behavior of electricity consumers and propose a hybrid parameter selection strategy that combines the gray wolf optimization algorithm and support vector machine. This approach considers changes in user demand to predict power system load. In Wang (2017), multiple factors that commonly influence load, such as weather conditions, are taken into account. The authors employ multiple linear regression analysis to determine regression coefficients and standard deviations, enabling load prediction under different weather conditions. Considering the power load scenario in a city, Li et al. (2018) introduces a data-driven linear clustering strategy. This strategy involves data preprocessing and modeling to construct an optimal autoregressive integrated moving average model. The method demonstrates efficient error forecasting and improved accuracy for predicting the city's power system load. In Saviozzi et al. (2019), the authors address the business needs of distribution system operators and propose an integrated artificial neural network-based load forecasting and compliance modeling method for modern distribution management systems. The method exhibits better adaptability and higher performance, as validated through practical usage.
In contrast to the aforementioned methods, Du et al. (2020) addresses the limitations of traditional methods for forecasting large-scale nonlinear time series load data. They propose an attention-BiLSTM network that utilizes BiLSTM to predict short-term power loads. The attention mechanism employed in this method leads to improved prediction accuracy and stability. To enhance the temporal characteristics of load data, Yin and Xie (2021) introduces a multi-time-spatial-scale method for data processing and proposes a short-term load prediction approach. In Chapaloglou et al. (2019), a load prediction algorithm is designed using a feedforward artificial neural network. The algorithm performs predictions based on the classification of the current load curve shape. Combining feedforward deep neural networks and recursive deep neural networks, Din and Marnerides (2017) predict the short-term load of the power system. This approach effectively identifies the primary factors influencing load and power consumption, enabling accurate short-term load prediction. Research conducted in Yin et al. (2020) focuses on the deep forest regression method, utilizing two complete random forests for effective training and data learning. This method improves prediction accuracy while mitigating the impact of deep learning parameter configuration. To address the low prediction accuracy of traditional methods, Rafi et al. (2021) proposes a prediction method based on convolutional neural networks and long short-term memory networks, achieving high prediction accuracy. In Kong et al. (2017), behavioral analysis is conducted on the scope of residents' activities. The authors propose a deep learning framework based on long short-term memory over device consumption sequences, enabling accurate prediction of electricity load in the smart grid.
Contribution and organization
The main contributions of this paper are organized as follows: • Different from the existing literature on power load forecasting, this paper proposes a BERT-based short-term forecasting method for transformer load data. The method is suitable for hard-to-predict, long-sequence data Zhao et al. (2021), and can maximize the mining of hidden relationships behind sequences and related variables.
• Abnormal value detection, processing, and feature extraction are executed on the datasets to establish the formal datasets. For all observation periods, the corresponding validation and test datasets were divided according to a fixed proportion. During formal training, the efficiency of model learning is ensured by normalizing and standardizing all datasets. • Leveraging the powerful data feature extraction capabilities of BERT, our proposed algorithm excels in extracting features from load data over time, enabling accurate prediction of load data within a specific future time range. Through rigorous experimentation and data analysis, the proposed model has demonstrated remarkable load prediction accuracy and performance for power system transformer load forecasting compared to the LSTM.
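The fixed-proportion split and train-fitted scaling described in the contributions above can be sketched as follows. The 70/15/15 proportions and the synthetic load curve are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def chrono_split_scale(series, train=0.7, val=0.15):
    """Chronological split with min-max scaling fit on the training slice only."""
    n = len(series)
    i, j = int(n * train), int(n * (train + val))
    tr, va, te = series[:i], series[i:j], series[j:]
    lo, hi = tr.min(), tr.max()                 # fit scaler on training data
    scale = lambda x: (x - lo) / (hi - lo)      # apply to every split
    return scale(tr), scale(va), scale(te)

load = np.sin(np.linspace(0, 20, 1000)) * 50 + 100  # synthetic load curve (kW)
tr, va, te = chrono_split_scale(load)
print(len(tr), len(va), len(te))  # → 700 150 150
assert tr.min() == 0.0 and tr.max() == 1.0
```

Fitting the scaler on the training slice alone avoids leaking future statistics into the model, which matters for time series forecasting.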
Organization
The paper is structured as follows: Section 2 delves into the BERT-based load forecasting scheme. Section 3 provides a detailed description of the datasets used in this study. In Section 4, the experimental setup and analysis of results are presented. Finally, Section 5 concludes the paper.
BERT-based load forecasting method

Bidirectional encoder representation from transformers (BERT)
BERT, a deep learning-based natural language processing technology, is utilized in this paper for data processing Devlin et al. (2018). The BERT model typically involves pre-training and fine-tuning stages. It has found widespread applications in question answering systems, sentiment analysis, and language reasoning. In this study, the BERT model is employed to extract power load characteristics from composite power system data. These characteristics include transformer ID, date, time stamp, wind speed, wind direction, ambient average temperature, maximum temperature, minimum temperature, humidity, reactive power, and active power. Subsequently, these extracted features and time series data are fed into the forecasting model for training. The steps involved in the BERT-based power load data processing are as follows:
Data preprocessing
In our study, we performed data preprocessing on the load data of 52 transformer sets spanning multiple years. This preprocessing involved handling missing values, outliers, and type conversion. Finally, we applied normalization and standardization to the input data x: standardization subtracts the sample mean and divides by the standard deviation √δ, where δ is the variance of the sample, while min-max normalization scales the data into [0, 1].
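As an illustrative sketch (variable names and example values are hypothetical, not taken from the paper), the two standard transformations described here can be written as:

```python
import numpy as np

def min_max_normalize(x):
    """Scale input data x linearly into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def standardize(x):
    """Zero-mean, unit-variance standardization; delta is the sample variance."""
    x = np.asarray(x, dtype=float)
    delta = x.var()
    return (x - x.mean()) / np.sqrt(delta)

loads = [120.0, 135.5, 98.2, 142.7, 110.3]  # toy load readings
norm = min_max_normalize(loads)             # all values fall in [0, 1]
std = standardize(loads)                    # mean ~0, variance ~1
```

Min-max scaling preserves the shape of the series while bounding it, whereas standardization makes features with different units comparable during training.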
Coding
In the load prediction task, BERT encodes the input data using the transformer. The input to BERT consists of token embeddings, segment embeddings, and position embeddings; these three vectors are summed to form the final input vector. Additionally, BERT is capable of encoding the input data from multiple perspectives, enhancing its understanding of transformer load data. The position code, as described in Kazemnejad (2019), comprises pairs of sine and cosine functions with frequencies that decrease along the vector dimension. Mathematically, it can be represented as p_t(2i) = sin(ω_i t) and p_t(2i+1) = cos(ω_i t), with ω_i = 1/10000^(2i/d), where t is the position index of the token in the sequence, taking integer values ranging from 0 to the maximum sequence length minus 1 (MLen−1). The variable d denotes the dimensionality of the position vector, which is equal to the hidden state dimension of the entire model. The variable i is an integer ranging from 0 to d/2−1, specifically 0, 1, 2, … , 383. ⃗ p refers to a matrix with MLen rows and d columns, denoted as [MLen, d], where MLen represents the maximum sequence length and d represents the dimension.
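A minimal sketch of this sinusoidal position encoding, following the standard formulation cited above (the sequence length of 128 used here is illustrative; d = 768 matches the i-range 0…383 given in the text):

```python
import numpy as np

def positional_encoding(max_len, d):
    """Build a [max_len, d] sinusoidal position-encoding matrix:
    p[t, 2i]   = sin(t * w_i)
    p[t, 2i+1] = cos(t * w_i),  with w_i = 1 / 10000**(2*i/d)."""
    t = np.arange(max_len)[:, None]          # position index 0 .. max_len-1
    i = np.arange(d // 2)[None, :]           # frequency index 0 .. d/2-1
    w = 1.0 / np.power(10000.0, 2 * i / d)   # frequency decreases along dim
    p = np.zeros((max_len, d))
    p[:, 0::2] = np.sin(t * w)               # even columns: sine
    p[:, 1::2] = np.cos(t * w)               # odd columns: cosine
    return p

pe = positional_encoding(max_len=128, d=768)  # i runs 0..383, as in the text
```

Because each position maps to a unique pattern of phases, the model can recover relative offsets between time steps from linear combinations of these components.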
Based on the aforementioned position encoding, BERT facilitates the model in performing calculations involving relative positions more effectively. The rectified linear unit (ReLU) function is employed as the activation function, which can be expressed as ReLU(z) = max(0, z), where z denotes the input to the activation function.
When the sigmoid or other functions involving exponential operations are used as the activation function, the computational load tends to be high. Moreover, when backpropagation is applied to compute the error gradient, the derivative often involves division, resulting in a relatively large computational burden. In contrast, the computation involved in ReLU is significantly cheaper. ReLU also drives certain neurons to output zero, thereby promoting network sparsity; this reduction in interdependence among parameters helps alleviate overfitting.
Training
BERT utilizes historical power grid load data from a specific time period as input. This data is fed into a fully connected layer Franco et al. (2023) to generate forecast outputs. The model is trained using the mean square error as the loss function. Through training, the model adjusts its parameters to minimize the loss, continuously improving its accuracy. This iterative process continues until convergence is achieved.
Prediction
By training the BERT model, load values of specific transformers in the near future can be predicted. These predictions serve as valuable references and guidance for power grid enterprises in terms of power demand. To ensure higher prediction accuracy, we adopt training samples with the same prediction length during the training stage. This is achieved through the use of sliding windows, allowing for the construction of training and test sets.

FIGURE 1
The typical architecture of transformers and power monitoring center.

Frontiers in Energy Research 04 frontiersin.org

Figure 2 illustrates the utilization of transformer-based bidirectional encoding in BERT. Unlike the full transformer model, BERT exclusively employs the encoder part. Each encoder unit consists of multi-head attention, layer normalization, feedforward layers, and additional layer normalization, stacked and combined across multiple layers. Self-attention, a crucial component of BERT, is integrated with position encoding to address temporal correlation in the training data. Its primary function is to dynamically calculate weights during information propagation. Multi-head attention aggregates the outputs of multiple distinct attention units and combines them through a fully connected dimensionality-reducing output layer. Layer normalization plays a regularizing role: it gathers the outputs of self-attention and normalizes each row within the batch.
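A minimal sketch of the sliding-window construction, assuming a fixed input length and prediction horizon (both parameter names and values are illustrative, not taken from the paper):

```python
import numpy as np

def sliding_windows(series, input_len, horizon):
    """Build (input, target) pairs by sliding a fixed-length window over
    the load series, so every sample has the same prediction length."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for start in range(len(series) - input_len - horizon + 1):
        X.append(series[start:start + input_len])                    # history
        y.append(series[start + input_len:start + input_len + horizon])  # future
    return np.array(X), np.array(y)

series = np.arange(10.0)  # toy load sequence 0..9
X, y = sliding_windows(series, input_len=4, horizon=2)
# X[0] = [0, 1, 2, 3] predicts y[0] = [4, 5]; 5 windows in total
```

Splitting the windows chronologically (earlier windows for training, later ones for testing) keeps future information out of the training set.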
During the model training process for specific load data, as depicted in Figure 2, we take into account various factors that influence the power system load. These factors include transformer ID, date, time stamp, wind speed, wind direction, ambient average temperature, maximum temperature, minimum temperature, humidity, reactive power, active power, and more. Since these data are interrelated, we utilize the Embedding + Positional method to incorporate the correlation between historical load data attributes into the data. Embedding involves mapping the training data to corresponding dimensions. By employing BERT, we can train the model, extract features from the rich input data, and ultimately achieve short-term power load prediction.
BERT's attention mechanism
The core component of BERT is the Transformer, and the theoretical foundation of the Transformer lies in the attention mechanism. The attention mechanism enables the neural network to focus on specific parts of the input. It involves three main concepts: Query, Value, and Key. The Query represents the target word, the Value represents the original representation of each word in the context, and the Key represents the vector representation of each word in the context. BERT calculates the similarity between the Query and each Key, and uses the resulting weights to aggregate the corresponding Values.

The architecture of proposed predicting model based on BERT.
TABLE 1
Detailed description of the semantic meaning of each related variable included in the datasets.
The self-attention mechanism has certain limitations, such as being overly focused on its own position. To address this issue, BERT employs the multi-head attention mechanism. This mechanism allows BERT to mitigate self-attention's excessive self-focus and promotes more balanced attention across the input sequence. Additionally, the use of the multi-head attention mechanism enhances the model's expressive power: it enables the attention layer's output to contain encoding representation information from different subspaces. By performing multiple sets of self-attention processing on the original input sequence and combining the results through linear transformations, BERT improves its feature understanding capability. This enhancement contributes to a more comprehensive representation of the input data, thereby improving the model's overall performance.
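To make the mechanism concrete, a toy sketch of scaled dot-product attention and its multi-head combination (the weights here are random and purely illustrative, not the trained model's parameters):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each Value is weighted by the
    similarity between the Query and every Key."""
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return weights @ V, weights

def multi_head_attention(x, n_heads, rng):
    """Run independent attention heads over the same input and merge
    their subspace outputs with a final linear projection."""
    seq_len, d = x.shape
    d_head = d // n_heads
    heads = []
    for _ in range(n_heads):
        Wq = rng.standard_normal((d, d_head)) * 0.1
        Wk = rng.standard_normal((d, d_head)) * 0.1
        Wv = rng.standard_normal((d, d_head)) * 0.1
        out, _ = attention(x @ Wq, x @ Wk, x @ Wv)
        heads.append(out)
    Wo = rng.standard_normal((d, d)) * 0.1   # combine the heads
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 8))                    # 6 time steps, model dim 8
out = multi_head_attention(x, n_heads=2, rng=rng)  # output shape (6, 8)
```

Each head attends over a different projected subspace, which is what gives the combined output its richer representation.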
The architecture of proposed model
The prediction network model, as depicted in Figure 3, integrates historical loads with variables extracted through feature extraction. These upper-layer feature vectors, along with the historical loads, are fed into the BERT network. The BERT network processes the inputs and generates hidden features, which are then passed through fully connected layers of sizes 512, 1024, and 96. Dropout with a probability of 0.3 is applied after the first and second fully connected layers to mitigate overfitting. The final output consists of load predictions for different transformers.
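A forward-pass sketch of the described head (the FC sizes 512, 1024, 96 and the 0.3 dropout follow the text; the input feature size of 256 and the random weights are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, n_in, n_out, relu=True):
    """Fully connected layer with toy random weights (illustrative only)."""
    W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
    h = x @ W
    return np.maximum(0.0, h) if relu else h

def dropout(x, p=0.3):
    """Inverted dropout: zero each unit with probability p, rescale the rest."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

hidden = rng.standard_normal((4, 256))   # 4 samples of BERT hidden features
h1 = dropout(dense(hidden, 256, 512))    # FC 512 + dropout(0.3)
h2 = dropout(dense(h1, 512, 1024))       # FC 1024 + dropout(0.3)
out = dense(h2, 1024, 96, relu=False)    # FC 96: the load prediction vector
```

The final layer is kept linear so predictions are not clipped at zero, since transformer loads can in principle be negative (as the results section notes for one transformer).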
To obtain more accurate predictions of future loads, we utilize Mean Squared Error (MSE) as the loss function during model training. The MSE can be expressed as MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)², where N represents the number of discrete vector samples, y_i denotes the actual load value, and ŷ_i represents the corresponding predicted value. However, when using the gradient descent method to minimize the MSE loss function, the learning rate may make progress very slow at the beginning of model training. We can represent the training set as x_1, x_2, …, x_m and their corresponding outputs as y_i. Additionally, the
FIGURE 4
The corresponding temperature (A), humidity and rainfall (B).
FIGURE 5
The corresponding wind speed/direction (A), average system load of different transformer (B).
network model gradient can be calculated by g = (1/m) ∇_θ Σ_i L(f(x_i; θ), y_i), where g represents the gradient of the current batch, and θ represents the model parameters. To obtain the optimized weight update, the biased estimation of the first moment can be represented as s ← ρ₁ s + (1 − ρ₁) g, where s represents the moment vector and ρ₁ is the decay rate. The biased estimation of the second moment can then be written as q ← ρ₂ q + (1 − ρ₂) g ⊙ g, where q is the second moment vector and ρ₂ denotes its decay rate. Furthermore, the bias-corrected moment estimations can be denoted as ŝ = s/(1 − ρ₁ᵗ) and q̂ = q/(1 − ρ₂ᵗ). Then, the model parameters can be updated by θ ← θ − β ŝ/(√q̂ + ε), where θ denotes the model parameters, and β represents the learning rate. The primary advantage of using the adaptive moment
FIGURE 6
The distribution of related features over the observation period.
estimation (Adam) optimizer is its ability to adaptively select the update step size. This approach can achieve the goal of annealing the learning rate while also minimizing the impact of the gradient scale on optimization.
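A single Adam update step matching the moment equations described here (the hyperparameter values below are common defaults used for illustration, not the paper's Table 2 settings):

```python
import numpy as np

def adam_step(theta, grad, state, beta=1e-3, rho1=0.9, rho2=0.999, eps=1e-8):
    """One Adam update: s and q are biased first/second moment estimates,
    the hat versions are bias-corrected, and beta is the learning rate."""
    state["t"] += 1
    state["s"] = rho1 * state["s"] + (1 - rho1) * grad          # first moment
    state["q"] = rho2 * state["q"] + (1 - rho2) * grad ** 2     # second moment
    s_hat = state["s"] / (1 - rho1 ** state["t"])               # bias correction
    q_hat = state["q"] / (1 - rho2 ** state["t"])
    return theta - beta * s_hat / (np.sqrt(q_hat) + eps)        # parameter update

theta = np.array([1.0, -2.0])
state = {"s": np.zeros(2), "q": np.zeros(2), "t": 0}
grad = 2 * theta                      # gradient of f(theta) = ||theta||^2
theta = adam_step(theta, grad, state)
```

Because the update divides by the root of the second moment, the effective step size is roughly β regardless of the raw gradient scale, which is the adaptivity the text refers to.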
Basics and preprocessing of datasets
In order to accurately predict the average load, comprehensive datasets were constructed using load data from 52 different transformers in the same area, collected over a period of 584 days. The data was collected at a 15-min interval for each transformer, resulting in approximately 96 discrete samples per day per transformer; the total dataset size is therefore approximately 52 × 96 × 584 ≈ 2.9 million samples. The objective of this study is to predict future load values based on historical data and related variables, aiming to achieve intelligent power scheduling, improve energy efficiency, reduce carbon dioxide emissions, and enable efficient and intelligent scheduling.

Each sample in the dataset includes eight environmental parameters: maximum temperature, minimum temperature, average temperature, maximum humidity, minimum humidity, extreme wind speed, wind direction, and 24-h precipitation. Table 1 provides further details on these parameters.

On the right-hand side of Figure 4, the variations in humidity over different sampling intervals are depicted. The blue curve represents the maximum humidity recorded during the day, while the red curve represents the minimum humidity; this metric partially indicates the need for dehumidification in the area and contributes to the system load. The black curve represents the amount of rainfall recorded during the day, which directly affects environmental humidity. It can be observed from the chart that the overall humidity of the system is high during periods of heavy rainfall. This aspect reflects local weather conditions and the availability of photovoltaic power generation to supplement household and factory power consumption. Therefore, these three factors illustrated in the figure play a significant role in system load fluctuations and are considered relevant impact factors. Figure 5 presents the statistics of environmental wind speed, wind direction, and the total load of the 52 transformers in the region during the data collection period.
On the left side of the figure, the maximum daily wind speed shows both local fluctuations and long-term periodic changes that correspond to the data collection cycle. Short-term fluctuations are influenced by the measurement location, while long-term changes are related to larger cycles, similar to the temperature variations mentioned earlier. Moreover, the maximum wind direction exhibits a strong correlation with the maximum wind speed. Considering wind speed in load scheduling is crucial as it is associated with wind power generation, which can be integrated into the grid for intelligent scheduling purposes. On the right side of the figure,
FIGURE 7
Heat map of related feature correlation matrix.

the overall load of the different transformers throughout the entire collection cycle is analyzed using summation statistics. Transformer number 4 has the highest load based on the statistical analysis. Classifying transformers based on their overall load statistics can lead to more intelligent maintenance and scheduling strategies. Additionally, factors such as photovoltaic (PV) systems, wind turbines (WT), gas turbine generators (GTG), and energy storage systems (ESS) are important components of the power system and influential factors in transformer load. PV and WT have experienced rapid deployment and development in recent years, contributing to diversified power supply systems. These power sources are influenced by environmental factors such as wind speed and sunlight, which can affect power supply in the system. GTG, on the other hand, is a stable and controllable power source that enables intelligent scheduling and maximizes energy utilization by predicting future regional loads. ESS, as an emerging technology, facilitates energy storage and release in the power grid. It helps achieve more precise intelligent scheduling, reducing the inherent variability of wind and photovoltaic power generation and ensuring optimal energy utilization throughout the scheduling system.
Features of datasets
A comprehensive statistical analysis was conducted on the distribution of the variables discussed in the previous section, aiming to provide a deeper understanding for their feature engineering. Figure 6 presents the results of this analysis. The analysis reveals that air temperature follows an approximately normal distribution. However, due to large periodic changes, the distribution can be divided into three distinct intervals, each reflecting different characteristics. On the other hand, the distribution of maximum humidity is more dispersed and cannot be accurately described by simple mathematical models; nonlinear neural networks based on machine learning are better suited for capturing the complex characteristics and relationships of this variable. Wind speed variables conform to a Gaussian distribution,
FIGURE 8
The corresponding training and validation strategy.
FIGURE 9
Training loss (A) and load prediction bar chart (B).
indicating a more regular pattern. The direction of the wind is strongly linked to the magnitude of the wind speed. Most wind directions exhibit oscillations to the left and right, while a wide range of wind directions corresponds to the maximum wind speed. In contrast, rainfall indicators exhibit sparse distribution characteristics. Careful characterization of this variable is necessary to fully understand its role in the system's load and accurately capture its impact. Overall, the statistical analysis provides valuable insights into the distribution patterns and characteristics of the variables, guiding the subsequent modeling and feature engineering processes.
To explore the interrelationships among different variables, a correlation analysis was performed on all variables and the system statistical load. The results are presented in Figure 7, where the correlation coefficient ranges from −1 to 1: a negative value indicates a negative correlation, a positive value indicates a positive correlation, and a magnitude closer to 1 indicates a stronger correlation. The analysis reveals several key findings. Firstly, there is a strong correlation among the three temperature-related variables, indicating their close interdependence. Temperature and humidity also exhibit a high correlation, suggesting a relationship between these two factors. Furthermore, wind speed and direction are highly correlated, indicating that they influence each other. Precipitation shows some correlation with the other variables, suggesting its influence on the overall system. Notably, there is a strong negative correlation between the system load and both temperature and humidity, implying that higher temperatures and humidity levels are associated with lower system loads. Overall, the correlation analysis highlights the complex and hidden relationships among the system statistical load and the various variables. Given these intricate relationships, large-scale neural networks can be employed to model the nonlinear patterns and facilitate accurate load forecasting for the future.
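To illustrate how such a correlation matrix is computed, a toy example with synthetic (entirely hypothetical) variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
temp = rng.normal(25, 5, n)                   # hypothetical daily temperature
humidity = 0.6 * temp + rng.normal(0, 2, n)   # positively related to temperature
load = -0.8 * temp + rng.normal(0, 3, n)      # negatively related system load

# Rows are variables; corr[i, j] lies in [-1, 1]. The sign gives the
# direction of the relationship, the magnitude its strength.
corr = np.corrcoef(np.vstack([temp, humidity, load]))
```

The same call on the real feature matrix would produce the values visualized in the Figure 7 heat map.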
Experiment setup and results analysis

Experiment setup
The feature vectors consist of temperature, humidity, wind speed, wind direction, and rainfall, represented by numerical values. These feature vectors serve as the input to the BERT network, which predicts the load of the transformer in the next time period. During training, the BERT network is trained using a loss function and optimized using gradient descent to adjust the weights. After training, the model is applied to predict the load of multiple transformers in the test datasets. The accuracy of the predictions is evaluated using metrics such as mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean squared error (RMSE). By comparing the predicted values with the actual values, the performance of the model is assessed, and model parameters can be adjusted accordingly. Experimental results demonstrate that the proposed model effectively predicts the load of multiple transformers with high accuracy. The model exhibits good robustness and generalization capabilities, indicating its ability to handle various scenarios and generalize well to unseen data.
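The three evaluation metrics can be sketched as follows (the example values are hypothetical):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the load itself."""
    return np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float)))

def rmse(y_true, y_pred):
    """Root mean squared error; penalizes large deviations more than MAE."""
    return np.sqrt(np.mean((np.asarray(y_true, float) - np.asarray(y_pred, float)) ** 2))

actual = [100.0, 200.0, 400.0]
pred = [110.0, 190.0, 380.0]
errors = mape(actual, pred), mae(actual, pred), rmse(actual, pred)
```

Note that relative metrics such as MAPE blow up when the actual load approaches zero, which is consistent with the large relative error reported later for a transformer whose load was negative.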
The BERT-based algorithm for load data prediction in this experiment was developed using Python 3.8 and TensorFlow 2.7. The dataset used comprised 584 days of load data from 52 transformers belonging to the Hebei Electric Power Company. During the training process, the algorithm underwent 200 epochs with a batch size of 512 and a learning rate of 5e-3. The dataset was split into a training set covering days 1-451 and a validation set covering days 452-500, as displayed in Figure 8. The Adam optimizer was utilized, and the parameter values used in the experiment are provided in Table 2. Figure 9 illustrates the comparison of the loss function and prediction accuracy curves during the training of the BERT-based load forecasting algorithm. The results demonstrate that initially, the algorithm exhibits unstable fluctuations in the loss function, which is expected due to the limited number of training epochs. However, as the BERT model iteratively adjusts its parameters, the loss function gradually converges and stabilizes, indicating the algorithm's superior convergence properties. Furthermore, Figure 9 compares the predicted load values generated by the BERT algorithm with the actual load values. The comparison shows that the BERT load forecasting algorithm effectively learns from historical transformer load data, captures relevant features from multiple factors influencing power load, and adapts to the characteristics of load changes, leading to higher prediction accuracy.
Results analysis
In Figure 10, the load prediction results for the last 10 transformers are presented, and the predicted results are visually depicted through a histogram of the error rates. The results demonstrate that our proposed BERT-based transformer power load forecasting algorithm generally achieves better forecasting accuracy, with most error values kept within a small range of 0.2 and the majority of error rates maintained below 10%. However, a small portion of the error rates fluctuate significantly, as observed for the No. 47 and No. 48 transformers. This could be attributed to the training data collected with high fluctuations, resulting in a slight increase in the prediction error rate for these transformers. Overall, our proposed BERT-based transformer power load forecasting algorithm exhibits satisfactory forecasting results and demonstrates superior feature extraction and expansion capabilities compared to traditional CNN and LSTM models. These findings have practical implications and are valuable for addressing the needs and making adjustments in the power system industry.
FIGURE 14
Relative error of the BERT (A) and LSTM (B) algorithms with the same batch size.

Figure 11 presents the load prediction results for all transformers in the area based on LSTM. It is evident that there is a certain difference between the actual transformer load and the LSTM-based prediction. However, through careful observation and leveraging historical data, a certain level of compensation can be applied to obtain accurate load predictions for the future. The right side of the figure shows the predicted values of different transformers for the next time period. It can be observed that the majority of transformer predicted values closely match the actual load values, with only a few predictions showing slight deviations. To further evaluate the difference between the predicted values and the actual values, Figure 12 displays the absolute error and relative error. The left side of the figure shows the absolute error values, indicating that only transformers numbered 43, 45, and 46 exhibit relatively large absolute errors, while the absolute errors of other transformers remain low. However, when considering the relative error, transformer number 49 stands out with a significant relative error. This can be attributed to the current load of the transformer being negative, leading to a substantial relative error. Overall, the results demonstrate the performance of the LSTM-based load forecasting model. While there may be some differences between the predicted and actual values, the model achieves accurate predictions for the majority of transformers, with only a few exceptions. Figure 13 presents the comparison of the BERT and LSTM models using the same training data and batch size. The results demonstrate that, under the same training parameters, the BERT network achieves a lower average absolute error compared to the LSTM network.
This can be attributed to the BERT network's ability to effectively learn from the entire training set using a deep network model, while the LSTM network relies on time series relationships and may not achieve optimal predictive performance through global comprehensive learning. Figure 14 provides a comparison of the relative error between the two models. It shows that the BERT network achieves a smaller relative error than the LSTM model, indicating better stability in its prediction results. The average error results, as presented in Table 3, further support the superiority of the BERT network model. Across different evaluation indicators, the BERT network consistently demonstrates better performance gains. Overall, the transformer load prediction based on the BERT network model exhibits high accuracy and stability. It can be effectively applied to existing power systems, enhancing the intelligent dispatch capability of regional electricity.
Conclusion
This paper introduces a novel BERT-based transformer power load forecasting algorithm that surpasses existing algorithms in order to enhance energy utilization efficiency and significantly reduce carbon dioxide emissions within power dispatching departments. The proposed algorithm leverages BERT's powerful model extraction capabilities by preprocessing, encoding, and training historical load data obtained from the power grid. Consequently, it exhibits improved data understanding and achieves more accurate load forecasting compared to traditional LSTM approaches. Unlike conventional time series algorithms, our experimental results demonstrate that the BERT-based load forecasting method exhibits superior accuracy and robustness. The empirical analysis is based on actual power load data collected over a 2-year period from a power grid company, encompassing the composite data of 52 transformers. The dataset employed in this study includes various influential factors such as transformer ID, date, time stamp, wind speed, wind direction, ambient average temperature, maximum temperature, minimum temperature, humidity, reactive power, and active power. Our BERT-based method employs multiple preprocessing techniques and dataset analyses, leading to accurate load change predictions across different time periods and identification of key factors influencing power load. In contrast to traditional time series algorithms, our approach can effectively capture all relevant factors impacting power load. The proposed BERT-based power load forecasting algorithm serves as a valuable reference for power grid enterprises in terms of power demand planning and operation. Optimized training parameters enable the majority of transformers to achieve an average error rate of less than 10%. 
In comparison, the LSTM-based load forecasting model yields an average relative error of approximately 53.52%, indicating inferior performance compared to the BERT-based method with the same training parameters. Thus, the BERT-based scheme facilitates precise energy scheduling and utilization, maximizing energy efficiency and offering valuable insights for the digital low-carbon transformation of power dispatching departments. Future work will focus on exploring distributed federated learning algorithms to enhance the model's robustness and adaptability.
Data availability statement
The original contributions presented in the study are included in the article/supplementary materials, further inquiries can be directed to the corresponding author.
Author contributions
HM contributed to the research concept and design, code development, data collection, data analysis, and interpretation. PT was responsible for writing the paper and providing critical revisions. CL contributed to data analysis and paper writing, while LL contributed to data processing and paper writing. HM contributed to the manuscript revision. PT was responsible for adding the relative experiments. All authors contributed to the article and approved the submitted version.
Candidate Gene Expression in Bos indicus Ovarian Tissues: Prepubertal and Postpubertal Heifers in Diestrus
Growth factors such as bone morphogenetic proteins 6, 7, and 15, two isoforms of transforming growth factor-beta (BMP6, BMP7, BMP15, TGFB1, and TGFB2), and the insulin-like growth factor system act as local regulators of ovarian follicular development. To elucidate whether these factors, as well as other candidate genes such as estrogen receptor 1 (ESR1), growth differentiation factor 9 (GDF9), follicle-stimulating hormone receptor (FSHR), luteinizing hormone receptor (LHR), bone morphogenetic protein receptor type 2 (BMPR2), type 1 insulin-like growth factor receptor (IGFR1), and the key steroidogenic enzymes cytochrome P450 aromatase and 3-β-hydroxysteroid dehydrogenase (CYP19A1 and HSD3B1), could modulate or be influenced by diestrus at the onset of puberty in Brahman heifers, their ovarian mRNA expression was measured before and after puberty (luteal phase). Six postpubertal (POST) heifers were euthanized in the luteal phase of their second cycle, confirmed by corpus luteum observation, and six prepubertal (PRE) heifers were euthanized on the same day. Quantitative real-time PCR analysis showed that the expression of FSHR, BMP7, CYP19A1, IGF1, and IGFR1 mRNA was greater in PRE heifers than in POST heifers, whereas the expression of LHR and HSD3B1 was lower in PRE heifers. Differential expression of ovarian genes could be associated with changes in follicular dynamics and the different cell populations that emerge as a consequence of puberty and the luteal phase. The emerging hypothesis is that BMP7 and IGF1 are co-expressed and may modulate the expression of FSHR, LHR, IGFR1, and CYP19A1. BMP7 could influence the downregulation of LHR and the upregulation of FSHR and CYP19A1, which mediate follicular dynamics in heifer ovaries. Upregulation of IGF1 expression before puberty, compared with postpubertal diestrus, correlates with increased levels of FSHR and CYP19A1.
Thus, BMP7 and IGF1 may play synergistic roles and were predicted to interact, based on the expression data (P = 0.07, r = 0.84). The role of these co-expressed genes in puberty and the heifers' luteal phase merits further research.
Introduction

Ovarian activity and hormones are paramount for pubertal development and normal reproductive performance (1).
Improving the reproductive performance of Bos indicus cattle is an industry priority in tropical and subtropical regions of the world, because of its impact on farm productivity (2). B. indicus breeds have a later onset of puberty (16-40 months of age), which has a negative impact on their overall reproductive performance (3)(4)(5)(6). Although the heritability of age at puberty measured by first detected corpus luteum (CL) has been reported to be moderate (0.52-0.57) (7), the phenotypic identification of animals that undergo puberty at an early age is expensive. Enhancing our comprehension of ovarian genes and their interactions involved in bovine puberty could have practical implications in the animal breeding context. The regulation of ovarian activity is an integrated process that involves FSH and LH, their receptors, ovarian steroids, and intraovarian factors (8). Some of the most important intraovarian factors are members of the transforming growth factor-beta (TGFB) superfamily, such as bone morphogenetic proteins (BMPs) 6, 7, and 15 (BMP6, BMP7, and BMP15) and two isoforms of TGFB (TGFB1 and TGFB2), which are expressed by ovarian somatic cells and oocytes in a stage-specific manner throughout folliculogenesis (9,10). These genes function as local regulators of ovarian follicular development and subsequently affect fertility (11)(12)(13). Experiments in mice indicate that the different isoforms of TGFB are responsible for diverse physiological functions (14,15). An in vivo study showed that BMP7 promotes the "recruitment" of primordial follicles into the growing follicle pool while inhibiting progesterone production and ovulation (16). Similar to BMP7, BMP15, and BMP6 are part of a group of luteinization inhibitors (17). In vitro, BMP7 induced expression of follicle-stimulating hormone receptor (FSHR) mRNA in human granulosa cells (18). In contrast, BMP15 suppressed FSHR and luteinizing hormone receptor (LHR) expression (19). 
Given the discussed roles affecting ovulation, these intraovarian factors could modulate or influence the onset of puberty that leads to the first ovulation event.
Growth differentiation factor 9 (GDF9) gene expression is relevant for oocyte competence, and GDF9 follows a similar expression pattern to BMP15 in cows stimulated with FSH treatment (20). The relevance of GDF9 expression in ovarian tissues of peripubertal B. indicus heifers is unclear, but its link to BMP15 and FSH pathways merits investigation.
The insulin-like growth factor (IGF) system plays a key role in follicular development and female fertility (21)(22)(23). Insulin-like growth factor 1 (IGF1) and insulin-like growth factor 2 (IGF2) have been reported to act in synergy with the gonadotropins LH and FSH to stimulate growth and differentiation of ovarian follicles and the subsequent synthesis and secretion of estradiol and progesterone (24)(25)(26)(27). Moreover, experiments in cattle (28,29) showed that IGF2 was the main intrafollicular IGF ligand regulating follicular growth and was highly expressed in theca cells. All of these previous reports suggest that the IGF and TGFB superfamily genes may have different roles in ovarian activity, related to the cycle phase and possibly to the endocrine regulation of puberty.
Estrogen produced by ovarian tissue is relevant for GnRH release from the hypothalamus, and the expression of its receptor, ESR1, has been associated with puberty in mice (30). In cattle, ESR1 expression is required for normal follicular development and follicular dominance (31). Given the relevance of steroid hormones for pubertal development and ovarian function, key steroidogenic enzymes such as cytochrome P450 aromatase (CYP19A1) and 3-β-hydroxysteroid dehydrogenase (HSD3B1) were also targeted in this study.
So far, studies have focused on the endocrine regulation of the hypothalamic-pituitary-ovarian system for the onset of puberty (32)(33)(34). Information about specific changes in ovarian gene expression associated with specific cycle phases and puberty is sparse. Thus, the aim of the present study was to elucidate the expression pattern of candidate genes, such as ESR1, GDF9, FSHR, LHR, BMPR2, TGFB1, TGFB2, BMP15, BMP6, BMP7, IGF1, IGFR1, IGF2, CYP19A1, and HSD3B1, in pre- (PRE) and postpubertal (POST) heifers in the diestrus phase. Evidence of differential expression will help to support or disprove the hypothesis that these genes modulate or are influenced by the presence of circulating progesterone and the onset of puberty.
MATERIALS AND METHODS

Animal Management and Puberty Observation
Management, handling, and euthanasia of animals were approved by the Animal Ethics Committee of The University of Queensland, Production and Companion Animal group (certificate number QAAFI/279/12). Twenty Brahman heifers, which were not pedigree animals but had a characteristic B. indicus phenotype and were typical beef industry animals, were sourced as young weaners born in the same season (<250 kg) and kept under grazing conditions, from two commercial herds in Queensland, Australia. After being sourced, heifers were kept at the Gatton Campus facilities of The University of Queensland; they were all kept under the same conditions and a pasture-based diet until the end of the project. Precise day-of-birth information was not available for these heifers as they were sourced from industry; an effect of age differences on pubertal development is possible but could not be tested in this experiment.
Heifers were examined every 2 weeks for observation of pubertal development, from October 2012 to May 2013. Ovarian activity was observed using ultrasonography [HS-2000(VET), Honda Electronics Inc.]. Pubertal status was defined by the presence of a CL observed with the ultrasound (7). Euthanasia plans were based on the date of the first CL observation. Six heifers were chosen to be prepubertal (PRE) and six postpubertal (POST). Each POST heifer, when identified, was paired with a PRE heifer randomly drawn from the remaining animals and processed on the same day. The animals were weighed, and the body condition score (BCS) was measured. POST heifers were euthanized 23 days, on average, after observation of the first CL. POST heifers were euthanized in the luteal phase of their second estrous cycle, confirmed by CL presence on ovarian tissue post euthanasia. Euthanasia was carried out by stunning with a non-penetrating captive bolt followed by exsanguination. Concentrations of progesterone were measured with a radioimmunoassay (RIA) from blood samples collected at exsanguination, to confirm that POST heifers had a functional CL, which was observed at euthanasia. RIA was performed by the Laboratory for Animal Endocrinology of The University of Queensland (Dr. Stephen Anderson). Plasma progesterone concentrations were measured by RIA as described by J. D. Curlewis, M. Axelson, and G. M. Stone (35), with the difference that progesterone antiserum C-9817 (Bioquest, North Ryde, NSW, Australia) was used. Extraction efficiency was 75%, and the values reported herein were not corrected for these losses. The sensitivity of the assay was 0.1 ng/ml, and the intra- and inter-assay coefficient of variation (CV) was 5.0%.
Post-euthanasia, left and right ovaries were harvested and preserved by snap freezing in liquid nitrogen, then kept at −80°C until RNA extractions were carried out. In total, 24 ovaries (2 × 6 PRE and 2 × 6 POST heifers) were processed separately for RNA extraction and quantitative real-time PCR (qRT-PCR) measurements.
RNA Extraction and Quantitative Real-Time PCR Analysis
Prior to RNA extraction, the whole ovary tissue was ground under liquid nitrogen to form a powder, of which 25 mg was used for RNA extraction; left and right ovaries were kept separate. Total RNA was isolated separately from 25 mg of the homogenized sample of left and right ovaries from PRE and POST heifers, using the Trizol method (Life Technologies, Inc.). The total RNA was resuspended in RNase-free ultrapure water and stored at −80°C until further use. RNA concentrations were measured with a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA), with an optimal 260/280 ratio between 1.8 and 2.1. 28S and 18S rRNA subunit integrity was assessed with a Bioanalyser (RIN 6.9 or above for all samples).
Reverse transcription was performed using the GoScript Reverse Transcription System (Promega) and oligo (dT) primers (Invitrogen). The reactions were performed with 6 μg of total RNA and 2 μl of 50 μM oligo (dT)23VN primer, following the manufacturer's recommended protocol. The cDNA concentrations of the samples were estimated on a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific). Finally, the single-stranded cDNA samples were stored at −20°C until analysis by qRT-PCR.
Quantitative real-time PCR reactions were performed in triplicate using SYBR ® Select Master Mix (Applied Biosystems) following the manufacturer's instructions in a ViiA™ 7 Real-Time PCR System (Applied Biosystems). The CV of Ct values from replicates within each sample was low (<3%), indicating acceptable accuracy and reproducibility (not shown). The oligonucleotide primers used for the reactions were designed using PrimerQuest software provided by Integrated DNA Technologies, Inc. from Bos taurus sequences available in the GenBank database. In the present study, glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as an internal control gene because it was stably expressed in our study and previously in dairy cattle (36). The primer sequences used for each target gene are listed in Table 1.
Prior to performing qRT-PCR, the amplification efficiency and optimal primer concentration were determined for each gene using serial dilutions of cDNA. For this purpose, four concentrations of cDNA (0.625, 2.5, 10, and 40 ng/reaction) and two primer dilutions (100 and 200 nM) were tested. The PCR efficiencies for all primer pairs were obtained using the formula E = 10^(−1/slope), where E is the efficiency and slope is the gradient of the dilution series in the linear phase. These results are summarized in Table 1, where E is represented as a percentage [%E = (E − 1) × 100]. Samples were amplified separately in triplicate using a ViiA™ 7 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) with the following amplification program: an initial step at 95°C for 10 min, a second step of 40 cycles of 95°C for 20 s, and a final extension step at 60°C for 30 s. At the end of each reaction, a dissociation curve was plotted to ensure that each reaction produced a single fragment, that is, that the curve contained only one dissociation peak.
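As a concrete illustration of this efficiency calculation, the following sketch fits Ct against log10 of the template input and applies E = 10^(−1/slope). The dilution amounts match those used in the study, but the Ct values are hypothetical (an ideal series in which each 4-fold dilution shifts Ct by exactly 2 cycles), so this is an illustration of the formula, not the study's data.

```python
import math
import statistics

def pcr_efficiency(log10_input, ct_values):
    """Estimate amplification efficiency from a qRT-PCR dilution series.

    Fits Ct against log10(template input) by ordinary least squares;
    the slope gives E = 10**(-1/slope). Perfect doubling per cycle
    yields a slope of about -3.32 and E = 2 (i.e. %E = 100)."""
    mx = statistics.fmean(log10_input)
    my = statistics.fmean(ct_values)
    slope = (sum((x - mx) * (y - my) for x, y in zip(log10_input, ct_values))
             / sum((x - mx) ** 2 for x in log10_input))
    e = 10 ** (-1 / slope)
    return e, (e - 1) * 100  # E and %E

# Hypothetical ideal dilution series over the four input amounts
# used in the study (0.625, 2.5, 10, and 40 ng/reaction).
amounts = (0.625, 2.5, 10, 40)
xs = [math.log10(n) for n in amounts]
cts = [30.0 - math.log2(n / 0.625) for n in amounts]
E, pctE = pcr_efficiency(xs, cts)  # E = 2.0, %E = 100
```

For an ideal series the fit recovers E = 2 exactly; in practice, slopes from real dilution series give efficiencies somewhat above or below 100%, as reported in Table 1.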
Statistical Analysis
Progesterone, body weight, and condition score data were analyzed in a completely randomized design using SAS (version 9.1.3), and means were compared by t-test at the 5% level. Statistical analysis of Ct data was performed using the %QPCR_MIXED macro for SAS (version 9.1.3) (https://msu.edu/~steibelj/JP_files/QPCR.html), which generates SAS PROC MIXED code suitable for analyzing qRT-PCR data, assuming independent random effects for reference and target genes in each biological replicate (37). The following model was used: y_gikr = TG_gi + D_ik + e_gikr, where y_gikr is the Ct value for gene g (reference or target) in physiological state i, heifer k, and replicate r; TG_gi is the gene-specific effect of physiological state; D_ik is the sample-specific random effect (common to reference and target genes); and e_gikr ~ N(0, σ_e^2) is the residual term.
The relative expression was estimated using the ΔCt method (target gene Ct − GAPDH Ct) as previously reported (38), where Ct is the PCR cycle number at which the fluorescence generated within a reaction crosses an arbitrary threshold. For each target gene, gene expression was compared between physiological states (PRE or POST) with the CONTRAST statement of the GLM procedure (SAS software) using Student's t-test at the 5% level. The "estimates" generated by the CONTRAST analyses were used to calculate fold-change (relative expression) for each pair-wise contrast of interest, obtained as 2^−(Estimate). The contrast between PRE and POST heifers (both ovaries) was analyzed using the average of the Ct values from the right and left ovary of each heifer for each gene. The other two contrasts were: (1) both ovaries from PRE heifers versus only ovaries with a CL from POST heifers and (2) ovaries from POST heifers without a CL versus ovaries from POST heifers with a CL. These two contrasts should help to elucidate whether the differences in gene expression observed are related to the local presence of the CL itself or are more generally related to PRE versus POST processes. Because the efficiency (E) of the qRT-PCR reactions was close to 100%, one PCR cycle of difference between two samples means twice as much expression in the first sample as in the second. Pair-wise correlations between gene expression values were calculated, and their significance was tested using R software. Significant correlations were interpreted as predicted gene interactions.
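The fold-change computation from a contrast estimate can be sketched as follows. The Ct values here are hypothetical, chosen only to illustrate how a ΔCt contrast converts to relative expression under the ~100% efficiency assumption stated above.

```python
def fold_change(estimate):
    """Convert a ΔCt contrast ('Estimate') into relative expression,
    assuming ~100% amplification efficiency, i.e. one Ct of
    difference corresponds to a twofold difference in expression."""
    return 2 ** (-estimate)

# Hypothetical Ct values: the target gene crosses the fluorescence
# threshold 2 cycles earlier in PRE, with identical GAPDH Ct in
# both groups.
pre_delta_ct = 24.0 - 18.0    # target Ct - GAPDH Ct in PRE  -> 6.0
post_delta_ct = 26.0 - 18.0   # target Ct - GAPDH Ct in POST -> 8.0
estimate = pre_delta_ct - post_delta_ct   # -2.0
fc = fold_change(estimate)    # fourfold higher expression in PRE
```

A negative estimate (lower ΔCt in PRE) therefore yields a fold change greater than 1, i.e. higher expression in PRE than in POST.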
RESULTS

Serum Progesterone Concentrations, Body Weight, and Condition
Average serum concentrations of progesterone for the PRE and POST B. indicus Brahman heifers were 0.4 ± 0.2 and 2.0 ± 0.7 ng/ml, respectively (n = 6 per group). The difference in progesterone concentration corresponds to the observation of a CL in POST heifers. Data collected prior to euthanasia indicated no significant difference in body weight or body condition score between PRE and POST heifers. Body weight averages were 338 ± 54.17 and 362.6 ± 38.62 kg (P = 0.38), and condition score averages were 3.5 ± 0.44 and 3.75 ± 0.41 (P = 0.18), respectively.
Ovarian Gene Expression
Ovarian gene expression was measured with real-time quantitative PCR in three contrasts: (1) average gene expression from both ovaries of PRE versus POST heifers in diestrus; (2) average expression of PRE ovaries versus only ovaries with a CL from POST heifers in diestrus; and (3) ovaries with a CL versus ovaries without a CL, using only POST heifers in diestrus.
In the third and last contrast, we did not observe any differential gene expression between ovaries with a CL versus the contralateral ovary (without CL) from POST heifers (not shown).
DISCUSSION
This study has demonstrated that the diestrus phase of pubertal heifers is associated with changes in ovarian gene expression of gonadotropin receptors (FSHR and LHR), key steroidogenic enzymes (HSD3B1 and CYP19A1) and important intraovarian factors and their receptor, such as BMP7, TGFB1, and IGFR1. The regulation of ovarian activity is determined by coordinated action between FSH and LH with their receptors, ovarian steroids, and intraovarian factors (8). Genes of the TGFB superfamily, such as GDF9, BMP6, BMP7, BMP15, TGFB1, and TGFB2, and their cell receptors are known paracrine and autocrine modulators of ovarian function and fertility (9-13).
The IGF system plays a key role in follicular development and is an additional important intraovarian factor involved in pubertal development (24)(25)(26)(27)39). Our results fit with current theory and point to specific intraovarian factors relevant to the diestrus phase of pubertal heifers in B. indicus.
We did not detect differential gene expression between ovaries from POST heifers with a CL and the contralateral ovary (without a CL), which could confirm the systemic effect of circulating progesterone on ovarian dynamics. Therefore, we focused on the differences between PRE and POST heifers. In the present study, the significant differences in ovarian gene expression between the two physiological states, PRE and POST (luteal phase), are likely related to the onset of puberty and influenced by progesterone signaling. It is important to note that the comparison is between PRE heifers, which had never experienced a luteal phase and the related progesterone signaling, and POST heifers, which were in their second diestrus with the corresponding levels of progesterone being produced by the CL. Future research examining the other phases of the cycle in POST heifers would complement our findings.
We identified greater expression of FSHR in the ovaries of PRE than POST heifers in diestrus, which could be related to follicular waves characterized by the synchronous development of groups of growing follicles, one of which becomes dominant and achieves the greatest diameter, suppressing the growth of the other, smaller subordinate follicles. Transrectal ultrasonography has detected the sequential growth and regression of large follicles in heifers near the time of first ovulation (40). FSH and its receptor FSHR play an important role in follicle progression from the primary to the advanced stage of follicular development (41). Studies reported the expression of FSHR on granulosa cells in primary and secondary follicles, which is evidence for the role of FSH in follicular development at early stages (41,42). Our findings are further supported by previous studies that demonstrated an increase in FSH binding sites in the rat ovary during the PRE period (43,44). The lower expression of FSHR in POST heifers in diestrus is in accord with the higher progesterone concentrations during the luteal phase, because progesterone exerts negative feedback inhibition on GnRH and suppresses further follicular growth and maturation (45).

[Figure legend: Relative expression values are expressed as the least square means ± SEM of 2^−(Estimate). Bars above the origin indicate higher expression in PRE compared to POST with CL. *P < 0.05; **P < 0.01.]
The POST heifers in diestrus exhibited greater LHR mRNA expression than PRE heifers. These animals were in the luteal phase of their estrous cycle, and it is well known that LH has an essential role in the maintenance and normal function of the CL (46). Therefore, LHR signaling likely plays an essential role in suppressing ovarian activity in the remainder of the diestrus ovary.
The BMPs are important for the regulation of follicular development, ovulation, and CL morphogenesis (12,16,19). Interestingly, in the present study, no significant differences in the mRNA abundance of GDF9, BMP15, BMP6, TGFB2, and their receptor BMPR2 were observed between the PRE versus POST contrasts. Although these genes are known as modulators of mammalian folliculogenesis and were therefore selected for investigation, it seems they are not involved in the changes in follicular dynamics observed in this contrast of PRE versus POST (luteal phase). The specific BMP associated with diestrus and pubertal development seems to be BMP7.
In the PRE versus POST (luteal phase) contrast, BMP7 was differentially expressed. Our findings demonstrated a higher abundance of ovarian BMP7 in PRE than in POST heifers, which may be associated with the different physiological phases represented in this contrast. The ovaries of PRE heifers are likely to be constantly initiating follicular recruitment. The recruitment stage is characterized by the presence of a cohort of growing follicles at the beginning of the follicular wave. Our finding is consistent with previous studies in mice and humans showing that BMP7 promoted the recruitment of primordial follicles into the growing follicle pool and enhanced estradiol secretion while progesterone secretion was concomitantly suppressed (9,47,48). Because progesterone is important in the process of ovulation (49), inhibition of progesterone production by BMP7 suggests a functional role as a luteinization inhibitor delaying ovulation (50,51). Furthermore, in an in vitro study, BMP7 increased FSHR mRNA expression in rat and human granulosa cells (18,52), while it decreased LHR expression in human granulosa cells (18). In this context, the results of the present study suggest an important mechanism of FSHR and LHR regulation by BMP7 in PRE heifers. Greater ovarian BMP7 contributes to increased FSH sensitivity of granulosa cells via upregulation of FSHR and downregulation of LHR, thus promoting folliculogenesis (recruitment, selection, and atresia) and suppressing ovulation (18).
We observed differential expression of the key steroidogenic genes CYP19A1 and HSD3B1 between PRE and POST heifers. CYP19A1 codes for the key regulatory enzyme in the steroid biosynthesis pathway, P450 aromatase, which catalyzes the conversion of androgens to estrogens (53). The upregulation of BMP7 may explain the increased expression of CYP19A1 in PRE heifers when compared to the diestrus POST heifers. An in vivo study demonstrated that rat ovaries treated with BMP7 had enhanced expression of CYP19A1, which in turn increased estradiol production, as well as an increased number of preantral and antral follicles (47,54).
The greater expression of HSD3B1 mRNA in POST heifers in their luteal phase could be the result of the presence of an active CL and the consequent progesterone signaling. Theca- and granulosa-derived luteal cells express the enzyme HSD3B1, which converts pregnenolone to progesterone (55). In short, the greater expression of HSD3B1 is necessary for the production of progesterone, which is generally not occurring in PRE heifers.
Transforming growth factor-beta isoforms (TGFB1 and TGFB2) are multifunctional regulatory molecules because they can either stimulate or inhibit proliferation, differentiation, and other critical cell functions according to the species, the stage of differentiation of ovarian cells, and the presence of other growth factors (48,56,57). In rodents, TGFB1 increased the proliferation of FSH-stimulated granulosa cells (58). However, in cattle (TGFB2) and sheep (TGFB1 and TGFB2), these growth factors have inhibitory effects on granulosa cell proliferation (59,60). The effects of TGFB isoforms are species-specific and might differ between B. indicus and B. taurus animals. TGFB1 mRNA and protein expression in bovine granulosa cells decreases during the progress of folliculogenesis (61). Furthermore, in cattle, TGFB1 was present in granulosa cells at the earliest stages of development (early preantral and early antral follicles) but absent in larger, more advanced follicles (62). We observed greater expression of TGFB1 mRNA in PRE heifers compared to POST heifers (luteal phase), which highlights the important role of TGFB1 in modulating both granulosa cell growth and differentiation. We can speculate that the proliferation inhibition induced by TGFB1 may promote differentiation of granulosa cells so that they acquire FSHR and express steroidogenic enzymes, which allow the cells to be more responsive to FSH and enable secretion of ovarian steroids. An improved understanding of the mechanisms of TGFB1 in follicular growth and pubertal development in the bovine ovary requires further investigation. In particular, it will be relevant to understand the interactions between BMP7 and TGFB1 in the regulation of bovine folliculogenesis. To achieve a better understanding of this interaction, it will be relevant to study the expression of BMP7 and TGFB1 in all phases of the estrous cycle in POST heifers.
Intrafollicular IGF1 and IGF2 have been reported to act in synergy with the gonadotropins LH and FSH to stimulate growth and differentiation of ovarian follicles and the subsequent synthesis and secretion of estradiol and progesterone (24)(25)(26)(27). The type 1 IGF receptor (IGFR1) mediates most of the actions of both IGF1 and IGF2 (63), and its expression increases during follicular development (64). In the current study, we observed greater expression of IGF1 and IGFR1 in PRE compared to POST heifers (luteal phase). Our findings suggest that IGF1, as well as BMP7, could underpin the increased expression of FSHR and CYP19A1 mRNA found in PRE heifer ovaries. This is in agreement with a previous in vivo study that showed IGF1 can regulate FSHR gene expression and, indirectly, aromatase expression (65). FSHR expression has been shown to be severely reduced in preantral follicles of IGF1 knockout mice, as has aromatase expression, and was restored to wild-type levels after 2 weeks of exogenous IGF1 supplementation (66). Reinforcing this hypothesis, an in vivo study showed that FSH enhanced IGFR1 expression (65). Thus, local IGF1 creates a positive feedback loop in which IGF1 enhances FSH action and FSH enhances IGF1 action through mutual receptor upregulation.
Taken together, our findings suggest that BMP7 and IGF1 may interact and regulate intrafollicular steroidogenesis and the follicular response to gonadotropins during pubertal development. The emerging hypothesis is that two regulators of ovarian activity prepuberty are co-expressed: BMP7 and IGF1. These two genes may play synergic roles in modulating the expression of FSHR, LHR, and IGFR1, and steroidogenesis (CYP19A1). BMP7 could be associated with the downregulation of LHR and the upregulation of FSHR and CYP19A1, which mediate the follicular dynamics in PRE heifer ovaries as compared to POST (luteal phase). Upregulation of IGF1 expression in PRE compared to POST (luteal phase) correlates with increased levels of FSHR and CYP19A1. Furthermore, TGFB1 likely plays an important role in follicular dynamics in prepuberty. The role of these genes and their predicted interactions merits further research to elucidate the molecular pathways in cattle ovarian tissue pre- and postpuberty, especially to elucidate the expression of these genes in all cycle phases.
In summary, our results suggest that differential expression of ovarian genes could be related to changes in follicular dynamics and to differences in gene expression levels within the cell populations that form the ovarian tissue, which emerged as a consequence of the luteal phase and puberty. The comparative expression of these genes in granulosa versus luteal cells in cattle, PRE versus POST (including all cycle phases), should be the focus of further research to better understand their role in puberty. The data presented herein are a starting point that reinforces the differentially expressed genes as relevant genes for ovarian dynamics in B. indicus heifers.
ACKNOWLEDGMENTS
The authors gratefully acknowledge The University of Queensland (UQ) in permitting the study to be performed and the Animal Genetics Laboratory (AGL at UQ) for their technical support.
FUNDING
Financial support for this Ph.D. project was provided by The University of Queensland and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES/project number A125/2013). MF is supported by the UQ Postdoctoral Fellowship.
Selecting a Stored Data Approach for Mobile Apps
Stored data is a critical component of any application. The stored data component of mobile applications (apps) presents special considerations. This paper examines stored data for mobile apps. It identifies three types of mobile apps and describes the stored data characteristics of each type. It proposes decision factors for selecting a data storage approach for a mobile app and the impact of the factors on the usability of the app. The paper surveys over 70 apps in a specific domain (that of walking the Camino de Santiago in Spain) to examine their data storage characteristics. It also presents a case study of the development of one app in this domain (eCamino) and how the decision factors were applied in selecting the stored data approach for this app. The paper also discusses the implications of the research for apps in other domains. The paper concludes that in general the data storage approach selected for a mobile app depends on the characteristics of the situation in which the app will be used, but in the domain examined one particular approach (synchronized data storage) has clear benefits over other approaches.
Introduction
Stored data is a critical component (resource) of information systems [12]. Without the ability to store data, users would need to enter all required data into a system before any processing could be done or useful output could be produced. Information systems, with their reliance on data files, databases, data warehouses, and big data, could not function if they could not store data as part of the system. In a sense, stored data is the glue that holds the other components of the system together.
The stored data component of mobile applications (apps) presents special problems. Devices on which these apps operate (smartphones, tablets, etc.) typically have limited storage capacity. If the app is designed to be used only offline, then it must store all required data on the device, which limits the amount of data that can be stored. In addition, the data is not updated dynamically from external sources as conditions change, and thus, except for those cases where all data is generated on the device, it is likely to be out of date. On the other hand, if the app is designed to be used online then it can access the required data in real time from a server, in which case the data storage limitation of the device does not impact the app. The app, however, is dependent on a wireless connection with sufficient bandwidth, which is not always available. A third option discussed later, that of data synchronization, presents its own challenges.
The impact on the user of different approaches to data storage in mobile apps can be severe. With some apps (e.g., a personal contact list) the user updates the stored data and keeps it on the device. In many situations (e.g., customer ratings) the data is updated by external entities, thus requiring access to a server, which, as explained above, has potential problems. Sometimes even locally stored data may need to be used to update data on a server (cloud storage) so that the user can share the data among several personal devices.
In mobile business, apps are used for a variety of purposes, from front-end, customer-facing applications, such as those found in mobile commerce, to back-end systems used only by employees, such as mobile inventory management systems. The trend towards BYOD (Bring Your Own Device) [6] complicates the situation. Employees are bringing their own devices with their favorite apps to their work places (which may be remote) and expecting to use them in their jobs. All these apps, whether company-supplied or employee-provided, need stored data, but the data storage approach may vary from app to app.
As mobile device apps become increasingly ubiquitous, both for personal use and business use, the design of the stored data component becomes more important. Users and businesses will not accept apps that do not meet their data needs. Only well-designed apps that provide the right data at the right time will find favor among users and in businesses.
The purpose of this paper is to explore data storage options for mobile apps. The main question addressed in this paper is in what situations are different approaches to data storage appropriate. The answer to this question can be of value to both app developers and users. Developers need to select a data storage approach for their apps, and knowing which approach is appropriate for which situations helps developers focus on app design that is most likely to meet the users' needs. Users can use the answer to this question to select apps that use a data storage approach that fits their situation. This paper investigates the question by examining the data storage modality for apps used in a specific domain (described later). It surveys over 70 apps used in this domain to identify the intended use of each app and the data storage approach taken by the app. The paper also presents a case study of the development of an app in this domain and the way the data storage issues are addressed in its design. The impact of the data storage approach on the usability of the app for the user is explored.

This paper is organized as follows. The next section describes types of mobile apps and the data storage characteristics of each type, based partially on literature related to mobile app data storage. Following this we propose several decision factors that are relevant for selecting a data storage approach for a mobile app. Next we describe the domain of this study. Following this we present a survey of mobile apps in this domain with an analysis of the data storage approach used by the apps in the survey. Next we examine the case of one specific app and show how the decision factors were applied in the case. We follow this with a discussion of the survey and the case. We then discuss the implications of the research for apps in other domains. Finally, we present our conclusion.
Stored Data for Mobile Apps
Mobile apps come in three varieties with respect to their stored data component:

1. Offline apps: These apps store all their data on the mobile device. The data may be initially populated when the app is installed (e.g., maps) and possibly updated by the device user, or initially populated and updated during the app's use (e.g., contact list) by the device user. These apps do not need to be online to function.

2. Online apps: These apps store all their data on a server and access it in real time, so they require a network connection to function.

3. Synchronized apps: These apps store all their data on the mobile device and thus can be used offline, but the stored data may be updated (downloaded) with data from a server when the device is periodically online.
In addition, the data on the server may be updated (uploaded) with data from an online device. These apps provide their full functionality when offline. An example of an app of this type is described in the case study that appears later in this paper.
(A fourth type of app is a hybrid app. These apps combine online and offline characteristics. They provide full functionality when online but limited functionality when offline. All data for hybrid apps is stored online, but for some apps of this type limited data may be stored on the mobile device. An example of this latter variation is Google Maps. To provide full functionality, however, this type of app must be online. Because this type of app must be online to access all stored data, we include it in the online category in this paper.)
Table 1 summarizes the data storage characteristics of these types of apps.
Synchronized apps create a special challenge because a number of synchronization patterns can be used. [7] classifies these patterns as follows:

Data synchronization mechanism patterns: These patterns deal with when data is synchronized between the server and the mobile device. Two patterns are:

- Asynchronous data synchronization: Data is synchronized while the app continues its normal functioning. The user can continue to use the app during data synchronization.
- Synchronous data synchronization: The normal functioning of the app is blocked while data is synchronized. With this pattern the user is not able to use the app during synchronization.
Data storage and availability patterns: These patterns relate to how much data is stored on the mobile device. Two patterns are:

- Partial data storage: Only data from the server that is needed by the app is stored on the device.
- Complete data storage: All data from the server is stored on the device.
Data transfer patterns: These patterns deal with how much data is transferred between the server and the mobile device. Three patterns in this category are:

- Full data transfer: All the data on the server is transferred to the device and vice versa.

- Partial data transfer: Only the data needed by the app is transferred between the server and the device.

- Timestamp data transfer: Only data changed since the last synchronization, as indicated by timestamps, is transferred between the server and the device.
Different combinations of these synchronization patterns impact the usability of an app in different ways. For example, synchronous, complete storage, full transfer can take considerable time for large amounts of data, preventing the user from using the app for an extended period. On the other hand, asynchronous, partial storage, timestamp synchronization, although not impacting the user's use of the app, may result in the needed data not being available on the mobile device at the time the user needs it. The app developer must select the synchronization pattern for the app based on the intended use of the app.
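To make the mechanism distinction concrete, the following Python sketch contrasts the two synchronization mechanism patterns. The `synchronize` function is a hypothetical stand-in for an actual data transfer; it is not taken from any app discussed in this paper.

```python
import threading
import time

def synchronize():
    """Hypothetical stand-in for transferring data to/from a server."""
    time.sleep(0.05)

# Synchronous pattern: the app blocks here; the user cannot interact
# with the app until synchronization completes.
synchronize()

# Asynchronous pattern: synchronization runs on a background thread
# while the app continues to serve the user.
worker = threading.Thread(target=synchronize)
worker.start()
# ... the app keeps handling user interaction here ...
worker.join()
print("synchronization finished")
```

The trade-off described above falls out directly: the synchronous call guarantees the data is in place before the app proceeds, while the asynchronous variant keeps the app responsive at the cost of possibly serving stale data until the worker finishes.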
Another treatment of data synchronization for mobile apps can be found in [15].
Databases on servers can use common database management systems such as Oracle and DB2. On mobile devices used for offline and synchronized apps, however, the data storage software must meet special requirements. Because the memory capacity of mobile devices is limited, the data storage software must occupy a minimum of storage and must store data efficiently. In addition, the software must be designed so that it provides adequate performance with the limited processing power of mobile devices. Finally, for synchronized apps, the software must be able to synchronize the data with the server's database management system. Two examples of data storage software for mobile devices are SQLite [14] and SQL Anywhere [13]. SQLite is public domain, open source. SQL Anywhere is a product of SAP (formerly a product of Sybase, until Sybase was acquired by SAP).
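As an illustration of an embedded on-device database, the following sketch uses Python's built-in `sqlite3` module; the table and data are hypothetical examples for this domain, not drawn from any app examined here.

```python
import sqlite3

# A single-file SQLite database such as a mobile app might ship with;
# ":memory:" is used here, but a real app would open a file on device
# storage.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE albergue (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        town TEXT NOT NULL,
        beds INTEGER
    )
""")
conn.execute("INSERT INTO albergue (name, town, beds) VALUES (?, ?, ?)",
             ("Albergue municipal", "Roncesvalles", 183))
conn.commit()

# Queries run locally on the device, with no network round trip.
row = conn.execute("SELECT name, beds FROM albergue WHERE town = ?",
                   ("Roncesvalles",)).fetchone()
print(row)
```

Because the entire database lives in one file managed by an in-process library, this style of storage satisfies the small-footprint and limited-processing-power requirements discussed above.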
Decision Factors
The decision about what mobile data storage approach to employ in a mobile app needs to consider factors that directly impact the usability of the app by the user and its appropriateness for mobile business. A search of the research literature did not find reference to such factors, although some articles address other aspects of mobile device data storage (e.g., [4]). We found one industry article that provides anecdotal guidelines for selecting database software for offline and synchronized apps [3]. In addition, several websites suggest design criteria for apps that include reference to the data storage component (e.g., [8]) but do not provide decision factors.
A number of factors could be considered in selecting a stored data approach for mobile apps. In the information gathering phase of the case study presented later in this paper, four decision factors were identified, largely from discussion with developers. We propose that these four factors, listed below, are central to the data storage decision as they impact the user directly. We demonstrate the application of these factors later in the case study.
Speed of stored data access: Access to stored data for an offline or synchronized app can be as fast as the mobile device's storage and processing technology can provide. Stored data access for online apps depends on the speed of the communications channel and the volume of data being accessed. Users may notice data access speed differences when using offline/synchronized apps compared to online apps. The speed may impact the user's ability to access the data in a timely fashion.
Availability of stored data: Stored data for offline and synchronized apps is always available on the mobile device, whether or not a network connection is present. Stored data for online apps is available only when the device has a network connection. The availability may impact the user's ability to access the data at all when offline.

Volume of stored data: Online apps can draw on the large storage capacity of a server, whereas offline and synchronized apps are limited by the storage capacity of the mobile device. The volume of data required by the app may determine whether it can be stored on the device at all.

Currency of stored data: Stored data for offline and online apps is always current. Currency of stored data for synchronized apps depends on database activity since the last synchronization. Users of synchronized apps may find that some stored data is out of date until the next synchronization takes place, which is not the case for offline and online apps. The currency may impact the user's ability to have up-to-date data available.
Domain of Study
To explore the stored data characteristics of mobile apps, we examined apps in a particular domain, that of walking the Camino de Santiago in Spain. The Camino de Santiago (Camino for short) is an ancient pilgrimage in Northern Spain. Although there are many routes, all end at the cathedral in Santiago de Compostela (Santiago for short) in the northwest corner of Spain, where the bones of St. James are said to be buried. It can be traveled by foot, bicycle, or horseback; all who make the journey, whether for religious, spiritual, recreational, touristic, or other reasons, are called pilgrims. The most popular route, called the Camino Francés, starts in Saint-Jean-Pied-de-Port in France and ends in Santiago, 774 kilometers (481 miles) away, and typically takes about 35 days by foot. Pilgrims have made the trek to Santiago for over 1000 years, with the numbers varying throughout the centuries. In recent years the journey has become very popular, with over 200,000 pilgrims completing it in 2014 and 2015 [1]. The recent feature film The Way has sparked even more interest in the Camino.
The Camino is well marked and can be walked without guidance. Pilgrims, however, have often used one or more paper guidebooks to find their way. Two popular books in English are [2] and [5], but there are many others in English and other languages. With the ubiquity of smartphones and tablets, however, some pilgrims are using apps to guide them (personal observation during summer 2013). In an online search, we found over 70 apps that are designed specifically for use by pilgrims on the Camino. This domain was selected partly because of the personal experience of one of the authors on the Camino, but also because it provides a number of challenges for app developers. Mobile phone service, although very good in Spain, is not ubiquitous on the Camino. Many pilgrims, especially those from the United States, do not want to use mobile phone service in Spain because of the high cost of roaming. A useful feature for Camino apps is maps, which require high bandwidth to download and significant storage space on mobile devices. Two surveys of pilgrims [9], [10] showed that offline apps are preferred by pilgrims, but then data cannot be updated in real time on the mobile device as conditions change on the Camino. App developers need to consider the decision factors discussed previously in selecting the storage method for their apps in this domain.
Survey of Data Storage Approaches Used by Apps in Domain
Through searches of the Apple App Store, Google Play, the Microsoft Windows Phone App Store, and the Internet in general, we identified 71 apps that are designed specifically for use by pilgrims walking the Camino. We excluded apps that, while perhaps useful when walking the Camino, are not specifically for the Camino, such as Google Maps. Although these apps are part of the ecosystem of apps used by pilgrims on the Camino [9], [10], we chose to focus exclusively on Camino-specific apps in this study. We also excluded browser-based systems because we wanted to compare only device-resident apps of the three types identified previously in this paper.
Of the 71 Camino-specific apps that we found, 21 (30%) were iOS based, 45 (63%) were Android based, and 5 (7%) were Windows Phone based. We did not consider apps for Blackberry, Firefox OS, or other lesser-used operating systems. Appendixes A, B, and C list the apps that we examined along with each app's type. We note that some apps had versions for different operating systems.
We found six apps that had similar, but not necessarily identical, versions for different operating systems. We counted these apps for different operating systems separately, arguing that pilgrims would associate an app with their device, which could be iOS, Android, or Windows Phone based, and thus think of a similar app on another device as a different app. We also note that some apps had versions for different languages. If language selection was included in the app, then we counted it as a single app. If, however, the user had to install a different app for a different language, then we counted the versions separately. We also note that some developers have several apps for different routes. (There are over 20 named routes of the Camino de Santiago.) We counted apps for different routes from a single developer separately.
We prepared an extensive spreadsheet of the characteristics and features of the apps, including their stored data characteristics. The preparation of this spreadsheet is part of an ongoing research project dealing with Camino apps. This research includes two surveys of over 600 users of Camino apps from the United States and Europe. These surveys dealt with pilgrims' use of apps and their preferences for certain features. Some of the results of these surveys were reported previously in symposium presentations [9], [10]. In addition, one paper dealing with diffusion of innovations has resulted from analysis of some of this data [11]. Neither the presentations nor the paper, however, deals with the data storage characteristics of the apps.
We identified 5 (7%) of the apps as offline, 56 (79%) of the apps as online, and 10 (14%) of the apps as synchronized. To illustrate the three different types of apps, we briefly describe one app of each type here:

Albergues 2.0, offline app: This app provides information about albergues (hostel-type accommodations) on a number of Camino routes. The full database of information about the albergues is stored on the mobile device, allowing the app to be used completely offline. No updating of the data is available from a server. The entire app must be reinstalled with new data when a new version is introduced.
Camino de Santiago - Camino Francés, online app: This app provides detailed route maps of each stage of the Camino Francés. The maps are stored on a server and downloaded as requested by the user. This app can only be used while connected to the Internet; it is fully online.
Esoteric Camino France & Spain, synchronized app: This app provides information about unusual points of interest on the Camino Francés and some other routes. The information is written by a travel writer who has walked part or all of several routes of the Camino. The writer updates the information on a server, which is then downloaded to the user's mobile device the next time the user is online. Users can also provide comments about the writer's information, which are uploaded to a server and made available to other users. The app can be used offline for accessing information on all the points of interest downloaded at the latest synchronization.
We identified nine features for each app: route maps, route topography, town maps, information about albergues, information about other accommodations, information about restaurants/cafes/bars, historical/cultural information, points/places of interest, and location-based (GPS, or Global Positioning System) capabilities. These features were selected both from personal experience walking the Camino and from the surveys of pilgrims identified previously, in which the respondents ranked desirable features of an app [9], [10]. One author coded each app for these features from information provided online about the app and, when a free version of the app was available, from use of the app. The other author checked the coding.
We counted the total number of features for each app, arguing that this number is one measure of the desirability of an app from the pilgrim's perspective. Pilgrims are quite concerned about carrying too much weight in their backpacks, and guidebooks can be heavy. The more features that can be included in an app, the fewer other references the pilgrim will have to carry. While a pilgrim could have multiple apps with different features, navigating among them while walking a sometimes primitive trail can be problematic. The ability, for example, to click on a location and find all the information related to it in a single app is desirable.
The appendixes list the number of features for each app. Table 3 shows the range and average number of features for the three types of apps. Offline apps offered the fewest features, both in terms of range and average. Synchronized apps provided more features and the greatest average number of features per app. Online apps ranged from the fewest to the most features, with an average number of features near that of synchronized apps. None of the offline apps provided route topography or information about restaurants/cafes/bars. The most common feature provided by these apps was information about albergues. None of the synchronized apps provided route topography. All the other features were available in at least half of these apps. Maps (route and town, with GPS positioning) were the most common feature of online apps. Route topography and information about restaurants/cafes/bars were the least common features of these apps.
As noted previously, we counted similar apps on devices with different operating systems as different apps. If we consider the six apps with similar versions for different operating systems only once in our analysis, then the results are similar. See Table 4. The range of features was the same. The average number of features decreased slightly for each type of app. We can see from this analysis that synchronized apps have the most features on average but may not have the maximum number of features, a distinction that goes to certain online apps. Offline apps fall behind synchronized apps and online apps in terms of average and maximum number of features. This conclusion coincides with our intuition because online and synchronized apps have access to more current data, and thus can provide more features for the user.
A Case Study of Stored Data in the Development of a Mobile App: The Case of eCamino
In this section we examine one particular app, eCamino, focusing on the decisions made during its development. We selected this app because one author had direct contact with the app developers and was invited to meet with them at their office in Budapest. We used the case study methodology of [16], [17] as a guide for gathering information about the development of this app. Our approach is a single case study that is descriptive and explanatory. It is bounded by the activity of developing eCamino, and by the time from the initial conception to the first release of the product. Since we are interested in the development of eCamino, our discussion is based on the first version. Newer versions have been released that have additional features.
We conducted interviews at the office of eCamino Kft., the company that developed and owns eCamino, in Budapest, Hungary, in February 2015. Present at all interviews were two principals involved in eCamino, identified here as A and B. At one point several technical staff were brought into the meeting room to answer technical questions. Interviews were conducted in English, although some of the discussion with the technical staff had to be translated to and from Hungarian, which was done by B.
eCamino is a synchronized app. It includes 7 of the 9 features identified previously, excluding only route topography and town maps (the latter added in a later version). It is based on [2]. The full database of user-relevant information, with maps and content from [2], is stored on the mobile device, allowing the app to be used offline. The database is also stored on a server. Users can update data on the mobile device (e.g., update an accommodation's phone number or a restaurant's opening hours) while walking the Camino. When the mobile device is next online, the data in the mobile database is synchronized with the server database. User-entered data is uploaded to the server and is subsequently downloaded to the mobile devices of other users. Users can also upload text and photos to the server, which are then available for others to view on other devices such as laptops through a web portal connected to the server.
In 2012, A published an edition of [2] in Hungarian. A has technical knowledge of maps and GIS from his experience working for the GPS company TomTom. He conceived the idea of a mobile app with maps of the Camino route as shown in [2] and other content from [2], and presented the idea to the book's author. To provide offline use and maintain currency of the data, a synchronized approach was needed. (The survey of pilgrims identified previously supports the desirability of these characteristics [9], [10]. This decision is discussed further in the next section.)
Another early decision was that the apps should be native, with one version for each platform, rather than a single web app usable on all platforms. Developing native apps for three different platforms created a number of problems. Programming was done in different programming languages by programmers working for Pear Williams Kft. and eCamino Kft. Some initial programming was also outsourced to a local firm. Currently all programming is done in house. The core of the system on the Azure server was written in Java. The web portal was written in PHP. The Windows Phone version of the app was written in C#, the iOS version was written in Objective-C, and the Android version was written in Java. No cross-development solutions were available at the time, and so each version had to be written from scratch. Currently, such solutions exist. A and B estimate that using them would have saved about 30% of the development time. A further complication was that there were differences among smartphones in Europe and the United States because of different network providers. One activity that took considerable time was the geocoding of the points of interest. This was done manually and took one person about four months to complete.
As noted previously, the core of the system on the Azure server uses Oracle for its database. SQLite was selected as the data storage software on the mobile devices because it is popular and usable on all platforms. The complete SQLite database is approximately 160 megabytes. The web portal uses MySQL.
The architecture of the system is shown in Figure 1.
Figure 1: eCamino system architecture

Data synchronization between mobile devices and the server uses the following patterns [7]:

Data synchronization mechanism pattern: Synchronous data synchronization. Synchronization, if needed, occurs at startup of the app. The user interface and all app functions are blocked during synchronization. Informal use of the app on an iOS device indicates that this synchronization typically takes under one minute.
(Initial population of the mobile database occurs when the app is installed).
Data storage and availability pattern: Complete data storage. All data that is relevant to the user walking the Camino is stored on the mobile device. Some data, such as photographs uploaded by pilgrims, remains on the server and can be accessed by non-pilgrims from the server.
Data transfer pattern: Partial data transfer based on timestamp. Only data changed since the last synchronization, as indicated by timestamps, is transferred between the server and the mobile client.
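A minimal sketch of this timestamp-based partial transfer, assuming a hypothetical `poi` table with ISO-8601 timestamp strings (eCamino's actual schema is not public):

```python
import sqlite3

# Hypothetical server-side table of points of interest.
server = sqlite3.connect(":memory:")
server.execute(
    "CREATE TABLE poi (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")
server.executemany("INSERT INTO poi VALUES (?, ?, ?)", [
    (1, "Cathedral of Santiago", "2015-01-10T00:00:00"),
    (2, "Albergue in Sarria",    "2015-02-20T00:00:00"),
])
server.commit()

# The device records when it last synchronized; only rows changed since
# then are transferred (ISO-8601 strings compare correctly as text).
last_sync = "2015-02-01T00:00:00"
changed = server.execute(
    "SELECT id, name FROM poi WHERE updated_at > ?", (last_sync,)
).fetchall()
print(changed)
```

Only the second row is selected for transfer, because only its timestamp postdates the device's last synchronization; the same comparison can run in the other direction for user-entered updates uploaded to the server.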
Application of Decision Factors in Case
Discussions during the information gathering phase of the case study provided a basis for identifying the four decision factors proposed in Section 3. As noted in the previous section, a fundamental decision in the development of eCamino was to use a synchronized data storage approach. This decision illustrates the application of the decision factors using the criteria shown in Table 2:

(This paper is available online at www.jtaer.com, DOI: 10.4067/S0718-18762016000300004.)

Speed of stored data access: The developers wanted rapid access to the data because they felt users would not want to wait for the type of information provided by the app. All user-relevant data needed to be stored on the mobile device, thus providing access to the data without a network delay. In order to provide rapid access, the app had to be offline or synchronized.
Availability of stored data: The developers wanted the data to be available at all times, even when the user was offline. All user-relevant data needed to be stored on the mobile device so it would be available to the user, whether or not a network connection was present. Either an offline or synchronized app was needed to satisfy this requirement.
Volume of stored data: The developers determined that the amount of data used by the app was small enough to fit on a mobile device. Although data capacity is limited on a mobile device, this limitation does not impact eCamino because the database is only about 160 megabytes. Although an online app would easily satisfy this requirement, an offline or synchronized app would suffice.
Currency of stored data: The developers determined that changes in the data would be limited and infrequent, and thus real-time currency was not needed, although periodic updating would be desirable. Some data on the mobile device could be temporarily out of date without significantly affecting the usability of the app. Data could be updated regularly because of the widespread availability of WiFi in albergues, restaurants, bars, and similar locations on the Camino. It was expected that users would be able to go online at least once a day via WiFi. Although an online app would provide the most current data, a synchronized app would provide stored data that was sufficiently current for the use of the app in the domain.
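The four factors can be read as a simple decision rule. The following sketch is our own illustrative encoding of the criteria; the function and its inputs are a construction for this paper, not the authors' formal procedure.

```python
def select_storage_approach(fits_on_device: bool,
                            needs_offline_use: bool,
                            data_updated_externally: bool,
                            needs_realtime_currency: bool) -> str:
    """Illustrative rule-of-thumb selector based on the four factors."""
    # Volume: data too large for the device forces an online app.
    if not fits_on_device:
        return "online"
    # Currency: real-time data with no offline requirement favors online.
    if needs_realtime_currency and not needs_offline_use:
        return "online"
    # Availability/speed plus externally updated data: synchronized.
    if data_updated_externally:
        return "synchronized"
    # Data updated only by the user (or never): offline suffices.
    return "offline"

# eCamino's situation as described in the case:
print(select_storage_approach(fits_on_device=True, needs_offline_use=True,
                              data_updated_externally=True,
                              needs_realtime_currency=False))
```

Applied to the eCamino inputs described above, the rule selects the synchronized approach, matching the developers' decision.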
Table 5 summarizes the application of the decision factors in the case. The totality of the application of these decision factors indicated that the synchronized approach was best for eCamino. With this approach, data can be accessed rapidly because it is stored on the mobile device. Data is available without a network connection, again because it is stored on the mobile device. The volume of data is small enough to be easily stored on the mobile device. The occasional lack of currency of some of the data is not a problem because the data can be synchronized regularly.
Discussion
Two separate surveys of over 600 pilgrims in the United States and Europe [9], [10] found that the most desirable feature of an app was accuracy/currency of information. Not far behind in desirability was the ability to use the app offline (fifth in the U.S. survey and second in the European survey, out of 19 features surveyed). Although accuracy/currency of information suggests that an online app would be the most desirable, use of online apps on the Camino can be problematic because of the potential lack of an online connection. The offline capability of synchronized apps, while at the same time maintaining relatively current information, leads us to the conclusion that this type of app is the most desirable for the domain under study. It is important to note that this conclusion is only applicable to the use of apps by pilgrims walking the Camino; different conclusions may be reached in other domains.
The benefits of a synchronized app in the domain of walking the Camino de Santiago are clear from the previous analysis. With synchronized apps, data stored on the mobile devices carried by pilgrims is always available and rapidly accessible. Access to data for an online app may be slow because of communications limitations, or, more significantly, not available because of the lack of an online connection. Although online apps are best for large volumes of stored data, synchronized apps are sufficient assuming the volume of data is within the limits of the mobile device. Apps that download highly detailed maps may not meet this requirement. Finally, data for online apps is always up to date, but the currency of data for synchronized apps is sufficient for the needs of pilgrims on the Camino.
Despite the advantage of synchronized apps in most situations for pilgrims on the Camino, we found few in our survey of apps. Only 10 (14%) were synchronized, while 56 (79%) were online. Without further analysis we cannot answer why this is the case, but several explanations come to mind. One is that the volume of data is too great to be stored on the mobile device. This might be the situation with apps that display detailed maps, which may require more storage than a mobile device can provide.
Implications for Apps in Other Domains
The analysis presented in this paper, although specific to a particular domain, has implications for apps in other domains. The decision factors that we propose can be used to determine the preferred data storage approach for apps in different domains, with Table 2 serving as a guide. Although other decision factors may be relevant for apps in other domains, the four decision factors given previously can serve as a starting point for further analysis. Other decision factors may be identified by case study analysis similar to ours.
The apps survey method that we used to analyze the stored data approach across many apps in the Camino de Santiago domain can be used for similar analysis of apps in other domains. Identifying features or other characteristics of different apps in a domain and correlating this data with the data storage approach may yield insights into the desirability of different data storage approaches in other domains.
While we identify the synchronized approach as having certain benefits over online and offline approaches for apps used by pilgrims on the Camino de Santiago, the online or offline approaches may be more beneficial in other domains. The methods that we used in this paper can be used in other domains to help determine the desirability of different data storage approaches. This analysis could reach the conclusion that one data storage approach is best, or that no approach is preferred over the others, which would itself be a useful insight.
Conclusion
This paper examines data storage options for mobile apps. It identifies the mobile data characteristics of the main types of apps, with special attention to synchronized apps. It also presents factors to consider in decisions related to selecting the data storage approach used in mobile apps. With this background, the paper surveys the data storage characteristics of over 70 apps for one domain (that of walking the Camino de Santiago) and examines the case of the development of one specific app in this domain (eCamino).
The general conclusion of this paper is that different approaches to data storage for mobile apps are appropriate depending on the characteristics of the situation in which the app will be used. Offline apps, with all data stored on the mobile device, are best when the data does not need to be updated or is only updated by the user. Online apps, where the app has real-time access to the data on a server, are best when the data is updated by external entities and the currency of the data is critical. Finally, synchronized apps are useful when the mobile device must be used offline but may be periodically online for data synchronization.
More specifically, we conclude from the analysis in this paper that, in the context of the domain of walking the Camino de Santiago, synchronized apps show an overall benefit over offline and online apps. Data for synchronized apps is always available and rapidly accessible. There is adequate storage on mobile devices for the synchronized data used by most Camino-specific apps. The stored data for synchronized apps is sufficiently up to date for use by pilgrims on the Camino. Whether this or a different conclusion can be reached for apps in other domains is unknown, but it is an interesting question for further research.
This paper also demonstrates the applicability of four decision factors for selecting a data storage approach for mobile apps in the design of one particular app. These factors, which resulted from discussions in the case study investigation, can be used by app developers and app users in any domain. App developers need to decide which data storage approach is best for the apps they are developing. By applying these factors, developers can make this determination. We saw an application of these decision factors in the case study. App users can use these factors to decide which app may be best for their use. For example, if a data-intensive app is going to be used predominantly in an urban environment with good cellular service, then the user might wish to look for an online app. If the user requirements are different, then perhaps a different type of app would be indicated. Validation of this approach for selecting apps by users, possibly through a user survey of app selection, might be a fruitful area for future research.
We hypothesize that the conclusions of this research, although supported here in the context of apps used by pilgrims on the Camino de Santiago, are applicable in other contexts. Exploration of this hypothesis is an area for future research. Another area for future research would be to look at other case studies beyond the single case (eCamino) investigated for this paper. Such research may identify other decision factors besides those proposed here. Finally, exploring the impact on the user of the different data storage approaches identified here could be a fruitful area for future research.
Table 1: Data storage characteristics of mobile apps
Table 2 summarizes these factors.

Francois B. Mourato-Dussault, Selecting a Stored Data Approach for Mobile Apps
Table 2: Decision factors
Table 3: Number of features in apps
Table 4: Number of features in apps counting similar apps only once
Journal of Theoretical and Applied Electronic Commerce Research
to the author, John Brierley, who agreed to it in 2013. A formed eCamino Kft. in Hungary in 2013. Initial financing for the company was provided by the principals, friends of the principals, and Pear Williams Kft., which is another Hungarian company in the technology sector founded in 2011. The offices of eCamino Kft. are located in the offices of Pear Williams Kft. in Budapest. In 2014 a VC belonging to a Catholic order provided additional funding. Both A and B had walked the Camino and used their experience to help determine the requirements for eCamino. They also interviewed a number of other individuals who had walked the Camino for their views of the desirable features for the app. With the requirements identified, they began the development of the initial version of the app. Development took about six months. The first version, which was for Windows Phone, was released in February 2014. The iOS version was released in March 2014, and the Android version was released shortly thereafter. The first version was developed for Windows Phone because Microsoft had indicated that it would provide support for the project. In addition, Windows Phone based smartphones are common in Hungary. The core database was created using Oracle on a Microsoft Azure server in Ireland. It was necessary for the server to be in Europe because of end-user license requirements. Azure was selected over AWS and other options because Microsoft provided technical support.
Most decisions were made jointly by A and B. A fundamental decision was that the app would use synchronized data storage. All user-relevant data would be stored on the mobile device so that the app could be used offline. At the same time, the data stored on the device needed to be current. (The survey of pilgrims identified previously supports the desirability of these characteristics [9], [10].)
Table 5: Application of decision factors in development of eCamino

Although this paper has characterized this impact in several ways, research on actual use by end-users could confirm these characterizations or reach different conclusions.
"Computer Science",
"Business"
] |
A Bibliometric Analysis and Network Visualisation of Human Mobility Studies from 1990 to 2020: Emerging Trends and Future Research Directions
Studies on human mobility have a long history with increasingly strong interdisciplinary connections across social science, environmental science, information and technology, computer science, engineering, and health science. However, what is lacking in the current research is a synthesis of the studies to identify the evolutionary pathways and future research directions. To address this gap, we conduct a systematic review of human mobility-related studies published from 1990 to 2020. Drawing on the selected publications retrieved from the Web of Science, we provide a bibliometric analysis and network visualisation using CiteSpace and VOSviewer on the number of publications and year published, authors and their countries and affiliations, citations, topics, abstracts, keywords, and journals. Our findings show that human mobility-related studies have become increasingly interdisciplinary and multi-dimensional, strengthened by the use of so-called 'big data' from multiple sources, the development of computer technologies, the innovation of modelling approaches, and novel applications in various areas. Based on our synthesis of the work by top cited authors, we identify four directions for future research relating to data sources, modelling methods, applications, and technologies. We advocate for more in-depth research on human mobility using multi-source big data, improving modelling methods, and integrating advanced technologies including artificial intelligence and machine and deep learning to address real-world problems and contribute to social good.
Introduction
Human mobility is one of the typical human behaviours in daily life. With the prevalence of diverse transport modes, human populations have become highly mobile in this modern world [1]. Understanding the pattern of human mobility serves as the foundation to reveal how people respond to and interact with the urban and natural environment, and to develop valuable applications in transportation, urban planning, epidemic disease control, and natural disaster evaluation [2,3]. As such, human mobility-related (HMR) studies have gained increasing attention from scholars in multiple disciplines and have become highly diverse in terms of their theoretical foundation, modelling techniques, empirical studies, and practical applications [4]. However, a systematic literature review synthesising these studies is still lacking.
Human Mobility-Related Studies
HMR studies have a long history, dating back to the featured work on population mobility in the U.S. by Hugher and the measures of intra-urban mobility by Corbally, both in 1930 [10,11]. Information on human mobility can directly indicate the patterns of people's travel behaviour and moving trajectories, and can also indirectly reflect their travel preferences, living styles and habits, residential decisions, and psychological responses to the surrounding environment [12]. Due to the essential nature of human mobility, studies on human mobility have been multi-disciplinary and conducted in a wide range of research paradigms, including social science, environmental science, information and computer science, engineering, mathematical science, physical science, biological science, and medical and health science. As human populations become highly mobile in the modern world with the prevalence of diverse transport modes, measuring human mobility serves as an essential method to study how human beings respond to and interact with the urban and natural environments [13]. Human mobility is normally quantified in both spatial and temporal dimensions, based either on an individual's trajectory of movement or on the mobility flow between locations at the aggregated population level; the latter can be further calculated as a location-based mobility index [14]. HMR studies place different emphases in different disciplines. Those in the fields of information and computer science, engineering, mathematical science, and physical science have contributed a wide range of measuring and modelling approaches to capture human mobility patterns at both the individual and aggregated population levels [5].
In contrast, HMR studies in the fields of social, environmental, biological, geographic, medical, and health sciences focus on exploring the relationship between human mobility and the urban and natural environment, providing policy implications for urban planning, infrastructure configuration, disaster prevention, and transport forecasting [6]. Understanding the inter- and intra-disciplinary connections in HMR studies is important for exploring future research paradigms, which, however, has rarely been addressed in the current scholarship; this is the knowledge gap this study aims to fill.
With the development of information and communication technologies in recent decades, human mobility patterns have significantly changed as the use of the Internet and mobile devices have made interpersonal communications much easier than before [7] and the increase of transport networks and modes (e.g., low-cost air flights, high-speed trains, and sharing vehicles) makes travel more convenient [8]. Such technologies, in turn, largely enrich human mobility-related studies by widely-applied location-aware devices (e.g., smartphones and GPS receivers), the emerging sources of big data (e.g., from social media and transport smart-card), and multi-scale modelling techniques (e.g., from a population to individual level). As such, a systematic review of the HMR literature is needed to reveal the emerging trends of HMR studies and to propose future research initiatives and directions to extend the current research scope.
Bibliometric Analyses and Network Visualisation of the Literature
Methodologically, a bibliometric analysis based on citations is widely used in systematic literature reviews to assess the importance and impact of publications and their connections with other disciplines [15]. Scholarly citations are generally treated as an indication of the knowledge flow from the cited entity to the citing one, in terms of how, why, and at what rate new ideas and technologies spread through a certain research domain [16]. Specifically, the cited and citing entities are usually considered as the source and target of diffusion. As such, a variety of quantitative diffusion indicators have been used to describe the diffusion characteristics of knowledge production and to evaluate the impact of research, including citation counts, journal impact factors, field diffusion intensity, and the countries and institutions of researchers [17].
A wide range of software tools has been developed to help researchers visualise the diffusion of knowledge and reveal the network pattern of citations, including CiteSpace, developed by Chaomei Chen [18], and VOSviewer, developed by Nees Jan van Eck and Ludo Waltman [19]. Both CiteSpace and VOSviewer are open-source software tools for analysing, detecting, and visualising trends and patterns in the scientific literature. CiteSpace can analyse both English and non-English literature from multiple databases of scholarly publications, while VOSviewer is only available for English literature but offers additional functions that CiteSpace does not have (e.g., density mapping). Therefore, our study uses both tools to conduct the bibliometric analysis.
Review Materials
The data used in this study are collected from the Web of Science (WOS) Core Collection, including the Science Citation Index Expanded (SCI-Expanded) and the Social Science Citation Index (SSCI). The timespan for our search is set from 1990 to 2020, given that the number of publications per annum before 1990 is relatively small. We set up the selection criteria by referencing the definition of human mobility by Hannam et al. [3]. This definition encompasses both the large-scale movements of people across the world and the more local processes and patterns of daily transportation, travel behaviours, and movements through urban and regional spaces within everyday life [3]. Accordingly, the search terms we use are defined as human mobility, mobility pattern, human trajectory, human migration, human immigration, population migration, population immigration, population mobility, rural mobility, urban mobility, migration flow, immigration flow, mobility network, migration network, and immigration network. We use double quotation marks to enclose these terms in the search to ensure the search results contain those terms either in the title, abstract, or keywords; this also excludes potential search biases from phrases containing a term in a different sequence, such as 'human reaction to high mobility group proteins . . . ', which is beyond the scope of this literature review. Publication types are set to include all types (Table 1). With these search criteria, we obtain 5728 publications in all forms, which are then exported with full citation records and cited references in a plain text format for further analysis.
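As a rough illustration of this retrieval step, the quoted search phrases can be assembled into a single topic query string. This is a sketch only: `TS=` is the Web of Science field tag for a topic (title/abstract/keyword) search, but the exact query used by the authors is not given in the paper.

```python
# Sketch: build a WOS-style topic query from the paper's search terms,
# quoting each phrase so that word order is preserved in matching.
TERMS = [
    "human mobility", "mobility pattern", "human trajectory",
    "human migration", "human immigration", "population migration",
    "population immigration", "population mobility", "rural mobility",
    "urban mobility", "migration flow", "immigration flow",
    "mobility network", "migration network", "immigration network",
]

def build_topic_query(terms):
    """Join quoted phrases with OR into one topic-search expression."""
    return "TS=(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = build_topic_query(TERMS)
```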
Methods
We commence with a descriptive summary of the bibliographic records in terms of publication year, publication type, disciplinary categories, institutions (author affiliations), and the most productive authors. Then, we use CiteSpace and VOSviewer to visualise the emerging trends of the HMR literature, popular domains, and research frontiers. CiteSpace, a Java visualisation application, relies on three central concepts: burst detection, betweenness degree (or centrality), and heterogeneity. The technical details and measures of these three concepts can be found in the user guide by Chen [18]. In general, burst detection is used to identify the nature of popular domains and to detect sharp increases and changes of research interest in a speciality; betweenness degree reflects the popularity or importance of nodes in a network; heterogeneity is used to identify the tipping points of research fields and detect emerging trends and abrupt changes [18]. The analytical procedure in CiteSpace consists of time slicing, burst and node defining, modelling, thresholding, and mapping. In our study, we define the time slicing as a one-year interval, meaning all statistics are calculated for each year. Burst terms are defined as the keywords, abstracts, titles, author affiliations, countries, and disciplinary categories of the bibliographic records, respectively. Nodes are defined to represent the betweenness degree (or centrality) of these burst terms. The visualisation is conducted with the presentation threshold set to a minimum, showing only the top 10% of burst terms. The mapping layout is set to a network mode for authors, institutions (author affiliations), countries, and abstracts, and to a timeline mode for keywords to better illustrate their change over time. Furthermore, VOSviewer possesses similar functions to CiteSpace but has one additional function of density mapping based on the frequency of a certain attribute, such as keywords or authors [19].
We use both CiteSpace and VOSviewer to present the outcomes of the bibliometric analyses, considering their complementary advantages.
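The betweenness degree used by CiteSpace can be illustrated with a minimal, dependency-free sketch of Brandes' algorithm for unweighted, undirected networks. The toy graph is hypothetical; a real analysis would rely on CiteSpace itself or a library such as networkx.

```python
from collections import deque

def betweenness(adj):
    """Betweenness centrality for an unweighted, undirected graph
    given as an adjacency dict (Brandes' algorithm)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # accumulate pair dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each unordered pair was counted from both endpoints
    return {v: b / 2 for v, b in bc.items()}

# Toy example: on the path a - b - c, only b lies between another pair.
bc = betweenness({"a": ["b"], "b": ["a", "c"], "c": ["b"]})
```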
The analytic workflow consists of four sets of bibliometric analyses, covering the features of HMR publications, co-citation references, co-authorship, and co-occurring keywords (Figure 1). First, a descriptive analysis of the selected 5728 HMR publications is conducted based on the year, type, author, disciplinary category, and institution of publications. Second, co-citation references refer to two papers both being cited by a third paper. Accordingly, two articles are defined as having a co-citation relationship if they are cited together by one or more articles. A co-citation reference analysis serves as an important indicator to detect the structure and characteristics of a specific domain. In this review, all publications are grouped into several clusters (under the top 10% visibility setting) based on disciplinary category, with the most popular abstract and title terms shown on the map simultaneously. In addition, the top journals of co-citations and the top 10 most cited publications are analysed. Third, a co-authorship analysis is conducted based on the names of co-authors appearing in co-citation references, reflecting the extent of influence of the authors and the strength of research collaborations across countries and institutions. The nodes in the network are set to betweenness centrality to represent the importance of countries or author affiliations in the co-authorship network. Fourth, a co-occurring keyword analysis indicates the emerging trends, popular domains, and research frontiers based on all selected publications or on a particular disciplinary category of the publications. Herein, we utilise a keyword density map to examine the evolving trend of HMR studies, displaying the results on a timeline at ten-year intervals from 1990 to 2019 (i.e., 1990-1999, 2000-2009, 2010-2019) and treating 2020 as a special year given the large influx of COVID-19 related papers, rather than basing the intervals on the quantity of publications.
The classification based on ten-year intervals may distort the retrieval of main topics over time and mislead the horizontal comparison within each category; however, the main topics in each interval remain vertically comparable across disciplinary categories. Finally, we propose future research directions in four aspects: involving multi-source mobility data, improving modelling approaches, integrating advanced techniques, and contributing to social good.
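The co-citation step of this workflow reduces to counting, over the reference list of each citing paper, how often two references appear together. A minimal sketch (with placeholder paper IDs, not records from the actual corpus):

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(reference_lists):
    """Count how often each pair of references is cited together.

    reference_lists: one list of cited-paper IDs per citing paper.
    Two papers gain one co-citation for every citing paper listing both.
    """
    pairs = Counter()
    for refs in reference_lists:
        # sort so each unordered pair has a single canonical key
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical citing records; the IDs are illustrative placeholders.
corpus = [
    ["Gonzalez2008", "Song2010", "Brockmann2006"],
    ["Gonzalez2008", "Song2010"],
    ["Song2010", "Simini2012"],
]
counts = cocitation_counts(corpus)
```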
Results
In this section, we first provide a summary of publications based on the number of publications, publication types, authors, disciplinary categories, and author affiliations. We then present the analytical results using CiteSpace and VOSviewer based on co-citation references, co-authorship, and co-occurring keywords to depict the evolutionary and emerging trends in the field of human mobility research. Table 2 shows the number and proportion of the various publication types. The most frequent document type is the article, accounting for 87.67% of total publications, followed by review papers and editorials (6.6%) and conference proceedings papers (2.89%). Table 3 shows that the most productive author in HMR studies is Liu Y, whose research focuses on urban studies, travel patterns, and transportation. The other top nine authors have diverse research focuses, mainly in the fields of epidemics, disease transmission, urban mobility, and mobility modelling using mobile phone and big data. HMR publications are classified into the 99 default disciplinary categories of WOS. Following academic discipline classifications [20], we further aggregate the 99 categories into 13 broad categories (Table 4). The largest proportion of HMR publications (25.08%) is in the social sciences, including demography, sociology, urban studies, regional urban planning, ethnic studies, business, management, public administration, and international relations. The second broad category (12.94%) is environmental sciences, followed by medical and health sciences (12.28%), information and computing sciences (10.75%), technology (9.90%), and multi-disciplinary (8.47%). Note: *: the sum of publications across broad categories exceeds 5728 because some publications fall into more than one broad category; the percentage in brackets indicates the proportion of publications in a certain category over the total.
In summary, the number of HMR publications has increased rapidly since 2008, mainly as academic articles in the fields of social sciences, environmental sciences, and medical and health sciences. The research areas of the three most productive authors are urban mobility studies, travel patterns, and transportation. The most productive institutions are mainly based in the U.S., UK, and China. Figure 4 indicates the bursts of co-citation references in the top six disciplinary categories in different colours, with node labels showing the most popular abstract term (purple) and title term (blue) in each category. We set a uniform node size for better visualisation and label the rank of the top six categories in front of their names. The largest burst of co-citation references (labelled as 1) falls in multi-disciplinary sciences, with the most popular abstract term being mobile phone and the most popular title term being mobile phone data. The second-largest burst concerns green and sustainable science and technology, followed by transportation, demography, applied mathematical sciences, and urban studies. In addition, multi-disciplinary sciences and urban studies are intertwined in the same burst (red colour), indicating the close connection between these two research categories, while the other four categories are relatively independent, being detected in separate bursts (shown in different colours). Most of the highly cited journals are multi-disciplinary, and almost half of them are related to transport, physical, and computer sciences. Table 5 shows the top 10 most cited HMR publications. The top two cited papers are multi-disciplinary works in computer science and technology, published in Nature in 2008 by Gonzalez et al. and in Science in 2010 by Song CM et al. [2,21]. Both papers model large-scale human mobility using big data to reveal that humans follow simple and reproducible patterns despite the diversity of their travel history.
The finding of inherent similarity in travel patterns has extended impacts on all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning, and agent-based modelling. Among these 10 publications, five are multi-disciplinary, spanning computer science, physics, engineering, and telecommunications, and provide methods to compute and model human mobility [2,[22][23][24][25][26]. Two publications are in medical and health science, quantifying the impact of human mobility on disease transmission [27,28]. Two publications are in social science, related to urban studies and geography [29,30]. One publication is in transportation science and technology, using mobile phone big data [31]. In sum, the most cited publications are highly multi-disciplinary, computation-based, and method-oriented, with diverse applications in social, medical, urban, and geographic studies.
Co-Authorship Analysis
The co-authorship analysis by country and institution reflects where co-authors are highly concentrated and the degree of connection among countries or institutions in the field of human mobility. The different colours in Figure 6 indicate the diversification of research directions. Figure 6A shows that the U.S. has the largest research burst, with 1017 publications and 65 as the degree of co-authorship connections to other countries, followed by England, Spain, Germany, France, Italy, and Canada. This indicates the diversity of international collaborations across geographic contexts. Figure 6B shows the top institutions by co-authorship. The institution with the largest number of cited publications is the Chinese Academy of Sciences (centrality of 47), followed by the University of Oxford, the University of Melbourne, Wuhan University, Harvard University, the University of Washington, Nanjing University, Peking University, University College London, Stanford University, and the Massachusetts Institute of Technology. Among these institutions, four are from China, four from the U.S., two from the U.K., and one from Australia.
Co-Occurring Keywords Analysis
The keyword density map (Figure 7) produced by VOSviewer uses colours and font sizes to represent the frequency of keywords in HMR publications; a larger font size and lighter colour (yellow) indicate a higher keyword frequency. Two dense clusters can be observed in Figure 7. The cluster on the left has frequently used keywords including population, migration, and human migration, closely surrounded by keywords including disease, transmission, species, infection, sequence, and climate change. As such, the left cluster is relevant to research in the domains of urban and social studies, epidemic and infectious diseases, public health, climate, and environmental sciences. In the cluster on the right side, the most frequently used keyword is network, followed by service, urban mobility, algorithm, accuracy, infrastructure, experiment, efficiency, and vehicle. Such keywords tend to be relevant to research in the domains of telecommunications, physics, technology, and transportation. We further examine the emerging trends of HMR publications by category with the timeline view of co-occurring keywords (Figure 8); the change of co-occurring keywords is indicated in Table 6 (bold showing keywords that appear in more than one decade). The most popular category is transportation, followed by genetics and heredity, infectious diseases, demography, telecommunications, physics and multi-disciplinary, archaeology, business, and microbiology. Along the horizontal axis of each category, arch lines reflect the connection of two co-citations with the same co-occurring keywords, and the colours of the arch lines indicate the year of the citations. The vertical grey lines separate the whole timeline into four intervals (1990-1999, 2000-2009, 2010-2019, and 2020).
In the top category of transportation, the keywords are cost, city, and transport in 1990-1999, change to policy inequality, accessibility, and travel in 2000-2009, change further to built environment, CO2 emissions, and public transport in 2010-2019, and evolve to automated vehicle and vehicle usage in 2020. In the category of genetics and heredity, the keywords evolve from population sequence and distance in 1990-1999 to mixture-ethnic group and expansion in 2010-2019. In infectious diseases, keywords evolve from mortality and risk transmission in the 1990s to tuberculosis trends in developing countries, influenza, weather, and COVID-19 in 2020. The keywords in demography are relatively consistent, with climate change, weather, resettlement disasters, displacement, vulnerability, and adaptation frequently appearing from 1990 to 2019, and with artefact synchrony mitigation as the research frontier in 2020. In telecommunications, keywords rarely appear before 2000, become relatively more frequent with global mobility network, mobility model, travel time, and internet from 2001 to 2019, and change to wireless communication and smart mobility in 2020. In physics and multi-disciplinary fields, keywords change from network pattern, urban mobility, travel pattern, and global positioning system to, more recently, smart card data and deep learning. Keywords in archaeology evolve from age and residential mobility to carbon, bone collagen, and stable isotope more recently. Keywords in business change from deforestation, exposure intervention, decision making, and living quality before 2010 to, more recently, air pollution and population exposure. Finally, keywords in microbiology change from gene culture, protein, and negative bacteria before 2010 to urbanisation, proliferation, and apoptosis in 2010-2019, and become very few in 2020.
In summary, HMR research has become increasingly popular across various disciplines, with enhanced collaborations across countries and institutions. Most of the top cited publications are multi-disciplinary studies which provide frameworks, approaches, and applications to model human mobility and integrate big data into the modelling process. The evolutionary pathways of HMR studies show that emerging sources of big data, provided by mobile devices, social media applications, and transport systems, have been used to better track people's movement patterns. The development of computer science, information technology, and the Internet of Things (IoT) has innovated modelling techniques to better analyse and present the spatiotemporal patterns of human mobility and to improve the simulation and prediction of human mobility. Cutting-edge technologies such as deep learning, machine learning, and artificial intelligence have been used to build novel applications of human mobility in multi-disciplinary areas.
Future Research Directions
Based on our bibliometric analyses and future work recommended by top cited studies in each disciplinary category, we identify four directions for future research, which are: involving multi-source mobility data, improving modelling approaches, integrating advanced techniques, and contributing to social good through real-world applications.
The Involvement of Multi-Source Mobility Data
The empirical data used in HMR studies are multi-dimensional and multi-source, based on the spatial and temporal information of individuals' travel trajectories as well as the mobility flow at the aggregated level. First, transport systems are a common source of mobility data, for example, the timetables of buses, trains, and flights [32]. However, these data only represent scheduled mobility flows rather than the actual mobility occurring across places. More recently, smart-card-based big data provided by smart transport systems have become an emerging data source to model human mobility based on the location and time of boarding and alighting of travellers [33]. Second, another emerging mainstream data source is individual-based tracking data provided by mobile phones or other telecommunication devices with a global positioning system (GPS) function, recording the spatial and temporal information of mobile phone users [34][35][36]. The rapid development of smartphones in the past decade has extended the GPS-derived data source to large IT companies (e.g., Google and Apple), which can collect the location and time of mobile phone users via the provision of mapping services. Individual-based tracking data have become a top-rated source in HMR studies because of their high accuracy, wide usage, and real-time collection. These data can be further aggregated at different spatial and temporal scales, generating a variety of index-based mobility data. For example, the Google Community Mobility Reports [25] measure the impact of the COVID-19 pandemic on human mobility by comparing human mobility during the pandemic with the pre-pandemic period; they measure human mobility by tracking people's visits to six types of places extracted from Google Maps (e.g., workplaces, parks, transit stations, and grocery stores).
Similarly, other IT companies, including Baidu, Apple, Foursquare, and Safegraph, also publish a variety of mobility indices, social distancing indices, and mobility reports, which have been widely used in COVID-19 related research. Third, a more recent data source is the data retrieved from the increasingly popular social media applications (e.g., Facebook and Twitter). Such social media data contain spatial and temporal information obtained from users when they choose to post contents with locations, generating diverse index-based mobility data at the aggregated levels, such as the mobility-based responsive index [7,37]. It would be useful in future research to compare the quality of these multi-source mobility data and explore the possible replacement of data to improve data availability to researchers and the general public.
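Index-based mobility measures of this kind are essentially percent changes against a pre-period baseline. A minimal sketch with hypothetical visit counts follows; the real reports use category- and region-specific median baselines rather than a single number.

```python
def mobility_change(visits, baseline):
    """Percent change in visits relative to a baseline period."""
    return round(100.0 * (visits - baseline) / baseline, 1)

# Hypothetical daily visit counts for one place category.
baseline_visits = 1200                            # pre-pandemic weekday median
change = mobility_change(900, baseline_visits)    # 900 visits on a lockdown day
```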
The enrichment of these multi-source mobility data inevitably brings limitations in terms of data quality, coverage, and processing, as well as concerns over privacy and data security [38]. Individual-based mobility tracking data retrieved from mobile phone signals, GPS devices, or social media apps can be biased due to the limited group of phone users (e.g., young and middle-aged adults) or application users who post content with locational information [7]. Such a bias in data representativeness further introduces inaccuracy when individual-based mobility tracking data are aggregated to index-based or flow-based mobility data at a certain spatial or temporal scale. It is therefore critical to construct universal standards to ensure data quality, unify data processing procedures, and facilitate data sharing. These standards should also govern the deposit, storage, processing, and distribution of mobility data to guarantee data security and protect user privacy. This can only be achieved by the collective effort of multiple parties, including government, the private companies who collect and dispatch the data, and data users. While government can establish relevant regulations on data collection and usage, there is a general consensus that data providers should make mobility data anonymous and unidentifiable to ensure fully transparent and accountable privacy-preserving solutions. The storage of mobility data must be secure and must not allow any unauthenticated or unauthorized access. Data security in transit is also critical, as it ensures that data are protected while being transferred between networks during upload, download, transfer, and backup. On the other hand, data users such as researchers also need to ensure that any identifiers are removed from the datasets before depositing them in public repositories.
Due to the complexity of the technologies involved in data protection and the need for long-term preservation, researchers are often recommended to host sensitive data on their institutional server or network rather than on personal devices, and to deposit the processed data in a trusted data repository after use [39].
The Improvement of the Modelling of Individual and Collective Mobility Patterns
The use of multi-source mobility data will enrich the modelling work in future HMR studies. Integrating multi-source mobility data with other research objects across disciplines enables researchers to analyse the spatiotemporal metrics of mobility, the association of mobility with other phenomena, and the simulation and prediction of mobility [40]. A great challenge of modelling mobility patterns lies in improving the three types of fundamental metrics commonly used in the current scholarship [5]: trajectory-based, network-based, and social-based metrics. Each type of metric has its own specialities and characteristics that can be improved in future studies. Trajectory-based metrics are based on the trajectory of individuals' mobility and are usually quantified as jump lengths, mean square displacement, time duration, speed, interval, the radius of gyration, entropy indices, or the most frequented locations [5,41]. Network-based metrics use graphic visualisation to characterise human mobility: nodes can represent a group of locations visited by people, and edges can represent related pairs of locations in the historical trajectories. Network-based metrics are usually quantified as degree centrality, betweenness centrality, closeness centrality, or eigenvector centrality [41], as well as motifs and origin-destination matrices [24]. Social-based metrics are based on the co-occurrence of two or more people and are usually quantified as the frequency of co-occurrences, the closeness of important locations, the probability of co-occurrences, and/or the similarity of historical trajectories [41]. Future work can contribute by comparing different metrics and exploring the potential replacement of measures to address data unavailability through the comprehensive use of multiple mobility metrics.
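Two of the trajectory-based metrics above, the radius of gyration and jump lengths, can be computed directly from a sequence of visited points. The sketch below assumes planar coordinates, so it ignores the geodesic distances that raw latitude/longitude data would require; the trajectory is hypothetical.

```python
import math

def radius_of_gyration(points):
    """Root-mean-square distance of visited points from their centre of mass."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)

def jump_lengths(points):
    """Lengths of consecutive displacements along the trajectory."""
    return [math.dist(p, q) for p, q in zip(points, points[1:])]

# Hypothetical trajectory visiting the corners of a 2 x 2 square:
# every corner is sqrt(2) from the centre (1, 1).
trajectory = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
rg = radius_of_gyration(trajectory)
```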
Another challenge in modelling mobility patterns lies in accounting for multimodality (or multiple-scale models) at both the individual level and the aggregate population level. On the one hand, individual mobility is subject to a certain level of uncertainty associated with individuals' free will and the arbitrariness of their actions, leading to a degree of stochasticity in mobility patterns. Consequently, individual-level models borrow concepts and methods from classic Brownian-motion models [42] and continuous-time random-walk models [43] and have evolved into more recent Lévy flight, preferential return, recency, and social-based models [5]. However, many studies assert that individuals' trajectories are not random but possess a high degree of regularity and predictability [21]. Future efforts need to better capture and exploit this regularity and predictability to forecast an individual's future whereabouts and to construct more realistic models of individual mobility.
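The contrast between the classic Brownian-motion picture and heavy-tailed Lévy-flight-like mobility can be sketched with a toy simulation; the step-length distributions and parameters below are illustrative choices, not taken from the cited models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Brownian-like motion: light-tailed step lengths (Rayleigh distribution).
brownian_steps = rng.rayleigh(1.0, n)
# Levy-flight-like motion: heavy-tailed step lengths (Pareto, exponent 1.5).
levy_steps = rng.pareto(1.5, n) + 1.0

def trajectory(step_lengths):
    """Turn step lengths plus uniform random headings into a 2-D path."""
    angles = rng.uniform(0.0, 2.0 * np.pi, size=len(step_lengths))
    steps = np.column_stack([step_lengths * np.cos(angles),
                             step_lengths * np.sin(angles)])
    return np.cumsum(steps, axis=0)

b_path, l_path = trajectory(brownian_steps), trajectory(levy_steps)

# The heavy tail produces rare, very long jumps that dominate the walk,
# mimicking occasional long-distance trips between frequented locations.
print(brownian_steps.max(), levy_steps.max())
```

Preferential-return and recency models would add a memory mechanism on top of such a walk, biasing the next location toward previously visited places.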
On the other hand, population-level models (e.g., gravity models and radiation models) describe the aggregated mobility of many individuals and aim to reproduce origin-destination matrices by estimating the average number of travellers between any two places during a certain period [22,44]. Most modelling approaches derive the mobility flows as a function of relevant variables of the places considered, such as their mutual distances, areal characteristics, and the demographic and socioeconomic characteristics of their populations [5]. However, as discussed previously, mobility occurs over multiple spatiotemporal scales (termed multimodal), and thus future studies need to build a more comprehensive picture of human mobility by accounting for the effects of multimodality and creating hybrid models that interpolate between the individual and population levels [45]. For example, a hybrid framework for human mobility analysis based on the multimodal structure of transport systems has been developed in recent years in the context of multilayer networks in British and French cities [46][47][48][49]. Within such networks, layers may correspond to different transportation modes (e.g., flights, buses, and trains), and connections between layers constitute the interchanges between these modes. Constructing a multilayer network involves associating locations with nodes, and flows and travel frequencies with links, across the different transportation modes. In this case, the optimal time for a person travelling between a given origin-destination pair can be calculated using optimal-path algorithms across the multilayer structure [5]. Similarly, future studies can formulate such hybrid modelling frameworks in other applications, such as controlling mobility to prevent the transmission of multiple diseases, with each layer tracking the population's contacts relevant to one disease.
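A minimal sketch of the basic gravity model mentioned above, assuming the simple form T_ij ∝ m_i m_j / d_ij^β; the place populations, coordinates, and exponent are hypothetical, chosen only to illustrate the shape of the resulting origin-destination matrix.

```python
import numpy as np

def gravity_flows(populations, coords, beta=2.0, scale=1.0):
    """Estimate origin-destination flows T_ij = scale * m_i * m_j / d_ij**beta,
    the basic form of the gravity model of mobility."""
    m = np.asarray(populations, dtype=float)
    xy = np.asarray(coords, dtype=float)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        t = scale * np.outer(m, m) / d ** beta
    np.fill_diagonal(t, 0.0)  # no self-flows
    return t

# Three hypothetical places: two large nearby cities and a small remote town.
pops = [1_000_000, 800_000, 50_000]
coords = [(0, 0), (10, 0), (100, 0)]
T = gravity_flows(pops, coords)
# The two large, nearby cities exchange the most travellers.
print(T)
```

Radiation models replace the distance-decay term with the population living within the travel radius, but operate on the same origin-destination matrix representation.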
The Integration of Artificial Intelligence Techniques in Human Mobility Studies
The rapid development and recent advances of artificial intelligence techniques, including high-performance computing, storage, and data modelling using machine learning and deep learning methods, bring new opportunities for human mobility studies [6]. The intersection of mobility data management and artificial intelligence is becoming a promising direction for building new databases (e.g., the smart moving objects database developed by Xu et al. [50]), with the advantage of automatically recommending system settings. Such a smart moving objects database can establish more complex data structures and provide intelligent data extraction. In this case, mining and analysing trajectory data are not limited to spatiotemporal data but also incorporate sentiment and descriptive attributes to find the relationship between human mobility and subjective matters (e.g., personality and emotion) [50]. In addition, the combination of unprecedented mobility data and machine learning approaches has brought immense advances in intelligent transportation systems. In particular, traffic forecasting, as a core function, has been developed to predict future traffic conditions based on historical data, including traffic flow and control, route planning, parking services, and vehicle dispatching [8]. Such intelligent transportation systems can be used to estimate not only regular mobility behaviours but also special events such as public gatherings.
Another future direction is the integration of deep learning into mobility modelling. Deep learning models, built on artificial neural networks and supported by intelligent systems, can facilitate a deeper understanding of human mobility [51] and provide solutions and predictions for complex nonlinear relationships between mobility and other objects. Several well-known deep learning models, including deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), and deep belief networks (DBN), have been used to explore the relationship between human mobility and personality and to understand human mobility and transportation patterns (e.g., [8,51,52]). Applications combining mobility and deep learning have been developed in disaster prevention, transport planning, and scenario prediction. For example, DeepMob, an intelligent transport system with a deep learning architecture, simulates and predicts people's future evacuation behaviours or evacuation routes under different natural-disaster conditions [53]. DeepTransport, another intelligent system, was used to understand human mobility and transportation patterns using GPS-based big data [8]; it can automatically simulate or predict people's future movements and their transportation modes in a large-scale transportation network. DeepSpace, an online learning system based on CNN, can process continuous mobile data streams and provide multi-scale prediction models that analyse mobile big data to predict human trajectories [52]. Future studies can extend this direction by creating intelligent databases, platforms, models, and systems for the diverse fields of disease control and prevention, smart city planning, environmental management, and ecological conservation, where human mobility intertwines with the surrounding environment.
The Contribution of Human Mobility Studies to Social Good
Through involving multi-source mobility data and developing advanced mobility models, future HMR studies can contribute to several aspects of social good, including but not limited to promoting population health, designing sustainable smart cities, and providing humanitarian support in conflicts, wars, and natural disasters.
The first aspect of social good is to improve population health. The development of human genomic technology strengthens our understanding of the relationship between human mobility and the development of genetic diseases [54]. In the context of worldwide migration, moving to a new environment causes genetic adaptation, which could further affect disease susceptibility. For example, the genetic risk of Type 2 diabetes decreased worldwide as people migrated from Africa to East Asia [55]. Additionally, human mobility is an important factor in the geographic spread of contagious diseases. Understanding human mobility helps to control the distribution of contagious diseases, including malaria [27], HIV [56], dengue fever [57], and the COVID-19 pandemic [58,59]. The effectiveness of disease control can be estimated by measuring the association between disease distribution and human mobility histories before and after disease control. This application has been widely used to evaluate policies and interventions that restrict population movement during the COVID-19 pandemic [58,59]. For instance, since the outbreak of COVID-19 in early 2020, numerous studies have explored the association between human mobility and COVID-19 transmission and used mobility data to develop models for the simulation and prediction of virus spread [60]. Current studies indicate that the timing, effectiveness, and stringency of policy implementation are crucial for the success of COVID-19 control efforts in different countries [61]. The early implementation of social and mobility restrictions is especially effective in lowering the peak value of new infections and reducing the infection scale [62]. These studies have contributed in various ways to COVID-19 control and intervention globally (e.g., in China, the US, and European countries) [63][64][65].
The second aspect of social good is to help build smarter and safer cities. Given the current immobility caused by COVID-19 in 2020, studies on the resilience, virtuality, and sustainability of smart cities are among the most promising domains in urban science research. The construction of smart cities can integrate the simulation and prediction of human mobility to improve transport planning, service and infrastructure planning, human settlement, public security, and citizens' quality of life [66][67][68]. In transport planning, mobility information can be used to monitor and predict traffic dynamics, including traffic flows, congestion, and accidents, by better understanding the supply and demand of the public transport system to improve its efficiency [69][70][71][72]. In service and infrastructure planning, human mobility to particular locations (e.g., shopping centres) indicates the demand for those locations and services [73]. Measuring the mobility to certain infrastructures can guide urban planning toward a more equal distribution of public facilities and services in cities [69]. In human settlement and migration, mobility flows between suburbs can be used to locate residential concentrations where real-estate development is most needed [74]. In public safety, tracking people's routine daily routes, which are usually repetitive trips between home and workplace on weekdays, can be used to detect individuals' abnormal behaviours and atypical activities for crime prevention [69].
The third aspect of social good is to provide humanitarian support for people coping with natural disasters and social conflicts. Simulating the historical settlement pathways of people forcibly displaced by the effects of climate change, conflicts, wars, and other catastrophic events can help to predict future human displacement and, accordingly, to promote humanitarian responses that protect their resettlement [75][76][77]. Such modelling work, which needs to involve measures of human mobility, is particularly important for low-lying coastal regions, where many large cities sit in the face of natural disasters caused by sea-level rise, storm surges, tsunamis, and flooding [78]. Measures of human mobility can be controlled as parameters in predictive models, reflecting different scenarios of human adaptation and response to natural disasters and social conflicts [79]. Such mobility-oriented applications are promising in future studies given the complexity of real-world issues and the uncertainty of human-environment interactions.
Conclusions
We present a systematic literature review of HMR studies published from 1990 to 2020 using bibliometric analysis and network visualisation. In doing so, we analyse 5728 HMR-related publications retrieved from WOS to identify the emerging trends, popular domains, and research frontiers in the field of human mobility. Over the past three decades, HMR studies have become increasingly interdisciplinary, multi-dimensional, and leading-edge, involving multi-source big data, innovative modelling approaches, cutting-edge technologies, and novel applications across disciplines. Based on the evolutionary pathway revealed in this study, we recommend future research directions concerning data sources, modelling methods, applications, and technologies. Considering the importance of human mobility in people's daily lives, we advocate for future research that addresses real-world problems and contributes to social good using multi-source big data and advanced modelling methods, including artificial intelligence and machine and deep learning techniques. We call for multi-disciplinary contributions to enhance HMR studies and to explore human-environment relationships and interactions.
"Environmental Science",
"Engineering",
"Computer Science"
] |
Fine-Grained Activity of Daily Living (ADL) Recognition Through Heterogeneous Sensing Systems With Complementary Spatiotemporal Characteristics
Non-intrusive monitoring of fine-grained activities of daily living (ADL) enables various smart healthcare applications. For example, ADL pattern analysis for older adults at risk can be used to assess their loss of safety or independence. Prior work in the area of ADL recognition has focused on coarse-grained ADL recognition at the context level (e.g., cooking, cleaning, sleeping) and/or activity duration segmentation (hourly or minutely). It also typically relies on a high-density deployment of a variety of sensors. In this work, we target finer-grained ADL recognition at the action level to provide more detailed ADL information, which is crucial for assessing patients' activity patterns and potential changes in behavior. To achieve this fine-grained ADL monitoring, we present a heterogeneous multi-modal cyber-physical system in which we use (1) distributed vibration sensors to capture action-induced structural vibrations and their spatial characteristics for information aggregation, and (2) a single-point electrical sensor to capture appliance usage with high temporal resolution. To evaluate our system, we conducted real-world experiments with multiple human subjects to demonstrate the complementary information from these two sensing modalities. Our system achieved an average 90% accuracy in recognizing activities, which is up to 2.6× higher than baseline systems that consider each state-of-the-art sensing modality separately.
INTRODUCTION
The Internet of Things (IoT) and its rapid development enable various smart home applications that have the potential to support independent living for older adults (Azimi et al., 2017; Kokku, 2017). Engagement in activities of daily living (ADL) is an important metric for these smart home applications to monitor, as it is associated with the risk of disability and all-cause mortality for older adults (Wu et al., 2016). One way that variation in an ADL can be detected is by the length of time taken or steps missed within the ADL. For example, an older adult with cognitive impairments may insidiously decline in ADL engagement (e.g., take longer to perform an ADL or miss steps within it) as their impairments progress (Rodakowski et al., 2014). Non-intrusive, fine-grained, in-home ADL monitoring provides a critical platform to detect variation in ADL and ensure safety and independence in the home.
Prior work in ADL monitoring mainly focuses on duration segmentation and type recognition to describe ADL patterns. ADL duration segmentation relies on a dense deployment of sensors that capture a sequence of human interactions with ambient objects (e.g., drawers, doors) to determine the duration of an activity (Kodeswaran et al., 2016a,b). Activity type recognition methods leverage learning algorithms to improve the accuracy and robustness of classifying given sensing signals (Castanedo, 2013; De-La-Hoz-Franco et al., 2018). Combined, these efforts focus on context-level information with a time resolution of minutes or hours, which is coarse-grained. Achieving fine-grained ADL recognition, which we define as sub-second-level and event/action-level, non-intrusively and with sparse sensor deployment is challenging because each ADL consists of several events or actions. Nonetheless, fine-grained ADL monitoring provides detailed ADL action information, which enables a nuanced understanding of ADL patterns and, most importantly, provides knowledge of when changes in ADL patterns occur. A potential change in ADL patterns may be an indication of changes in disease status or safety for living independently. Prior attempts at fine-grained ADL monitoring combine electrical sensors and passive RFID sensors, where the on-wrist RFID provides locations and an electrical sensor provides appliance usage information. These methods (e.g., Fortin-Simard et al., 2014) require high-density sensor deployment and require people to carry devices or tags during their activities. Older adults, especially those with cognitive impairments, may find such devices difficult to remember or uncomfortable to wear.
Two lines of research suggest that structural vibration and electrical load monitoring provide distinct and unique information about ADL patterns. On the one hand, researchers have noted that when people interact with their ambient environments, their actions induce the structures to vibrate, and they have used this vibration to infer various types of information (Pan et al., 2017b; Mirshekari et al., 2018), including the action (or motion) of the person (Fagert et al., 2017; Bonde et al., 2020). The distributed vibration sensors also provide spatial information about indoor activities. Additionally, non-intrusive load monitoring methods have been shown to detect appliance usage duration (Berges et al., 2008) from aggregate measurements at the main electrical meter. The load monitoring sensor provides appliance usage detection and recognition with high temporal resolution. Thus, we combine these two complementary, non-intrusive, and passive sensing modalities to cover two important aspects of ADL patterns (occupant action and appliance usage) with fine granularity.
Our system conducts event detection and event-based ADL recognition on these two sensing modalities. The system integrates these estimates over high-resolution time windows using an ensemble algorithm (Pan et al., 2019). The contributions of this work are as follows:
• We introduce a fine-grained (sub-second-level, event/action-level) ADL detection and recognition system using structural vibration and electrical sensing.
• We present an event-based ADL detection and recognition framework and an ensemble algorithm to fuse ADL predictions from structural vibration and electrical sensing.
• We conduct real-world experiments to evaluate our system and demonstrate its effectiveness and the complementary nature of the selected sensing modalities.
In this extended version, we further highlight the complementary analysis of the spatiotemporal characteristics of the multimodal sensing configuration. The rest of the paper is organized as follows. Section 2 discusses the related work and contrasts it with our approach. Then, section 3 presents the design of our system in detail. Next, section 4 demonstrates the results and analysis of real-world experiments and evaluations. Section 7 discusses limitations as well as future directions. Finally, we conclude in section 8.
RELATED WORK
In this section, we introduce the related work in the domain of the Internet of Things (IoT). We summarize prior work on sensing systems and learning algorithms in three categories.
Multi-Modal ADL Monitoring Systems
Various cyber-physical sensing systems targeting ADL monitoring have been explored, including both coarse-and fine-grained ADL.
Coarse-Grained (Context-Level) ADL
Kokku et al. proposed the concept of activity signatures in the smart home environment with a variety of sensors (Kodeswaran et al., 2016a,b;Kokku, 2017). They leveraged temporal-, sensor-, and frequency-cut to determine the activity segments at the context-level, e.g., sleeping, bathing, eating. Their context-level activity segment is relatively coarse-grained, and in this work we target action-level fine-grained activity.
Fine-Grained (Action-Level) ADL
Fortin-Simard et al. fused electrical load and passive RFID sensors to achieve home activity recognition, where the on-wrist RFID provides locations and the electrical load sensor provides appliance usage information (Fortin-Simard et al., 2014). Moriya et al. explored this direction with Echonet Lite (Moriya et al., 2017). These systems allow fine-grained activity monitoring. However, they either require dense deployments or require people to wear devices, which may not be practical for older adults with dementia. Compared to these systems, ours is device-free and can detect more types of movements without requiring human subjects to wear any device or requiring dense deployment on each monitored appliance.
Structural Vibration-Based Human Sensing
Structural vibration signals have been used for human information acquisition in a passive and non-intrusive way.
Structural vibration sensing has been explored to obtain occupant information (Poston et al., 2017), including identity (Pan et al., 2017b), location (Mirshekari et al., 2018), heart rate (Jia et al., 2017), hand washing activities (Fagert et al., 2017), office activities (Bonde et al., 2020), etc. When an occupant interacts with ambient objects, such as the floor, a table, walls, a bed, or a sink, the interaction induces the structure to vibrate in a unique way, such that the frequency components reflect the modes excited by different excitation sources (Fagert et al., 2017). Compared to other sensing modalities, it does not require the user to wear a device and, as a result, allows ubiquitous indoor activity monitoring.
Electrical Sensing for Appliance Usage Monitoring
Non-Intrusive Load Monitoring (NILM) has been explored as an efficient way to monitor in-home appliance usage and related activities by disaggregating the total electrical usage of a building into its constituent components (i.e., appliances) (Hart, 1992; Liao et al., 2014). Though the field has largely focused on inferring appliance usage using a variety of approaches [e.g., voltage noise (Patel et al., 2007; Froehlich et al., 2010; Gupta et al., 2010), harmonic power (Berges et al., 2008; Giri et al., 2013), etc.], new research has started to look into derivative objectives, such as fault detection and diagnosis of appliance patterns and, relevant to this work, monitoring ADLs (e.g., Alcala et al., 2017). Accurate detection and recognition of different appliance usage is an important aspect of ADL monitoring, and these prior works have shown the feasibility and robustness of the recognition. As a result, we believe the combination of structural vibration and electrical load sensing covers two important aspects of human activity in home scenarios: with and without using appliances. We focus on the combination of these two non-intrusive sensing modalities.
SYSTEM DESIGN
To achieve non-intrusive fine-grained ADL detection and recognition for long-term monitoring, we measure two essential aspects of in-home activity: human actions (via structural vibration sensors) and appliance usage (via the electrical sensor). As shown in Figure 1, our system first obtains signals from both the structural vibration and electrical sensors. Then, it conducts event detection on the signals from each sensor. Next, the system classifies the activities at the event level. Finally, our system conducts an event-based prediction ensemble to provide accurate recognition for sub-second time windows.
Complementary Sensing Modalities
To achieve non-intrusive monitoring for smart home applications, we selected structural vibration and electrical sensing as the primary sensing modalities. These two modalities were selected because they are both non-intrusive (indirectly inferring activities instead of directly measuring them) and complementary to each other. Table 1 lists the different aspects of their complementary properties.
The structural vibration sensing captures human interaction with the ambient environment, which is mostly impulsive excitation (Fagert et al., 2017; Han et al., 2017; Pan et al., 2017b; Poston et al., 2017; Mirshekari et al., 2018). It also captures appliance machinery vibration, such as that of a motor or a compressor, as well as vibration induced by appliance usage, such as water or food boiling. The structural vibration sensing system often consists of multiple sensors, each covering a radius of 3-5 m. As a result, sensors at different deployment locations can provide spatial information about the activity when they collaboratively conduct the estimation. On the other hand, the electrical sensor precisely detects the appliance usage time and duration, which structural vibration sensing may not (Hart, 1992; Parson et al., 2012; Giri et al., 2013; Song et al., 2014). Since each appliance usage triggers immediate changes in voltage signals, appliances can be accurately detected by the electrical load sensor when they are turned on or off. As a result, electrical load sensing provides appliance usage information with high temporal precision.
Appliance usage and human motion or interaction with appliances are two significant aspects of human activities (Fortin-Simard et al., 2014; Moriya et al., 2017). Considering their advantages in capturing the spatial and temporal characteristics of these target activities, we believe these two sensing modalities are complementary for our purpose.
Sensing System
Our sensing system consists of structural vibration sensing and electrical sensing. The load sensor measures the appliances' electrical usage. The primary structural surfaces (e.g., the countertop and the floor) are equipped with vibration sensors to capture vibration caused by human actions.
Structural Vibration Sensing
The structural vibration can be used to infer the occupants' actions causing it. When people interact with objects or structures around them, the interaction causes the surface of the object or structure to deform (Pan et al., 2017b; Poston et al., 2017; Mirshekari et al., 2018). The surface deformation causes mechanical waves dominated by the Rayleigh-Lamb wave. These waves propagate through the structure and are captured by the sensor. Since different activities or interactions excite different modes of the structure (Fagert et al., 2017), they induce vibrations with different frequency domain characteristics, which can be used as features to recognize them. We place vibration sensors on the surfaces, including the floor and countertops, where the human and appliance interact directly. A vibration sensor consists mainly of three modules: a geophone that senses the surface vibration, an amplifier module that amplifies the signal, and an ADC module that converts the analog signal to digital. Figure 2A shows an example of the structural vibration sensor on a countertop. The geophone in the figure is an SM-24 (Input/Output, Inc., 2006). The op-amp board's gain is adjusted for the monitored surface.
Electrical Sensing
All electrical measurements were collected using a 16-bit National Instruments (NI-9215) data acquisition interface. The current was measured with a Fluke i200 AC current clamp with a cut-off frequency of 10 kHz, and voltage was measured with a Pico-TA041 oscilloscope probe. Both measurements were made on a power strip to which all appliances in the testbed were connected, as shown in Figure 2B. The setup we used is similar to the one used in Gao et al. (2014).
Event Detection
For each type of sensor, an event is defined as a segment of signal that has distinguishing characteristics compared to the signal segment when no human activities occur. Our system conducts event detection on raw sensor signals. The intuition is that when there are no activities, i.e., no human interaction with the ambient environment and no appliance usage, the signals obtained by the two sensing modalities are considered ambient noise. We analyze the signal with a sliding window. A sliding window that covers ambient noise has a different signal energy distribution from one that covers an event (Pan et al., 2014). We model the sliding windows covering known ambient noise with a Gaussian distribution, and we consider a tested sliding window with significantly higher signal energy to be part of an event. One event contains consecutive sliding windows that are detected as parts of an event. We use anomaly detection algorithms (Pan et al., 2017b) to detect and extract the signal segments that are not ambient noise as events. Figures 3A-C show examples of signals captured by different sensors and the events detected from them. The solid green and red lines indicate the start and the end of an event, which show a significant signal energy difference compared to the ambient noise signal (the segment between 0 and 1 s on the x-axis). Figure 3D shows the labels of the events. The event between 1 and 2 s is the interaction with the stove, which demonstrates detectable events for the countertop vibration sensor, as shown in Figure 3A. The events between 2.5 and 5 s are footstep events, which show a high signal-to-noise ratio (SNR) for the floor vibration sensor, as shown in Figure 3B. The event between 5 and 12 s is the kettle boiling water, which is consistent with the event detected by the electrical load sensor, as shown in Figure 3C.
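The sliding-window, energy-threshold intuition described above can be sketched as follows; this is a simplified stand-in for the authors' anomaly detection algorithm, with illustrative window size, threshold multiplier, and synthetic signal rather than the paper's exact parameters.

```python
import numpy as np

def detect_events(signal, fs, win=0.1, k=5.0, noise_dur=1.0):
    """Flag sliding windows whose energy is significantly above a Gaussian
    ambient-noise model (fit on the first `noise_dur` seconds), then merge
    consecutive flagged windows into events.

    Returns a list of (start_s, end_s) event intervals in seconds.
    """
    step = int(win * fs)
    energies = np.array([np.sum(signal[i:i + step] ** 2)
                         for i in range(0, len(signal) - step + 1, step)])
    n_noise = max(1, int(noise_dur / win))
    mu, sigma = energies[:n_noise].mean(), energies[:n_noise].std()
    flagged = energies > mu + k * sigma

    events, start = [], None
    for w, hit in enumerate(flagged):
        if hit and start is None:
            start = w
        elif not hit and start is not None:
            events.append((start * win, w * win))
            start = None
    if start is not None:
        events.append((start * win, len(flagged) * win))
    return events

# Synthetic trace: 1 s of ambient noise, a burst from 1.0-1.5 s, then noise.
fs = 1000
rng = np.random.default_rng(1)
sig = 0.01 * rng.standard_normal(3 * fs)
sig[fs:int(1.5 * fs)] += np.sin(2 * np.pi * 50 * np.arange(int(0.5 * fs)) / fs)
print(detect_events(sig, fs))
```

The same routine can run unchanged on vibration and electrical channels, since both reduce to a 1-D signal with a known ambient-noise segment.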
Event-Based Activity Recognition
Our system enables non-intrusive passive sensing for action- or event-level ADL recognition. For each sensor, the system first conducts feature extraction on the detected event signal segments (section 3.4.1). These features are then used to train a classifier using the support vector machine (SVM) (section 3.4.2).
Feature Extraction and Normalization
For a detected event signal segment, our system extracts its frequency domain characteristics, the power spectral density, as features. We consider the power spectral density as the feature because it is efficient for both the electrical load signal (section 3.4.1.1) and the structural vibration signal (section 3.4.1.2). The power spectral density is extracted from signal segments that are normalized by their signal energy to reduce variation in how different people complete an activity. For example, different people using an electric stove may turn the knob to different levels, which leads to different amplitude values for the electrical load signals.
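A minimal sketch of the energy normalization followed by power-spectral-density extraction described above, using a plain periodogram; the segment length, FFT size, and test signal are illustrative, not the paper's exact configuration.

```python
import numpy as np

def psd_feature(segment, n_fft=256):
    """Energy-normalise an event segment, then return its power spectral
    density (periodogram) as a feature vector."""
    x = np.asarray(segment, dtype=float)
    x = x / np.sqrt(np.sum(x ** 2))          # energy normalisation
    spectrum = np.fft.rfft(x, n=n_fft)
    return np.abs(spectrum) ** 2 / n_fft

# Scaling an event (e.g., a stove turned to a higher level) leaves the
# normalised feature unchanged, which is the point of the normalisation.
t = np.arange(256) / 1000.0
event = np.sin(2 * np.pi * 60 * t)
f1, f2 = psd_feature(event), psd_feature(3.0 * event)
print(np.allclose(f1, f2))
```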
Electrical sensing
The load sensor signals have an energy concentration at approximately 60 Hz in the frequency domain, because the local alternating-current mains frequency is 60 Hz. On the other hand, the non-linear loads in a circuit often induce current harmonics, which are unique to the particular circuit of an appliance. As a result, current harmonics induce frequency characteristics that distinguish different appliances. Therefore, for events extracted from electrical sensing signals Event_Load_i, where i = 1...N_load and N_load is the number of events detected by the electrical load sensor, we first normalize the signal by its energy to reduce the variation caused by different appliance usage durations. Figure 4 shows examples of the electrical load signal from different events. Appliances such as the kettle and stove are mainly linear loads, while appliances such as the microwave have unique current harmonics due to their non-linear loads.
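The effect of current harmonics can be illustrated with a toy comparison of a linear (sinusoidal) and a non-linear (clipped) load current. The 60 Hz mains assumption matches the text; the clipping is only a stand-in for real appliance non-linearity such as a rectifier front end.

```python
import numpy as np

fs, f0 = 6000, 60                     # sample rate and mains frequency (Hz)
t = np.arange(fs) / fs                # exactly one second of signal

linear_load = np.sin(2 * np.pi * f0 * t)          # e.g., a resistive kettle
nonlinear_load = np.clip(linear_load, -0.4, 0.4)  # clipped current waveform

def harmonic_power(x, k):
    """Power at the k-th harmonic of the mains frequency (1 Hz bin spacing)."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    return spectrum[k * f0]

# The clipped (non-linear) current puts noticeable power into odd harmonics,
# which is what makes such appliances distinguishable in the load signal.
print(harmonic_power(linear_load, 3), harmonic_power(nonlinear_load, 3))
```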
For appliances that are mainly linear loads, i.e., whose frequency components do not show current harmonics, accurate classification may be difficult with features from the electrical load sensor alone. For example, the kettle and stove load signals shown in Figures 4B,D demonstrate similar frequency characteristics. In this case, we further explore the vibration signal captured during the same time duration as Event_Load_i. Figure 5 shows the time and frequency components during that period for stove and kettle events. We observe that even though the electrical load signals in these cases do not show significantly distinguishing characteristics, their vibration signals demonstrate a clear difference in the frequency domain. Therefore, our system extracts the signal of the same time duration as Event_Load_i from vibration sensors on monitored surfaces that have multiple appliances on them. We refer to this signal segment as Event_Load_Vib_{i,j}, where j = 1...S_multi and S_multi is the number of surfaces that have multiple appliances on them. Note that for signals collected from vibration sensors, the signal segment of the same time duration may not be detected as an event or part of an event. The system takes the frequency domain characteristics of Event_Load_i and Event_Load_Vib_{i,j} and concatenates them as the feature for the i-th event detected by the electrical load sensor.
Structural Vibration Sensing
Various human actions cause different parts of the structural surface to vibrate. Most human interaction with an ambient surface is impulsive, meaning the interaction excites the surface and induces vibrations dominated by Rayleigh-Lamb waves (Pan et al., 2017a). Different excitations occurring on the same surface generate different responses in the natural frequencies of the surface structure (Fagert et al., 2017). For the events detected by the vibration sensor on surface k during t_i, we refer to the signal segment as Event_Vib_{k,i}, where i = 1...N_vib and k ∈ [1...N_surface]. N_vib is the number of events detected by the vibration sensor on the k-th surface, and N_surface is the number of monitored surfaces.
To take the spatial characteristics of the signal into account, we further extract the signal segment of the same time duration t_i as Event_Vib_{k,i} from the vibration sensors on each other surface l, which we refer to as Event_Vib_Vib_{k,l,i}, where l = 1...N_surface, l ≠ k. Our system extracts the frequency components of Event_Vib_{k,i} and Event_Vib_Vib_{k,l,i} after normalizing the signals by energy, and concatenates the frequency components as the features for the i-th event on surface k.
Classification With Support Vector Machine
Our system utilizes a support vector machine (SVM) (Chang and Lin, 2011) to conduct the classification for the events detected by each sensor. SVM is a widely used classifier that finds the maximum-margin hyperplane w by minimizing the following loss function (for binary classification):

min_{w,b,ξ} (1/2) w^T w + C Σ_{i=1}^{l} ξ_i, subject to y_i (w^T φ(x_i) + b) ≥ 1 − ξ_i and ξ_i ≥ 0,

where (y_1, x_1), ..., (y_l, x_l) is the training data, x_i ∈ R^n are training samples, y_i = ±1 are training labels, and C is the penalty parameter that controls the generalization of the model. Based on our feature analysis, which will be introduced in detail in section 5.2, features of events detected by vibration sensors are not linearly separable. As a result, our system trains a non-linear SVM model using the kernel function φ(·) (RBF kernel) to ensure high class separability. Since we have more than two types of activities to classify, we decompose the multi-class classification problem into two-class classification problems (Hsu and Lin, 2002; Chang and Lin, 2011).
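A minimal sketch of this classification step using scikit-learn's `SVC`, which wraps LIBSVM (the implementation cited above) and handles multi-class problems via one-vs-one two-class decomposition. The synthetic features below are placeholders, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for event features: 3 activity classes, 40-dim features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 40)) for c in range(3)])
y = np.repeat([0, 1, 2], 30)

# RBF-kernel SVM; C controls the penalty on margin violations.
# probability=True yields per-event confidence scores in [0, 1],
# which the ensemble stage below relies on.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
clf.fit(X, y)

pred = clf.predict(X[:1])          # predicted event class
scores = clf.predict_proba(X[:1])  # per-class confidence scores
```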
Ensemble Events for ADL Recognition
Once the system obtains the event-based classification predictions from each sensor, it further conducts prediction ensemble on a sliding window at a sub-second level, and then outputs the activity recognition at the event level as shown in Figure 6. Since the type and duration of events detected by different sensors vary, the ensemble occurs at a sub-second sliding-window level instead of the event level. We empirically selected a sliding window smaller than a single impulsive structural vibration signal segment.
Our system first applies a sliding window through the target sensing time duration. For each sliding window on the signal from each sensor, our system assigns it as an event if the majority of the samples are part of a detected event. Since we use the SVM to predict the event categories, a confidence score between 0 and 1 can be calculated based on the distance between the data point and the margin (Chang and Lin, 2011). Therefore, for a sliding window of a particular event, we also assign the prediction score to it.
For each window, the system collects the prediction values and confidence scores from all sensors. It first conducts a weighted majority vote if more than two sensors predict the window as an event. The weights are assigned equally over the vibration sensors on each surface and the electrical load sensor. For example, if there are multiple sensors on the same surface, e.g., the floor, the information from those sensors is combined. In addition, since different sensors may detect different event durations and types, no more than two sensors may detect an event within a given sliding window. When that happens, the system outputs the single-sensor decision with the highest prediction score as its final decision instead of conducting a majority vote.
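The window-level fusion rule described above can be sketched as follows. This is a simplified illustration with equal weights; the sensor names and confidence scores are hypothetical:

```python
from collections import defaultdict

def ensemble_window(predictions):
    """Fuse per-sensor predictions for one sliding window.

    `predictions` maps sensor name -> (label, confidence) for the
    sensors that flagged this window as part of an event.
    """
    if len(predictions) > 2:           # enough detections: weighted vote
        votes = defaultdict(float)
        for label, conf in predictions.values():
            votes[label] += conf
        return max(votes, key=votes.get)
    if predictions:                    # too few sensors: fall back to the
        # single-sensor decision with the highest prediction score
        return max(predictions.values(), key=lambda lc: lc[1])[0]
    return None                        # no sensor detected an event

# Three sensors detected the window, so a weighted vote is taken.
out = ensemble_window({
    "load":       ("microwave", 0.9),
    "countertop": ("microwave", 0.7),
    "floor":      ("footstep",  0.8),
})
```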
EXPERIMENTS
To evaluate our system, we conducted real-world experiments involving ADL with multiple appliances in a kitchen scenario. In this section, we first define the event-level ADL that were conducted in the kitchen scenario in section 4.1. Then, we introduce the experimental setup and data collection procedure in section 4.2. Next, we present statistics of the collected dataset in section 4.3. Finally, we explain the ground truth collection and data labeling procedure in section 4.4.
Kitchen Scenario Definition
The performance of ADL is critical to ensuring safety and independence in the home, and changes in ADL are associated with disability and institutionalization. The Lawton Instrumental Activities of Daily Living Scale is a commonly used tool to assess the elderly's ability to live independently (Lawton, 2000); critical components of the Lawton tool include food preparation and housekeeping. We focused on tasks related to meal preparation and housekeeping, listed in Table 2. We assessed these ADL in adults with no identified physical or cognitive impairments, as the intent of this study was the detection of normal ADL. The table depicts how the structural vibration sensors and electrical load sensors detect different events within the same ADL.
Experimental Setup and Data Collection
We conducted real-world experiments in a kitchen setup in the laboratory to evaluate our system. The experiments followed the guidelines of the IRB protocol. Figure 7A shows photos of the experiments, and Figure 7B shows the schematic of the setup. The experimental setup is on a wooden floor structure with an area of 10 × 7 ft. We set up a countertop with three appliances on it: an electrical stove, a kettle, and a microwave. There is a vacuum cleaner at the side of the floor as shown in Figure 7B.
We used two distributed structural vibration sensors to monitor the target area, placing one on the countertop and one on the floor, as circled in Figure 7. We used one single-point electrical load sensor to monitor the load of a power strip to which the target appliances (stove, kettle, microwave, and vacuum) were connected. We also provided a pot to boil water on the stove and a mug to heat water in the microwave.
In total, five human subjects were invited to conduct the experiments, and we refer to them as P_1 to P_5. For each trial of data collection, we required the participant to conduct a sequence of tasks listed in Table 2. The details of the task procedures are as follows: • use stove: the participant takes the pot, puts it on the stove, turns on the stove, and then turns off the stove after about 20 s. • use vacuum: the participant operates the vacuum in the area shown in Figure 7B, and turns off the machine.
P_1 and P_2 conducted all the activities in each trial. P_3, P_4, and P_5 conducted a subset of the activities based on their preference; for example, in some trials, if they chose to use the stove for cooking, they did not use the microwave, and vice versa. For each participant, we collected five trials of activity data.
Dataset Statistics
We further analyze the collected and labeled data in terms of event duration and number. Figure 8 shows the corresponding statistics for the data collected from the two participants who conducted all target types of activities. The blue bars show the overall event duration and the red bars show the count of each type of event over the eight target fine-grained activities. We observe a clear bias in both event duration and number. Furthermore, this bias is not consistent across event types. For example, kettle usage (K) has the highest overall duration, but its number of events is relatively low. On the other hand, the number of footsteps is high, while their overall duration is low because each footstep event is short. Therefore, evaluating activity recognition accuracy by either event duration or event number alone would be biased.
Ground Truth and Labeling
We use a camera to record the experiments as the ground truth. The video is recorded from an angle below the waist so that the identity of the human subjects is not captured. Figure 7 shows an example view of the ground truth recording. The events listed in Table 2 are labeled manually on a frame-by-frame basis.
MODULE PERFORMANCE ANALYSIS
In this section, we evaluate the individual performance of the event detection module in section 5.1 and the event-based activity recognition module in section 5.2. For module performance analysis, P_1's and P_2's data are used since they performed all types of target events.
Baseline and Metrics
We use detection with signals from only one sensor as baselines and compare them to our ensemble event detection. Since the dataset is biased in terms of event duration and event number, we evaluate event detection by both event-level and sample-level detection accuracy.
Event-Level Detection Accuracy
To avoid bias over different event durations and numbers, we first compare the detection rate of each type of activity at the event level. The vibration sensors on different surfaces are sensitive to events with different spatial characteristics. For events that occur on the floor, the floor vibration sensor demonstrates higher accuracy: Detect.Acc_event for vacuuming and walking are 90 and 87%, respectively. For activities occurring on the countertop, the vibration sensor on the countertop achieves higher accuracy: Detect.Acc_event for operating the kettle, microwave, and stove are 90, 93, and 100%, respectively. On the other hand, the electrical load sensor achieves the highest accuracy for appliance-usage events: Detect.Acc_event for turning on the microwave, opening the microwave door, turning on the stove, and turning on the vacuum are 100, 97, 100, and 100%, respectively.
The average detection rates over the eight types of events are 62, 44, 65, and 97%, respectively, for the three single-sensor baselines and our approach, which is a 1.5× to 2.2× improvement over the baselines.
Sample-Level Detection Accuracy
The electrical load sensor achieves a mean Detect.Precision_sample of 99%. However, because not all activities can be measured by the electrical load sensor, its Detect.Recall_sample is 52%. The vibration sensors on the two surfaces show similar precision and recall: the mean Detect.Precision_sample are 78 and 83% and the mean Detect.Recall_sample are 60 and 59%. Because they are complementary in detecting events that occur on different surfaces, the ensemble detection achieves a mean Detect.Recall_sample of 86%, a 1.4× improvement compared to using only one sensor for detection. Our approach achieves the highest F-1 score, Detect.F1_sample = 0.83.
Event-Based Activity Recognition Analysis
For event-level ADL recognition, different sensors perform differently due to their spatio-temporal variation, even when using the same classification algorithm. We conduct the classification with 80% of the data for training and 20% for testing through cross-validation.
Metrics
We evaluate the event-based activity recognition by the classification accuracy at the event level for each sensor. Since the number of detected events for each type of activity is biased, we report (1) the average classification accuracy as the mean of the recognition accuracies over activity types, Avg.Acc = (1/N_Type) Σ_{i=1}^{N_Type} Acc_i, where N_Type = 8 is the number of event types listed in Table 2, and (2) the overall recognition accuracy as the true positive rate of all detected events (with unbalanced numbers of events for each type), All.Acc = #Correct Events / #Labeled Events. Figure 9 shows the average classification accuracy as bars, where blue, red, and yellow bars are the accuracy for electrical sensing, the vibration sensor on the countertop, and the vibration sensor on the floor, respectively. The dashed lines are the overall recognition accuracy for the corresponding sensing modalities.
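The two metrics can be contrasted with a small numerical example. The per-class counts below are hypothetical, chosen only to show how a frequent class such as footsteps pulls All.Acc away from Avg.Acc:

```python
import numpy as np

# Hypothetical (correct, labeled) event counts for the eight event types.
counts = {
    "K": (18, 20), "M": (14, 15), "S": (9, 10), "V": (10, 10),
    "MD": (8, 10), "OM": (7, 10), "PS": (16, 20), "Step": (90, 120),
}

# Avg.Acc: unweighted mean of per-class accuracies.
per_class_acc = [c / n for c, n in counts.values()]
avg_acc = np.mean(per_class_acc)

# All.Acc: true positive rate over all labeled events, so the
# numerous footstep events dominate this metric.
all_acc = sum(c for c, _ in counts.values()) / sum(n for _, n in counts.values())
```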
Electrical Load Sensor
The electrical load sensor achieved an All.Acc of 96.25% when considering only activities that are detectable by the electrical load sensor. The errors could be caused by noise from the load of ambient appliances. The Avg.Acc for the five detectable activities (K, M, S, V, MD) is 97%. However, if we take all eight types of activities into account, this average classification accuracy drops to 61%, since the sensor cannot detect or classify the remaining three activities.
Countertop Surface Vibration Sensor
The vibration sensor on the countertop captures all types of activities and achieved an All.Acc of 88.04%. Human actions such as operating appliances and walking (OM, PS, V, Step) induce impulsive structural vibrations and achieved over 90% prediction accuracy. Appliances that induce signature machinery vibration, such as the microwave (M), achieved 100% prediction accuracy. Unlike the microwave, the stove (S) and kettle (K) do not cause machinery vibration via a drive motor; instead, boiling may or may not cause vibration detectable by the sensor on the countertop.
The misclassification mostly occurs between operating the microwave (OM) and the microwave door being open (MD). This could be caused by the similar spatial characteristics of these events, i.e., both occur at the microwave, and by the similar impulsive signals induced by opening/closing the door and putting down the mug while the door is open. Since opening the microwave door induces a signature current change, the ensemble prediction achieves higher accuracy when both sensing modalities are taken into account. The Avg.Acc over the eight types of activities with the countertop vibration sensor is 81%.
Floor Vibration Sensor
For the vibration sensor on the floor, we observe that most of the stove activities are misclassified as footsteps (75%). This could be caused by ambient impulsive floor vibration not induced by stove activities being captured by the sensor, e.g., people's micromotion or other building activities. We further discuss this activity overlapping situation in the discussion section. Kettle usage, i.e., turning the kettle on/off, and footsteps are misclassified as each other at a rate of up to 13%, which could be caused by the similarity between these two types of impulses: when kettle-usage-induced vibrations travel through the countertop and floor structures, these structures and their contact surfaces alter the frequency components of the signal. When this happens, the spatial information extracted by fusing data from distributed sensors can increase the prediction accuracy. The Avg.Acc over the eight types of activities with the floor vibration sensor is 72%.
In summary, for a single sensor, the event-based activity recognition module shows over 70% average classification accuracy and over 80% overall recognition accuracy. We further demonstrate the ensemble recognition accuracy obtained by fusing these prediction results in section 6.
END-TO-END SYSTEM PERFORMANCE ANALYSIS
The end-to-end system outputs predictions at the sub-second sliding-window level. As a result, the end-to-end system performance is evaluated by this window-level ensemble activity recognition accuracy, Activity.Acc_window = #Correct Windows / #Windows. We first compare our ensemble approach to the state-of-the-art baselines (section 6.1) to demonstrate the importance of combining different sensing modalities. We then further explore the robustness of the system to variation in temporal resolution (section 6.2) and across human subjects whose data are not included in the training dataset (section 6.3).
Complementary Sensing Modalities Analysis
Since the ensemble activity recognition is conducted on each sliding window (section 3.5), we evaluate the prediction accuracy over sliding windows. We chose a window size of 1/8 s because the shortest events we investigate (e.g., footstep-induced vibration events) have a minimum duration approximately equal to the selected window size.
6.1.1. Compared to Single Sensor
Figure 10 shows the classification accuracy for each type of activity, with bars from light to dark shades representing results using (1) the countertop vibration sensor, (2) the floor vibration sensor, and (3) the electrical sensor. Our ensemble approach is shown as green bars.
The observations for the three single-sensor baselines are consistent with the confusion matrices discussed in section 5.2: each sensor achieves high accuracy for different events. For example, the vacuum is detected mostly by the floor vibration sensor and the electrical sensor and achieves the highest recognition accuracy (93%) among all the methods. Placing an item on the stove (PS) and the stove heating (S) are detected by the vibration sensor (91%) and the electrical sensor (100%), respectively, which results in an average stove-usage recognition accuracy of 45 and 50% for these two sensing modalities, and an average accuracy of 92% for our ensemble approach.
Our ensemble approach achieves the highest classification accuracy for half of the events, and its average accuracy over the eight events is 90%, a 1.5× to 2.6× improvement compared to the baselines (56, 35, and 61%). The baseline averages here are calculated at the sliding-window level, which differs from the averages in section 5.2 calculated at the event level.
Compared to Different Sensor Combinations
We further compare different sensor combinations to demonstrate the spatiotemporal complementary characteristics of the system. Figure 11 compares the activity recognition accuracy between different sensor combinations. The combined countertop and floor vibration approach achieves an average of 64% accuracy over the eight types of activities, which is higher than that of the single-sensor approaches. This is because the vibration sensors on the two surfaces are complementary in spatial characteristics, which allows high confidence and accuracy for the events occurring on each surface. The combinations of a single vibration sensor with the electrical load sensor demonstrate lower accuracy when detecting footsteps and interactions with the microwave and stove. We believe these activities have significant spatial characteristics; when their spatial characteristics are fused by our ensemble approach, it captures more information than the signal characteristics from any single sensor. In summary, the complementary spatial characteristics of the vibration sensors improve the event-level ADL recognition accuracy, and the complementary temporal characteristics of the two sensing modalities increase the accuracy further.
Robust to Temporal Resolution Variation
The sliding window size determines the temporal resolution of the system output. We further explored the system's robustness to activity recognition at different temporal resolutions by varying the window size over five levels: 2, 1/2, 1/8, 1/32, and 1/128 s. Figure 12 shows the system performance, with the x-axis as sensor combinations (three single sensors, the fusion of the distributed structural vibration sensors, and our ensemble method) and the y-axis as activity recognition accuracy. The accuracy obtained with different sliding window sizes is shown by different color bars. We observe that, for the baseline sensor combinations, the activity recognition accuracy fluctuates when different window sizes are applied. For our ensemble algorithm, on the other hand, the accuracy values are stable over different window sizes. Our ensemble algorithm achieves a window-level activity recognition accuracy of 88%, the highest compared to the baseline combinations.
Robust to Personal Variation
Subjects perform ADL differently in various aspects, e.g., speed, strength, and manner of interaction. As a result, their ADL may produce different signal characteristics (data distributions) (Han et al., 2017). To understand the system's robustness to individual action variation, we evaluate the model trained on P_1's and P_2's data and test it on the data of the other three participants.
We plot the average classification accuracy over eight types of activities in Figure 13, where bars from light to dark shades of gray represent the accuracy for the countertop vibration sensor, the floor vibration sensor, and the electrical sensor. We further plot the accuracy for different sensor combinations in different shades of blue. Our ensemble approach is plotted as green bars.
We can see that the accuracy of the different methods varies across people. For example, for P_5, the electrical sensor achieves higher accuracy than for P_3 and P_4, while for P_4, the floor vibration sensor achieves the highest accuracy among the baselines. Despite this variation, the three single-sensor baselines for P_3, P_4, and P_5 show accuracy (between 30 and 60%) comparable to that of P_1 and P_2. Our ensemble approach achieves the highest accuracy compared to the baselines: 81, 73, and 87%, respectively, for P_3, P_4, and P_5. Compared to the case where the training and testing data come from different trials of the same person, the average accuracy over the three participants drops to 80%, but it is still 20% higher than the baselines evaluated with training and testing samples from the same person.
DISCUSSION
This work highlights the potential of non-intrusive fine-grained ADL monitoring to enable ADL pattern change detection. In this section, we discuss current limitations and future directions. The key directions we plan to explore beyond this work include: (1) the recognition and segmentation of activities when the same person conducts different activities simultaneously (section 7.1), and (2) building on the detection and recognition, how to conduct behavior-level monitoring for long-term in-home monitoring (section 7.2). Once these directions have been explored, it would also be important to examine the sensitivity of our solution to errors introduced by NILM algorithms when relying on their estimates for appliance-level measurements. We plan to collect more data under long-term in-home scenarios, which would include more types of activities and more variation within each type of activity.
Overlapping Activities: Multiple People, Multiple Activities
When one or more persons conduct multiple activities within the same sensing area, the signals of these activities, in both sensing modalities, may overlap. For example, while using the stove, a person may walk around and use other appliances. When signal overlapping occurs, it alters the signal characteristics (i.e., features) used for classification. Prior work on disaggregating information for electrical load monitoring would allow simultaneous appliance usage monitoring.
Directly separating structural vibration signals when the signals from multiple excitation sources mix, however, is a challenge. Prior work on blind source separation cannot be applied directly here due to the structural dependency of the signal, which makes the assumption of signal independence invalid. Recent work on structural vibration sensing for activity recognition utilizes domain knowledge to achieve activity recognition without separating the overlapped signals (Bonde et al., 2020). However, that work targets coarse-grained activities with limited categories rather than the fine-grained activities we focus on.
Building on this prior work on overlapping activity recognition, we can further explore the association between the two sensing modalities under activity-overlapping scenarios. For example, we can combine historical data from non-overlapping activities involving appliance use, together with disaggregation of the electrical sensing, to achieve fine-grained activity recognition from overlapping structural vibration signals.
Behavior Level Monitoring: Metrics and Parameters
Our aging society has a desire to live independently in the home, and non-intrusive in-home monitoring of ADL has the potential to support safety and independence for older adults. Patterns of engagement in ADL are critical, as engagement is known to be associated with disability, institutionalization, and all-cause mortality. With non-intrusive in-home monitoring, changes in patterns can be immediately identified and appropriate supports or care can be deployed to ensure safety and independence in the home for the older adult. This study is a step in showing that non-intrusive in-home monitoring detects ADL in adults with no known physical or cognitive limitations. These findings may extend to older adults who begin to experience changes in their capacity for the performance of ADL. For example, when an older adult takes longer to complete a cooking task or forgets to turn the stove off, non-intrusive in-home monitoring could detect a change in behavior and notify family or health-care providers, who can visit the older adult to ensure health and safety in the home.

FIGURE 13 | ADL recognition accuracy when training on two persons' data (P_1 and P_2) and testing on different people (P_3, P_4, and P_5).
On the other hand, changes in patterns of ADL may change the interaction between humans and the structure, which may shift the data distribution and cause learning or classification errors. To combat this potential limitation, continuous learning approaches may be needed to adapt to such data distribution changes. Another challenge is the choice of metric to measure behavior changes, especially for multilevel activity monitoring: abnormal behavior detected at different granularities may indicate different aspects of disease progression. A third challenge is the differences between subjects, i.e., the definition and measurement of an anomaly may vary from person to person.
CONCLUSION
In this paper, we presented a non-intrusive fine-grained ADL monitoring system based on ambient structural vibration and electrical sensing. We highlighted the complementary information acquired by these two sensing modalities and how to achieve high time- and type-resolution monitoring. Both pieces of information may be used in smart home applications seeking to monitor engagement in ADL. Our system first conducts event-level detection and recognition and then applies an ensemble algorithm to the recognition results from each sensor over the target time period to achieve accurate event-level ADL monitoring. In real-world experiments (common kitchen activities), our system achieved an average of 90% accuracy for ADL recognition, up to a 2.6× improvement compared to the baselines.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://doi.org/10.5281/zenodo.3745900.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by CMU IRB Protocol STUDY2018-00000515. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
SP, MB, JR, PZ, and HN: conception and design of the study and drafting/revising the manuscript. SP: acquisition of data and analysis and/or interpretation of data. All authors contributed to the article and approved the submitted version.
"Engineering",
"Computer Science",
"Environmental Science",
"Medicine"
] |
Density-based clustering of static and dynamic functional MRI connectivity features obtained from subjects with cognitive impairment
Various machine-learning classification techniques have previously been employed to classify brain states in healthy and disease populations using functional magnetic resonance imaging (fMRI). These methods generally use supervised classifiers, which are sensitive to outliers and require labeled training data to generate a predictive model. Density-based clustering, which overcomes these issues, is a popular unsupervised learning approach whose utility for high-dimensional neuroimaging data has not previously been evaluated. Its advantages include insensitivity to outliers and the ability to work with unlabeled data; unlike the popular k-means clustering, the number of clusters need not be specified. In this study, we compare the performance of two popular density-based clustering methods, DBSCAN and OPTICS, in accurately identifying individuals with three stages of cognitive impairment, including Alzheimer's disease. We used static and dynamic functional connectivity features for clustering, which capture the strength and temporal variation of brain connectivity, respectively. To assess the robustness of clustering to noise/outliers, we propose a novel method called recursive clustering using additive noise (R-CLAN). Results demonstrate that both clustering algorithms were effective, although OPTICS with dynamic connectivity features performed best in terms of cluster purity (95.46%) and robustness to noise/outliers. This study demonstrates that density-based clustering can accurately and robustly identify diagnostic classes in an unsupervised way using brain connectivity.
Introduction
Since the successful emergence of functional neuroimaging, a new barrier has surfaced: can a strong correlation be established between brain activity (as measured by functional magnetic resonance imaging [fMRI]) and the cognitive state of an individual? More specifically, can we accurately classify neurological diseases based on fMRI data? In response to this, machine learning classifiers have been employed on neuroimaging features to generate models that, within some accuracy, predict the cognitive and disease states to which new data belong [1][2][3][4][5][6][7][8][9][10][11].
Open Access. Brain Informatics. *Correspondence<EMAIL_ADDRESS>6 AU MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, 560 Devall Dr, Suite 266D, Auburn, AL 36849, USA. Full list of author information is available at the end of the article.

There are two major categories of machine learning classification techniques: supervised and unsupervised. Supervised learning, commonly used in fMRI studies, involves splitting the dataset into training and test data. Each member of the training data is given a 'label' indicating which class (or group) it belongs to. The classifier is then 'trained' on these data to determine a generalized model (hence 'supervised' learning). The classification accuracy is then measured by testing the model on the test data with known labels. Conversely, in unsupervised learning, patterns within the entire dataset are used to 'cluster' the data without any pre-assigned labels, and cluster purity is measured against the known ground truth post hoc, instead of an accuracy. As such, unsupervised learning is agnostic to pre-assigned labels and thus determines inherent classes instead of fitting a model based on classes provided by us. Although the 'cluster purity' given by clustering is, in principle, the same as the 'classification accuracy' given by supervised classifiers (both give the percentage of correct classifications against the known ground truth), we use the term 'cluster purity' in this work to highlight that this metric was obtained through unsupervised clustering, not through conventional supervised classification.
fMRI studies generally use supervised learning methods to classify disease or cognitive states [1-3, 12, 13]. Unsupervised learning methods are generally used for spatially and temporally clustering voxel signals to localize brain activity [10, 11, 14-20]; for dimensionality reduction or feature selection before applying a supervised learning algorithm to the data [13, 21-24]; or for mapping specific brain patterns to a cognitive state [6, 25]. A few reports have adopted unsupervised learning methods as state classifiers across individuals. For example, one study used the one-class support vector machine (SVM) to determine the boundary around healthy control subjects and to classify outliers based on their distance from this boundary. However, such methods are highly susceptible to noise and do not work well with the high-dimensional data common in fMRI [26].
Research on clustering of subjects using unsupervised learning methods is still limited in the fMRI literature, to the best of our knowledge. Diagnostic labels are often predetermined in neuroimaging studies, and most studies stick to this labeling, which is why supervised classifiers are more popular. However, the main advantage of unsupervised learning is that it requires no a priori knowledge of categorical labels, thus reducing the problem of selection bias described by Demerci et al. [5]. This allows for the learning of more complex models, pattern discrimination and the identification of hidden states [27], among others. Unsupervised learning also largely avoids the overfitting that plagues supervised models, which perform well on the data they were trained on but much worse when tested on an independent dataset; by the very nature of its formulation, unsupervised learning does not encounter this issue, since learning is done in an entirely blind manner. With unsupervised learning, what you see is what you get. In this work, we demonstrate the utility and feasibility of the unsupervised learning approach of density-based clustering, applied to resting-state fMRI (rs-fMRI) data to cluster disease states in a noise-robust manner.
Clustering is an important technique for grouping objects that are similar to each other and isolating them from those that are dissimilar. Clusters are formed from features having minimum intra-cluster distance and maximum inter-cluster distance [28]. Clustering has found wide application in many fields including, but not limited to, information retrieval, natural pattern recognition, digital image analysis, bioinformatics, data mining, taxonomy and DNA microarray analysis [29][30][31][32][33][34]. The performance of most clustering algorithms, however, is highly sensitive to their input parameters, such as the number of clusters in k-means clustering [35], so effective performance typically requires a priori knowledge of these parameters. Although there is a variety of clustering algorithms, density-based clustering is one of the most computationally effective methods for clustering large-scale databases [36]. Unlike k-means clustering, density-based clustering techniques do not need the user to specify the number of clusters, since the algorithm determines that by itself. Using a local proximity measure, clusters are automatically formed by points in higher density regions, well separated by lower density regions. In this work, we focus on two of the most popular density-based clustering methods: Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Ordering Points to Identify the Clustering Structure (OPTICS). They require few input parameters, need no a priori user choices, detect outliers, and effectively detect clusters of arbitrary shapes [37,38].
DBSCAN is a single-scan technique that makes no presumptions regarding the distribution of the data. In addition to clustering the data, the algorithm also detects outliers. The OPTICS algorithm was developed by Ankerst et al. [38] to address DBSCAN's major shortcoming, namely that it cannot detect clusters with different local densities. Rather than assigning cluster memberships directly, OPTICS produces an ordering of the data that represents its cluster structure and conveys important information regarding the distribution of the data. It is capable of identifying all possible clusters of varying shapes. Advantages of OPTICS over DBSCAN are that it eliminates the free parameter choices needed in DBSCAN, is relatively insensitive to noise and can detect clusters of different densities [38].
In fMRI data, noise is any signal variation that is not contributed by neuronal activity. It may arise from head movements during scanning, scanner limitations, thermal noise, or other sources [39,40]. The signal-to-noise ratio (SNR) is a measure of how much signal is present in the data relative to noise; higher SNR values indicate higher quality data with less noise. Although much progress has been made in detecting and minimizing noise artifacts in fMRI data through high-pass filtering, motion correction and other pre-processing steps [39,41], some noise always remains in the data. It is therefore important that learning methods perform well even when given lower quality data. In this work, we assess the robustness of clustering to such noisy variations.
Resting-state fMRI (rs-fMRI) involves the collection of fMRI data from subjects while at rest in the scanner, with eyes open and mind left to wander. It is characterized by correlations across various regions of the brain, also called functional connectivity (FC). FC estimated from rs-fMRI is extensively used to study brain networks, and prior works have identified alterations in FC in subjects with various psychiatric and neurological conditions (please see [42][43][44][45] for a review). We have used functional connectivity features in this work.
Static Functional Connectivity (SFC) refers to the strength of connectivity between two brain regions and is quantified using Pearson's correlation coefficient between pairs of fMRI time series. It evaluates the temporal correlation between the fMRI time series of two brain regions, giving one correlation value over the entire duration of the scan. On the other hand, the variance of Dynamic Functional Connectivity (DFC) [46][47][48] captures time-varying connectivity and is obtained using sliding-window Pearson's correlation between pairs of brain regions. Although SFC is the most widely used measure of fMRI connectivity, DFC is emerging as an important measure with certain unique properties [42]. SFC and DFC provide characteristically different information regarding the relationship between two brain regions: while SFC gives the strength of connectivity or co-activation, DFC gives the variation of connectivity over time (please refer to Hutchison et al. [42] for a review of dynamic connectivity). In this work, in addition to evaluating the performance of the DBSCAN and OPTICS clustering approaches, we compared the performance of SFC and DFC features, exploring which of these two popular measures is more suitable for clustering in cognitive impairment.
In this work, SFC and DFC features were obtained from 132 subjects with progressive stages of cognitive impairment (early mild cognitive impairment [MCI], late MCI and Alzheimer's disease), along with matched healthy controls. Cognitive impairment [49] is a spectrum disorder ranging from mild to severe symptoms. There has been enormous interest in identifying subgroups of cognitive impairment representing relatively homogeneous symptoms [49]. In this work, we performed clustering and assessed how well the clustered subject groups matched with clinically diagnosed groups. If this approach yields impressive and reliable results, it would advance clinical diagnosis and classification of cognitive impairment.
In this work, we assessed the performance of DBSCAN and OPTICS, in combination with SFC and DFC features, in accurately clustering three groups of cognitive impairment (including Alzheimer's disease) and a matched healthy control group. Higher clustering performance need not necessarily imply higher robustness to noisy variations in the data: it is possible to obtain a high cluster purity that is not sustained as noise in the data increases. Noise robustness is very important for two reasons: (i) biomedical data, especially brain imaging data, are vulnerable to a large number of unknown sources of variability and known sources of noise, and (ii) the data used in any study are only a representative sample of the general population, and hence generalizability and inter-subject variability are unavoidable issues; therefore, if the results are to be applicable in a real-world setting, they have to be robust to such variability. Thus, to assess robustness, we developed a novel technique called 'recursive clustering using additive noise' (R-CLAN). Additionally, since well-formed clusters are partly defined by how well the clusters are separated in the feature space, we propose the 'separation index' to supplement the R-CLAN outcome as a measure of OPTICS' performance.
The organization of the paper is as follows: section-2 describes the methods used for grouping the dataset into clusters and our methods to assess the algorithms' robustness; section-3 presents the results with clustering performances and their robustness; section-4 evaluates and discusses the findings, and highlights possible future work; and section-5 provides concluding remarks.
Data acquisition and pre-processing
The fMRI data used in this work were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (https://www.adni-info.org/). ADNI is a multisite, longitudinal study employing imaging, clinical, bio-specimen and genetic biomarkers in healthy elders as well as in individuals with early mild cognitive impairment (EMCI), late MCI (LMCI) and Alzheimer's disease (AD). The ADNI was launched as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial neuroimaging and other biological markers can be combined with clinical and neuropsychological assessment to measure the progression of MCI and AD. For up-to-date detailed information, please see www.adni-info.org. Table 1 provides the demographics of the subjects used in this study.
A total of 132 subjects were considered from phase-1 of the database: 35 control subjects, 34 EMCI, 34 LMCI and 29 AD patients. RS-fMRI data were acquired on 3 T Philips MR scanners using a T2*-weighted single-shot echo-planar imaging (EPI) sequence with 48 slices. Prior to pre-processing, the first five volumes of the fMRI time series were discarded to allow for MR scanner equilibration. Standard rs-fMRI pre-processing steps were then performed on the raw data (realignment, normalization to MNI space, detrending, and regressing out nuisance covariates [6 head motion parameters, white-matter signal and cerebrospinal fluid signal]) using the SPM8 [50] and DPARSF [51] toolboxes in the Matlab® environment. Mean fMRI time series were then extracted from 200 functionally homogeneous regions-of-interest (ROIs) obtained through a spectral-clustering atlas [52].
Obtaining connectivity features
SFC was obtained between all pairs of ROIs using Pearson's correlation coefficient, giving a 200 × 200 SFC matrix per subject. Multivariate N-way analysis of covariance (MANCOVAN) was used to perform statistical tests between the SFC features of all groups, controlling for age, gender and head motion. The top-100 (i.e. lowest p value) significant features were selected for further analysis; that is, a 132 × 100 (subjects × features) SFC matrix was used for clustering. We chose a fixed number of features because we did not want the difference in the number of features (between SFC and DFC) to impact clustering performance. DFC was obtained using sliding-window Pearson's correlation [46][47][48], giving a time series of correlation values. The variance of these values over time is a measure of how much the connectivity varies over the duration of the scan, which was the measure we used further. The width of the sliding windows was not chosen arbitrarily, as in many studies, but was instead determined objectively: window lengths were chosen adaptively using the augmented Dickey-Fuller (ADF) test [46][47][48], an analytical test based on time-series stationarity. Overlapping windows were used, with successive windows differing by one time point. Similar to SFC, a 132 × 100 variance-of-DFC matrix was obtained, which was then used for clustering. Please refer to Additional file 1: Tables S1 through S3 for the list of all the top-100 significant SFC and DFC connectivity paths from which the features were extracted for further analysis.
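The two feature types can be sketched in a few lines. Below is a minimal Python illustration (the authors' pipeline used Matlab with MANCOVAN feature selection and ADF-based adaptive window lengths; the fixed window length and NumPy implementation here are our assumptions):

```python
import numpy as np

def sfc_matrix(ts):
    """Static FC: Pearson correlation between all pairs of ROI time series.
    ts is a (T, R) array of T time points for R ROIs; returns an (R, R) matrix."""
    return np.corrcoef(ts, rowvar=False)

def dfc_variance(ts, win=30, step=1):
    """Variance of sliding-window Pearson correlation for each ROI pair.
    Note: the paper chose window lengths adaptively via the ADF test; the
    fixed `win` here is purely illustrative."""
    T = ts.shape[0]
    windows = [np.corrcoef(ts[s:s + win], rowvar=False)
               for s in range(0, T - win + 1, step)]  # overlapping windows, step 1
    return np.var(np.stack(windows), axis=0)
```

In the paper's setting, `ts` would be a subject's 200-ROI time series, and the upper triangles of the resulting 200 × 200 matrices would supply the candidate features.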
The DBSCAN algorithm
For implementation of DBSCAN and OPTICS, we wrote custom Matlab scripts using Daszykowski's functions (https://www.chemometria.us.edu.pl/index.php?goto=downloads). Here, we give a brief overview of the DBSCAN algorithm, as described by Ester et al. [37]. The idea, in its simplest form, is that the density of points within a cluster must exceed a certain global threshold; regions of density below this threshold are considered noise or outliers. Two input parameters, ε, the minimum radius of a cluster, and MinPts, the minimum number of points required in a cluster, form the global threshold measures. In distinguishing regions of data by density, the following terms are used:
a. ε-neighborhood: For a point p, its ε-neighborhood is the set of points contained within the radius ε of that point.
b. Core point: A point is a core point if its ε-neighborhood contains at least MinPts points.
c. Border point: A point is a border point if its ε-neighborhood contains fewer than MinPts points.
d. Density-reachable: A point q is directly density-reachable from point p if p is a core point and q is within the ε-neighborhood of p. In general, q is density-reachable from p if there is a chain of points from p to q such that each successive point is directly density-reachable from the previous one.
e. Density-connected: Two points, p and q, are density-connected if there exists a third point from which both are density-reachable.
Starting at a core point p, DBSCAN gathers all the density-reachable points of p into a single cluster C. The procedure is then repeated with all the new core points in C until the cluster is completely surrounded by border points from which no other point is density-reachable. DBSCAN then repeats the process for any unclustered points in the database until all points have been processed. Finally, the points within a cluster will all be density-connected with each other, and the points lying completely outside these clusters are considered outliers.
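The cluster-growing procedure described above can be sketched as follows; this is a minimal Python rendering of the standard DBSCAN expansion loop (the authors used Matlab scripts), not an optimized implementation:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch (after Ester et al.): labels[i] is the cluster id
    of point i, and -1 marks noise/outliers."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)                 # -1 = unvisited or outlier
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                        # already clustered, or not a core point
        labels[i] = cluster                 # grow a new cluster from core point i
        frontier = list(neighbors[i])
        while frontier:                     # collect all density-reachable points
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:   # j is a core point: expand
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels
```

On a dataset with two tight groups and one distant point, the distant point is labeled -1 (outlier) and each group forms its own cluster.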
Determination of radius in DBSCAN algorithm
As with other clustering algorithms, DBSCAN is sensitive to its input parameters, particularly to ε. As noted in Tench et al. [53], if ε is too large, the algorithm will not be as discriminative and may include outliers in its clusters; if too small, clusters may not be detected at all. Consequently, Ester et al. [37] proposed the method of k-distance graphs to determine ε. In this method, all the data points are ordered according to their distance d from their kth nearest neighbor (setting k = MinPts), and a k-distance graph is formed by plotting d against the ordered points in the feature space. By visual inspection, ε is chosen to be the value of d at which an 'elbow' occurs in the plot. However, this method does not provide dependable ε values [54], although it effectively identifies the range in which to search for a more optimal ε value. We modified this approach by first obtaining an initial estimate of ε using the k-distance method, and then performing DBSCAN for all ε values ranging from zero to twice this initial estimate, resulting in different numbers of clusters for different ε values. Finally, we searched for the number of clusters that spanned the largest range of ε values, since this identifies the cluster count that is most stable across all choices of the parameter ε. We assigned the mean ε value within this range as the final ε value for DBSCAN clustering. The rationale behind this approach is that the cluster count spanning the largest ε range represents the most stable and true separation between the clusters.
This process can be visualized through what we term the epsilon plot (Fig. 3, shown later in the results section), which is a graph of the number of clusters identified by DBSCAN against the range of ε values. Since the number of clusters is a discrete set of values, the plot is characterized by a series of horizontal steps, where a step corresponds to the set of ε values that give rise to that number of clusters. As described earlier, the ε value corresponding to the midpoint of the widest step was chosen as the final ε value used in DBSCAN clustering.
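The widest-step selection can be sketched as below. This is a hedged illustration: the paper obtains the initial estimate by visually locating the elbow of the k-distance graph, whereas this sketch substitutes a 95th-percentile k-distance as a crude stand-in, and counts DBSCAN clusters as connected components of core points:

```python
import numpy as np

def epsilon_plot_eps(X, min_pts, n_grid=200):
    """Sketch of the epsilon-plot heuristic: scan eps from ~0 to twice a
    k-distance-based initial estimate, record the cluster count at each eps,
    and return the midpoint of the widest 'step' (most stable count)."""
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    kdist = np.sort(np.sort(dist, axis=1)[:, min_pts])
    eps0 = kdist[int(0.95 * (len(kdist) - 1))]   # elbow stand-in (our assumption)
    grid = np.linspace(1e-6, 2 * eps0, n_grid)

    def n_clusters(eps):
        # cluster count = connected components among core points under eps
        core = np.flatnonzero((dist <= eps).sum(1) >= min_pts)
        seen, comps = set(), 0
        for c in core:
            if c in seen:
                continue
            comps += 1
            stack = [c]
            while stack:
                p = stack.pop()
                if p in seen:
                    continue
                seen.add(p)
                stack += [q for q in core if dist[p, q] <= eps and q not in seen]
        return comps

    counts = [n_clusters(e) for e in grid]
    best_eps, best_len, s = grid[0], 0, 0
    for i in range(1, n_grid + 1):               # find the widest constant step
        if i == n_grid or counts[i] != counts[s]:
            if counts[s] > 0 and i - s > best_len:
                best_len, best_eps = i - s, 0.5 * (grid[s] + grid[i - 1])
            s = i
    return best_eps
```

The returned ε is the midpoint of the widest horizontal step of the epsilon plot, i.e. the ε range over which the cluster count is most stable.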
The OPTICS algorithm
Here, we provide a brief overview of OPTICS as described by Ankerst et al. [38]. OPTICS is effectively an extension of the DBSCAN algorithm that uses a range of ε values, rather than a single global threshold, to identify clusters of different local densities. Akin to DBSCAN, OPTICS uses the input parameters MinPts and the 'generating distance' ε; however, ε is now a maximum threshold value. We use ε′ to denote the range of radius values used by OPTICS, where 0 < ε′ < ε. To understand the algorithm, two terms are defined in addition to those used in DBSCAN:
a. Core distance: The core distance of a point p is the smallest distance ε′ between p and its MinPts-th neighbor such that p is a core point with respect to ε′. If this ε′ value is greater than ε, the core distance is undefined.
b. Reachability distance (of p from s): If s is not a core point with respect to ε, the reachability distance is undefined. Otherwise, the reachability distance of a point p from a point s is the maximum of s's core distance and the smallest distance such that p is directly density-reachable from s (i.e. the distance between s and p).
OPTICS works on the principle that, to accurately distinguish between clusters of different densities, higher-density clusters must first be found before lower-density clusters are identified. Hence, OPTICS orders data points by increasing ε′ value, since smaller ε′ values indicate higher density regions. This ordered list is part of the output of OPTICS. As in DBSCAN, points not belonging to any cluster are considered outliers. Unlike DBSCAN, however, OPTICS does not explicitly assign cluster memberships; rather, points are ordered based on their reachability and core distance values. In addition, OPTICS is relatively insensitive to ε and MinPts: as long as a large enough value is chosen for ε, and MinPts is set to around 10-20 (according to [38]), consistent cluster structures will be identified. This makes implementing OPTICS practically non-arbitrary and choice-free.
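The ordering principle can be sketched with a simplified priority-queue version of OPTICS (a common textbook formulation, not the authors' implementation): reachability distances are kept as the minimum seen so far, and points are visited in order of increasing reachability, so runs of small reachability values correspond to dense clusters:

```python
import heapq
import numpy as np

def optics_order(X, min_pts, max_eps=np.inf):
    """Simplified OPTICS sketch (after Ankerst et al.): returns the visit order
    and the reachability distances used to draw the reachability plot."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    core = np.sort(dist, axis=1)[:, min_pts]      # core distance of each point
    core[core > max_eps] = np.inf                 # undefined beyond max_eps
    reach = np.full(n, np.inf)
    done = np.zeros(n, bool)
    order = []
    for start in range(n):
        if done[start]:
            continue
        heap = [(np.inf, start)]                  # lazy-deletion priority queue
        while heap:
            r, p = heapq.heappop(heap)
            if done[p]:
                continue                          # stale entry, skip
            done[p] = True
            order.append(p)
            if core[p] == np.inf:
                continue                          # p is not a core point
            for q in np.flatnonzero(dist[p] <= max_eps):
                if not done[q]:
                    new_r = max(core[p], dist[p, q])   # reachability distance
                    if new_r < reach[q]:
                        reach[q] = new_r
                        heapq.heappush(heap, (new_r, q))
    return order, reach
```

Plotting `reach` in visit `order` yields the reachability plot: within-cluster points have small values, and the jump between clusters appears as a peak.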
Separation index to evaluate robustness of reachability plot in OPTICS clustering
To assess the clustering structure in OPTICS, a reachability plot is popularly used [38]. The reachability distance is plotted for each data point, ordered according to the ordered-list output of OPTICS. Figure 1 gives an example illustration of the reachability plot defined by four clusters. A series of peaks and valleys characterize the plot, where a valley indicates a cluster. This plot effectively indicates the cluster membership of each point, although further steps are performed in OPTICS to formally define each cluster. Further details can be found in [38]. Since obtaining the reachability plot is an intermediate step in OPTICS clustering, the success of the method critically depends on the reachability plot. Consequently, assessing the robustness of the plot is indirectly an assessment of the outcome of OPTICS clustering.
To assess the performance of OPTICS on both SFC and DFC features using the reachability plot, we devised a novel measure called the OPTICS separation index. Leaving out the first and last points of the reachability plot, we defined the separation index as the ratio of the mean of the peak heights to the mean value of the points between the peaks (i.e. in the valley). For example, suppose there exist peaks at points i and j in the reachability plot, with cluster-k defined by the valley bounded by these peaks; then the separation index for cluster-k is defined as:

S_k = [(RP(i) + RP(j)) / 2] / [(1 / (j - i - 1)) × Σ_{m=i+1}^{j-1} RP(m)]    (1)

where S_k is the separation index for the kth cluster, and RP(·) denotes the value of the reachability plot at a given point. The final separation index is the average of the separation indices obtained for each cluster. In simple terms, the separation index evaluates the ratio of the peak heights to the average of the baseline heights. A higher separation index indicates that the peaks are much larger than the baseline; obtaining clusters by thresholding the reachability plot is then less prone to noise from outliers. This is an indirect measure of the robustness of OPTICS clustering.
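Given a reachability plot and the indices of its peaks, the separation index of Eq. (1) can be computed directly (a minimal sketch; peak detection itself is assumed to have been done beforehand):

```python
import numpy as np

def separation_index(rp, peaks):
    """Separation index: for each valley (cluster) bounded by consecutive peaks
    i, j in the reachability plot rp, take the mean of the two peak heights over
    the mean of the in-valley points; average over all clusters."""
    idx = []
    for i, j in zip(peaks[:-1], peaks[1:]):
        valley = rp[i + 1:j]                         # points strictly between peaks
        idx.append(0.5 * (rp[i] + rp[j]) / np.mean(valley))
    return float(np.mean(idx))
```

For a toy plot with peaks of height 5 bounding two valleys of mean heights 1 and 2, the per-cluster indices are 5 and 2.5, giving a final separation index of 3.75.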
Assessing robustness using additive noise
Clustering can be defined as robust if it maintains its performance at higher noise levels relative to noiseless data. Thus, robustness is a measure of how well the clustering would perform if lower quality data were given. A clustering method providing higher cluster purity is not necessarily more robust to noisy variations in the data; it is possible for a clustering method to provide high performance yet be more sensitive to outliers and noise.
To assess the robustness of DBSCAN and OPTICS, each in association with SFC and DFC input features, we devised a novel approach called Recursive CLustering using Additive Noise (R-CLAN). Here, white Gaussian noise was successively added to the SFC (or DFC) features such that the SNR of the corrupted features decreased by 1 dB per iteration, starting from 100 dB. The traditional definition of SNR was used, that is, SNR = 10 × log10(S/N), where S is the signal power (mean squared value of all the original SFC [or DFC] values) and N is the noise power (mean squared value of the additive Gaussian noise). An SNR of 100 dB indicates a high amount of signal relative to noise (higher quality data), whereas 0 dB indicates equal amounts of signal and noise, making it difficult to distinguish the two (lower quality data).
Starting with data having an SNR of 100 dB, we performed DBSCAN and OPTICS clustering at each SNR value, decreasing the SNR by 1 dB in each iteration and terminating only when the clustering structure had changed (i.e. the cluster purity value departed from the original at 100 dB). That is, we stopped at the SNR value at which at least one subject in one of the groups was clustered into another group, thus changing the groups' structure. At this point, the terminating SNR value was taken as a measure of robustness. The lower this SNR value, the more noise was present in the data, and thus the more robust the algorithm is in combination with the input features (SFC or DFC), since a low SNR indicates a higher tolerance for noise. Figure 2 provides the flowchart for this procedure. The R-CLAN and separation-index procedures complement one another and were used to measure robustness objectively.
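The R-CLAN loop can be sketched as follows, where `cluster_fn` and `purity_fn` are hypothetical stand-ins for the clustering algorithm (DBSCAN or OPTICS) and the purity-versus-ground-truth computation:

```python
import numpy as np

def r_clan(features, cluster_fn, purity_fn, start_db=100, rng=None):
    """R-CLAN sketch: add white Gaussian noise at SNRs decreasing by 1 dB per
    iteration from `start_db`, re-cluster, and return the first SNR at which
    cluster purity departs from the noiseless baseline (lower = more robust)."""
    rng = np.random.default_rng(0) if rng is None else rng
    sig_power = np.mean(features ** 2)            # S in SNR = 10*log10(S/N)
    baseline = purity_fn(cluster_fn(features))
    for snr_db in range(start_db, -1, -1):
        noise_power = sig_power / (10 ** (snr_db / 10))
        noisy = features + rng.normal(0.0, np.sqrt(noise_power), features.shape)
        if purity_fn(cluster_fn(noisy)) != baseline:   # cluster structure changed
            return snr_db
    return 0
```

The returned SNR is the robustness measure: the lower it is, the more additive noise the clustering tolerated before its structure changed.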
Results
We first present the findings from DBSCAN and OPTICS clustering, before reporting the results from the assessment of robustness using our R-CLAN and separation index techniques. Figure 3a, b shows the epsilon plots generated using DBSCAN with SFC and DFC features, respectively. With both plots, our method of choosing ε resulted in DBSCAN detecting a total of four clusters, which was the expected number, given that we had four diagnostic groups.
Clustering performances are presented in Table 2. With DBSCAN clustering, the average group-wise cluster purities with SFC and DFC features were 75% and 87.88%, respectively, while OPTICS clustered with 93.18% cluster purity using SFC and 95.46% using DFC features. From this, it is clear that: (i) OPTICS performed better than DBSCAN; (ii) DFC features resulted in higher performance than SFC features; (iii) OPTICS clustering using DFC features gave the overall best performance; and (iv) performance on the control and AD groups was higher than on the intermediate EMCI and LMCI groups.
The execution times for a single instance of the DBSCAN and OPTICS algorithms on our computer (Intel® Xeon® quad-core processor, 3.5 GHz) were 0.0206 s and 0.0204 s, respectively, which is impressive. These numbers are interesting because the developers of OPTICS (Ankerst et al. [38]) reported that OPTICS was consistently 1.6 times slower than DBSCAN. However, their observations may not be applicable to today's computers, since they were made on hardware two decades ago. It should be noted that the time complexity of both algorithms depends on ε and the underlying data structure, and the general runtime complexity is the same for both: O(n log n). Hence, to determine any consistent differences in execution time, the two algorithms must be repeatedly run on different datasets.
Assessing robustness using additive noise
As noted earlier, higher cluster purity does not necessarily imply robustness to noise or outliers. We devised a novel technique (R-CLAN) to assess the robustness of clustering. A lower SNR value identified by R-CLAN indicates that the clustering method is more robust, because it delivered the same clustering performance even at higher noise levels. We applied this technique to clustering of both SFC and DFC features using both DBSCAN and OPTICS (see Table 3). Clearly, OPTICS clustering was more robust to noise than DBSCAN, since lower SNR values are indicative of higher robustness. We obtained higher robustness with both SFC and DFC features using OPTICS, indicating that OPTICS could deliver the same level of clustering performance even at higher noise levels. OPTICS could withstand noise levels 125-315 times (or 21-25 dB) higher than DBSCAN could. It was also observed that DFC features were more robust to noise than SFC features, and the best performance was delivered by OPTICS using DFC features, with an SNR of 21 dB. DFC features could withstand about 20 times (or 13 dB) more noise than SFC features using OPTICS. With these observations, we concluded that OPTICS is a more robust clustering technique than DBSCAN, and that DFC features result in more robust clustering performance than SFC features. These findings also corroborate the clustering results presented earlier, wherein OPTICS and DFC gave better cluster purities than DBSCAN and SFC. Our findings thus attribute both superior performance and higher robustness to OPTICS clustering using DFC features.
Separation index to evaluate robustness of reachability plot in OPTICS clustering
The reachability plot determines the quality of OPTICS clustering, and hence we devised a measure called the separation index to evaluate the robustness of the reachability plot. A higher value indicates superior robustness. Upon evaluating the measure with both SFC and DFC features (see Table 4), we found that DFC features provided a higher separation index, which in turn indicates that DFC is a more robust measure than SFC for performing clustering using OPTICS. This corroborates the result obtained from R-CLAN, wherein the DFC features showed higher robustness to additive noise than the SFC features. These findings further reiterate that OPTICS is a better clustering algorithm than DBSCAN for unsupervised clustering of fMRI connectivity features obtained from subjects with cognitive impairment, and that DFC features result in better clustering performance than SFC features.
Discussion
In this work, density-based clustering was performed on static and dynamic functional connectivity features obtained from fMRI data of subjects with cognitive impairment. The data consisted of subjects with early and late mild cognitive impairment, Alzheimer's disease and age-matched healthy controls. DBSCAN and OPTICS clustering techniques were applied to SFC as well as DFC features obtained from fMRI data. Since the input data came from four diagnostic categories, we expected to identify four clusters using DBSCAN and OPTICS, although the algorithms were not biased with this information a priori. Upon clustering, we found that both DBSCAN and OPTICS resulted in four clusters, obtained in a blind, unsupervised manner. This shows that these techniques were able to detect inherent disease clusters from neuroimaging data without any supervision. We found that higher cluster purities were obtained with OPTICS clustering compared to DBSCAN, and that DFC features always resulted in superior clustering performance compared to SFC features. Furthermore, the same trend was observed with the measures of robustness and separation index. OPTICS clustering combined with DFC features resulted in the highest performance as well as the greatest robustness to noise. Previous work by Jia et al. [46] reported a similar superiority of DFC over SFC, wherein they determined that DFC features explained significantly more variance in human behavior than SFC. Another report by Jin et al. [47] found that DFC features have a better ability to predict psychiatric disorders (such as post-traumatic stress disorder) than SFC features. Our findings corroborate these previous reports.
Referring to Table 2, the average clustering performance was found to be higher in the control (93.57%) and AD (92.24%) groups, as compared to the intermediate EMCI (86.03%) and LMCI (80.15%) groups. This reflects a rather expected trend, wherein the control (completely healthy) and AD (severely ill) groups lie at the extremes of the spectrum, resulting in fewer outliers, whereas the EMCI and LMCI groups form the mid-band of the spectrum, resulting in more outliers and lower performance.
A novel technique using additive noise (called R-CLAN) was devised to assess the robustness of DBSCAN and OPTICS clustering. Table 3 shows that OPTICS clustering was more robust to noise than DBSCAN for both SFC and DFC feature sets, demonstrating that OPTICS can withstand higher noise levels than DBSCAN. In addition, clustering with DFC features resulted in superior clustering performance as well as superior robustness to noise compared with SFC features. Static connectivity is popular in neuroimaging research and is extensively used, while dynamic connectivity is a newer technique that has gained traction more recently [42]. Both SFC and DFC provide characteristically distinct information: the former provides the strength of connectivity over the entire fMRI scan, while the latter provides the temporal variability of connectivity (i.e. the change in connectivity over time). Recent studies have demonstrated unique and superior properties of DFC over SFC [46,55]. A comparison of SFC and DFC in their ability to identify hidden structures in the data is one of the novel contributions of our work, wherein DFC was found to be better and more robust than SFC. This is important because the research community still largely prefers not to look beyond static connectivity. We hope our findings encourage researchers to incorporate dynamic connectivity analysis among their research strategies.
Traditionally, supervised classifiers are termed 'robust' when the testing error is close to the training error, that is, the classifier performs well even on datasets on which it was not trained. Our definition of robustness works in a similar manner: by adding noise to the underlying data, our form of robustness measures how much the quality of the data, evaluated by SNR, must degrade before the clustering changes the subject classes. The 'different' data here is not a completely new set of data, but the same data changed by noise. Besides the input parameters, feature selection and other algorithmic characteristics, an unsupervised learning algorithm is sensitive to outliers in the data as well as to the underlying data quality. Since OPTICS and DBSCAN are already designed to detect outliers [37,38], our robustness measures are a novel way of evaluating their sensitivity to data quality. In the data mining literature, this is the difference between robustness to class noise (identifying outliers [56]) and robustness to attribute noise (errors in the feature values themselves [57]). Attribute noise tends to be more difficult to detect and eliminate than class noise [58]; hence, it is important that clustering works well in the presence of high attribute noise, which is common in fMRI data even after pre-processing and noise reduction [39]. To the best of our knowledge, little research has been done in determining the level of attribute noise in fMRI data that either clustering or a supervised classifier can withstand before losing accuracy. Using our robustness measures, results indicate that OPTICS and DFC features are the best combination of clustering method and input features to use in the presence of attribute noise, as compared to DBSCAN and SFC features.
During our evaluation of robustness, we modeled the noise present in SFC and DFC features as additive white Gaussian noise. Modeling noise in fMRI with a white Gaussian distribution is well documented [40,41], supported by the fact that non-biological noise comes from various independent sources [40] that tend to resemble white noise. In addition, Chen and Tyler [40] showed that the power spectrum of non-biological noise approaches the flat line characteristic of a white noise spectrum, and hence can be well approximated by white Gaussian noise; biological noise, however, was non-Gaussian. Other works indicate that the Gaussian noise assumption is appropriate for data with high SNR values, but that noise approaches a Rician distribution at lower SNR values [39,59,60]. Thus, a weakness of our current noise model is that it does not consider non-white sources of noise that can corrupt features. When developing a noise reduction method for fMRI data obtained from Alzheimer's disease patients, Garg et al. [41] used additive white Gaussian noise and Rician noise separately to model different levels of noise in artificial and real-world data. Likewise, future work could compare additive Rician noise and white Gaussian noise within the R-CLAN framework, and could investigate the response of clustering to various levels of different types of attribute noise.
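As a concrete illustration of this style of noise-injection testing (a sketch of the general idea only, not the authors' R-CLAN procedure; the synthetic data, SNR grid, DBSCAN parameters and adjusted-Rand floor are all illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

def add_awgn(X, snr_db, rng):
    """Corrupt X with additive white Gaussian noise at the requested SNR (dB)."""
    noise_power = np.mean(X ** 2) / (10 ** (snr_db / 10))
    return X + rng.normal(0.0, np.sqrt(noise_power), X.shape)

def noise_tolerance(X, base_labels, snr_grid_db, ari_floor=0.9, seed=0):
    """Lowest SNR (dB) at which re-clustering still agrees with the clean solution."""
    rng = np.random.default_rng(seed)
    tolerated = None
    for snr in sorted(snr_grid_db, reverse=True):  # start with the least noise
        noisy = DBSCAN(eps=1.0, min_samples=5).fit_predict(add_awgn(X, snr, rng))
        if adjusted_rand_score(base_labels, noisy) >= ari_floor:
            tolerated = snr
        else:
            break
    return tolerated

# Two well-separated synthetic "subject groups" stand in for connectivity features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(5, 0.3, (40, 2))])
base = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
snr_limit = noise_tolerance(X, base, snr_grid_db=[30, 20, 10, 0])
```

A method that tolerates a lower SNR limit (i.e. more noise) before the cluster assignments change would count as the more robust one, which is the sense in which the paper ranks OPTICS above DBSCAN.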
The idea of measuring the noise robustness of clustering techniques is not new to the literature. For example, the original work that presented OPTICS used a steepness value to determine the points at which clusters begin and end [38]. While this worked well in theory, its success was found to depend highly on the steepness value, an input parameter with no standard way of determining an appropriate value. In another method [61], similar to the peaks we used in finding the separation index, all significant local maxima in the reachability plot were found, and significance was used to distinguish deep valleys representing well-formed clusters from shallow regions that were simply noise. The separation index is similar to these methods in that it measures how well defined each cluster is with respect to the noise points surrounding it. However, it is a much simpler method and requires no human input. It could also be used in conjunction with the previous methods and other cluster-extraction methods to determine how well separated a specific cluster is from the points that form its boundaries on the reachability plot. In addition, the separation index could serve as a measure of the relative density of each cluster, since higher separation indices usually correspond to deeper valleys, which in turn indicate high-density regions within the dataset.
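The separation index is not given in closed form here, so the following sketch shows one plausible reading of the idea: locate peaks on an OPTICS reachability plot and score a valley by how far its bounding peaks rise above its floor. The synthetic data, the `prominence` setting, and the exact scoring are our own illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import OPTICS

rng = np.random.default_rng(1)
# Two dense blobs embedded in sparse uniform background noise.
X = np.vstack([rng.normal(0, 0.2, (50, 2)),
               rng.normal(4, 0.2, (50, 2)),
               rng.uniform(-2, 6, (20, 2))])

opt = OPTICS(min_samples=10).fit(X)
reach = opt.reachability_[opt.ordering_]
reach[~np.isfinite(reach)] = np.nanmax(reach[np.isfinite(reach)])  # first point is inf

# Peaks in the reachability plot bound the valleys that correspond to clusters.
peaks, _ = find_peaks(reach, prominence=0.2)

def valley_separation(left_peak, right_peak):
    """Score a valley by how far its bounding peaks rise above its floor
    (one plausible reading of the separation index; the paper's exact
    formula may differ)."""
    floor = reach[left_peak + 1:right_peak].min()
    return min(reach[left_peak], reach[right_peak]) - floor

if len(peaks) >= 2 and peaks[-1] - peaks[0] > 1:
    score = valley_separation(peaks[0], peaks[-1])
```

Deeper valleys (denser clusters) yield larger scores, matching the text's observation that higher separation indices correspond to high-density regions.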
It is interesting to note that DBSCAN and OPTICS were found to be just as competitive in classifying Alzheimer's disease patients as traditional supervised learning classifiers [62]. These two density-based clustering algorithms automatically detect outliers as part of their operation, which allowed us to test for robustness to data quality, something that is difficult to do with supervised classifiers since outlier noise is more harmful to their performance [58]. In addition, unsupervised algorithms such as DBSCAN and OPTICS can be used in applications beyond classification: determining hidden disease states or sub-states that often go undetected in traditional diagnostic classification, predicting the future diagnostic status of new subjects based on current clusters [63], and hypothesis generation and testing, to name a few. For example, using cerebrospinal fluid data, structural MRI and FDG-PET scans as features, an earlier study applied hierarchical clustering to healthy controls to identify subgroups that could later be susceptible to Alzheimer's disease [64]. However, the number of clusters had to be chosen through visual assessment prior to clustering. Our results indicate that similar experiments could be performed using density-based clustering methods that require few input parameters and no pre-specified number of clusters.
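To make the last point concrete, a minimal sketch with scikit-learn's OPTICS on synthetic four-group data (the groups, parameters and eps value are illustrative assumptions, not the study's settings) shows that no cluster count is ever supplied:

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(2)
# Four synthetic "diagnostic groups" stand in for controls, EMCI, LMCI and AD.
centers = [(0, 0), (6, 0), (0, 6), (6, 6)]
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in centers])

# No number of clusters is supplied. A DBSCAN-style extraction at a fixed eps
# is used here for determinism; OPTICS's default xi extraction also works.
labels = OPTICS(min_samples=10, cluster_method="dbscan", eps=1.0).fit_predict(X)
n_clusters = len(set(labels) - {-1})  # -1 marks outliers, not a cluster
```

The number of groups is discovered from the density structure itself, in contrast to hierarchical clustering with a visually chosen cluster count.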
We used static and dynamic functional connectivity in this work. The brain networks obtained from them can also be used in a graph-theoretic framework, and future studies could attempt clustering of network properties to obtain new insights.
Supervised algorithms such as the support vector machine (SVM) are popular in brain imaging. Backed by the results of this study, we encourage researchers to consider density-based clustering methods for subject grouping and classification. Future work in this area could involve performing these analyses on various neurological and psychiatric disease conditions, such as epilepsy, attention deficit hyperactivity disorder (ADHD), schizophrenia, autism spectrum disorder (ASD) and post-traumatic stress disorder (PTSD), to determine whether disease classification findings are consistently good across various clinical populations.
Our main contributions are as follows: (i) we demonstrated that unsupervised learning, which is not prone to overfitting, can be used to determine inherent disease clusters with performance comparable to supervised learning; (ii) we demonstrated that density-based clustering methods, which require minimal input parameters, perform satisfactorily, and that OPTICS is the suitable choice for fMRI connectivity data; (iii) we proposed two novel robustness assessment techniques and demonstrated that OPTICS clustering is a superior, noise-robust technique for fMRI connectivity data; and (iv) for the first time in the literature, we assessed and compared the ability of static and dynamic connectivity features to identify inherent disease clusters and found dynamic connectivity features to be superior and more robust. This is a significant finding given that the research community is still largely inclined towards the continued use of static connectivity alone.
Conclusion
Unsupervised learning algorithms present some key theoretical advantages over supervised learning algorithms (e.g. a priori diagnostic labeling is not required, and they have the potential to detect unknown data structures). Our work presented and contrasted two relatively new methods to perform unsupervised clustering of fMRI data in a relatively large clinical dataset. We briefly described and assessed the performance of two density-based clustering algorithms, DBSCAN and OPTICS, which were used to cluster three stages of cognitive impairment (EMCI, LMCI and AD) and matched healthy controls using static and dynamic functional connectivity features. DBSCAN was found to be relatively more sensitive to noise and less precise, whereas OPTICS accurately identified all four groups and was more robust to noise as measured by our proposed R-CLAN and separation-index robustness measures. OPTICS clustering using DFC features was found to be more dependable than DBSCAN and SFC features. Given the superiority of DFC and OPTICS, we encourage researchers to incorporate dynamic connectivity analysis among their research strategies and hope that we have motivated the community to consider employing OPTICS clustering for subject classification and for identifying hidden disease clusters.
"Computer Science"
] |
The valproic acid rat model of autism presents with gut bacterial dysbiosis similar to that in human autism
Background Gut microbiota has the capacity to impact the regular function of the brain, which can in turn affect the composition of the microbiota. Autism spectrum disorder (ASD) patients suffer from gastrointestinal problems and experience changes in gut microbiota; however, it is not yet clear whether the change in the microbiota associated with ASD is a cause or a consequence of the disease. Methods We investigated the species richness and microbial composition in a valproic acid (VPA)-induced rat model of autism. Fecal samples from the rectum were collected at necropsy, microbial total DNA was extracted, 16S rRNA genes were sequenced using Illumina, and the global microbial co-occurrence network was constructed using a random matrix theory-based pipeline. The collected rat microbiome data were compared to available data derived from cases of autism. Results We found that VPA administration during pregnancy reduced fecal microbial richness, changed the gut microbial composition, and altered the metabolite potential of the fecal microbial community in a pattern similar to that seen in patients with ASD. However, the global network property and network composition as well as microbial co-occurrence patterns were largely preserved in the offspring of rats exposed to prenatal administration of VPA. Conclusions Our data on the microbiota of the VPA rat model of autism indicate that this model, in addition to behaviorally and anatomically mimicking the autistic brain as previously shown, also mimics the microbiome features of autism, making it one of the best-suited rodent models for the study of autism and ASD. Electronic supplementary material The online version of this article (10.1186/s13229-018-0251-3) contains supplementary material, which is available to authorized users.
Introduction
The gut and brain form the gut-brain axis through bidirectional nervous, endocrine, and immune communication.
A change in one of these systems will most certainly have effects on the other systems. Disorders in the composition and quantity of gut microbiota can affect both the enteric nervous system and the central nervous system [1]. Specifically, microbiota has the capacity to impact the regular function of the brain, which can in turn affect the composition of microbiota via specific substances. Specific molecules and metabolic pathways in microbiota have been shown to be linked to neural development and neurodegenerative disorders, including Parkinson's disease, Alzheimer's disease, Huntington's disease, schizophrenia, and multiple sclerosis [1][2][3].
Valproic acid (VPA) is a medication used for epilepsy and mood swings. Children prenatally exposed to VPA have an increased chance of being diagnosed with autism [4-7]. In addition, VPA exposure leads to accelerated or early brain growth, which also occurs in some cases of autism [8]. Most importantly, VPA causes an alteration in the excitation/inhibition balance in the cerebral cortex. Specifically, rats exposed to VPA in utero present with an increased glutamatergic and a decreased GABAergic component in the cortex [9]. The VPA rat model of autism shows behavioral, immune, and microbiota changes similar to those described in patients with autism. We recently discovered that specific GABAergic interneuron types, the parvalbumin-positive (PV+) chandelier (Ch) and PV+ basket (Bsk) cells, are decreased in the prefrontal cortex in autism [10,11]. We also demonstrated that when VPA is administered via intraperitoneal injection to pregnant rats at a specific day of prenatal development and at a specific dose (embryonic day (E) 12.5, 400 mg/kg), the offspring of these rats ("400-E12 VPA rats") show a decrease in the number of PV+ Ch and PV+ Bsk cells in the adult cerebral cortex similar to what we found in humans with autism (under revision). In addition, the 400-E12 VPA rats show behavioral changes similar to those exhibited by patients with autism (under revision).
ASD patients suffer from gastrointestinal problems and experience changes in the gut microbiota, including shifts in the levels of Firmicutes, Bacteroidetes, and Proteobacteria and in the abundance of Lactobacillales and Clostridia [12,13]. Other gut commensals found to be altered in autism belong to genera such as Bifidobacterium, Lactobacillus, Prevotella, and Ruminococcus [14]. Microbiome changes have also been described in several mouse models of autism, with one publication on a VPA mouse indicating a decreased abundance of Bacteroidetes in VPA-exposed offspring [15]. It is not yet clear whether the changes in the microbiome associated with specific disease states are a cause or a consequence of the disease. Recent studies indicate that gut microbiota transplantation can transfer behavioral phenotypes, suggesting that the gut microbiota may be a modifiable factor modulating the development or pathogenesis of neuropsychiatric conditions. In this study, we investigated changes in microbial richness and microbiome composition in rats in response to prenatal VPA administration (400 mg/kg at E12.5) and found VPA-induced alterations similar to those seen in autism.
VPA reduces fecal microbial richness of the offspring
A single intraperitoneal (IP) injection of VPA during pregnancy in rats had a significant effect on fecal microbial richness in the offspring (P < 0.05, Welch's t test). In the control rats, the Chao1 value was 1005.62 ± 120.00 (N = 11); VPA injection significantly reduced Chao1 to 925.98 ± 76.62 (N = 10, P < 0.05). However, other microbial diversity indicators, such as Pielou's evenness, PD whole tree, and the Shannon and Simpson indices, remained unchanged by VPA.
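For readers unfamiliar with these statistics, a minimal sketch of the bias-corrected Chao1 estimator and Welch's unequal-variance t test on hypothetical per-sample values (the numbers below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from one sample's OTU counts."""
    counts = np.asarray(counts)
    s_obs = np.count_nonzero(counts)   # observed OTU
    f1 = np.sum(counts == 1)           # singletons
    f2 = np.sum(counts == 2)           # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical per-sample Chao1 values for two groups (invented numbers),
# compared with Welch's t test, which does not assume equal variances.
control = [1010, 980, 1120, 950, 1005, 1090]
vpa = [930, 905, 960, 880, 940, 920]
t_stat, p_value = stats.ttest_ind(control, vpa, equal_var=False)
```

`equal_var=False` is what makes `ttest_ind` a Welch test; the classic (non-bias-corrected) Chao1 uses F1²/(2F2) instead of the corrected term shown here.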
In utero VPA exposure also had a profound impact on fecal microbial structure. At the operational taxonomic unit (OTU) level, the mean Bray-Curtis similarity (%) within the control and VPA groups was 63.57 ± 4.04, significantly higher than the mean similarity between the control and VPA groups (59.52 ± 3.24; P = 1.78 × 10−12). A cluster analysis of the resemblance values using the group-average approach showed that the individual microbial communities from the control and VPA groups formed two distinct clusters (Fig. 1). Together, our findings suggest that the effect of VPA may be long-lasting and could have a significant impact on the fecal microbial community structure in rats prenatally exposed to the toxin.
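The Bray-Curtis/group-average analysis can be sketched as follows on a hypothetical OTU count table (invented data; the group shift and noise levels are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Hypothetical OTU count table (invented data): 6 "control" and 6 "VPA" samples
# over 50 OTU, built as a shared profile plus a group shift and per-sample noise.
base_c = rng.integers(10, 60, 50)
base_v = base_c + rng.integers(-10, 30, 50)
counts = np.vstack(
    [np.clip(base_c + rng.integers(-5, 6, 50), 0, None) for _ in range(6)]
    + [np.clip(base_v + rng.integers(-5, 6, 50), 0, None) for _ in range(6)]
)

d = pdist(counts, metric="braycurtis")  # pairwise Bray-Curtis dissimilarity
sim = 100.0 * (1.0 - squareform(d))     # % similarity, as reported in the text
tree = linkage(d, method="average")     # group-average (UPGMA) linkage
groups = fcluster(tree, t=2, criterion="maxclust")
```

Cutting the group-average tree into two clusters recovers the treatment split when, as here, within-group similarity exceeds between-group similarity.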
VPA affects the gut microbial composition
Compared to the control group, VPA treatment significantly altered the abundance of 13 higher-level taxa based on linear discriminant analysis (LDA) scores (absolute log10 LDA score > 2.0 and P < 0.05 based on the Kruskal-Wallis test), including one class (α-Proteobacteria, Fig. 2a), four families (Fig. 2b, c), and six genera (Fig. 3a, b). For example, the abundance of α-Proteobacteria was significantly increased by VPA treatment (Fig. 2a; LDA > 3.4 and P < 0.05). The abundance of three families, Eubacteriaceae (Fig. 2b), Rikenellaceae, and Staphylococcaceae, was also significantly increased by VPA (LDA > 2.0 and P < 0.05). On the other hand, the abundance of Enterobacteriaceae (Fig. 2c) was significantly repressed by VPA (LDA = 2.0229 and P = 0.0014). At the genus level, a significantly higher abundance of the genus Anaerotruncus (Fig. 3a) was observed in the control group than in the VPA group, while VPA significantly increased the abundance of Allobaculum, Anaerofustis, Proteus, and Staphylococcus (LDA > 2.0 and P < 0.01; Fig. 3b).
The abundance of at least 100 OTU was significantly impacted by VPA treatment (LDA > 2.0 and P < 0.05 based on the Kruskal-Wallis test), representing approximately 10% of all OTU in a given gut microbial community (Additional file 1). Together, the relative abundance of these OTU accounted for approximately 15% of the fecal microbial community. Intriguingly, 93 of the 100 OTU significantly impacted by VPA belonged to the class Clostridia. Selected OTU whose relative abundance was significantly altered by VPA are listed in Table 1. Compared to untreated controls, VPA repressed the abundance of 61 OTU while increasing that of 39 OTU. For example, two OTU assigned to a named species, Ruminococcus flavefaciens, ID_1110988 (Fig. 3c) and ID_562599, were significantly increased by VPA. Moreover, VPA had a profound impact on some of the most predominant OTU. Two OTU, ID_4296216 and ID_264734, belonging to the genus Ruminococcus and the family S24-7, respectively, were significantly increased by VPA, and both had a relative abundance greater than 1.0%. OTU ID_272080 (Clostridiales, Fig. 3d) and ID_177930 (Lachnospiraceae) were also among the most abundant.
Differences in microbial composition between the sexes were investigated by comparing male and female rats prenatally exposed to VPA with same-sex control rats. While the uneven sample sizes in the male and female comparison may be a concern, the drastic sex-dependent changes induced by VPA were evident (Fig. 4a, b). At the phylum level, the abundance of Bacteroidetes was significantly increased by VPA in males only (LDA = 4.69; P < 0.05), while the abundance of Actinobacteria was significantly increased by VPA in females only (LDA = 3.50; P < 0.05), as compared to controls of the same sex. VPA significantly repressed the abundance of the class Coriobacteriia and increased that of the classes Bacteroidia and α-Proteobacteria in males only (LDA > 2.0 and P < 0.05). The abundance of several genera was significantly increased by VPA only in females, including Allobaculum, Bifidobacterium, Odoribacter, and Staphylococcus (LDA > 2.6 and P < 0.05). Intriguingly, the abundance of the genus Candidatus Arthromitus, a group of segmented filamentous bacteria (SFB), was also significantly increased by VPA in female rats (LDA = 3.774 and P = 0.015) but not in males. There is strong evidence that these gut epithelium-associated bacteria possess strong abilities to modulate host immune responses.
At the species (OTU) level, prenatal VPA exposure induced significant changes in the relative abundance of 66 and 72 OTU in male and female rats, respectively. Among them, the abundance of 61 OTU was also significantly impacted by VPA exposure regardless of sex. A total of 9 OTU displayed significant directional changes with VPA in both male and female rats (Table 2). For example, the relative abundance of an OTU (GreenGene ID_1110312) assigned to the order Clostridiales and an OTU (GreenGene ID_1110988) assigned to Ruminococcus flavefaciens was significantly higher in both male and female rats with prenatal VPA exposure (LDA > 3.40; P < 0.001), while 7 other OTU were significantly reduced in the fecal microbial communities of both male and female rats with VPA exposure (LDA > 2.0 and P < 0.05).
VPA alters the metabolite potential of the fecal microbial community
Among the 5264 predicted KEGG proteins from the rat fecal microbiome, 4331 proteins were supported by at least 10 hits. Several proteins belonging to ABC transporters, such as multiple sugar transport system permease protein (K02025) and ATP-binding cassette, subfamily B, bacterial (K06147), and RNA polymerase sigma-70 factor, ECF subfamily (K03088) were among the most abundant. Compared to the control, VPA injection repressed the abundance of 11 KEGG proteins, including putative ABC transport system ATP-binding protein (K02003), multiple sugar transport system substrate-binding protein (K02027), LacI family transcriptional regulator (K02529), methyl-accepting chemotaxis protein (K03406), two proteins related to two-component system, K07718 and K07720, and four proteins in the peptide/nickel transport system (K02031, K02032, K02033, K02034; ATP-binding and permease proteins, respectively).
VPA injection appeared to have a profound impact on gut microbial metabolic pathways. A total of 29 pathways were significantly impacted by VPA (LDA score > 2.0; P < 0.05), with significantly elevated hit counts for 21 pathways and repressed counts for 8 pathways (Table 3). For example, the normalized hit counts assigned to the bacterial secretion system, DNA replication, DNA repair and recombination proteins, histidine metabolism, and lipid biosynthesis were significantly increased by VPA. On the other hand, ABC transporters, the most abundant pathways in numerous biological systems, as well as the two-component system, bacterial chemotaxis, and bacterial motility proteins, were significantly repressed by VPA.
Microbial co-occurrence patterns and network structure remain unchanged by VPA
As Table 4 shows, the global network properties as well as the network composition and microbial co-occurrence patterns in the fecal microbial communities of the offspring of control and VPA-treated rats were largely indistinguishable. Both global networks were highly modular, with a modularity between 0.84 and 0.86. The two networks shared 230 nodes (OTU), or 57.1% of all members. The number of large modules with ≥ 10 members in the two networks was identical [12]. Moreover, the relative proportion (%) of OTU node distributions at the phylum level was stable between the two networks (Fig. 5). For example, the most dominant phylum in both networks was Firmicutes, accounting for 89.6% and 87.6% of all OTU in the control and VPA networks, respectively, similar to the percentage of OTU assigned to Firmicutes in the microbial communities prior to network inference (88.3% and 87.5% in the control and VPA groups, respectively). Moreover, the percentage of OTU nodes assigned to Actinobacteria was 0.50% and 0.49% in the control and VPA networks, respectively. Some minor yet notable differences nevertheless existed. The percentage of OTU nodes assigned to Proteobacteria was 0.99% and 0.49% in the control and VPA networks, respectively. Of note, one OTU (GreenGene ID_1136443) assigned to Mucispirillum schaedleri, the sole species in the phylum Deferribacteres, was present in every sample at relatively high abundance but did not interact with any other OTU in the communities. As a result, this species was not a member of either network. The Z-P scatter plots allowed us to dissect the topological roles of OTU nodes in the network and infer their possible ecological functions in the fecal microbial community. As Fig. 6 shows, > 98% of the OTU nodes in both networks were peripherals, with most of their links lying inside their own modules, based on the Olesen classification [16].
These OTU likely acted as specialists in the microbial community. A total of six OTU, all assigned to the order Clostridiales, may function as generalists in the fecal microbial community of control rats: one OTU (GreenGene ID_545038), assigned to the family Peptostreptococcaceae, acted as a connector species linking modules together, while the other five OTU were module hubs that may play an important role in the coherence of their own modules. The relative abundance of two of these five OTU, GreenGene ID_461487 and ID_1109864, was also significantly altered by VPA administration. In the VPA network, the OTU acting as connectors and module hubs were completely different. While all three connectors were from the order Clostridiales, two of them belonged to the family Ruminococcaceae (GreenGene ID_183686 and ID_4432234). One of the four module hubs, GreenGene ID_322723, was from the genus Lactobacillus, while the other three OTU were from the order Clostridiales. Overall, we demonstrated that prenatal administration of VPA reduces fecal microbial richness, changes the gut microbial composition, and alters the metabolite potential of the fecal microbial community in rats. However, the global network properties, network composition, and microbial co-occurrence patterns are largely preserved in these animals.
VPA administration
VPA (valproic acid sodium salt, Sigma P4543) was administered intraperitoneally to pregnant Sprague Dawley rats (8 weeks old) at E12.5 (n = 3). Pregnant control dams of the same age were injected with sterile saline, also at E12.5 (n = 5). The pups of these dams were the subjects of this study. We collected stool and tissue samples from 10 VPA offspring and 11 control offspring, equally distributed among groups.
Fecal total DNA extraction
Fecal samples from the rectum were collected from 8-week-old rats at necropsy, snap-frozen in liquid nitrogen, and stored in −80°C freezers until total DNA was extracted. Microbial total DNA was extracted from fecal samples using a QIAamp PowerFecal DNA Kit (Qiagen, Germantown, MD, USA). DNA integrity and concentration were quantified using a BioAnalyzer 2100 (Agilent, Palo Alto, CA, USA).
Illumina sequencing of 16S rRNA genes
The 16S rRNA gene sequencing was performed as previously described [17,18]. PCR was performed using the following cycling profile: initial denaturation at 95°C for 2 min, followed by 20 cycles of 95°C for 30 s, 60°C for 30 s, and 72°C for 60 s. Amplicons were purified using Agencourt AMPure XP bead kits (Beckman Coulter Genomics, Danvers, MA, USA) and quantified using a BioAnalyzer DNA 7500 chip kit and a QuantiFluor fluorometer. The purified amplicons from individual samples were pooled in equal molar ratios. The purified amplicon pool was further spiked with approximately 25% whole-genome shotgun libraries prepared using an Illumina TruSeq DNA sample prep kit with a compatible adaptor barcode, to enhance sequence diversity during the first few cycles of sequencing for better cluster differentiation. The concentration of the final library pool was quantified using a BioAnalyzer high-sensitivity DNA chip kit (Agilent). The library pool was sequenced using an Illumina MiSeq Reagent Kit v3 on an Illumina MiSeq sequencer as described previously. The mean number of 2 × 250 bp pair-end sequences obtained was 347,849.14 (± 90,627.63, SD, N = 21) per sample.
Sequence data analysis
The sequence data were preprocessed using MiSeq Control Software (MCS) v2.4.1. Raw sequences were first analyzed using FastQC version 0.11.2 to check basic statistics, such as GC%, per-base quality score distribution, and sequences flagged as poor quality. The four maximally degenerate bases (NNNN) at the most 5′ end of the read pair, which were designed to maximize the diversity during the first four bases of the sequencing run for better identification of unique clusters and improved base-calling accuracy, were then removed. Each sequence read was scanned for the forward and reverse PCR primers at its 5′ and 3′ ends; reads without primers were discarded. Chimeric reads were also removed. The processed pair-end reads were then merged using PandaSeq v2.8 with default parameters to generate representative complete nucleotide sequences (contigs). The overlapping regions of the pair-end reads were first aligned and scored, and reads with low-score alignments and a high rate of mismatches were discarded. After these quality control and filtering procedures, greater than 91% of the input raw sequences (mean 347,849 reads per sample) were retained for subsequent analysis.
Fig. 6 The scatter plot showing the distribution of OTU based on their topological roles in the network in the gut microbial community of rats with and without prenatal VPA exposure. a Control. b VPA. Each dot represents an OTU. Z, within-module connectivity. P, among-module connectivity.
The QIIME pipeline (v1.9.1) with the default reference v0.1.3 was used to analyze the 16S rRNA gene sequences. Both the "closed reference" and "open reference" protocols in the pipeline were used for OTU picking as previously described [18]. The rarefaction depth was set to 100,000 quality reads per sample. The default QIIME parameters were used, except that the OTU abundance threshold was lowered to 0.0001%. The GreenGene database (v13.8) was used for taxonomy assignment (greengenes.lbl.gov). PyNAST (v1.2.2) was used for sequence alignment. PICRUSt (v1.0.0), a software package designed to predict metagenome functional contents from marker gene surveys (Langille et al., 2013), was used with default parameters to predict gene contents and metagenomic functional information based on the OTU table generated using the closed-reference protocol in QIIME. Briefly, the OTU table was first normalized by dividing each OTU by its known/predicted 16S copy number using the PICRUSt workflow normalize_by_copy_number.py. The gene contents, or the abundance of KEGG Orthology (KO) terms, were predicted from the normalized OTU table using the workflow predict_metagenomes.py. The predicted metagenome function was further analyzed by collapsing thousands of KEGG Orthologs into higher functional categories (pathways) (categorize_by_function.py). In addition, the specific OTU contributing to a given function or pathway were identified using the workflow metagenome_contributions.py, as described previously [17]. The linear discriminant analysis effect size (LEfSe) algorithm was used to identify OTU relative abundance values and KEGG gene families and pathways that display significant differences between two biological conditions [19], with the default cutoff (absolute log10 LDA score > 2.0 and P < 0.05 based on the Kruskal-Wallis test by ranks).
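The LEfSe screening step described above, a per-feature Kruskal-Wallis test with a significance cutoff after which survivors are ranked by LDA effect size, can be sketched as follows on hypothetical abundance data (the LDA ranking itself is omitted, and the data are invented):

```python
import numpy as np
from scipy import stats

def kw_screen(group_a, group_b, alpha=0.05):
    """Per-feature Kruskal-Wallis screen, the first LEfSe step. Returns indices
    of features with P < alpha; LEfSe then ranks survivors by an LDA effect
    size and keeps those with |log10 LDA| > 2.0 (not reproduced here)."""
    hits = []
    for j in range(group_a.shape[1]):
        _, p = stats.kruskal(group_a[:, j], group_b[:, j])
        if p < alpha:
            hits.append(j)
    return hits

rng = np.random.default_rng(4)
a = rng.poisson(20, (10, 5)).astype(float)  # hypothetical OTU abundances, group A
b = a.copy()                                # group B identical by construction...
b[:, 2] = rng.poisson(60, 10)               # ...except feature 2, which is shifted
hits = kw_screen(a, b)
```

Because the rank-based Kruskal-Wallis test makes no distributional assumptions, it suits sparse, non-normal OTU count data better than a per-feature t test would.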
Network construction and visualization
The global microbial co-occurrence network was constructed using a random matrix theory (RMT)-based pipeline [20,21]. OTU detected in < 50% of all samples were excluded because OTU sparsity has a drastic effect on the precision and sensitivity of network inference [22]. A similarity matrix, which measures the degree of concordance between the abundance profiles of individual OTU across samples, was then obtained using Pearson correlation analysis of the abundance data [20]. A threshold cutoff value (0.88) was determined automatically in the pipeline by identifying the transition of the nearest-neighbor spacing distribution of eigenvalues from the Gaussian orthogonal ensemble to the Poisson distribution, and was then applied to generate an adjacency matrix for network inference [21]. The fast-greedy modularity optimization procedure was used for module separation. The within-module degree (Z) and among-module connectivity (P) were then calculated and plotted as a scatter plot for each network to gain insight into the topological roles of individual nodes according to the Olesen classification [21]. The network structure was visualized using Cytoscape v3.6.1.
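The correlate-threshold-modularize-score pipeline can be sketched as follows. This is an illustrative reduction: a fixed 0.8 cutoff stands in for the RMT-derived 0.88 threshold, connected components stand in for fast-greedy modularity optimization (they coincide on this toy adjacency), and the abundance data are invented:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(5)
# Hypothetical abundance matrix: 30 samples x 12 OTU forming two correlated blocks.
latent = rng.normal(size=(30, 2))
X = np.hstack([latent[:, [0]] + 0.2 * rng.normal(size=(30, 6)),
               latent[:, [1]] + 0.2 * rng.normal(size=(30, 6))])

r = np.corrcoef(X, rowvar=False)      # Pearson correlation between OTU profiles
np.fill_diagonal(r, 0.0)
adj = (np.abs(r) >= 0.8).astype(int)  # fixed cutoff standing in for the RMT-derived one

# On this toy adjacency the modules are simply the connected components;
# the actual pipeline uses fast-greedy modularity optimization.
n_mod, member = connected_components(adj, directed=False)

def z_and_p(node):
    """Within-module degree Z and among-module connectivity P of one node."""
    nbrs = np.flatnonzero(adj[node])
    k = len(nbrs)
    k_in = np.sum(member[nbrs] == member[node])            # links inside own module
    same = np.flatnonzero(member == member[node])
    k_ins = np.array([np.sum(member[np.flatnonzero(adj[n])] == member[n])
                      for n in same])
    z = (k_in - k_ins.mean()) / k_ins.std() if k_ins.std() > 0 else 0.0
    p = 1.0 - np.sum((np.bincount(member[nbrs], minlength=n_mod) / k) ** 2) if k else 0.0
    return z, p
```

Nodes with low Z and low P are the "peripherals" that dominate both networks in Fig. 6; high-P nodes would be connectors and high-Z nodes module hubs under the Olesen classification.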
Discussion
The gut and brain form the gut-brain axis through bidirectional nervous, endocrine, and immune communication. Mammalian species often harbor similar microbiome richness at the phylum level, but the diversity and richness of species are highly variable among individuals [23]. This variability is determined by many factors, including genetics, environment, diet, disease, stress, and age [24]. When the microbiota composition is altered by any of these factors, the function of the intestinal mucosal barrier is reduced, and bacterial products such as amyloids and lipopolysaccharides leak, increasing the permeability of the blood-brain barrier, which in turn affects the central nervous system [25].
Humans with autism and mouse models of autism show significant alterations in microbiota composition. Children with autism present with more GI symptoms than typically developing children, and the severity of their GI symptoms is correlated with the severity of their behavioral symptoms [26,27]. These children also demonstrate bacterial dysbiosis, which has been suggested to play a role in autism's etiology [28]. While different studies have found that changes in specific bacteria are often associated with dysbiosis in autism, it is generally accepted that the gut microbial community of patients with autism displays a higher relative abundance of Lactobacillaceae and Clostridia and a reduced incidence of Prevotella and other fermenters [29-35].
Studies in mice have allowed us to better understand the role of the microbiota in autism [36]. The absence of microbiota produces changes in behavior. For example, germ-free mice lack a preference for spending time with another mouse over spending time in an empty chamber, and they deviate from the experimental expectation that they would spend more time exploring a space containing a new mouse rather than a familiar mouse [37,38]. Germ-free mice also show differential gene expression associated with neuronal structure and function in the amygdala [39]. Germ-free rats present with a social deficit phenotype in the reciprocal social interaction test [40]. Antibiotic treatment in wild-type mice and mouse models of autism also affects social behavior [15,41,42]. On the other hand, the use of probiotics ameliorates behavioral deficits [38,42]. Together, these data point to a role of the microbiota in regulating behavior. The nature of the microbiota has been studied in several mouse models of autism. The inbred BTBR mouse, which presents with the full spectrum of ASD-like behavior, shows an overall decrease in bacterial diversity characterized by an increase in the relative abundance of the genus Akkermansia and a decrease in the abundance of Bifidobacterium and Clostridiales [43-45]. In addition, BTBR mice have impaired intestinal integrity and a deficit in the intestinal tight junction proteins Ocln and Tjp1 [46]. Environmental mouse models of autism have also produced information about the importance of the microbiota in this condition. In the maternal immune activation (MIA) mouse model, species richness did not differ significantly between control and MIA offspring, but the offspring displayed decreased intestinal barrier integrity, altered gut microbiota, and increased abundance of the families Lachnospiraceae, Porphyromonadaceae, and Prevotellaceae [47].
In the maternal high-fat diet (MHFD) mouse model of autism, the diversity of the microbiota was decreased compared to the control group, with marked decreases in Lactobacillus, Parabacteroides, Helicobacter, and B. uniformis. In the present study, we demonstrated that species richness in the fecal microbial community of the autistic-like rat model, the 400-E12 VPA rat, was significantly reduced. Using next-generation sequencing in a murine autism model, mice exposed to VPA in utero were reported to present with a decrease in Bacteroides [15]. Other gut commensals found to be altered in the VPA mice were Deltaproteobacteria and Erysipelotrichales. These changes in the VPA mouse microbiota composition coincided with changes in behaviors linked to autism [15].
Our 400-E12 VPA rats showed a decrease in microbial diversity (species richness). Specifically, we observed significant increases in the abundance of α-Proteobacteria, Eubacteriaceae, Rikenellaceae, and Staphylococcaceae, whereas Enterobacteriaceae was significantly decreased by VPA exposure in utero. At the genus level, we found a significantly higher abundance of the genus Anaerotruncus in the control group and a significantly increased abundance of the genera Allobaculum, Anaerofustis, Proteus, and Staphylococcus in the VPA group. This is the first time that microbial species richness and microbiome composition have been studied in a rat model of autism, the 400-E12 VPA rat. The decrease in microbial diversity in this rat model was consistent with observations in human autism and in most mouse models of autism studied to date. The gut microbial composition was largely similar to that of humans with autism and of murine autism-like models. The enteric bacteria, especially the class Clostridia, are known to play an important role in children with autism (Frye et al. 2015). In our study, Clostridia was the most dominant class in the rat fecal microbial community, accounting for more than 60% of all sequence reads, followed by the class Bacteroidia with more than 30% of the sequences. Of the 100 OTUs significantly impacted by prenatal VPA administration, the vast majority, 94, belonged to Clostridia, suggesting that ecological manipulation via antibiotics or pre- or probiotic approaches targeting this class of gut bacteria may prove effective in alleviating autism symptoms. The significant reduction in microbial species richness estimates, such as Chao1, in the 400-E12 VPA rats was consistent with observations in the BTBR T+ Itpr3tf/J mouse model of autism [44]. However, biodiversity encompasses both species richness and evenness, as well as interactions among species in the ecosystem [16].
While a marked reduction in species richness was evident in the rats with prenatal VPA exposure, species evenness in the rat gut microbial community did not appear to be impacted. Furthermore, the microbial co-occurrence patterns and microbial interactions in the community appeared to be preserved in the rats with prenatal VPA exposure.
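As a concrete illustration of the richness estimates discussed above, the Chao1 estimator can be sketched as follows (a minimal sketch; the function name and the example OTU counts are our own, not data from this study):

```python
def chao1(otu_counts):
    """Chao1 richness estimate: S_obs + F1^2 / (2 * F2), where F1 and F2
    are the numbers of singleton and doubleton OTUs; a bias-corrected
    variant is used when no doubletons are present."""
    s_obs = sum(1 for c in otu_counts if c > 0)   # observed OTUs
    f1 = sum(1 for c in otu_counts if c == 1)     # singletons
    f2 = sum(1 for c in otu_counts if c == 2)     # doubletons
    if f2 > 0:
        return s_obs + (f1 * f1) / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2              # bias-corrected form

# Example: 5 observed OTUs, 2 singletons, 1 doubleton -> estimate 7.0
richness = chao1([1, 1, 2, 3, 5])
```

Because the correction term grows with the number of rare (singleton) OTUs, a loss of rare taxa, as reported for the VPA rats, directly depresses the Chao1 estimate.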
Moreover, our findings provide further evidence of sex-specific alterations of the gut microbiome by prenatal VPA administration in rodents [15]. For example, in male rats, the abundance of the family Coriobacteriaceae as well as the class Coriobacteriia was significantly repressed by VPA, while an OTU (GreenGene ID_1113282) belonging to Mollicutes was significantly increased. On the other hand, a twofold increase in the relative abundance of the phylum Proteobacteria, from 1.03% in the control rats to 2.17% in the male rats with VPA exposure, was observed. This VPA-induced increase was even more evident in the class α-Proteobacteria, from 0.14% in the control male rats to 0.56% in the male rats with prenatal VPA exposure. Proteobacteria are known to be a marker of an unstable microbial community and a risk factor for human disease [48,49]. An elevated Proteobacteria level is frequently associated with metabolic disorders and intestinal inflammation. The pathological relevance of elevated Proteobacteria abundance in autism warrants further investigation. In contrast to male rats, prenatal VPA exposure altered a distinctly different set of microbial taxa in female rats. The abundance of the genus Staphylococcus and the family S24-7 was significantly increased by prenatal VPA exposure only in female rats. A significant elevation of Candidatus Arthromitus, which harbors commensal segmented filamentous bacteria (SFB), by VPA was likewise observed only in female rats. Numerous studies have established solid links between SFB colonization and human disease [50]. As a potent inducer of IgA production and TH17 immune responses as well as innate immunity, SFB may play a role in the pathogenesis of autism. Indeed, a recent study shows that pregnant mice colonized with SFB were more likely to produce offspring with maternal immune activation (MIA)-associated abnormalities [41].
The composition of the microbiota is of great importance to brain function. Bacteria can regulate brain function through several mechanisms. Some gut-dwelling bacteria, such as Bifidobacterium and Lactobacillus, have the capacity to induce anti-inflammatory cytokines, while others, such as Clostridium and Ruminococcus [51], can induce pro-inflammatory cytokines. Metabolic products of the gut microbiota, such as short-chain fatty acids, have also been implicated in autism. The gut microbiota has been suggested to regulate many nervous system functions, including neurogenesis, differentiation, myelination, formation and integrity of the blood-brain barrier, neurotrophin and neurotransmitter release, apoptosis, gap junction modification, and synaptic pruning [52]. Moreover, several microRNAs participate in signaling networks through the intervention of the gut microbiota [53]. In addition, the gut microbiota can trigger the release of inflammatory cytokines that act as epigenetic regulators of gene expression, a factor, for example, in cancer risk and in diabetes-associated autoantigens [54][55][56]. Here, we demonstrated that VPA also alters the metabolic potential of the microbial community in rats. Prenatal VPA administration significantly elevated 21 bacterial pathways while repressing 8. Among them, there was increased activation of the bacterial secretion system, DNA replication, and DNA repair and recombination proteins, and a decrease in ABC bacterial transporter pathways. These data indicate potentially higher activity of pathways related to bacterial survival and function.
In conclusion, our data on the gut microbial community of the 400-E12 rats in response to prenatal VPA exposure indicate that this model, in addition to demonstrating behavioral and anatomical similarities to autism, also mimics the microbiota features of autism, making it one of the best-suited rodent models for the study of autism.
Analysis of the Bacterial and Host Proteins along and across the Porcine Gastrointestinal Tract
Pigs are among the most important farm animals worldwide, and research to optimize their feed efficiency and improve their welfare is ongoing. The porcine intestinal microbiome has so far mainly been characterized by sequencing-based studies. Here, digesta and mucosa samples from five porcine gastrointestinal tract sections were analyzed by metaproteomics to obtain a deeper insight into the functions of bacterial groups, with concomitant analysis of host proteins. Firmicutes (Prevotellaceae) dominated mucosa and digesta samples, followed by Bacteroidetes. Actinobacteria and Proteobacteria were much more abundant in mucosa than in digesta samples. Functional profiling revealed core functions shared between digesta and mucosa samples. Protein abundances for energy production and conversion were higher in mucosa samples, whereas in digesta samples more proteins were involved in lipid transport and metabolism, and proteins for short-chain fatty acid production were detected. Differences were also highlighted between sections, with the small intestine appearing more involved in carbohydrate transport and metabolism than the large intestine. Thus, this study provides the first functional analysis of porcine GIT biology, discussing the findings in relation to expected bacterial and host functions.
Introduction
The mammalian gastrointestinal tract (GIT) is colonized by a wide range of microorganisms that coexist in a sensitive ecosystem of mutual cooperation, in which bacteria predominate. Within the GIT, the microorganisms not only live on the nutrients and energy from the diet but also produce several substances such as vitamins, organic acids, secondary bile acids, and gases. These products, and the mere presence of microorganisms, influence host health, while changes in diet, environmental stress, and disease can alter the microbiome within the GIT [1].
The pig (Sus scrofa) is one of the most important farm animals in today's agroeconomy, with the swine industry expanding worldwide. Balancing modern meat production methods with animal welfare has proven difficult. To enhance pig health, production efficiency, and product quality, it is important to understand the intestinal environment, especially the interactions among microorganisms and between microorganisms and their host [2,3]. The predominant aim of intestinal microbiome investigations in farm animal science is to determine a balanced microbial composition that optimizes animal health, performance, and pathogen resistance [4], but also to investigate which microbiome composition maximizes the benefits and minimizes the costs of animal husbandry [5]. The feed industry is likewise interested in establishing or preserving such a microbiota by developing feed additives, diets, and other interventions [3,5]. In addition, the pig is of interest for human medical purposes, as it shows similarities to humans in size, immunobiology, distribution of lymphocytes [6], microbial ecosystems [7,8], physiology, and disease development [9]. Pigs can therefore potentially serve as model animals for human biology [10].
The important interface between microorganisms and the host in the GIT is the mucus, which functions as a barrier between feed, microorganisms, and the animal [11]. It is produced by exocrine glands and by the goblet cells located in the mucosal epithelium, and it protects the epithelium against injuries as well as against chemical and physical forces. Mucus consists of a complex mixture of large glycoproteins (so-called mucins), water, electrolytes, sloughed epithelial cells, and enzymes, as well as secreted immunoglobulins and antimicrobial molecules [12]. The mucus also contains bacteria and cell debris, since it is the first contact site between host and bacteria. In recent years, the mucus-associated microbiota of pigs has attracted interest and has been studied with regard to dietary effects and age-related changes [13][14][15][16][17][18][19]. The overall intestinal bacterial phyla in pigs are headed by Firmicutes, Bacteroidetes, Proteobacteria, and Spirochaetes, whereas Fibrobacteres, Actinobacteria, Tenericutes, Synergistetes, and Planctomycetes each account for less than 1% of all 16S rRNA gene sequences [13,20,21].
For more than a decade, metaproteomics has been used to examine microbial proteins in different sample types in order to identify and quantify metabolic proteins and the pathways they are involved in [22]. Co-extraction of microbial and host proteins is an intrinsic bias whose effect can only be minimized, not avoided [23,24]. Although extraction was geared toward microbial proteins, the co-extracted host proteins have been used to concurrently study the metabolic status of the host [6]. Thus, identifying bacterial and host proteins in one run is beneficial, as it gives new insights into both from exactly the same sample.
The present work attempted to detect the active bacterial fraction of the pig's microbiome along five different sections (stomach, jejunum, ileum, cecum, and colon), considering both the luminal (digesta) and mucosal compartments of each section. A concomitant benefit is the identification and description of the porcine proteome to follow host functions.
Animal Experiment and Sampling
All experiments and care of animals were approved by the local authorities (Regional Commission of Stuttgart, permit number: V308/13 TH) in accordance with the German Welfare Legislation. This study was conducted in addition to the study of Heyer et al. [25], from which detailed trial procedures can be taken. Pigs (German Landrace × Piétrain, initial body weight 54.7 ± 4.1 kg) were randomly assigned to four experimental diets. Diets were formulated to meet or exceed the animals' nutrient requirements and differed from each other in protein source and in calcium and phosphorus (CaP) levels. Two of the four diets contained low-digestible (LD) corn-field pea meal as the protein source, whereas the remaining two comprised highly digestible (HD) corn-soybean meal. Each of these dietary groups was further supplied with either a high or a low CaP level, formulated to contain 120% and 66%, respectively, of the requirement for 50-75 kg pigs [26].
After a feeding period of 9 weeks, including 19 days of adaptation to the diets, one female pig per diet was anesthetized and euthanized by intravenous injection of pentobarbital via the ear vein (about 70 mg/kg BW; CP-Pharma Handelsgesellschaft mbH, Burgdorf, Germany). Mucosa from the stomach (Pars nonglandularis), and both digesta and mucosa from the jejunum (80 cm from the Plica ileocecalis), ileum (20 cm from the Plica ileocecalis), cecum, and mid-colon were aseptically collected by scraping the mucosal layer from the tissue with a sterile glass slide and stored at −80 °C.
GC Analysis of Short Chain Fatty Acids
Concentrations of short-chain fatty acids (SCFA) in jejunal and cecal samples were analyzed by gas chromatography according to Wischer et al. (2013) [27]. Briefly, SCFA were measured directly in a gas chromatograph equipped with a flame ionization detector (HP 6890 Plus; Agilent, Waldbronn, Germany). GC-grade short-chain fatty acids (Fluka, Taufkirchen, Germany) were used as internal standards. Measurements were performed in a capillary column (HP 19091F-112, 25 m × 0.32 mm × 0.5 µm) with the following program: 80 °C for 1 min; ramp to 155 °C at 20 °C/min; ramp to 230 °C at 50 °C/min, held for 3.5 min; carrier gas: helium. Short-chain fatty acid concentrations are expressed per kilogram of sample.
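Quantification against an internal standard, as used here, can be sketched as follows (a hypothetical illustration: the peak areas and standard amount below are invented, and a single-point calibration with equal detector response factors is assumed, which real calibrations would refine):

```python
# Illustrative internal-standard quantification (hypothetical values):
# concentration = (analyte_area / istd_area) * istd_amount / sample_mass

def scfa_concentration(analyte_area, istd_area, istd_mmol, sample_kg):
    """Return an SCFA concentration in mmol per kg of sample, assuming
    equal FID response factors for analyte and internal standard."""
    response_ratio = analyte_area / istd_area
    return response_ratio * istd_mmol / sample_kg

# Example: analyte peak area 1.5e6, internal standard area 1.0e6,
# 0.05 mmol internal standard added to a 0.001 kg sample.
conc = scfa_concentration(1.5e6, 1.0e6, 0.05, 0.001)  # mmol/kg
```

The internal standard corrects for injection-volume and detector drift, which is why concentrations can be reported directly per kilogram of sample.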
Sample Preparation
Sample preparation was carried out following the method of Apajalahti et al. [28]. Each sample, containing 5 g of fresh matter, was resuspended in 10 mL of washing buffer. The subsequent steps of protein extraction, quantification, digestion, and peptide purification were performed as previously described by Tilocca et al. [29].
MS spectra (m/z = 300-1600) were acquired at a resolution of 70,000 (at m/z = 200) using a maximum injection time (MIT) of 50 ms and an automatic gain control (AGC) target of 1 × 10^6. Internal calibration of the Orbitrap analyzer was performed using lock-mass ions from ambient air following the method of Olsen et al. [30]. The 10 most intense peptide precursors were selected for data-dependent MS/MS spectra using higher-energy collisional dissociation (HCD) fragmentation with the following settings: resolution 17,500; normalized collision energy 25; intensity threshold 2 × 10^5. Only ions with charge states between +2 and +5 were chosen for fragmentation, with an isolation width of 1.6 Da. The AGC target was set to 1 × 10^6 and the MIT to 50 ms for each MS/MS scan. To prevent repeated fragmentation, fragmented precursor ions were dynamically excluded for 30 s within a 5 ppm mass window.
Data Analysis
The raw files from the mass spectrometric measurements were analyzed with MaxQuant (v 1.5.1.2, Max Planck Institute of Biochemistry, Munich, Germany) using a database consisting of sequences of the Sus scrofa genome (61,019 entries, March 2016) and an in-house database of bacterial proteins (14,535 entries, October 2015) identified by a two-step search approach in a previous study analyzing 84 porcine fecal samples [31]. The protein grouping node was activated with the default software settings. The databases did not include protein entries from dietary components, so such proteins remained unidentified; they are probably abundant in the digesta samples.
Phylogenetic distribution of the bacterial proteins was assessed on the basis of the identified peptides with Unipept [32]. This tool provides phylogenetic assessment of the bacterial community down to strain level based on amino acid sequence homology. In the present study, taxonomic assignment was restricted to phylum and family level; deeper taxonomic levels were omitted because of the low peptide-to-protein ratio and the risk of false positive identifications. Calculation of alpha diversity and statistical evaluation were done with Primer-E (v. 6, Primer-E, Auckland, New Zealand) by first standardizing the peptide datasets and then creating a lower triangular resemblance matrix (resemblance measure: S17 Bray-Curtis similarity). Functional classification of the bacterial proteins was performed by categorization into COG classes through the WebMGA online tool [33]. Proteins of porcine origin were categorized and visualized using Proteomaps [34].
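The standardization and resemblance-matrix step can be illustrated numerically (a minimal sketch with invented peptide counts; Primer-E's S17 measure corresponds to the Bray-Curtis similarity computed here):

```python
import numpy as np

def standardize(counts):
    """Convert peptide counts to relative abundances (rows = samples)."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def bray_curtis_similarity(x, y):
    """Bray-Curtis similarity (1 - dissimilarity) between two profiles."""
    return 1.0 - np.abs(x - y).sum() / (x + y).sum()

# Two hypothetical samples with peptide counts over four taxa
samples = standardize([[10, 5, 0, 5],
                       [8, 6, 2, 4]])
sim = bray_curtis_similarity(samples[0], samples[1])  # ~0.85
```

Computing this similarity for every pair of samples yields the lower triangular resemblance matrix on which the subsequent ordinations and tests are based.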
PRIDE Accession
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE [35] partner repository with the dataset identifier PXD011360 (reviewer login: Username: <EMAIL_ADDRESS>; Password: a1lkk6Ch).
General Protein Results
A total of 2951 different bacterial proteins were identified, with a higher number of bacterial proteins recovered in the digesta (2917) than in the mucosa (973). The overall distribution of the bacterial proteins is shown in Figure S1. The co-identification of porcine proteins revealed 4550 hits in total (Table S1).
An initial interest was the distribution of all identified proteins across gut sections and compartments. The proteins in digesta samples from the jejunum and ileum originated mostly from the pig. Bacterial proteins were underrepresented here, since in these sections the number and diversity of bacterial cells are lower than in the cecum and colon; this limited available biomass alone can explain the limited identification of bacterial proteins by mass spectrometry. In the distal gut sections, proteins mainly originated from bacteria and only sparsely from the host (Figure 1, Table S1). Parasite and uncharacterized proteins were of low abundance in all samples. Pig proteins formed the majority in all mucosa samples (>75% pig proteins), with the highest protein counts in the colon mucosa (Figure 1). Even though the mucosal compartments revealed higher total numbers of proteins, all mucosa samples showed much lower counts of bacterial proteins than the digesta samples (973 vs. 2917 proteins). The highest counts of bacterial proteins were found in cecum digesta samples, with an average of 1384 proteins.
The overall comparison between the protein results of digesta and mucosa showed a separation of the datasets (Figure 2A). In addition, mucosal samples grouped according to the animal rather than the section, whereas digesta samples showed a separation based on both factors, animal and section.
The separation between small intestine (jejunum and ileum) and large intestine (cecum and colon) (Figure 2B,C) was confirmed, with a significant value of p = 0.0004 for the comparison between ileum and colon samples.
Figure 2. Sample ordination discriminating the protein distribution between mucosa and digesta samples (A), and within the mucosa (B) and digesta (C) samples. Principal Coordinate Analysis (PCoA) plots were drawn from protein data using S17 Bray-Curtis similarity. The percentages represent the contribution of each principal component to the differences in sample composition. Points of different colors and shapes represent samples of different groups; the closer two sample points are, the more similar the species composition of the samples.
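The PCoA ordination underlying Figure 2 can be reproduced in principle by classical metric scaling of a distance matrix (a self-contained sketch, not the Primer-E implementation; the example distance matrix is invented):

```python
import numpy as np

def pcoa(distance_matrix, n_axes=2):
    """Classical (metric) MDS: eigendecomposition of the doubly centered
    squared-distance matrix. Returns sample coordinates and the fraction
    of variation explained by each retained axis."""
    D = np.asarray(distance_matrix, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # Gower's double centering
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]             # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    coords = eigvecs[:, :n_axes] * np.sqrt(np.maximum(eigvals[:n_axes], 0.0))
    explained = eigvals[:n_axes] / eigvals[eigvals > 1e-12].sum()
    return coords, explained

# Three samples equally spaced along one gradient: axis 1 captures everything
coords, explained = pcoa([[0, 1, 2],
                          [1, 0, 1],
                          [2, 1, 0]])
```

The "percentage" values on the PCoA axes of Figure 2 correspond to the `explained` fractions returned here.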
Phylotypes of Bacterial Proteins in Digesta
The alpha diversity of the digesta samples was highest in the cecum and colon, with Shannon indexes of 2.7 versus 2.4 in small intestine samples (Figure S2A). A lower diversity in the ileum has also been described in other microbiota studies based on amplicon sequencing [17,21]. This difference is caused by the luminal environment of the small intestine, where the pH conditions, higher oxygen levels, and fast digesta passage [36] make it a challenging site for fermenting bacteria. The longer retention time in the large intestine favors fiber fermentation, which leads to an enhanced energy gain for bacteria using these substrates and their fermentation products [37].
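For reference, the Shannon index values quoted above come from the standard formula, which can be sketched as follows (a minimal sketch; the example counts are our own):

```python
import math

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# A perfectly even community of 15 taxa gives H' = ln(15) ~ 2.7,
# roughly the cecum/colon value reported above; uneven communities score lower.
h_even = shannon([1] * 15)
h_uneven = shannon([12, 1, 1, 1])
```

Because H' rises with both the number of taxa and the evenness of their abundances, a drop from 2.7 to 2.4 can reflect fewer taxa, stronger dominance by a few taxa, or both.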
The major phyla identified based on the bacterial proteins in digesta along the GIT were Firmicutes, followed by Bacteroidetes and Actinobacteria (Figure 3 and Figure S3). The predominant abundance of families belonging to Firmicutes and Bacteroidetes is in line with studies based on amplicon sequencing [16,17,20,21]. The dominating families affiliated with the proteins were Prevotellaceae, Clostridiaceae, and Lactobacillaceae (Table 1 and Table S2). Proteins belonging to Firmicutes showed higher peptide counts in the small intestine (jejunum 64.7%; ileum 69%) than in the cecum and colon. This trend was also shown based on operational taxonomic unit (OTU) counts in young pigs [17], whereas the opposite was described in 28-day-old piglets [16]. Proteins of Lactobacillaceae, Clostridiaceae, and Veillonellaceae dominated this phylum in all sections, which is in line with other recent pig microbiota studies [15,17,21] (Figure 3B). Lactobacillaceae proteins were considerably more abundant in the small intestine (jejunum 15%; ileum 18%) than in the large intestine (cecum 9%; colon 6%). Proteins of the family Clostridiaceae showed higher counts of assigned peptides in the ileum (16%). Peptides of Veillonellaceae decreased continuously, halving from the jejunum (11%) to the colon (5%).
Proteins of Bacteroidetes were highly abundant, with a much higher average occurrence in the large intestine (36%) than in the small intestine (22%). The majority of these proteins belonged to Prevotellaceae, which increased from the small (20%) to the large intestine (27%). Again, these changes in abundance along the GIT fit with other findings [17].
In the present study, Actinobacteria proteins, defined mainly by those of Bifidobacteriaceae, were found in all sections of all animals, with greater average abundances in the small intestine (6%) compared to below 1% in the colon. Thus, Actinobacteria appear to be the third most abundant phylum in this metaproteomic analysis. This contrasts with previous sequencing studies, in which Proteobacteria held the third position [17,21]. In an accompanying study, copy numbers of Bifidobacterium sp. were found mainly in the jejunum and cecum, but not in the colon, which fits the proteomic results [38]. Bifidobacteriaceae produce acids from a wide range of carbohydrates via a specific metabolic pathway called the "bifid shunt" [39]. Feeding dietary fibers lowers the pH of the gut content and creates an environment favorable for Bifidobacterium sp.
In several studies, high-fiber diets caused an increase in beneficial bacteria such as Bifidobacteriaceae in the cecum [40] and supported intestinal epithelial morphology [15]. Both contribute to the pigs' GIT health and strengthen defenses against harmful organisms [40].
Phylotypes of Bacterial Proteins in Mucosa
The alpha diversity of the mucosal samples was characterized by Shannon indexes of around 2.4 along the small intestine and cecum, and an average index of 2.6 in colon samples (Figure S2B). No clear increase in diversity relative to the lumen samples was found, although this was expected based on other microbiota studies, in which Shannon indexes nearly doubled between lumen and mucosa samples of the small intestine [17,20]. One reason for this discrepancy is probably the lower number of bacterial proteins identified in the mucosal samples (Figure 1).
The majority of the proteins from mucosal samples belonged to Firmicutes, with 59% in the stomach and 53% to 47% in subsequent sections (Figure 3, Table 2 and Table S3). Peptides of Lactobacillaceae had the highest counts within this phylum on the mucosal side. A longitudinal change was observed, with higher abundances in the stomach and small intestine (about 19%) and a reduction by more than half in the colon (8%). Peptides of Clostridiaceae showed the reverse abundance pattern, ranging from 4% in the stomach to 11% in the colon. In contrast, peptides of Lachnospiraceae were evenly distributed along the mucosal samples of young pigs. The abundance of peptides assigned to Bacteroidetes changed from the stomach (27%) to the small intestine (jejunum, 29%; ileum, 22%) and increased in the large intestine (37%) (Table 2). Among the Bacteroidetes, peptides of Prevotellaceae showed the highest abundance in the mucosal samples. Prevotellaceae peptides decreased from the stomach (20%) to the ileum (15%), with the highest abundances found in the large intestine (27%). The dominance of Lactobacillus and Prevotella was also described by Mann et al. [14], who characterized the mucus-associated microbiota along the GIT. Interestingly, ileum samples of mucosa and digesta decreased in Prevotellaceae peptide numbers and simultaneously increased in Lactobacillaceae peptides. This may derive from the bile tolerance of several genera of the latter family [41], whereas Prevotella cells may be more sensitive to this secretion.
The highest peptide hits for Proteobacteria were found in the ileum (12%), whereas the lowest were found in the stomach (4%) (Figure 3). Pasteurellaceae was the dominating family of this phylum, with the highest peptide hits in the ileum. Pasteurellaceae are a large family including both pathogenic and commensal organisms [42]. They are oxidase-producing, aerobic to facultatively anaerobic bacteria living on the mucus layers of the respiratory, genital, and gastrointestinal tracts [42]. The higher abundances in the mucus of the large intestine support the assumption that, here, Pasteurellaceae can switch to anaerobic respiration, as oxygen levels decrease from the proximal to the distal sections [42]. In contrast, Proteobacteria were reported to contribute up to 50% of cecal mucosa transcripts, with Helicobacteraceae as the predominant family in growing pigs [19]. In piglets, this bacterial family has been observed to dominate the mucosa of the small intestine [16].
Functional Annotation and Distribution of Bacterial Proteins in Digesta and Mucosa
Classification of the identified proteins into COG classes reveals that the major functions of the bacterial community in both digesta and mucosa were linked to energy production and conversion, translation, and carbohydrate transport and metabolism. The overall distribution of functions was relatively stable between sections and animals in the digesta samples, whereas mucosal samples showed a host-dependent effect across all GIT sections (Figures S5 and S6).
Proteins related to energy production and conversion were very similar across all animals and sections, with a higher relative abundance in the mucosa (23-31%) than in the digesta (23-25%), matching the results of another study [24]. In mucosa samples, these proteins were mainly assigned to oxidative phosphorylation, followed by carbon metabolism and carbon fixation. Proteins from digesta samples were additionally sorted into pyruvate metabolism. The main proteins here were rubrerythrin and succinate dehydrogenase/fumarate reductase, followed by components of pyruvate ferredoxin oxidoreductase (Tables S4 and S5). Rubrerythrin was assigned to different bacterial families within the Clostridiales, while fumarate reduction was assigned especially to Prevotella. These phylogenetic assignments are in accordance with a rat microbiome analysis [24].
Proteins sorted into the translation cluster remained, on average, quite similar along all sections. GTPase translation elongation factors and a wide range of ribosomal proteins (L2, S3) dominated this cluster (Tables S4 and S5).
More proteins of the carbohydrate transport and metabolism group were counted in the small intestine than in the large intestine (Figure S5), mainly belonging to glycolysis and gluconeogenesis.
Proteins related to the biosynthesis of amino acids were also frequently counted, with an average contribution of 5% in digesta samples and a section-dependent abundance ranging from 2% in colon mucosa to 9% in cecal mucosa samples. The relative abundance of proteins assigned to lipid transport and metabolism was on average higher in digesta (3%) than in mucosa samples (2%). The dominating protein in this cluster was acyl-CoA dehydrogenase, followed by acetyl-CoA acetyltransferase.
Proteins involved in the formation of SCFA were mainly identified in digesta samples (Figure 4A), with variable distributions of proteins belonging to formate, acetate, propionate, and butyrate formation. Protein sorting was done as described by Polansky et al. [43] (Table S6). The taxonomic affiliation of these proteins showed a dominant contribution of Clostridiales and Veillonellales to butyrate production in the small and large intestine, complemented in the distal gut sections by members of the Spirochaetes. Proteins involved in propionate formation were expressed mainly by Acidaminococcales, Veillonellales, and Selenomonadales, and to a lesser extent by Bacteroidales. In the small intestine, acetate-forming proteins were affiliated with Bifidobacteriales and Veillonellales, whereas in the large intestine Bacteroidales and Clostridiales proteins were found. These phylogenetic findings fit the meta-analysis of the core swine gut microbiota, in which the functional roles of the phylotypes were discussed solely on the basis of 16S rDNA sequencing data [21]. Formate biosynthesis appeared to be carried out by Coriobacteriales and Clostridiales in the small intestine and by Clostridiales and Fusobacteriales in the large intestine (Table S6). The total concentration of SCFA in the digesta samples was almost four times higher in the cecum than in the jejunum (Figure 4B). Similar ratios were measured in other studies and discussed in terms of the increased bacterial diversity and fermentation capacity of the cecum compared to the small intestine [44]. Among all SCFA, drastic changes were found for propionate and butyrate, which increased from 0.9 to 182 mmol/kg DM and from 26 to 108 mmol/kg DM, respectively (Figure 4B). This is in accordance with the change in diversity of the bacterial phylotypes involved in these fermentation activities and has been described by others as well [44].
The color code is identical for panels A and B: light grey, formate; blue, acetate; dark orange, propionate; light orange, butyrate; grey, valerate. Proteins were sorted according to Polansky et al. [43] (Table S4).
Animal Proteins
The mass spectrometric measurements of all samples additionally allowed the identification of pig proteins using a database of the Sus scrofa genome. The general distribution of pig proteins was more homogeneous along the sections in the mucosa samples than in the digesta samples (Figure 5). Mucosal proteins were mostly assigned to metabolism and organismal systems of the host cells. Cellular processes was the dominating general cluster in the mucosa of all sections. Within this cluster, proteins related to exosomes predominated, led by membrane-associated proteins such as keratin 8 (KRT8), annexin A2 (ANXA2), and albumin (ALB), followed by tight junction proteins (myosin heavy chain 9 (MYH9)). Cytoskeletal proteins (tubulin beta 4B (TUBB4B)) were found in lower abundance in all sections. All of these are important for stabilizing the cytoskeleton and maintaining cell shape. In general, proteins belonging to the cluster of organismal systems were most represented in the stomach sample, with continuously lower protein counts in the distal sections. Proteins binned to hemoglobin beta (HBB) were quite abundant in the stomach samples; their abundance decreased toward the ileum and increased again in the colon sections. In the general cluster of genetic information processing, functional proteins related to mitochondrial biogenesis (prohibitin 2 (PHB2)) and translation factors (mitochondrial translation elongation factor (TUFM) and eukaryotic translation elongation factors (EEF2, EEF1A1)) were found; this cluster was smaller in all ileum samples. The general cluster of metabolism proteins was dominated by proteins mainly assigned to oxidative phosphorylation, such as subunits of ATP synthase, followed by transport-related proteins (e.g., solute carrier family 25 (SLC25A5)).
The capacity to take up microbial products such as lactate and other SCFA was indicated by the identification of monocarboxylic acid transporters (MCT1, MCT4), for which two to three times more peptides were counted in the samples of the large intestine than in those of the small intestine. This trend in abundance matches expression data from the human GIT [45]. The general cluster of environmental information processing was dominated by a subcluster related to the calcium signalling pathway (voltage-dependent anion channel 1 (VDAC1)), which increased from stomach to cecum.
Proteomaps of the digesta samples showed a more heterogeneous picture than those of the mucosa samples, since protein functions differed widely between sections and animals. The main general clusters were those related to organismal systems and metabolism; for some animals, cellular processes were also predominant (Figure 5). The main proteins of the organismal systems cluster in digesta samples were related to hemoglobin, pancreatic secretion (PCPA1), the renin-angiotensin system (RAS, ENPEP), and fat digestion and absorption functions (colipase, CLPS). For animals 3, 8, and 15 this cluster increased from jejunum to colon, whereas it did not for animal 7. Cellular process proteins were dominantly assigned to exosomes (KRT8, ANXA2, ALB), as in the mucosa samples. The general cluster of metabolism functions also dominated several sections of the animal proteomaps. Lipid and steroid metabolism was one dominating subcluster, showing a greater number of assigned proteins in the large intestine. These trends in protein abundance were unexpected, since pancreatic lipase (PNLIP), which hydrolyses dietary fats into fatty acids, enters the duodenum via pancreatic secretion [46]; a higher abundance would therefore be expected in the small intestine. Proteins annotated to carbohydrate metabolism, such as pancreatic amylase (AMY2), were also more abundant, being elevated in the jejunum and ileum sections.
The heterogeneous results obtained with the digesta samples indicate a highly variable matrix, which makes interpretation of the animals' metabolism based on the proteome challenging. In this view, we also need to consider the differences in the dietary treatments of each animal, since these most likely affect the host response and thus the resulting proteome. Unfortunately, the lack of replicates per dietary treatment does not allow us to draw firm conclusions from this. Animal proteins from the mucosa are more likely to be an adequate basis for investigating host functions within a specific section: here, host cells are directly involved in the respective functions, such as the secretion of digestive enzymes or the uptake of feed compounds, and the response and potential changes of the proteome reflect the different conditions in the microenvironment.
Conclusions
The current study employed a metaproteomic approach to investigate the porcine microbial community associated with diverse GIT sections, in both the mucosal and the luminal compartment. The results highlight a clear alteration between the small and the large intestine, evident both at the bacterial phylogenetic level and in the distribution of functional protein clusters; this underlines the physiological differences between these two segments. Bacterial proteins in mucosa and digesta differed between sections of the pig in phylogeny and protein function. A higher diversity of bacterial proteins was found in digesta samples compared to mucosa samples, albeit this might be an effect of a lower protein identification rate on the mucosal site. In general, more proteins and peptides were found from proximal to distal sections. The metaproteomics approach is assumed to be the "keystone to ecosystematic studies" of environments and their associated microbial communities. Beyond the sole consideration of the prokaryotic proteins in a sample, this study showed the benefit of using the host proteins, so far regarded as interference, to gain insight into the host metabolism. However, the method presents multiple challenges at several steps of the analytical workflow. At present, much information is lost during data analysis, since not all proteins or peptides can be identified and annotated to a function or phylogeny. Nevertheless, the present study gives a first glimpse into the active microbiome of the porcine GIT and the host proteins. This may help to identify key players or biomarkers as targets for the design of therapeutic intervention systems in various fields of application.
Nevertheless, we maintain that further investigations involving a higher number of animals and an integrative approach with other omics methods (e.g., metagenomics or metatranscriptomics) will further facilitate the understanding and interpretation of the biology of the pig's gut and its associated microbial communities.
Supplementary Materials: The following are available online at http://www.mdpi.com/2227-7382/7/1/4/s1. Table S1: Number of proteins identified with Sus scrofa and bacteria database, including taxonomic information revealed by Unipept searches. Table S2: Relative abundances of bacterial families in digesta samples based on peptide information along the GIT sections in each animal and on average Ø, standard deviation (SD). Table S3: Relative abundances of bacterial families in mucosa samples based on peptide information along the GIT sections in each animal and on average Ø, standard deviation (SD). Table S4: Bacterial proteins identified in all digesta samples and sorted according to COG classification. Table S5: Bacterial proteins identified in all mucosa samples and sorted according to COG classification. Table S6: Phylogenetic affiliation of proteins belonging to SCFA biosynthesis in digesta and mucosa samples along the GIT section.
Funding:
We gratefully acknowledge the financial support of the Carl Zeiss Stiftung and the microP and METAPHOR projects funded by the Ministry of Science, Research and the Arts Baden-Württemberg.
Spongiosa Primary Development: A Biochemical Hypothesis by Turing Patterns Formations
We propose a biochemical model describing the formation of the primary spongiosa architecture through a bioregulatory model of metalloproteinase 13 (MMP13) and vascular endothelial growth factor (VEGF). It is assumed that MMP13 regulates cartilage degradation and that VEGF allows vascularization and the advance of the ossification front through the presence of osteoblasts. The coupling of this set of molecules is represented by reaction-diffusion equations with parameters in the Turing space, creating a stable spatiotemporal pattern that leads to the formation of the trabeculae present in the spongy tissue. Experimental evidence has shown that MMP13 regulates VEGF formation, and it is assumed that VEGF negatively regulates MMP13 formation. For the numerical solution, we used the finite element method together with the Newton-Raphson method to approximate the nonlinear partial differential equations. The ossification patterns obtained may represent the formation of the primary spongiosa during endochondral ossification.
Introduction
Most of the long bones of the mammalian skeletal system develop through a process called endochondral growth [1][2][3][4]. This process consists of the gradual production of bone from cartilage tissue during fetal development and postnatal growth. Ossification proceeds from a hyaline cartilage mold, which has a shape similar to that of the bone at a mature stage. The cartilage molds are formed through the condensation of mesenchymal cells [5], followed by their differentiation into chondrocytes (cells that produce and maintain the cartilage matrix) and the secretion of typical components of the extracellular matrix of cartilage [6]. Once the cartilage mold is formed, it is invaded, initially at its center and then at each end, by a mixture of cells that give rise to the primary and secondary centers of ossification, respectively [7][8][9]. The ossification centers invade the cartilage gradually until it is completely replaced by bone tissue, except at the articular surfaces. In this way the bones eventually reach skeletal maturity [10]. The processes of endochondral development, growth, and elongation of the bones occur by the continuous addition of cartilage and its subsequent replacement by bone tissue.
During the chondrocyte differentiation process, the matrix composition changes dramatically through the production of other components such as collagen type X, the expression of metalloproteinases, and subsequent calcification. At the same time, blood vessels invade the calcified cartilage, bringing osteoblasts that build immature bone. Chondrocytes in the growth plate are subject to the influence of an excess of extracellular factors, including systemic and soluble local factors as well as extracellular matrix components. Several studies [9,[11][12][13] provide evidence that the proliferation of chondrocytes in the growth plate is under the control of a local closed loop that depends on spatial and temporal location and that mainly involves molecular signals synthesized by chondrocytes: parathyroid hormone-related peptide (PTHrP), Indian hedgehog (Ihh), transforming growth factor (TGFβ), bone morphogenetic proteins (BMPs), vascular endothelial growth factor (VEGF), matrix metalloproteinase type 9, known as gelatinase-B (MMP9), and the transcription factor RUNX2. These interact in a feedback loop to regulate the rate at which chondrocytes leave the proliferative zone, differentiate into hypertrophic cells, and give way to immature bone formation [10,11,14]. An inappropriate balance in the expression of these molecules, together with the genes that encode the collagens and other growth factors, has been studied as a possible cause of impaired bone formation by the endochondral ossification mechanism [15][16][17].
The process of endochondral ossification has been studied for several years, and different in silico models, verified by histological reports and in vivo experiments, have been developed to explain the process of bone formation through this mechanism [7-9, 13, 14, 18-21]. For example, Courtin et al. [18] compared the sequence of morphological events involved in embryonic bone formation with the spatiotemporal characteristics of self-organization generated by a reaction-diffusion model of the metabolism of periosteal bone mineralization. In that article, 3D structures are obtained (by computer simulation) with a close resemblance to the primary internal architecture of the periosteum of long bones. The hypothesis of Courtin et al. is based on the self-organizing role of the mineralization metabolism of bone, which gives rise to a well-organized spatial architecture. Subsequent research, such as that of Garzón-Alvarado et al. [7][8][9]22], has raised different hypotheses about the interaction of mechanical, cellular, and molecular factors that lead to the formation of secondary ossification centers in the epiphyses of long bones, as well as to bone development and growth and primary bone formation. These hypotheses suggest that biological processes and the interactions between different factors can be represented by mathematical models in which the chemical feedback among molecular reactant factors through reaction-diffusion mechanisms may explain the stable spatial patterns found at the origin of the secondary ossification centers and in the formation of cartilage canals. As far as the authors know, no research has been conducted on the action of the different cellular, mechanical, and molecular factors in the development and production of the primary spongiosa architecture during the endochondral ossification process, which is the basis for the production of trabecular bone.
Similarly, there are no biochemical models involving reaction-diffusion systems with Turing instabilities and reaction equations based on the Schnakenberg model that would broaden the knowledge and understanding of the early development of trabecular bone.
Therefore, this paper presents a hypothesis on the development of the trabecular bone architecture, starting from the assumption that two molecular factors expressed by hypertrophic chondrocytes interact through a reaction-diffusion mechanism to generate a stable spatiotemporal pattern. These patterns lead to the formation of the trabeculae present in the primary spongiosa tissue. This is the first model that attempts to explain the formation of the primary spongiosa, which serves as the basis for defining a complete model of trabeculae formation at an early stage of skeletal development.
Molecular Factors.
The sequential changes in the behavior of chondrocytes in the growth plate are highly regulated by systemic factors and by the production of local factors. Growth hormone (GH) and thyroid hormone are systemic factors involved in endochondral ossification that regulate the behavior of chondrocytes. GH is a peptide hormone that stimulates growth, cell reproduction, and tissue regeneration, and it is an important regulator of longitudinal bone growth [23]. The main effect of GH on chondrocytes is to stimulate their proliferation [24]. Thyroid hormone is another systemic regulator of bone growth; it stimulates the production of collagen types II and X and of alkaline phosphatase (ALP), which act as markers of bone mineralization.
On the other hand, local factors act through receptors to carry out intracellular signaling and the selective activation of transcription factors in chondrocytes. Insulin-like growth factors (IGF) act as local mediators of the effects of GH on cartilage growth; these factors are essential for embryonic skeletal development [25,26] and for chondrocyte proliferation and/or hypertrophy. Parathyroid hormone-related peptide (PTHrP) is expressed by perichondrial cells at the initial stage of chondrocyte proliferation. PTHrP diffuses out of its place of production to act on cells carrying the PTH/PTHrP receptor [27]. PTHrP keeps chondrocytes in a proliferative state and prevents hypertrophy [28]. Indian hedgehog (Ihh) is a local factor produced by prehypertrophic chondrocytes that stimulates the proliferation of chondrocytes and inhibits their hypertrophy [14]. Ihh stimulates the osteoblastic differentiation of mesenchymal cells, which is essential for the formation of the periosteum surrounding the zone of hypertrophic chondrocytes. The formation of the periosteum precedes the formation of the primary ossification center and is maintained throughout its expansion [13,29]. Bone morphogenetic proteins (BMPs) are another local factor; they are members of the transforming growth factor beta (TGFβ) superfamily and are capable of strongly inducing the formation of immature bone, cartilage, and connective tissue [30]. Finally, among the local factors there is vascular endothelial growth factor (VEGF), which stimulates the process known as angiogenesis and acts as a vasodilator by increasing vascular permeability [31]. VEGF acts on vascular endothelial cells through specific tyrosine kinase membrane receptors, thereby regulating functions such as the proliferation, differentiation, and migration of chondroblasts, osteoblasts, and osteoclasts [14].
During chondrocyte hypertrophy in the growth plate, VEGF is released into the extracellular matrix surrounding the hypertrophic cells, which begins the process of calcification. The extracellular matrix is then invaded by blood vessels, which provide nutrients and attract the osteoblasts and osteoclasts that help form the trabecular bone [32].
Transcription factors are mostly specific to a particular cell lineage and act as growth regulators of cell differentiation. They are predominantly expressed during skeletal development, and their main function is to control cell proliferation or differentiation [33]. The transcription factor Runx2, also called Cbfa1/Osf2/AML3/Til1/PPB2αA, is an essential protein in the differentiation of chondrocytes and osteoblasts as well as in the morphogenesis of the skeletal system [34]. Runx2 controls the mineralization of growing bones by stimulating osteoblast differentiation, promoting chondrocyte hypertrophy, and contributing to the migration of endothelial cells and vascular invasion. Runx2 is expressed by chondrocytes in the early stages of hypertrophy and is maintained until terminal hypertrophic differentiation [35].
Regulation of Cartilage Matrix Degradation during Endochondral Ossification.
The increase in cell volume experienced by chondrocytes undergoing hypertrophy requires the degradation of the matrix that surrounds these cells. Moreover, the invasion of the ossification front requires an extensive (but selective) degradation of the transverse cartilage columns surrounding the hypertrophic cells in their final state [10,28]. Several studies have sought to identify the proteolytic enzymes responsible for these matrix degradation events and the cells responsible for their synthesis [36][37][38]. These studies have emphasized the enzymes capable of degrading the two major protein components of the cartilage matrix, collagen type II and aggrecan. Within the growth plate, MMP13 is selectively expressed by hypertrophic chondrocytes and degrades collagen fibers and aggrecan [38,39].
MMP13 expression by chondrocytes is a prerequisite for the invasion of the growth plate by blood vessels, osteoclasts, and osteogenic cells: these cells cannot enter the empty gaps created by the death of hypertrophic chondrocytes until MMP13 degrades the septa of the cavities [10]. Blood vessels invade the growth plate at the same time as the osteoblasts, which are necessary for the establishment of the primary ossification center. Thus, in the absence of MMP13, most of the cartilage matrix is not removed and there is no cell invasion of the bone marrow or bone matrix deposition on the remaining cartilage. The vascular invasion of the growth plate is facilitated by vascular endothelial growth factor (VEGF), which is expressed by chondrocytes and regulated with hypertrophy under the control of Runx2 [10,11]. Moreover, during endochondral ossification VEGF increases bone formation and decreases bone resorption [40,41], indicating that VEGF regulates the production of MMP13.
Hypothesis Required for the Development of Primary Spongiosa Using Reaction-Diffusion Systems.
The main hypothesis of this paper is that, within the endochondral ossification process, there is a controlled interaction of two signaling molecules that diffuse and react chemically in the cartilage extracellular matrix to carry out the formation of the primary spongiosa from the growth plate. Accordingly, we assume the existence of a reaction-diffusion system involving two primary molecules, VEGF and MMP13, which can lead to a pattern that is stable in time and unstable (periodic) in space, similar to the patterns present in the structure of the trabecular bone during endochondral ossification.
The presence of MMP13, released by hypertrophic chondrocytes, allows the degradation of the cartilage matrix components (collagen and aggrecan) and leads to vascular invasion and the advance of the ossification front [10,36,37]. This vascular invasion is facilitated by the presence of VEGF expressed by hypertrophic chondrocytes [31,40,42]. This means that where MMP13 and VEGF coexist in the epiphyseal cartilage, regions with a high concentration of VEGF will exhibit adequate control of the invasion of endothelial cells, osteoclasts, chondroclasts, and osteoblasts, which are present during primary ossification development [40]. Similarly, in those areas with a high concentration of MMP13 the cartilage will be completely degraded, giving rise to the trabecular bone architecture. These statements are supported by the study of Hiltunen et al. [43], in which a saline solution containing VEGF was injected into the distal femur of white rabbits. Their results demonstrate that VEGF induces bone formation by increasing osteoblast activity and decreasing the resorption process. Resorption is produced both by osteoclasts in bone and by metalloproteinases (MMPs) in the growth plate. Therefore, it can be supposed that in the development of the architecture of the primary spongiosa there must exist a regulation of MMP13 by VEGF (an inhibitory mechanism), so that the degradation of the cartilage stops and the invasion of the ossification front begins.
Model Description.
The regulatory process proposed in this model is outlined in Figure 1 and is based on an activator-substrate system (also called an exhaustion model) (see Appendix A). The process posits a control loop between VEGF (the activating factor) and MMP13 (the substrate), in which VEGF is self-activated and inhibits the production of MMP13, stopping the degradation process and giving way to the mineralization of the remaining cartilage matrix [40]. On the other hand, we assume that MMP13 is self-inhibited but enables the production of VEGF; this loop is therefore called a positive feedback system. VEGF promotes vascular invasion and brings with it the osteogenic cells that allow the construction of the primary spongiosa. MMP13 allows the degradation of the cartilage matrix and the subsequent invasion of the cartilage by the ossification front.
The regulatory mechanism is modeled by reaction-diffusion equations. The reaction term (the synthesis of soluble extracellular factors) is considered dependent on the concentration of the reactants and the presence of hypertrophic chondrocytes. Accordingly, the hypothesis is that the origin of the patterns present in the primary spongiosa could correspond, from a mathematical point of view, to the patterns that occur in the Turing space when two chemical reactants interact.
The relationships shown in Figure 1 can be quantified by means of equations that describe the local changes of the extracellular soluble factors and the production rate of bone:

$$\frac{\partial S_{VEGF}}{\partial t} = \alpha_1 C_{CH} - \mu S_{VEGF} + \gamma_0 S_{VEGF}^2 S_{MMP13} + D_{VEGF}\,\nabla^2 S_{VEGF}, \qquad (1a)$$

$$\frac{\partial S_{MMP13}}{\partial t} = \alpha_2 C_{CH} - \gamma_0 S_{VEGF}^2 S_{MMP13} + D_{MMP13}\,\nabla^2 S_{MMP13}, \qquad (1b)$$

where C_CH is the concentration of hypertrophic chondrocytes and S_VEGF and S_MMP13 represent the concentrations of VEGF and MMP13, respectively. The remaining terms are model parameters: α_1 and α_2 quantify the production of each molecular factor by the hypertrophic chondrocytes, μ is a constant that quantifies the inhibition of VEGF production by its excess, γ_0 regulates the nonlinear interaction between the concentrations of MMP13 and VEGF, quantifying the molecular activation or inhibition of each factor, and D_VEGF and D_MMP13 are the diffusion coefficients of VEGF and MMP13, respectively. In the biological interpretation of the above equations, the term γ_0 S_VEGF² S_MMP13 represents the nonlinear activation of S_VEGF (production of VEGF in the presence of MMP13) and the nonlinear consumption of S_MMP13 (in the presence of VEGF). Equation (1c) represents the activation of the bone production rate by the presence of high amounts of VEGF, which is regulated as time goes on.
In this equation, c_Bone indicates the production of bone per unit of volume due to the concentration and distribution of VEGF within the domain, η is a constant that regulates the production of bone over time, S_Umbral represents the threshold concentration of VEGF at which the process of ossification begins, T_a is the time required for the process of cartilage calcification, and t_r represents the time that limits the production of bone.
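To make the activator-substrate dynamics concrete, the nondimensional form of such a system can be integrated with simple explicit finite differences on a periodic 2-D grid. This is only an illustrative sketch, not the paper's 3-D finite element implementation: the parameter names `a`, `b`, `gamma`, and the diffusion ratio are illustrative nondimensional choices inside the Turing space (classical Schnakenberg values), not the values of Tables 2(a) and 2(b).

```python
import numpy as np

def simulate_rd(n=48, steps=10000, dt=0.02, h=1.0,
                a=0.1, b=0.9, gamma=1.0, d_v=1.0, d_m=10.0, seed=0):
    """Explicit finite-difference integration of a nondimensional
    activator-substrate (Schnakenberg-type) system on a periodic 2-D grid:
        dv/dt = gamma*(a - v + v**2 * m) + d_v * lap(v)   # activator (~ S_VEGF)
        dm/dt = gamma*(b     - v**2 * m) + d_m * lap(m)   # substrate (~ S_MMP13)
    """
    rng = np.random.default_rng(seed)
    v0, m0 = a + b, b / (a + b) ** 2                   # homogeneous steady state
    v = v0 * (1 + 0.1 * (rng.random((n, n)) - 0.5))    # ~10% random perturbation,
    m = m0 * (1 + 0.1 * (rng.random((n, n)) - 0.5))    # as in the paper's setup

    def lap(u):  # five-point Laplacian with periodic boundaries
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h ** 2

    for _ in range(steps):
        r = v * v * m                                  # nonlinear coupling term
        v = v + dt * (gamma * (a - v + r) + d_v * lap(v))
        m = m + dt * (gamma * (b - r) + d_m * lap(m))
    return v, m
```

With these parameters the homogeneous steady state is (1.0, 0.9), matching the initial state quoted later in the paper, and the random perturbation grows into a stationary spatial pattern; thresholding the activator field against a value like S_Umbral would mimic the gating role of equation (1c).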
Solution of the Reaction-Diffusion Equations System Using the Finite Element Method.
To solve the set of equations ((1a), (1b), and (1c)), we used the finite element method with tetrahedral elements. Due to the nonlinearity of the terms included in the model, we used the Newton-Raphson method to solve for the time evolution of the concentrations of VEGF (S_VEGF) and MMP13 (S_MMP13). The time integration was performed using the trapezoidal rule.
Weak Formulation.
Let ((1a), (1b), and (1c)) be rewritten in residual form (2). Using weighted residuals, each equation of system (2) takes the form (3); for a generic concentration S with reaction term f and diffusion coefficient D,

$$\int_\Omega w\left(\frac{\partial S}{\partial t} - f - D\,\nabla^2 S\right)d\Omega = 0, \qquad (3)$$

where Ω represents the domain of the problem, which is bounded by the boundary Γ, and w_1, w_2, and w_3 are the weight functions.
Using Green's theorem to weaken system (3), the residual of the problem is given by (4),

$$\int_\Omega \left(w\,\frac{\partial S}{\partial t} + D\,\nabla w\cdot\nabla S - w\,f\right)d\Omega - \int_\Gamma w\,D\,\frac{\partial S}{\partial n}\,d\Gamma = 0. \qquad (4)$$

Defining null-flux conditions on the boundary of the problem, the boundary terms in (4) cancel, so the residual is expressed as (5), i.e., (4) without the boundary integral. To discretize the finite element solution we use approximation functions written as linear combinations of shape functions, as in (6), where N_v, N_m, and N_c represent the shape functions, which depend only on the spatial formulation, the barred quantities are the values of S_VEGF and S_MMP13 at the nodal points, and the superscript e indicates the finite element discretization of the variable. For the weighting functions we used the Bubnov-Galerkin formulation, so that the functions w take the same form as the approximation functions N.
Substituting (6) into (5), we obtain the residual vector in its discrete form (7), where r^e_VEGF, r^e_MMP13, and r^e_cBone are the residual vectors for each equation and ∇N is the gradient of the approximation functions. Using a time discretization based on the Crank-Nicolson scheme, the equations in (7) are transformed into (8), where the nodal values of S^e_VEGF and S^e_MMP13 are evaluated at time t + Δt and α is a parameter characteristic of the integration method.
Using (8), it is possible to determine each of the terms of the tangent stiffness matrix, as shown in (9); these include mass-type integrals of the form $\frac{1}{\Delta t}\int_\Omega N^T N\,d\Omega$.
The nodal values of S^e_VEGF and S^e_MMP13 at time t + Δt can be approximated by the iterative Newton-Raphson algorithm, as described in (10).

Numerical Implementation.
The set of equations ((1a), (1b), and (1c)) was implemented and solved numerically using the finite element method with a Newton-Raphson scheme. The two examples given were solved on a laptop with 4096 MB of RAM and an 800 MHz processor. The computer simulation was carried out in an incremental iterative scheme that solves, computationally, the evolution of both the concentrations of the molecular factors (S_VEGF, S_MMP13) and the production of immature bone. Initially the growth plate is assumed to be a structural matrix with an initial concentration of hypertrophic chondrocytes of 65,000 cells/mm³. The initial concentrations of VEGF and MMP13 are distributed randomly in the growth plate, with a disturbance of 10% around the steady state, given by (S_VEGF, S_MMP13) = (1.0, 0.9) [ng/mL] [43] (see Appendix A). The selection of random initial conditions around the steady state is analogous to the molecular expression of the hypertrophic chondrocytes in an area of ossification. The flux condition for each molecular factor on the boundary is assumed to be null, because the conditions are assumed periodic over the domain. The parameter values used are shown in Tables 2(a) and 2(b); the justification of all the parameters used in the illustrated examples is given in Appendix B.
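The Newton-Raphson solution of one implicit time step can be sketched at its simplest for the local (reaction-only) kinetics at a single point, using a Crank-Nicolson update and an analytic 2×2 Jacobian. This is a minimal illustration of the iteration, not the element-level tangent stiffness assembly of the paper; the kinetics and the parameters `a`, `b`, `gamma` are the nondimensional Schnakenberg-type form assumed here for illustration.

```python
import numpy as np

def newton_cn_step(v, m, dt, a=0.1, b=0.9, gamma=1.0, tol=1e-12, max_iter=20):
    """Advance the local activator-substrate kinetics one Crank-Nicolson
    step, solving the nonlinear update with Newton-Raphson iterations."""
    def f(v, m):  # reaction terms: activator (v) and substrate (m)
        return np.array([gamma * (a - v + v * v * m),
                         gamma * (b - v * v * m)])

    def jac(v, m):  # analytic Jacobian of the kinetics at (v, m)
        return np.array([[gamma * (-1.0 + 2.0 * v * m),  gamma * v * v],
                         [gamma * (-2.0 * v * m),        -gamma * v * v]])

    x_old = np.array([v, m], dtype=float)
    f_old = f(v, m)
    x = x_old.copy()                    # initial guess: previous values
    for _ in range(max_iter):
        # Crank-Nicolson residual: x - x_old - dt/2 * (f(x) + f(x_old))
        r = x - x_old - 0.5 * dt * (f(x[0], x[1]) + f_old)
        J = np.eye(2) - 0.5 * dt * jac(x[0], x[1])   # tangent matrix
        dx = np.linalg.solve(J, -r)
        x += dx
        if np.linalg.norm(dx) < tol:    # converged
            break
    return x
```

At the steady state (1.0, 0.9) the residual vanishes and the step returns the state unchanged, which is a convenient sanity check on the iteration.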
Results and Discussion
To verify the potential of the proposed model to predict the primary spongiosa architecture, two numerical tests were conducted in a three-dimensional cubic element with a side length of 0.22 mm. This is an average length of a minimal element of the trabecular bone in the post-mineralized fetal stage, as presented by Ruimerman et al. [44] in their work on modeling and adaptation of trabecular bone. The parameters of the reaction-diffusion model were selected in order to obtain structures with a periodicity consistent with that present in trabecular bone [28,44,45]. The finite element mesh comprised 17,756 nodes and 16,625 tetrahedral elements. In all the simulations we used incremental steps of Δt = 0.1; the simulation time is measured in seconds, and every step Δt corresponds to 7 seconds of the biological process.
As a result of the chemical interaction between the two molecular factors (reactants), the numerical results yielded spatial patterns that are stable over time. The concentration of the molecular factors in the cartilage and the action of the diffusive processes allow the formation of a pattern that is replicated throughout the domain. The architecture of the primary spongiosa obtained with the proposed reaction-diffusion model depends on the parameters used in ((1a), (1b), and (1c)) (see Tables 2(a) and 2(b)); structures can be obtained with wave number (2,2,2), as shown in Figure 2, and with wave number (4,2,2), as in Figure 3 (see Appendix B). The wave numbers define or condition the frequency and distribution of the number of pores in a specific direction [9,22,46]. Which microstructure of the primary spongiosa appears depends on the location of the parameters of the reaction-diffusion equations ((1a), (1b), and (1c)) in the Turing space; the location of particular points in this space determines the spatial patterns shown in the results of this article.
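The requirement that the parameters lie in the Turing space can be checked directly from the linearization of the kinetics at the homogeneous steady state. The function below is an illustrative sketch for nondimensional Schnakenberg-type kinetics (assumed here as a stand-in for the model's reaction terms), not the parameter derivation of Appendix B:

```python
def in_turing_space(a, b, d):
    """Check the classical conditions for diffusion-driven (Turing)
    instability of Schnakenberg-type kinetics
        f(v, m) = a - v + v**2 * m,   g(v, m) = b - v**2 * m,
    where d = D_substrate / D_activator is the diffusion ratio."""
    v0 = a + b                       # homogeneous steady state
    m0 = b / (a + b) ** 2
    fu = -1.0 + 2.0 * v0 * m0        # partial derivatives at (v0, m0)
    fv = v0 ** 2
    gu = -2.0 * v0 * m0
    gv = -v0 ** 2
    det = fu * gv - fv * gu
    return (fu + gv < 0.0 and                          # stable without diffusion
            det > 0.0 and
            d * fu + gv > 0.0 and                      # diffusion destabilizes
            (d * fu + gv) ** 2 - 4.0 * d * det > 0.0)  # real unstable band
```

For example, with a = 0.1 and b = 0.9 (steady state (1.0, 0.9)) a diffusion ratio of roughly d > 8.6 is needed before a band of spatial modes becomes unstable and a pattern can form.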
The results in Figure 2 show the formation of two half-waves in each of the x, y, and z directions, while Figure 3 shows the formation of four half-waves in the x direction and two half-waves in the y and z directions. Figures 2(b), 2(c), 3(b), and 3(c) show the organization of VEGF and MMP13 after stabilization of the reaction-diffusion process; note that cartilage calcification occurs in the areas of highest VEGF concentration, while degradation (empty space) occurs in the areas of highest MMP13 concentration.
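A wave number such as (2,2,2) or (4,2,2) can be estimated from a simulated concentration field by Fourier analysis of 1-D profiles along each axis. The following is a hypothetical post-processing sketch (counting half-waves on a periodic profile), not part of the authors' method:

```python
import numpy as np

def dominant_half_waves(profile):
    """Estimate the number of half-waves in a periodic 1-D concentration
    profile from its dominant nonzero Fourier mode (full waves x 2)."""
    spec = np.abs(np.fft.rfft(profile - np.mean(profile)))
    k = int(np.argmax(spec[1:])) + 1     # skip the zero-frequency mode
    return 2 * k

# Example: a profile with two full waves contains four half-waves.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
print(dominant_half_waves(np.cos(2 * x)))   # -> 4
```

Applying this along the x, y, and z axes of a stabilized VEGF field would yield the triplet of half-wave counts used to label the patterns.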
From the reaction-diffusion mechanism, the change in the concentrations of VEGF and MMP13 can be determined for each time step, as shown in Figure 4. The concentrations of VEGF (S_VEGF) and MMP13 (S_MMP13) within the cartilage evolve according to their diffusivity, their interaction, and their expression by the hypertrophic chondrocytes. VEGF and MMP13 thus concentrate in high amounts in specific areas of the growth plate, allowing the formation of regular patterns similar to those found in different biological models [9,[47][48][49][50][51].
The architecture of the primary spongiosa in a cubic element of side length 0.44 mm is shown in Figure 5. In this figure we can observe the regular patterns for two different wave modes, (2,2,2) and (4,2,2). The figure also details the advance of the ossification front at different time steps, allowing the invasion of the cartilage by osteogenic cells that produce degradation and calcification. This promotes the formation of the primary trabeculae, which subsequently undergo bone remodeling processes as a product of the stress distribution in the bone tissue.
The development of the primary spongiosa has been a subject of study in recent years, but there is no clarity about the biological and mechanical factors affecting its formation. The structure of the trabecular tissue shows different patterns in its formation from its beginning (fetal age) to adulthood, as well as a variation in trabecular density resulting from the bone remodeling process and the effect of mechanical loads. The proposed model assumes that the formation of patterns is due to the interaction, through a reaction-diffusion system, of two molecules (VEGF, MMP13) during endochondral ossification. The results presented show that the patterns self-organize throughout the domain used, as shown in Figures 2, 3, and 5. These structures represent the architecture of the primary spongiosa considering only biochemical effects. The results obtained in this work can be compared with the structure used by Ruimerman et al. [44], which shows self-organized repetitive patterns that serve as the basis for the maintenance and adaptation of the mature trabecular structure.
The molecular factors produced by the differentiation of prehypertrophic chondrocytes, which act here as an activator-substrate pair, are not necessarily the only factors expressed by these chondrogenic cells that considerably affect the ossification process, and chondrocytes are not the only cells acting in this process. However, the proposed model focuses only on the formation of the primary spongiosa architecture and not on the entire calcification process, in which bone cells such as osteoclasts and osteoblasts also operate. A model of the complete calcification process should incorporate not only chemical influences (a bioregulatory model) but also loads and boundary restrictions (mechanical effects), as well as additional biochemical factors. For example, Ruimerman et al. [44], Jang and Kim [52], Renders et al. [53], and Coelho et al. [54] have shown in their work that mechanical factors play an important role in the development, adaptation, and maintenance of the trabecular bone structure. Moreover, from the viewpoint of biochemical factors, works such as that of Brouwers et al. [14] have evaluated the potential of three growth factors, PTHrP, Ihh, and VEGF, that interact and regulate tissue differentiation and the development of a long bone.
Much has been learned in recent years about the cellular and molecular mechanisms that guide the different events allowing the production of immature bone through the endochondral ossification mechanism [6,10,11,14,19,27,31,33,45,[55][56][57][58]. Nevertheless, open questions remain about how these different events relate and interact to allow ossification and endochondral growth.
In this paper we presented a bioregulatory model based on a set of reaction-diffusion equations to predict the formation of the primary spongiosa architecture. The application of reaction-diffusion models with parameters in the Turing space is an area of constant work and controversy in biology. Garzón-Alvarado et al. [8,9,22,46], Courtin et al. [18], and Crampin and Maini [47] used reaction-diffusion models in their research to simulate different biological processes, finding that these systems may explain many complex biological phenomena in which pattern formation is a constant feature.
Conclusions
In this paper we presented the development of a biochemical model involving reaction-diffusion systems with instabilities in the Turing space. This model attempts to explain the generation of the primary spongiosa during endochondral ossification, an event that is not yet fully understood owing to the number of biological, mechanical, and biochemical effects involved. The model involves the controlled interaction of two important molecular factors, VEGF and MMP13, present in bone development and formation.
The work presented in this paper illustrates and supports the validity of reaction-diffusion models for describing the processes occurring during a complex pattern-formation event in bone biology. From the results presented, it can be concluded that the chemical feedback between the two reacting molecular factors (activator and substrate) could be one explanation, among a set of possible factors, for the complex spatial patterns found at the origin of the primary spongiosa architecture. However, it is clear that these results have been obtained with a mathematical model based on assumptions and simplifications that should be discussed. The hypothesis presented suggests that the origin of the primary spongiosa is internally controlled by cartilage cells through two biochemical reagents, VEGF and MMP13. These are not the only factors acting in endochondral ossification; there are many others, including Ihh, PTHrP, Runx2, and BMP [10,11,14,31], which likely exert a similar influence on trabecular bone formation. Until now there has been a great effort to fully understand the role of each of these substances, how they interact, and which processes they regulate. It is possible that VEGF and MMP13 are not the factors that control the entire process of endochondral ossification, but the existence of an activator-substrate mechanism ensures high stability for the development of this biological process. On the other hand, the hypothesis raised is based entirely on the cycle of bone formation through the endochondral ossification mechanism, as presented in [6,10,28]. Accordingly, in the bioregulatory model (see Figure 1) MMP13 is taken as the agent responsible for the degradation of the cartilage septa and as a promoter of the activation of VEGF, which fosters vascularization of the cartilage and subsequent calcification. However, we do not discard the possibility of a new bioregulatory model that considers the opposite case, in which the activation of MMP13 is produced by the action of VEGF; this would suggest that cartilage degradation initially requires vascular invasion, as presented in the work of Pufe et al. [59].
In the development of the model it is assumed, as an initial condition, that both the activating factor and the substrate are released by the hypertrophic chondrocytes; however, the type of spatial instability obtained is independent of the initial conditions, and the model is very stable and robust with respect to both the initial conditions and the range of parameters.
Finally, despite all the limitations and simplifications, the proposed mathematical model is able to reproduce in detail the architecture of the primary spongiosa, allowing variation in the porosity and thickness of the trabeculae. The proposed model will serve as the basis for modeling the formation of the secondary spongiosa architecture through the bone remodeling process, incorporating the action of bone cells and the different mechanical effects that determine the orientation of the trabeculae.
A. Reaction-Diffusion Equations
Turing [60] proved theoretically that a chemical reaction-diffusion system could spontaneously evolve into a heterogeneous spatial pattern from a uniform initial state in response to infinitesimal perturbations [51,59]. His model is a system of two coupled partial differential equations, known as a two-component reaction-diffusion system. In the most general case, where several components are present, the system has the form

∂u_i/∂t = D_i ∇²u_i + f_i(u_1, ..., u_n) in Ω, i = 1, ..., n, (A.1)

where the unknown functions u_i(x, t) can be interpreted as reagent concentrations, the term D_i ∇²u_i describes the diffusion of each reagent, and f_i is a smooth function (usually polynomial or rational in u_i) that describes the nonlinear chemical interaction between the reagents. Ω denotes the bounded domain on which the system operates, together with suitable initial and boundary conditions. Turing also introduced key concepts such as the activator and the inhibitor, under the assumption that cellular states are discrete and can be modified by special chemical reagents. It is known that, for a two-component reaction-diffusion system, the Turing instability that leads to pattern formation [60] can be found for two types of systems, characterized by the signs of the stability matrix (Jacobian) at a positive homogeneous stationary state. The signs of the coefficients of the dynamic system linearized around a fixed point contain relevant information on the mechanisms of destabilization of the homogeneous solution; for a system of two chemical components, this fixed point (u0, v0) is the intersection of the curves defined by f(u, v) = 0 and g(u, v) = 0 in the u-v plane.
A Turing instability arises if the reaction rates and diffusion coefficients allow the fixed point (u0, v0) to be stable to small homogeneous perturbations while becoming unstable to inhomogeneous perturbations. These conditions can be summarized mathematically by four inequalities:

f_u + g_v < 0,
f_u g_v − f_v g_u > 0,
d f_u + g_v > 0,
(d f_u + g_v)² − 4d (f_u g_v − f_v g_u) > 0, (A.3)

where the partial derivatives are evaluated at (u0, v0) and d represents the nondimensional ratio of the diffusivities, d = D_v/D_u. These inequalities follow from linear stability analysis, that is, from the study of the eigenvalues (λ) of the linearized dynamics. The sign structure of the stability matrix indicates which type of Turing instability can be found; for the case presented in this paper, the matrix corresponds to type 1, which identifies an activator-substrate reaction-diffusion system.
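The four inequalities can be checked numerically. The sketch below assumes the standard Turing conditions for a two-component system (stable without diffusion, diffusion-driven unstable) and evaluates them for the Schnakenberg kinetics with a = 0.1 and b = 0.9 used later in the appendix; the diffusivity ratios d tested here are illustrative choices, not values from the paper.

```python
# Numerical check of the standard Turing conditions for a two-component
# system. Jacobian entries are those of the Schnakenberg kinetics with
# a = 0.1, b = 0.9; the diffusivity ratios tested are illustrative.
a, b = 0.1, 0.9
u0, v0 = a + b, b / (a + b) ** 2
fu, fv = -1 + 2 * u0 * v0, u0 ** 2
gu, gv = -2 * u0 * v0, -u0 ** 2

def turing_unstable(d):
    """Evaluate the four inequalities at the fixed point (u0, v0)."""
    det = fu * gv - fv * gu
    return (fu + gv < 0 and det > 0 and
            d * fu + gv > 0 and
            (d * fu + gv) ** 2 - 4.0 * d * det > 0)

print(turing_unstable(1.0))   # False: equal diffusivities cannot destabilize
print(turing_unstable(10.0))  # True: a fast-diffusing substrate can
```

The first condition pair guarantees homogeneous stability; the second pair requires the substrate to diffuse sufficiently faster than the activator, which is why d = 1 always fails.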
According to (A.1), for a two-component reaction-diffusion system we have

∂u/∂t = D_u ∇²u + f(u, v), ∂v/∂t = D_v ∇²v + g(u, v),

where the reaction terms are the Schnakenberg kinetics used in Appendix B:

f(u, v) = a − u + u²v, g(u, v) = b − u²v.

At the point of equilibrium we have f(u0, v0) = g(u0, v0) = 0. The Jacobian (stability matrix) is given by (A.7):

J = [f_u f_v; g_u g_v] = [−1 + 2uv u²; −2uv −u²].

For a steady point (u0, v0) we have f(u, v) = 0 and g(u, v) = 0; therefore, u0 and v0 are given by (A.10):

u0 = a + b, v0 = b/(a + b)².

Taking a = 0.1 and b = 0.9 [9] and replacing in (A.10) gives (u0, v0) = (1.0, 0.9), so that (A.11):

J(u0, v0) = [0.8 1; −1.8 −1].

Therefore, the stability matrix takes the sign structure [+ +; − −], indicating that the substance u is self-activated and inhibits the substance v, while the substance v is self-inhibited and activates the substance u, forming an activator-substrate system.
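The equilibrium and the sign structure of the Jacobian can be verified with a few lines of arithmetic, assuming the Schnakenberg reaction terms named in Appendix B and the appendix values a = 0.1, b = 0.9:

```python
# Equilibrium and Jacobian for the Schnakenberg terms f = a - u + u^2 v,
# g = b - u^2 v (the kinetics named in Appendix B), with a = 0.1, b = 0.9.
a, b = 0.1, 0.9
u0 = a + b            # from f + g = 0
v0 = b / u0**2        # from g = 0
J = [[-1 + 2 * u0 * v0, u0**2],   # [f_u, f_v] at (u0, v0)
     [-2 * u0 * v0, -u0**2]]      # [g_u, g_v] at (u0, v0)
print(u0, v0)  # 1.0 0.9
print(J)       # [[0.8, 1.0], [-1.8, -1.0]] -> sign pattern (+ +; - -)
```

The positive diagonal entry for u and negative diagonal entry for v confirm the activator-substrate classification stated above.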
B. Estimation of the Values for the Parameters
The set of equations (1a), (1b), and (1c) corresponds to a coupled system, in which the equations for the molecular factors, (1a) and (1b), are extended reaction-diffusion equations, similar to a Turing system, exhibiting diffusion-driven instabilities. For D_VEGF, D_MMP13 ≠ 0, the diffusion-driven instabilities appear for some combinations of parameters [9,47,61]; this defines a domain in parameter space called the Turing space. To obtain the Turing space, a linear stability analysis of the reaction-diffusion system around the homogeneous solution is necessary; this solution is obtained by imposing (∂S_VEGF/∂t)(D_VEGF = 0) = 0 and (∂S_MMP13/∂t)(D_MMP13 = 0) = 0, which leads to (S*_VEGF, S*_MMP13) = ((α1 + α2)/μ, α2 μ²/γ0 (α1 + α2)²). The linear analysis allows one to find the spatial patterns of the linearized solution and the range of parameters that ensure the emergence of such specific patterns [61]. Therefore, the solution can be expressed as (S_VEGF, S_MMP13) = (u + S*_VEGF, v + S*_MMP13), where u and v are small perturbations of each molecular factor. From (1a) and (1b), the linear analysis allows the corresponding inequalities to be written; these inequalities define a domain in parameter space, known as the Turing space, where the uniform steady state (S*_VEGF, S*_MMP13) is linearly unstable. Expressing (1a) and (1b) in nondimensional form (the Schnakenberg equation) as a function of the small perturbations (u, v) of the molecular factors yields the system (B.3), from which the parameters of the two models and their relationships can be identified, where T is the characteristic time of the biological process and L is the characteristic length of the dimensional model. Therefore, by defining γ, d, a, and b it is possible to obtain the eigenvalues and eigenvectors of the system (B.3) and, from them, the different spatial patterns corresponding to different wave numbers.
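As an illustration of this final step, the growth rate of a perturbation with wavenumber k can be obtained as the largest eigenvalue of the linearized operator γJ − k² diag(1, d). The sketch below scans k² for an illustrative parameter choice (γ = 200 and d = 10 are assumptions, not the paper's Table 1 values) and reports the band of unstable wavenumbers that selects the admissible wave modes:

```python
import numpy as np

# Dispersion relation of the linearized Schnakenberg system: a mode with
# wavenumber k grows at the largest eigenvalue of gamma*J - k^2*diag(1, d).
# a = 0.1, b = 0.9 follow the appendix; gamma, d are illustrative.
a, b = 0.1, 0.9
gamma, d = 200.0, 10.0
u0, v0 = a + b, b / (a + b) ** 2
fu, fv = -1 + 2 * u0 * v0, u0 ** 2
gu, gv = -2 * u0 * v0, -u0 ** 2

def growth_rate(k2):
    """Largest real part among the eigenvalues of the mode-k2 linearization."""
    A = np.array([[gamma * fu - k2, gamma * fv],
                  [gamma * gu, gamma * gv - d * k2]])
    return np.linalg.eigvals(A).real.max()

k2_grid = np.linspace(0.0, 300.0, 3001)
unstable = k2_grid[np.array([growth_rate(k) for k in k2_grid]) > 0]
print(round(unstable.min()), round(unstable.max()))  # 40 100
```

Only modes whose k² falls inside this band grow; on a finite domain this is how the choice of γ and d selects discrete wave modes such as (2,2,2) or (4,2,2).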
In the case of the proposed dimensional model it is necessary to define the parameters (L, D_VEGF, D_MMP13, C_CH, μ, γ, α1, and α2). To estimate these values, we consider the following experimental evidence: (i) The typical concentration of VEGF in human tissue is S_VEGF = S_MMP13 = 1 ng/mL [43].
(ii) The study domain is a cubic three-dimensional element of side L = 0.22 mm [44].
To reproduce the patterns present in the primary trabecular bone architecture with the proposed model, it is necessary that all parameters lie in the Turing space and satisfy the restrictions (A.3). Therefore, taking the values for S_VEGF, L, and D_VEGF, and the set of values for γ, d, a, and b [8,22] (see Table 1) that satisfy the Turing restrictions, and using the relations of (24), we obtain the set of values used for the solution of (1a), (1b), and (1c) (see Tables 2(a) and 2(b)):

Parameter | Value | Units
μ | 1.327 | mm³/cell·day
γ0 | 1.327 × 10¹² | mm⁹/cell·day·g²
α1 | 1.327 × 10⁻⁷ | g/cell·day
α2 | 1.19 × 10⁻⁶ | g/cell·day
D_MMP13 | 5.9 × 10⁻⁴ | mm²/s
T | 11.7 | minutes
Discovery of a drug candidate for GLIS3-associated diabetes
GLIS3 mutations are associated with type 1, type 2, and neonatal diabetes, reflecting a key function for this gene in pancreatic β-cell biology. Previous attempts to recapitulate disease-relevant phenotypes in GLIS3−/− β-like cells have been unsuccessful. Here, we develop a “minimal component” protocol to generate late-stage pancreatic progenitors (PP2) that differentiate into mono-hormonal, glucose-responsive β-like (PP2-β) cells. Using this differentiation platform, we discover that GLIS3−/− hESCs show impaired differentiation, with significant death of PP2 and PP2-β cells, without impacting the total endocrine pool. Furthermore, we perform a high-content chemical screen and identify a drug candidate that rescues mutant GLIS3-associated β-cell death both in vitro and in vivo. Finally, we discover that loss of GLIS3 causes β-cell death by activating the TGFβ pathway. This study establishes an optimized directed differentiation protocol for modeling human β-cell disease and identifies a drug candidate for treating a broad range of GLIS3-associated diabetic patients.
Much of Fig. 5F is pure speculation or just well known facts about the SMAD signaling pathway. I do not see any reason to include this schematic in the main section of the manuscript. It should be deleted or placed in Supplements.
Reviewer #2 (Remarks to the Author): Amin et al. develop an extension of published human pluripotent stem cell differentiation protocols, which produces mono-hormonal beta-like cells (referred to as PP2-beta) that are more mature than those achieved through older versions. (Importantly, their PP2-beta cells are glucose responsive, and express maturation markers that are not detected in cells differentiated by the older versions of the protocols). This, by itself, is an important achievement. The authors then use this improved protocol to unmask the role of GLIS3 in human pancreatic development, using a clever isogenic hESC line analysis. GLIS3 is one of the most important GWAS candidates for diabetes, implicated in both T1D and T2D, and thus most likely to affect beta cells. The authors confirm that GLIS3 is involved in cell death, and use a high-content chemical screen to identify the drug galunisertib as a GLIS3-specific candidate for rescuing cell death in GLIS3 mutant cells. This is an elegant work which maximizes the strengths of human pluripotent stem cell differentiation for disease modeling and drug discovery, uses this platform exactly how it should be used, and reports on several important advancements to the field. I have no major concerns with the manuscript.
Minor points: 1) The loading controls for the Western blots in figure 5c and 5d seem way overexposed, and it is hard to conclude anything from the experiment.
2) The authors state that "compared with PP1, PP2 cells express higher levels of trunk PP markers, including PDX1, NKX6.1 and NEUROD1, as indicated by qRT-PCR assays", but the referenced figure (figure 1b) shows no significant difference in PDX1 expression between PP1 and PP2 cells.
3) Is there significance for the difference in GHRL expression between replicates of PP1-beta expression levels in figure 1h? Are these biological replicates?
Reviewer #3 (Remarks to the Author): Human embryonic stem cells (hESCs) have recently been used to study the function of genes involved in human embryonic and foetal development. In this article, the authors generate new inactivating alleles of GLIS3 in hESCs and show that they decrease beta cell development. GLIS3 is a gene which causes neonatal diabetes upon homozygous or compound heterozygous mutations, and for which heterozygous variants predispose to both type 1 and type 2 diabetes. While inactivation in mice has clarified its role in beta cell development, its role in humans has only been extrapolated from mouse data. The authors convincingly show that loss of GLIS3 reduces beta cell development and increases cell death in progenitors (only those obtained using specific protocols) and beta cells. The article further shows that it is then possible to screen for drugs that correct the defect, identifying TGFb inhibitors. The screen adds tremendously to the study and could inspire others. It is unlikely that it will lead to a treatment using TGFb inhibitors during pregnancy, but it clarifies the disease mechanisms. The authors end their paper with a note on how this should help in neonatal diabetes treatment and should clarify what they have in mind.
While the article fails to decipher the mechanisms by which GLIS3 controls the TGFb pathway, this is nevertheless a beautiful and comprehensive study. Some requests for improvement are provided below. A major one is whether death is a primary defect or not. Since it occurs rather late in the protocol, it may be a consequence of overcrowding. Is the proliferation rate of PP2 cells the same in WT and GLIS3 KO? It is also striking that it starts two days after the change of medium triggering differentiation. Since death also seems to affect PP2 progenitors prior to their differentiation, wouldn't those be expected to die as soon as they become GLIS3-high? This should occur at the latest at 23 days. There is a big gap between day 9 and day 23, during which PP1 cells transform into PP2 cells (or are selected). Why does it take that long? When does GLIS3 go up in this period?
The second main concern is the lack of focus on NEUROG3 (see more specific comments below). It is important to provide a point of comparison with the data in the knock-out mice. This is all the more important given that there is little in common between the phenotype described in mice and in human ESCs other than both resulting in fewer beta cells. If in addition the tools and readouts used are different, the comparison is limited even further.
Points needing clarification, improvement or correction:
- It is often unclear in the paper which of the experiments were performed on different KO lines and, when done on one, which one was used.
- Abstract, lines 3-4: Was GLIS3 really not expressed in the previous protocol? It is said later that it is, though at lower levels. The reason for the lack of phenotype in previous attempts is rather speculative. The previous mutations were possibly not total loss-of-function. Arguments for the new ones being total loss-of-function should be discussed somewhere.
- Abstract, line 5: it does not seem justified to talk about a secondary transition in human. This is a concept in mice, where there are two waves of endocrine cell production, but this does not seem to be conserved in human. The cells may indeed correspond to a later type of progenitor (refer to what it corresponds to in vivo and what the criteria are) but should not be called secondary transition.
- Page 4, line 12: Clarify if it is known when GLIS3 is expressed in the human pancreas. Do the PP1 cells express GLIS3? This is clarified later, but a reader can wonder already at this point.
- Page 5, line 8: Why do the authors write that there is spontaneous differentiation? It is triggered by a medium change, isn't it?
- Page 5, line 10: Clarify if this protocol also leads to the differentiation of other monohormonal cell types.
- Page 5, line 15: How do the levels of UCN3 or MAFA compare to mature beta cells? The maturity of beta cells in vitro is a big issue at the moment. It is not the main point of the paper, but many readers will be interested to know.
- Page 7, bottom: Indicate in the text the effect on GCG cells.
- Page 13, lines 12-14: While the difference with mouse studies may indeed highlight a species difference, it may also reveal a difference between in vitro and in vivo systems. The authors should be more careful.
- Figure 1c: How many PP1 and PP2 samples were used? This looks like n=1 of each, which is not sufficient considering the modest fold changes. The fold changes are at odds with Fig. 1b for the genes that could be compared.
- Figure 1k: The ratio of secretion at 20 mM glucose versus 2 mM is not fantastic for the PP2-β cells (or for the control islets themselves).
- Figure 2b: Repeat numbers? Statistical significance? NEUROG3 seems down. Why are endocrine cell numbers normal? Is it a reduction in transcripts per cell or in cells expressing NEUROG3? What happens at the protein level? Chromogranin A also seems down, though cell numbers are normal. Are these changes in NEUROG3 and CHGA stable with time in vitro?
- Figure 3b and 3d: The Annexin V flow profiles are very unusual. Annexin V is usually much easier to gate, forming two distinct populations rather than the continuum used here. The continuum is much more difficult to gate and could lead to wrong interpretations. Attention to the Topro misspelling in 3b. Cell death is, however, shown in many different ways and is likely real, but since it was not seen in the mouse model that's something to be careful about.
- Figure S1b: Is it not PP2 at day 23?
We would like to thank all the reviewers for the valuable comments on our manuscript and for appreciating the significance and novelty of our work. We understand the reviewers' concerns and have performed additional experiments to address them. These experiments are included as 15 main and 26 supplementary new or modified figure panels and 2 new supplementary tables. The responses to the reviewers' comments are detailed below, highlighted in blue.
Reviewers' comments:
Reviewer #1 (Remarks to the Author): In this study, the investigators examine the role of GLIS3 in pancreatic beta cell differentiation using human PSCs. They reveal a role for GLIS3 in the regulation of apoptosis and SMAD signaling. Overall the findings are interesting and novel.
1. Abstract, line 8-9: "This phenotype is distinct from Glis3-/- mice, emphasizing the significance of a human disease model". It is a bit premature to conclude this. It is difficult to compare an in vitro cell system with the differentiation of beta cells in vivo, where the cell environment (e.g., mesenchymal cells and endothelial cells) as well as hormones produced by other tissues can influence this differentiation. Previous data by the same investigators (Ref 20) showed a different result with Glis3 KO hPSCs on cell differentiation than the current study, indicating that the differentiation protocol used can greatly influence what effect loss of GLIS3 expression has on beta cell differentiation. This is another indication that one needs to be cautious translating in vitro data to the phenotype observed in Glis3 deficiency in humans or mice. In addition, in humans with GLIS3 deficiency all pancreatic endocrine cells are reduced, different from what is observed in the in vitro cell system described here.
Response: We understand the reviewer's concern. To address this issue, we have removed the claim from the abstract. In addition, we have added the following description in the discussion: "The difference between GLIS3-/- hESC-derived cells and Glis3-/- mice might underlie the distinction between mouse development and human cell-based systems or the difference between in vitro and in vivo conditions." at Page 15 Line 1.
In reference to humans with GLIS3 deficiency, we are not aware of any studies that directly analyze the pancreatic tissue of patients carrying loss-of-function mutations of GLIS3. We would be happy to include the reference if the reviewer recommends it.
Response: The flow cytometry plot for PDX1+ cells has been added as Supplementary Fig. 1a. The percentage of PDX1 + cells is 83.9 ± 9.1%.
Response: We have changed the description to "the derived primary transition PP1-β cells are mostly polyhormonal (comprising a population of 60-70% poly-hormonal and 30-40% mono-hormonal INS + cells), which represent the cells from older protocols." at Page 4 Line 20. In contrast, 85-95% of PP2-β INS + cells derived using the new differentiation protocol are mono-hormonal.

4. Fig. 1g: the % of UCN3, NKX6.1, PAX6 and ISL1 positive cells should be provided. It is difficult to see from the IHS what % is positive.
Response: We have added the percentage of UCN3, NKX6.1, PAX6 and ISL1 positive cells in the main text at Page 5 Lines 16-17.
5. The relative percentages of Sst+ and Ghrl+ cells are increased. This relative increase appears to be due to the loss of cells by apoptosis, thus fewer cells, not to an increase in the absolute number of these cells due to increased Sst and Ghrl differentiation. The authors should comment on that in the paper.
Response: To address this concern, we quantified the total number of SST + and GHRL + cells in WT and GLIS3-/- hESC-derived cells at D30_L, which were differentiated at a similar starting density. The total numbers of GHRL + and SST + cells were significantly higher in GLIS3-/- cells compared to WT cells. We have included the data in Supplementary Fig. 4i, 4j.
6. P 9: The investigators state that most of the Glis3-PP2-cells are PP2 cells, but they also state that Glis3-PP2-cells are more prone to undergo apoptosis. Also seems that in the absence of Glis3 at least some PP2 cells still are able to differentiate into PP2-cells? The PP2 and PP2-terminology should be used for phenotype of the cell not the presumed stage of differentiation. Not clear how they distinguish Glis3-PP2 from Glis3-PP2-cells and what causes the increase in sensitivity to undergo apoptosis in PP2-cells.
Response: Some characters are missing in the reviewer's comments. We assume that the reviewer means "PP2-β cells" by "PP2-cells".
First, we agree with the reviewer that it is confusing to use PP2 and PP2-β for both cells and stages. To clarify this issue, the differentiation stages and cells were defined as below and added as Supplementary Table 1.

Stage | Cells
D9 (day 9) | PP1 cells
D16_E (day 16, early progenitor protocol) | INS-GFP + cells, defined as PP1-β cells
D23_L (day 23, late progenitor protocol) | PP2 cells
D30_L (day 30, late progenitor protocol) | INS-GFP + cells, defined as PP2-β cells

Second, the reviewer is correct. Although the percentage of INS-GFP + cells decreases at D30_L, loss of GLIS3 does not completely block differentiation to PP2-β cells.
Finally, the percentage of apoptotic cells in GLIS3-/- PP2 cells is around two-fold higher than that in WT PP2 cells (Fig. 3a and 3b), which is comparable to the fold change of the percentage of apoptotic cells in GLIS3-/- PP2-β cells versus WT PP2-β cells (Fig. 3d, 3e). However, the overall apoptotic rate is higher in PP2-β cells than in PP2 cells; this might be due to the change of culture medium. PP2 cells were maintained in a relatively rich medium containing EGF and FGF, which facilitates cell survival and self-renewal, whereas PP2-β cells were maintained in basal medium containing only B27.
7. It is not clear whether the reduced INS gene expression in the PP2-cells is due to a reduced number of INS+ cells or whether INS transcription is affected as well by the lack of GLIS3 expression. Since NGN3 has been reported to be a GLIS3 target gene, I was also wondering whether the lack of GLIS3 affects the number of NGN3+ cells. These two questions should be addressed in the paper.
Response: Again, some characters are missing in the reviewer's comments. We assume that reviewer means "PP2-β cells" by "PP2-cells".
To determine whether insulin transcription is affected by the lack of GLIS3 expression, qRT-PCR analysis was applied to monitor insulin transcriptional expression in the purified WT and GLIS3 -/-INS-GFP + PP2-β cells. Indeed, the transcriptional expression of insulin in GLIS3 -/-INS-GFP + PP2-β cells is significantly lower than that of WT INS-GFP + PP2-β cells. We have included the data in Supplementary Fig. 4k.
Regarding the NGN3 + cells, immunostaining was used to quantify the number and percentage of NGN3 + cells in WT or GLIS3-/- hESC-derived cells at D9, D16_L and D23_L. We did not observe a significant difference between WT and GLIS3-/- hESC-derived cells at any of the time points tested. We have included the data in Supplementary Fig. 3c, 3d.
8. I am surprised about the poor separation between Annexin V+ and Annexin V− cells. There appears not to be a clear distinction between the two cell populations. This makes calculation of the number of apoptotic cells less convincing. Can this be improved, for example with double staining for ToPro3 and Annexin V as was done in Fig. 3b? This gives a much better separation between apoptotic and nonapoptotic cells.
Response: We apologize for the poor separation of Annexin V+ and Annexin V− cells. To determine whether this is due to the hESC-derived population or the staining protocol, we first tested the same staining protocol on EndoC-βH1 cells and found a clear separation of Annexin V+ and Annexin V− cells in this cell line (a in the following figure). Second, to improve the accuracy of the gating strategy, we have included a positive control, cells treated with 10 µM Camptothecin (Sigma) for 4 hours, to facilitate gating (c and d in the following figure and Supplementary Fig. 5c).
In addition, we repeated the experiments. The gate set based on the Camptothecin-treated sample was also applied to the INS-GFP + cells (Fig. 3d, 3e).
Together, we have repeated our experiments using two Annexin V staining kits with different fluorescence conjugates. In addition, we have included the positive controls of EndoC-βH1 cells and hESC-derived cells treated with Camptothecin to facilitate gating. Importantly, Annexin V staining using the two different kits led to the same conclusion: GLIS3-/- cells show significantly increased apoptosis compared to WT cells. This can be visualized by plotting the live cells stained for Annexin V on a histogram (Fig. 3f). We also measured the median fluorescence intensity (MFI) and found that GLIS3-/- cells have a significantly higher MFI for Annexin V than WT cells (Fig. 3g). We feel these data are sufficient to support our conclusion that GLIS3-/- cells show significantly increased apoptosis compared with WT cells.
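For readers unfamiliar with this gating strategy, the sketch below illustrates the general idea of placing a gate using a positive control and then comparing the fraction of Annexin V-positive events between samples. All intensity values and distribution parameters are simulated, hypothetical stand-ins, not the actual flow cytometry data.

```python
import numpy as np

# Hypothetical sketch of gating with a positive control. The lognormal
# parameters are invented solely to mimic a dim live population and a
# bright apoptotic (Camptothecin-like) population.
rng = np.random.default_rng(1)
untreated = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)  # mostly live
positive = rng.lognormal(mean=6.0, sigma=0.5, size=10_000)   # apoptotic control

# Place the Annexin V gate just above the bulk of the untreated
# population, here simply its 99th percentile.
gate = np.percentile(untreated, 99)

def pct_annexin_pos(sample):
    """Percentage of events falling above the gate."""
    return 100.0 * np.mean(sample > gate)

print(pct_annexin_pos(untreated))  # ~1% by construction
print(pct_annexin_pos(positive))   # the large majority gated as positive
```

The same fixed gate applied to every sample is what makes the WT versus knockout percentages (and the MFI comparison) directly comparable.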
9. Fig. 5c and d: How do the investigators know that the increase in p-SMAD in PP2-β cells is due to changes in INS+ cells and not to a shift in the cell populations (reduced percentage of INS+ cells)? Although the inhibitor experiment hints that it is related to β cells, it could also act on other endocrine cells that affect β cells via paracrine factors. It would be good to examine whether p-SMAD co-localizes with INS+ cells by immunohistochemical staining or flow cytometry.
Response: To address the reviewer's concern, we have performed staining of p-SMAD2/3. Indeed, nuclear staining of p-SMAD2/3 was detected in GLIS3-/- cells at D30_L, but not in WT cells at D30_L. In addition, all INS + cells have nuclear expression of p-SMAD2/3. Interestingly, the increase in p-SMAD2/3 was observed in both INS + and INS − cells. The image has been added as Supplementary Fig. 9b. Consistent with these data, the inhibitor rescues cell death in both INS + and INS − cells (Fig. 4d-4f and Supplementary Fig. 8a, 8b).
10. It is not clear how p-SMAD is activated. Is the expression of any of the known activators upstream of the SMAD pathway increased in GLIS3-deficient cells?
Response: RNA-seq analysis suggested that both TGFβ ligands (TGFβ2 and TGFβ3) and the receptor (TGFBR2) are significantly upregulated in GLIS3-/- cells compared with WT cells at D23_L, suggesting that loss of GLIS3 might activate p-SMAD signaling by upregulating these genes. We have added this data in Supplementary Fig. 9a.

11. Much of Fig. 5F is pure speculation or just well known facts about the SMAD signaling pathway. I do not see any reason to include this schematic in the main section of the manuscript. It should be deleted or placed in Supplements.
Response: We have moved the old Fig. 5f to Supplementary Fig. 9c.
Response: We have changed it to Fig. 1g at page 5, line 14.
2. Fig. 1a and Supplemental Fig. 1a are redundant. Delete one of the two Figures.
Response: We have deleted the schematic in Supplementary Fig. 1a.
3. M&M, Immunohistochemistry: correct typo in "BD Bioscences".

Response: We apologize for the typo. This has been corrected.
Response: In the new Fig 3b, we showed new flow cytometry analyses using DAPI.
Reviewer #2 (Remarks to the Author): Amin et al. develop an extension of published human pluripotent stem cell differentiation protocols, which produces mono-hormonal beta-like cells (referred to as PP2-beta) that are more mature than those achieved through older versions. (Importantly, their PP2-beta cells are glucose responsive, and express maturation markers that are not detected in cells differentiated by the older versions of the protocols.) This, by itself, is an important achievement. The authors then use this improved protocol to unmask the role of GLIS3 in human pancreatic development, using a clever isogenic hESC line analysis. GLIS3 is one of the most important GWAS candidates for diabetes, implicated in both T1D and T2D, and thus most likely to affect beta cells. The authors confirm that GLIS3 is involved in cell death, and use a high-content chemical screen to identify the drug galunisertib as a GLIS3-specific candidate for rescuing cell death in GLIS3 mutant cells.
This is an elegant work which maximizes the strengths of human pluripotent stem cell differentiation for disease modeling and drug discovery, uses this platform exactly how it should be used, and reports on several important advancements to the field. I have no major concerns with the manuscript.
Response: We thank the reviewer for acknowledgement of the significance and novelty of the manuscript.
Minor points: 1) The loading controls for the Western blots in figures 5c and 5d seem way overexposed, and it is hard to conclude anything from the experiment.
Response: We have included less-exposed blots for both experiments. The quantification in Fig. 5c and 5d was not affected, since the relative phosphorylation level was normalized to total SMAD2/3, whose signal has a wide linear range of detection. In addition, the increase of p-SMAD2/3 in GLIS3-/- cells was further confirmed by immunostaining (Supplementary Fig. 9b).
2) The authors state that "compared with PP1, PP2 cells express higher levels of trunk PP markers, including PDX1, NKX6.1 and NEUROD1 as indicated by qRT-PCR assays", but the referred figure (figure 1b) shows no significant difference in PDX1 expression between PP1 and PP2 cells.
Response: We apologize for the confusion. We have corrected this in the text and changed the sentence to "Compared with PP1, PP2 cells express higher levels of late trunk PP markers, including NKX6.1 and NEUROD1 as indicated by qRT-PCR assays" at Page 5, Line 10.
3) Is the difference in GHRL expression between the PP1-beta replicates in figure 1h significant? Are these biological replicates?
Response: While the FPKM values for GHRL are higher in both PP1-β samples, the difference does not reach significance. Overall, our immunostaining and flow cytometry data (Supplementary Fig. 1i, 1j) suggest that GHRL is not strongly co-expressed with INS in either PP1-β or PP2-β cells. Thus, no significant difference was detected. In the old version of the manuscript, we included all endocrine markers, including GHRL, in the heatmap for the reader's reference. If the reviewer thinks it is inappropriate, we will remove this gene from the heatmap. The samples shown in Fig. 1h are biological replicates.
Reviewer #3 (Remarks to the Author): Human embryonic stem cells (hESCs) have recently been used to study the function of genes involved in human embryonic and foetal development. In this article, the authors generate new inactivating alleles of GLIS3 in hESCs and show that they decrease beta cell development. GLIS3 is a gene that causes neonatal diabetes upon homozygous or compound heterozygous mutations, and heterozygous variants of it predispose to both type 1 and type 2 diabetes. While inactivation in mice has clarified its role in beta cell development, its role in human is only extrapolated from mouse data. The authors convincingly show that loss of GLIS3 reduces beta cell development and increases cell death in progenitors (only those obtained using specific protocols) and beta cells. The article further shows that it is then possible to screen for drugs that correct the defect, and identifies TGFb inhibitors. The screen adds tremendously to the study and could inspire others. It is unlikely that it will lead to a treatment using TGFb inhibitors during pregnancy, but it clarifies the disease mechanisms. The authors actually end their paper with a note on how this should help in neonatal diabetes treatment and should clarify what they have in mind.
Response: We thank the reviewer for the appreciation of the significance of the manuscript. We agree with reviewer 3 that it might be difficult to apply a TGFβ inhibitor during pregnancy. Since many GLIS3 SNPs are strongly associated with both T1D and T2D, it is conceivable that they share a common mechanism with our knockout model. The identified drug candidates can potentially be used to treat patients carrying GLIS3 SNPs that lead to a decrease in or loss of GLIS3. More importantly, the TGFβ inhibitor provides a chemical tool to dissect the downstream signal perturbations in the absence of GLIS3. We have discussed this issue in the main text at Page 15 Line 21.
While the article fails to decipher the mechanisms by which GLIS3 controls the TGFb pathway, this is nevertheless a beautiful and comprehensive study. Some requests for improvement are provided below. A major one is whether death is a primary defect or not. Since it occurs rather late in the protocol, it may be a consequence of overcrowding. Is the proliferation rate of PP2 cells the same in WT and GLIS3 KO?

Response: 1. We performed additional RNA-seq analysis and found that TGFβ ligands (TGFβ2 and TGFβ3) and the receptor (TGFBR2) are significantly upregulated in GLIS3-/- cells compared with WT cells at D23_L, suggesting that loss of GLIS3 might activate p-SMAD signaling by upregulating the above genes. We have added these data in Supplementary Fig. 9a. We agree with the reviewer that the detailed mechanism by which GLIS3 controls the TGFβ pathway still needs further investigation.
There is no significant difference in the cell proliferation rate between WT and GLIS3-/- PP2 cells. In addition, when WT and GLIS3-/- PP2 cells were replated at a similar density, GLIS3-/- cells still showed a higher apoptotic rate. Together, this suggests that the high cell death in GLIS3-/- cells is not a consequence of overcrowding.
It is also striking that it starts two days after the change of medium triggering differentiation. Since death also seems to affect PP2 progenitors prior to their differentiation, wouldn't those be expected to die as soon as they become GLIS3-high? This should occur at the latest at 23 days. There is a big gap between day 9 and day 23 during which PP1 cells transform into PP2 (or are selected). Why does it take that long? When does GLIS3 go up in this period?
Response: We monitored GLIS3 expression at D9, D12_L, D16_L, D19_L and D23_L during the PP1-to-PP2 transition and found that GLIS3 was not significantly increased until day 23 (Supplementary Fig. 1f). Thus, no strong difference in apoptosis between GLIS3-/- and WT cells was observed until day 23 (Fig. 3b, 3c). The percentage of apoptotic cells in GLIS3-/- PP2 cells is around two-fold higher than that in WT PP2 cells (Fig. 3a and 3b), which is comparable to the fold change in the percentage of apoptotic cells in GLIS3-/- PP2-β cells versus WT PP2-β cells (Fig. 3d and 3e). However, the overall apoptotic rate is higher in PP2-β cells than in PP2 cells; this might be due to the change of culture medium. PP2 cells were maintained in a relatively rich medium containing EGF and FGF, which facilitates cell survival, whereas PP2-β cells were maintained in basal medium containing only B27. This emphasizes the importance of using a minimal-component differentiation protocol when using hPSC-derived cells to study diseases.
The second main concern is the lack of focus on NEUROG3 (see more specific comments below). It is important to provide a point of comparison with the data in the knockout mice. This is all the more important because there is little in common between the phenotype described in mice and that in human ESCs, other than both resulting in fewer beta cells. If, in addition, the tools and readouts used are different, it limits the comparison even more.
Response: We have performed a new set of experiments to address the reviewer's comment on NEUROG3, which are detailed in the response to the specific comment below.
In addition, we understand the reviewer's concern regarding the description of species differences. To address this issue, we have removed the claim from the abstract. In addition, we have added the following description in the discussion: "The difference between GLIS3-/- hESC-derived cells and Glis3-/- mice might underline the distinction between mouse development and human cell-based systems or the difference between in vitro and in vivo conditions."
Points needing clarification, improvement or correction: 1-It is often unclear in the paper which of the experiments were performed on different KO lines and when done on one, which one was used.
Response: We have added the detailed information on the WT and GLIS3-/- clonal lines used in each figure panel in Supplementary Table 7.

2-Abstract, lines 3-4: Was GLIS3 really not expressed in the previous protocol? It is said later that it is, though at lower levels. The reason for the lack of phenotype in previous attempts is rather speculative. The previous mutations were possibly not total loss-of-function. Arguments for the new ones being total loss-of-function should be discussed somewhere.
Response: We have added the RNA-seq data of GLIS3 expression in INS-GFP+ cells derived using the previous protocol. The expression level is <1, which is considered "not expressed" (Fig. 1k). To be more accurate, we have removed the claim that GLIS3 is expressed at a low level. We also confirmed the results by qPCR on the unsorted cells differentiated using their protocol and cell line. At the terminal stage of the differentiation (day 17), the level of GLIS3 expression is comparable to that of PP1 cells in our protocol. The data are included in Supplementary Fig. 1e.
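For readers reproducing this kind of expression call, the thresholding described above can be sketched as follows. This is an illustrative sketch only: the gene names and FPKM values are hypothetical placeholders, not the study's data; only the cutoff of 1 FPKM follows the convention quoted in the response.

```python
# Illustrative sketch (not the authors' pipeline): classify genes as
# "expressed" using the FPKM < 1 == "not expressed" convention above.
FPKM_THRESHOLD = 1.0  # convention quoted in the response

def is_expressed(fpkm: float, threshold: float = FPKM_THRESHOLD) -> bool:
    """Return True when a gene's FPKM meets the expression cutoff."""
    return fpkm >= threshold

# Hypothetical FPKM values for a few genes in a sorted cell population.
fpkm_values = {"GLIS3": 0.4, "INS": 35.2, "NKX6-1": 12.7}
calls = {gene: is_expressed(v) for gene, v in fpkm_values.items()}
```

With these placeholder values, GLIS3 would be called "not expressed" while INS and NKX6-1 would be called expressed.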
The previous study generated lines harboring frameshift mutations in the zinc finger domain of GLIS3. We have added the discussion "We cannot fully exclude the possibility that the previous mutations are not total loss-of-function" at Page 14 Line 14.
3-Abstract, lines 5: it does not seem justified to talk about a secondary transition in human. This is a concept in mice where there are two waves of endocrine cell production but this does not seem to be conserved in human. The cells may indeed correspond to a later type of progenitor (refer to what it corresponds to in vivo and what the criteria are) but should not be called secondary transition.
Response: We have changed the names to early-stage pancreatic progenitor (PP1) and late-stage pancreatic progenitor (PP2). We would be open to further discussion if the reviewer thinks other names are more appropriate.
4-Page 4, line 12: Clarify if it is known when GLIS3 is expressed in the human pancreas. Do the PP1 cells express GLIS3? This is clarified later, but a reader may wonder already at this point.
Response: GLIS3 is expressed in adult human islets (Supplementary Fig. 1d). Due to ethical concerns, it is very challenging for us to obtain fetal pancreatic tissue to monitor GLIS3 expression.
5-Page 5, line 8: Why do the authors write that there is spontaneous differentiation? It is triggered by a medium change, isn't it?
Response: The reason for using the term "spontaneous differentiation" was to indicate that factors stimulating or forcing the differentiation are not added to the cells. To avoid confusion, we have changed it to "differentiation". We would be open to further discussion if the reviewer recommends another appropriate description.
6-Page 5, line 10: Clarify if this protocol also leads to the differentiation of other monohormonal cell types.
Response: The reviewer raises an important question. We have performed additional flow cytometry experiments to address this question and have included the results in the Supplementary Fig. 1m. Indeed, the majority of α-like, δ-like and ε-like cells that are derived using this protocol are monohormonal.
7-Page 5, line 15: how do the levels of UCN3 or MAFA compare to mature beta cells? The maturity of beta cells in vitro is a big issue at the moment. It is not the main point of the paper, but many readers will be interested to know.
Response: qRT-PCR was performed to compare the expression of UCN3 and MAFA in INS-GFP+ PP2-β cells and human islets. There is no statistically significant difference between PP2-β cells and human islets. We did detect a large variation in UCN3 and MAFA expression among primary human islets. These data have been added as Supplementary Fig. 1n.
8-Page 7, bottom: indicate in the text the effect on GCG cells.
Response: We have now indicated the effect on GCG cells in the paragraph's conclusion at Page 8 Line 18.
9-Page 13, lines 12-14: While the difference with mouse studies may highlight a species difference indeed, it may also reveal a difference between in vitro and in vivo systems. The authors should be more careful.
Response: We understand the reviewer's concern. To address this issue, we have removed the claim from the abstract. In addition, we have added the following description in the discussion: "The difference between GLIS3-/- hESC-derived cells and Glis3-/- mice might underline the distinction between mouse development and human cell-based systems or the difference between in vitro and in vivo conditions."

10- Figure 1C: How many PP1 and PP2 samples were used? This looks like n=1 of each, which is not sufficient considering the modest fold changes. The fold changes are at odds with Fig 1B for the genes that could be compared.

Response: We have added samples to the RNA-seq experiments. In the new Figure 1C, n=2 biological replicates for PP1 and n=3 biological replicates for PP2.

11- Figure 1K: the ratio of secretion at 20 mM glucose versus 2 mM is not fantastic for the PP2-β cells (or for the control islets themselves).
Response: We performed additional GSIS experiments. The fold induction of primary human islets varies between 1.09- and 4.51-fold. Although the GSIS ratio of cells at D30_L derived using the current protocol is not comparable to that of primary adult islets, the major focus of this manuscript is to study the role of GLIS3 in human pancreatic differentiation and hESC-derived β-like cell survival. As reviewer 3 mentioned before, deriving functionally mature human β-cells is not the key point of this manuscript. We feel that PP2 and PP2-β cells derived using the current protocol already provide useful information on the role of GLIS3 in human pancreatic differentiation and hESC-derived β-like cell survival, and facilitate understanding of the downstream signaling of GLIS3.
In addition, we added the following to the discussion: "The GSIS response of PP2-β cells is not indistinguishable from that of human primary islets. Additional optimization might be required to further increase PP2-β cells' response to glucose stimulation." at Page 14 Line 5.
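For readers unfamiliar with the metric, the fold induction discussed in this response is simply the ratio of insulin secreted at high versus low glucose. A minimal sketch follows; the paired secretion values are hypothetical, chosen only so the two results land near the ends of the 1.09- to 4.51-fold range quoted above.

```python
# Hedged illustration: GSIS fold induction = insulin secreted at 20 mM
# glucose divided by insulin secreted at 2 mM glucose. The numbers are
# hypothetical, not measurements from the manuscript.

def fold_induction(secretion_2mM: float, secretion_20mM: float) -> float:
    """Ratio of high-glucose to low-glucose insulin secretion."""
    return secretion_20mM / secretion_2mM

# Hypothetical paired readings (arbitrary units) for two islet preparations.
low_responder = fold_induction(1.10, 1.20)   # ~1.09-fold
high_responder = fold_induction(0.90, 4.06)  # ~4.51-fold
```

A fold induction near 1 means essentially no glucose response; values well above 1 indicate glucose-stimulated secretion.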
12- Figure 2b: repeat numbers? Statistical significance? NEUROG3 seems down. Why are endocrine cell numbers normal? Is it a reduction in transcripts per cell or in the number of cells expressing NEUROG3? What happens at the protein level? Chromogranin A also seems down, though cell numbers are normal. Are these changes in NEUROG3 and CHGA stable with time in vitro?
Response: We have included the repeat numbers and statistical analysis in Fig. 2b. If anything is missing, we would be happy to fix it.
In addition, immunostaining was used to quantify the number and percentage of NEUROG3+ cells in WT and GLIS3-/- hESC-derived cells at D9, D16_L and D23_L. We did not observe a significant difference between WT and GLIS3-/- hESC-derived cells at any of the time points tested. We have included the data in Supplementary Fig. 3c, 3d. However, the fluorescence intensity of NEUROG3+ cells in the GLIS3-/- population is significantly lower than that of WT NEUROG3+ cells (Supplementary Fig. 3g-3i).
We also monitored the percentage of Chromogranin A+ (CHGA+) cells, which is shown in Supplementary Fig. 3e, 3f. Consistent with the NEUROG3+ cells, the number and percentage of CHGA+ cells are comparable in WT and GLIS3-/- cells.
Together, this suggests that GLIS3 knockout does not affect the formation of endocrine cells or their progenitors at different stages of differentiation.
13- Figure 3b and d: the Annexin V flow profiles are very unusual. Annexin V is usually much easier to gate, forming two distinct populations rather than the continuum used here. The continuum is much more difficult to gate and could lead to wrong interpretations. Attention to the "Topro" misspelling in b. Cell death is, however, shown in many different ways and is likely real, but since it was not seen in the mouse model, that's something to be careful about.
Response: We apologize for the poor separation of Annexin V+ and Annexin V- cells. To determine whether this is due to the hESC-derived population or the staining protocol, we first tested the same staining protocol on EndoC-βH1 cells and found a clear separation of Annexin V+ and Annexin V- cells in the EndoC-βH1 cell line (a in the following figure). Secondly, to improve the accuracy of the gating strategy, we included a positive control, cells treated with 10 µM Camptothecin (Sigma) for 4 hours, to facilitate gating (c and d in the following figure and Supplementary Fig. 5c).
In addition, we repeated the experiments in Fig. 3b, 3d and Supplementary Fig. 3a (now Supplementary Fig. 4c) with another staining kit with a different fluorescence conjugate (the old kit: PE-Annexin V from BD Biosciences; the new kit: Alexa 647-conjugated Annexin V from Thermo Fisher Scientific). Similarly, cells treated with 10 µM Camptothecin and EndoC-βH1 cells were used as positive controls to facilitate gating. The PP2-β cells did not form distinct populations with the Alexa 647 kit either (a and b in the following figure). The gate set based on the Camptothecin-treated sample was also applied to the INS-GFP+ cells (Fig. 3d, 3e).
Together, we have repeated our experiments using two Annexin V staining kits with different fluorescence conjugates. In addition, we have added the positive controls of EndoC-βH1 cells and hESC-derived cells treated with Camptothecin to facilitate gating. Importantly, the Annexin V staining using the two different kits led to the same conclusion: GLIS3-/- cells show significantly increased apoptosis compared with WT cells. This can be visualized by plotting the live cells stained for Annexin V on a histogram (Fig. 3f). We also measured the median fluorescence intensity (MFI) and found that GLIS3-/- cells have a significantly higher MFI for Annexin V than WT cells (Fig. 3g). We feel these data are sufficient to support our conclusion that GLIS3-/- cells show significantly increased apoptosis compared with WT cells.
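To make the MFI readout concrete, here is a minimal sketch of the kind of comparison described above. The per-cell intensity values are synthetic placeholders, not real data; the claim in the response corresponds to the GLIS3-/- versus WT MFI ratio being significantly greater than 1 across replicates.

```python
# Illustrative only: median fluorescence intensity (MFI) comparison for
# Annexin V signal between two populations. Intensity lists are synthetic.
from statistics import median

wt_intensities = [102, 110, 98, 120, 105, 99, 111]    # hypothetical WT cells
ko_intensities = [205, 220, 198, 240, 210, 199, 222]  # hypothetical GLIS3-/-

mfi_wt = median(wt_intensities)
mfi_ko = median(ko_intensities)
ratio = mfi_ko / mfi_wt  # > 1 indicates higher Annexin V signal in GLIS3-/-
```

The median is used rather than the mean because flow cytometry intensity distributions are typically skewed, so the MFI is less sensitive to a few very bright events.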
We thank the reviewer for catching this mistake. We have now corrected it in the current version.
One-Pot Synthesis of Semiconducting Quantum Dots–Organic Linker–Carbon Nanotubes for Potential Applications in Bulk Heterojunction Solar Cells
Materials and composites with the ability to convert light into electricity are essential for a variety of applications, including solar cells. The development of materials and processes needed to boost the conversion efficiency of solar cell materials will play a key role in providing pathways for dependable light-to-electric energy conversion. Here, we show a simple, single-step technique to synthesize photoactive nanocomposites by coupling carbon nanotubes with semiconducting quantum dots using a molecular linker. We also discuss and demonstrate the potential application of the nanocomposite for the fabrication of bulk heterojunction solar cells. Cadmium selenide (CdSe) quantum dots (QDs) were attached to multiwall carbon nanotubes (MWCNTs) using perylene-3,4,9,10-tetracarboxylic dianhydride (PTCDA) as a molecular linker through a one-step synthetic route. Our investigations revealed that PTCDA tremendously boosts the density of QDs on MWCNT surfaces and leads to several interesting optical and electrical properties. Furthermore, the QD-PTCDA-MWCNT nanocomposites displayed semiconducting behavior, in sharp contrast to the metallic behavior of the MWCNTs. These studies indicate that PTCDA, interfaced between QDs and MWCNTs, acted as a molecular bridge that may facilitate charge transfer between QDs and MWCNTs. We believe that the investigations presented here are important for discovering simple synthetic routes to photoactive nanocomposites with potential applications in the fields of optoelectronics and energy conversion devices.
The excitons generated far from the donor-acceptor interface, with limited exciton diffusion length (a few nanometers for conjugated polymers, <20 nm) [2] and exciton lifetime (<50 ns) [23], may result in exciton recombination. The electron-hole recombination yields photoemission and results in a decrease in PCE. When the D-A interface is within the exciton diffusion length, the excitons may find one or multiple D-A interfaces in their close vicinity, which may reduce recombination losses [2,24]. Charge transport through continuous conductive pathways in the D-A bulk matrix to the electrodes aids in improving the PCE.
In this manuscript, QD-PTCDA-MWCNT photoactive nanocomposites were synthesized in a single one-pot step, which does not require multistep sequential synthesis and purification of the nanocomposites. This also minimizes chemical waste. Overall, the synthesis of the D-A active material demonstrated in this report is less labor intensive and therefore saves time and energy in the production of photoactive nanocomposites for device fabrication. The QD-PTCDA-MWCNT nanocomposites possess two interfaces: covalent attachment of PTCDA to QDs and π-π stacking between PTCDA and MWCNTs (Figure 1). Incorporation of PTCDA as a molecular linker in the nanocomposite significantly improved the binding interactions between QDs and MWCNTs; an ~100-fold increase in QD attachment on MWCNTs in the presence of PTCDA was observed in the electron microscopy studies. PTCDA is a π-conjugated organic molecule with absorption and emission in the visible spectral region (Figure 2). PTCDA also acted as an electron acceptor for a photoexcited QD donor in the nanocomposites [25,26]. The photoinduced charge transport of the nanocomposites was investigated using devices fabricated with the QD-PTCDA-MWCNT nanocomposites.
Results and Discussion
The electron donor-acceptor morphology of the active materials in BHJSCs is crucial to controlling the D-A interface [27,28], and it depends on a number of factors, including the solvent evaporation rate, the solubility and compatibility of the donor and acceptor in the processing solvent, and the substrate surface chemistry and morphology. An intimate D-A interaction may facilitate efficient charge dissociation and transport to the charge-collecting electrodes [28]. The main objective of this work is to synthesize and characterize photoactive nanocomposites in a single pot and to study the photoinduced charge transport in the nanocomposites. The electron donor and electron acceptor in the nanocomposite were intimately linked through a covalent bond. Another important question we intend to investigate is the role of the organic linker in the active material. We chose PTCDA because its reactive anhydride groups can readily form covalent bonds with amine-functionalized QDs, providing close contact between them. Furthermore, the fused phenyl rings of PTCDA and MWCNTs can interact through π-π stacking. Therefore, PTCDA bridges QDs and MWCNTs, which presumably aided in facilitating charge transfer from photoexcited QDs to MWCNTs and the collecting electrodes.
Infrared Spectroscopy Characterization
FTIR spectroscopy is a powerful analytical technique for qualitative and quantitative structural analysis of organic and inorganic molecules. We utilized FTIR spectroscopy to probe the QD-PTCDA interface in the nanocomposite. The nanocomposite samples for IR investigations were prepared by depositing the nanocomposites, suspended in chloroform, onto a KBr disc. The chloroform was evaporated and another KBr disc was placed on top to sandwich the samples between two KBr discs. Importantly, the peak at 1770 cm⁻¹ in the FTIR spectrum of the nanocomposite originates from the C=O of the anhydride rings of PTCDA (Figure 3A, Table 1) [29]. Figure 3C shows the infrared vibrational peaks for CdSe QDs (black spectrum) and the QD-PTCDA species (red spectrum). The ODA peaks at 1452 cm⁻¹, 1590 cm⁻¹, and 2917-2848 cm⁻¹ are attributed to C-H bending, N-H bending from NH2, and C-H stretching from the C18 alkyl chain, respectively [30]. ODA (C18-NH2) is believed to be interdigitated with TOPO through hydrophobic-hydrophobic interactions [31,32]. The NH2 groups present on the QD surface were utilized for covalent bonding with the anhydride groups of PTCDA through a spontaneous reaction, yielding amide bonds at the QD-PTCDA interface. The IR spectrum in Figure 3C shows that the peaks at 1460 cm⁻¹, 1592 cm⁻¹, 1664 cm⁻¹, and 2911-2848 cm⁻¹ correspond to C-H bending, N-H bending, amide C=O stretching, and C-H stretching, respectively. Importantly, the C=O peak from the anhydride ring of PTCDA at 1770 cm⁻¹ disappeared, while the appearance of a new peak at 1664 cm⁻¹ corresponding to amide bonds in the nanocomposites confirmed the covalent bonding between PTCDA and ODA. The proposed covalent binding of the QD with PTCDA is depicted in Figure 3D.
Transmission Electron Microscopy (TEM) Analysis of QD-PTCDA-MWCNTs
Figure S2 shows a TEM image of CdSe QDs prepared by the solution-phase method but without the addition of MWCNTs and PTCDA to the reaction vessel. We estimated the QD attachment density on the MWCNT surface with and without the PTCDA linker molecule. The MWCNT areas in Figure 4A,B were estimated to be 4.9 × 10⁻¹³ m² and 7.2 × 10⁻¹⁴ m², respectively. The average number of QDs functionalized in the presence of PTCDA on a 1 µm × 0.1 µm area (length × diameter of the MWCNTs) was ~2600, whereas only ~20-25 QDs were observed in the absence of QD functionalization with PTCDA. Specifically, an ~100-fold increase in QD attachment on MWCNTs was observed following the covalent functionalization of QDs with PTCDA. These results are consistent with our previous report, where a large enhancement in QD attachment to MWCNTs was exhibited in the presence of a linker organic molecule, ethanethiol-perylene tetracarboxylic diimide (ETPTCDI, which is similar in chemical structure to PTCDA), whereas the observed QD density on MWCNTs in the absence of ETPTCDI was significantly lower [36]. Backscattered electron (BSE) imaging measurements also exhibited a large contrast in the micrograph due to QD attachment to MWCNTs (Figure 4C), where brighter areas in the micrograph correspond to heavier atomic elements (Cd and Se). Overall, these results confirmed a significant enhancement in the attachment of QDs on MWCNTs by utilizing molecular interfacial engineering through the employment of a molecular linker between MWCNTs and QDs.
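The ~100-fold figure above follows directly from the quoted counts. A back-of-the-envelope sketch of the calculation: the QD counts are those stated in the text, but approximating the imaged MWCNT as a length × diameter rectangle is our simplifying assumption for this illustration, not the authors' exact image-analysis procedure.

```python
# Rough illustration of the attachment-density comparison. QD counts are
# those quoted in the text; treating the imaged MWCNT as a length x
# diameter rectangle is an assumption made for this sketch.

def qd_density_per_um2(qd_count: float, length_um: float, diameter_um: float) -> float:
    """QDs per square micrometer of projected nanotube area."""
    return qd_count / (length_um * diameter_um)

with_linker = qd_density_per_um2(2600, 1.0, 0.1)   # ~2600 QDs with PTCDA
without_linker = qd_density_per_um2(25, 1.0, 0.1)  # ~20-25 QDs without
fold_increase = with_linker / without_linker       # ~100-fold, as reported
```

Taking the upper end of the ~20-25 count without the linker gives a fold increase of just over 100, consistent with the "~100-fold" statement in the text.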
The elemental composition measurements using EDS showed elemental Cd, Se, and C peaks at 3.133 keV, 1.379 keV, and 0.277 keV, respectively (Figure 4D). Whereas the Cd, Se, P, and O signals in the EDS spectrum originated from the CdSe QDs, the carbon signal is expected to come from the QDs, MWCNTs, and organic ligands in the nanocomposites. Overall, our TEM, BSE, and EDS measurements indicated that the QD density on the MWCNTs was significantly increased in the presence of PTCDA compared with nanocomposites prepared without the molecular linker. These results confirm the significance of an appropriate molecular linker at the QD-MWCNT interface, which markedly enhanced the concentration of QDs on MWCNTs through short-range molecular interactions.
Thermogravimetric Analysis (TGA)
About 6.7 mg of nanocomposite was heated from 20 °C to 950 °C at a rate of 20 °C/min under flowing N2 gas. TGA of the individual components was also performed to determine their decomposition rates and weight losses, providing quantitative decomposition information for each component. In these experiments, the apparent decomposition temperatures (Td) for PTCDA and the QDs were ~295 °C and 597 °C, respectively (Figure 5A,B), whereas only ~5% of the initial MWCNT mass decomposed at ~670 °C (Figure 5C). Importantly, PTCDA, the QDs, and the MWCNTs showed distinct decomposition regions of 120-400 °C (Figure 5A), 480-656 °C (Figure 5B), and 608-950 °C (Figure 5C), respectively, without significant overlap between them, which allowed an accurate gravimetric analysis of the nanocomposites. TGA of the nanocomposite showed three major steps with Td of ~275 °C, 551 °C, and 656 °C, representing the decomposition of low molecular weight organic species (TOPO, ODA, and PTCDA), QDs, and MWCNTs, respectively. The ratio of the individual components in the nanocomposite was estimated from the decomposition curve in Figure 5D. Because the mass loss-temperature profiles for the organics, QDs, and MWCNTs are well separated, the TGA data provide a convenient way of estimating the mass of each species in the nanocomposites. The weight loss ratio for organics:QDs:MWCNTs in the nanocomposite was estimated to be 76:51:1. The weight loss of the MWCNTs was minimal (<1.5%) over the 25-950 °C region, which suggests that the organic molecules contributed >99% of the total weight loss in the 25-390 °C region; we therefore neglect the MWCNT contribution to the total weight loss in the present discussion. Assuming complete coverage of the QDs (average diameter = 5 nm) with close packing of the organic species, we estimate an organic-to-QD mass ratio of ~0.5. The experimental organic-to-QD mass ratio, however, is ~1.5, about three times the estimated value. This simple analysis suggests that the QD-PTCDA-MWCNTs nanocomposite likely contains more organic molecules than assumed in the calculation. Several factors add to the uncertainty: the compositional ratio of the different organic species (PTCDA, ODA, and TOPO) cannot be resolved because of their similar thermogravimetric response; the expected organic-to-QD mass ratio was estimated from close packing of molecules on a flat surface of the same area as a 5 nm diameter QD, which underestimates the organic-to-QD mass loss; and the ratio of the three major organic molecules was assumed to be 1:1:1, which is likely not the case in the experimental samples. Therefore, although the TGA measurements provide useful compositional information, we do not overinterpret the mass loss differences between the experimental TGA results and the geometric estimate, and these ratios should be treated with caution.
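The ~0.5 geometric estimate above can be reproduced with a back-of-the-envelope calculation. The QD diameter (5 nm) comes from the text; the bulk CdSe density, the ligand footprint, and the average ligand molar mass are typical literature values assumed for this sketch, not numbers from the paper:

```python
# Rough geometric organic-to-QD mass ratio for a ligand monolayer on a 5 nm QD.
# RHO_CDSE, LIGAND_FOOTPRINT_NM2, and LIGAND_MOLAR_MASS are assumed values.

import math

AVOGADRO = 6.022e23          # 1/mol
RHO_CDSE = 5.81              # g/cm^3, bulk CdSe (assumed)
LIGAND_FOOTPRINT_NM2 = 0.20  # nm^2 per ligand, typical alkyl chain (assumed)
LIGAND_MOLAR_MASS = 300.0    # g/mol, rough TOPO/ODA average (assumed)

r_nm = 2.5                                                  # 5 nm diameter QD
qd_mass_g = (4 / 3) * math.pi * r_nm**3 * 1e-21 * RHO_CDSE  # nm^3 -> cm^3
n_ligands = 4 * math.pi * r_nm**2 / LIGAND_FOOTPRINT_NM2    # close-packed shell
shell_mass_g = n_ligands * LIGAND_MOLAR_MASS / AVOGADRO

ratio = shell_mass_g / qd_mass_g  # ~0.5, matching the estimate in the text
```

With these assumed inputs the monolayer comes out near half the QD mass, so the experimental ~1.5 ratio indeed implies excess organic material beyond a single close-packed shell.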
Photophysical Quenching Studies of QD-PTCDA-MWCNTs Nanocomposites
Photophysical fluorescence quenching experiments on the QD-PTCDA-MWCNTs nanocomposites were performed to study the effect of PTCDA on the photophysical properties of the QDs and of the nanocomposites. We anticipated that quenching of the QD emission would be stronger when QDs are intimately attached to the MWCNTs through PTCDA as a molecular linker, owing to enhanced electron and/or energy transfer between QDs and MWCNTs or to the formation of ground-state dark donor-acceptor complexes. The Stern-Volmer relationship was used to estimate the fluorescence quenching efficiency between donors and acceptors: I0/I = 1 + Ksv[Q]. Here, I0 and I represent the donor emission intensity in the absence and presence of quencher, respectively [37], [Q] is the quencher concentration, and Ksv is the Stern-Volmer constant, obtained from the slope of the Stern-Volmer plot. Ksv is a measure of quenching efficiency and of the accessibility of the acceptor to the donors: a higher Ksv represents more efficient quenching of the donor emission by the acceptors. Ksv values for four different donor/acceptor (D/A) pairs are given in Table 2.
The fluorescence quenching constant (Ksv) for the QD/MWCNTs pair was ~7 times larger in the presence of PTCDA than in its absence (Table 2 and Figure 6). The QD/MWCNTs, QD/PTCDA, PTCDA/MWCNTs, and QD-PTCDA/MWCNTs pairs exhibited Ksv values of 0.16, 0.25, 1.05, and 1.09, respectively (Table 2). Whereas the Ksv for the QD/MWCNTs pair was the lowest of the four pairs studied, the highest Ksv values were observed for the PTCDA/MWCNTs and QD-PTCDA/MWCNTs pairs, more than 6 times larger than in the absence of MWCNTs. These results imply significantly enhanced quenching of the donor species (PTCDA and QD-PTCDA) by the MWCNT acceptor, either through the formation of ground-state complexes (QD-MWCNTs and QD-PTCDA-MWCNTs) or through quenching of the excited-state donors by the MWCNTs. The fact that the PTCDA-containing pairs (QD-PTCDA/MWCNTs and PTCDA/MWCNTs) showed the highest Ksv values confirms strong molecular interactions between QDs and MWCNTs mediated through the PTCDA molecular linker. These results also underscore the significance of PTCDA as a molecular linker for energy- and charge-transfer processes in nanocomposites of QDs and MWCNTs [38,39]. Overall, the quenching studies agree well with the results and conclusions obtained from the IR, TEM, and EDS data.
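The Ksv extraction described above is a linear fit of the Stern-Volmer relation. A minimal sketch with synthetic data, generated from the reported QD-PTCDA/MWCNTs value of 1.09 purely to illustrate the fitting step (these are not the measured quenching data):

```python
# Stern-Volmer fit: Ksv is the slope of (I0/I - 1) versus quencher
# concentration [Q]. Data below are synthetic, for illustration only.

def stern_volmer_ksv(concentrations, i0_over_i):
    """Least-squares slope of (I0/I - 1) vs [Q], forced through the origin."""
    num = sum(c * (r - 1.0) for c, r in zip(concentrations, i0_over_i))
    den = sum(c * c for c in concentrations)
    return num / den

ksv_true = 1.09
q = [0.0, 0.5, 1.0, 1.5, 2.0]            # quencher concentration (arb. units)
ratios = [1.0 + ksv_true * c for c in q]  # ideal Stern-Volmer response

ksv_fit = stern_volmer_ksv(q, ratios)     # recovers 1.09
```

Forcing the fit through the origin reflects the Stern-Volmer intercept of 1 (i.e., I0/I = 1 at zero quencher).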
Electrical Transport Characterization of the QD-PTCDA-MWCNTs Nanocomposites
Room-temperature current-voltage (I-V) characteristics were measured on a two-terminal device fabricated from the QD-PTCDA-MWCNTs nanocomposite (Figure 7A). Figure 7B displays the current response as a function of the applied electric potential bias. At biases below 0.5 V, the device displayed ohmic behavior, with the current directly proportional to the applied bias (inset of Figure 7B). As the bias was increased, the current response deviated from ohmic and followed a power law [40] of the form I ∝ V² (inset of Figure 7B). Typically, this type of behavior is seen for space-charge-limited currents (SCLC), where the current is governed by the Mott-Gurney equation [41]: J = (9/8) ε0 εr µ V²/L³. Here, J is the current density, µ is the charge-carrier mobility, ε0 is the free-space permittivity, εr is the dielectric constant, V is the applied voltage, and L is the separation between the contacts.
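As a numerical illustration of the SCLC regime, a small sketch of the Mott-Gurney relation. The mobility, dielectric constant, and electrode gap below are placeholder values chosen for illustration, not device parameters from this work:

```python
# Mott-Gurney space-charge-limited current density:
# J = (9/8) * eps0 * eps_r * mu * V^2 / L^3.
# Material and geometry parameters below are assumed placeholders.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def mott_gurney_j(v_volts: float, mu: float, eps_r: float, gap_m: float) -> float:
    """SCLC current density in A/m^2."""
    return 9.0 / 8.0 * EPS0 * eps_r * mu * v_volts**2 / gap_m**3

# Assumed: mu = 1e-6 m^2/(V*s), eps_r = 4, L = 2 um.
j1 = mott_gurney_j(1.0, mu=1e-6, eps_r=4.0, gap_m=2e-6)
j2 = mott_gurney_j(2.0, mu=1e-6, eps_r=4.0, gap_m=2e-6)
```

Doubling the bias quadruples the current density, which is the I ∝ V² signature used to identify the SCLC regime in Figure 7B.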
The temperature dependence of the electrical resistance over 190 K ≤ T ≤ 300 K was measured for the MWCNTs and QD-PTCDA-MWCNTs samples (Figure 7C). The resistance-temperature curve for the MWCNTs displayed metal-like behavior, whereas the QD-PTCDA-MWCNTs sample showed semiconducting behavior. These behaviors are consistent with the material properties of MWCNTs and of the QD-PTCDA-MWCNTs nanocomposite. Importantly, the I-V curves for the QD-PTCDA-MWCNTs nanocomposite suggest that the MWCNTs were covered with semiconducting material on their surface and that there were few or no ohmic MWCNT-MWCNT electronic pathways in the devices. The electrical resistance of the QD-PTCDA-MWCNTs sample increased slowly with decreasing temperature, from 80.4 kΩ at 300 K to 2.46 MΩ at 190 K. For further analysis, the data were fitted using the Arrhenius model for the temperature dependence of resistance, R(T) = R0 exp(Eg/kBT), where Eg is the activation energy of charge transport in the devices and kB is the Boltzmann constant [42]. The natural logarithm of the resistance, ln(R), versus the inverse temperature, T−1, is plotted in the inset of Figure 7C. The ln(R) versus T−1 plot was well fit by a straight line, indicating a bandgap-dominated, Arrhenius-like temperature dependence. The linear fit of the data in the Figure 7C inset yielded an activation energy on the order of 165 meV. This value is of a similar order of magnitude to those reported for low-dimensional materials: the temperature-dependent electrical transport of disordered reduced graphene oxide [42] and of individual B-doped nanotubes exhibited activation energies of 55-70 meV [43], and Johnston et al. have shown that activation energy values for carbon nanotubes can be ~150 meV [44].
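As a sanity check on the quoted activation energy, a two-point Arrhenius estimate from the two resistances given in the text; the full linear fit of ln(R) versus 1/T performed in the paper uses all data points and can give a somewhat different value:

```python
# Two-point Arrhenius estimate of the transport activation energy from
# R(T) = R0 * exp(Eg / (kB * T)), using the resistances quoted in the text
# (80.4 kOhm at 300 K, 2.46 MOhm at 190 K).

import math

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy_ev(r1, t1, r2, t2):
    """Eg from two (R, T) points: ln(R2/R1) = (Eg/kB) * (1/T2 - 1/T1)."""
    return KB_EV * math.log(r2 / r1) / (1.0 / t2 - 1.0 / t1)

eg = activation_energy_ev(80.4e3, 300.0, 2.46e6, 190.0)  # ~0.15 eV
```

The two-point estimate lands near 0.15 eV, the same order as the 165 meV obtained from the full fit.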
Photoinduced Current Characterization
The photoinduced charge generation characteristics of the QD-PTCDA-MWCNTs devices were characterized by photoexcitation of the donors in a two-electrode device using a Newport solar simulator (model 67005) under AM 1.5 conditions. Figure 8A displays the current response with applied bias under light ON and OFF conditions. Nonlinear behavior of the photoinduced current (Iph) was seen with increasing voltage under both light ON and OFF conditions. Iph for the devices created with QD-PTCDA-MWCNTs was enhanced by ~130% when the photon power intensity was increased from 20 mW/m² to 100 mW/m² (Figure 8B). Iph as high as 140 nA was observed at an illumination power of 130 mW/m². The dependence of Iph on the light power intensity P (20 mW/m² ≤ P ≤ 100 mW/m²) showed a fractional power dependence of the form Iph ∝ P^γ with γ = 0.587 (inset of Figure 8B). The power exponent γ can shed light on the recombination mechanisms at work during photoconduction. The Iph response in disordered semiconductors typically follows a fractional power law with exponent γ between 0.5 and 1 (0.5 < γ < 1): a value of γ close to 1 denotes monomolecular recombination processes, whereas γ = 0.5 is associated with bimolecular recombination. Broadly speaking, any fractional value of γ points to the modulation of trap states and implicates their role in the recombination processes that occur during charge conduction [45]. Typical fractional γ values reported for low-dimensional materials range from 0.25 in graphene [46] to 0.4 in single-layer WS2 [47]; these mechanisms have been demonstrated in other photoactive 2D materials as well [47,48]. Another plausible explanation for the low power exponents measured on our devices is a photogating effect, i.e., a substrate-generated photovoltage that can act as an electrical gate on the measured device and modulate trap states [49,50]. We believe that the fractional γ values seen in our measurements are due to trap states in the composite materials originating from defects, disorder, and dissimilar interfaces created during the synthesis process.
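The exponent γ reported above is the slope of a log-log fit of photocurrent against illumination power. A minimal sketch with synthetic data generated from γ = 0.587, illustrating only the fitting step (these are not the measured photocurrents):

```python
# Power-exponent extraction for Iph ~ P^gamma: gamma is the least-squares
# slope of log(Iph) versus log(P). Data below are synthetic.

import math

def power_exponent(powers, currents):
    """Least-squares slope of log(I) vs log(P)."""
    xs = [math.log(p) for p in powers]
    ys = [math.log(i) for i in currents]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

gamma_true = 0.587
p = [20.0, 40.0, 60.0, 80.0, 100.0]     # mW/m^2, the range quoted in the text
iph = [pi ** gamma_true for pi in p]    # ideal fractional power law

gamma_fit = power_exponent(p, iph)      # recovers 0.587
```

On noisy measured data the same slope would be extracted with a least-squares fit over the log-log plot, exactly as sketched here.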
Temperature-Dependent Photocurrent Characteristics
In a semiconductor, Iph depends on both the illumination power and the temperature. An increase in temperature may enhance Iph through thermal agitation, which provides energy to overcome the barrier for photoconduction [51]. At high temperatures, however, the reverse can be true: collisions of charges with atoms increase, resulting in enhanced charge recombination [52] and an increased resistance to charge propagation. The photocurrent-temperature characteristics of the QD-PTCDA-MWCNTs photodiodes were studied between 273 K and 350 K. The normalized Iph showed an exponential increase (~50%) up to 338 K, followed by a plateau above 338 K (Figure 8C). This behavior is attributed to the loss of charges through collisions at higher temperature. Similar studies on a graphene-based photodetector reported a >50% increase in photocurrent over the range 150-400 K [53].
Fabrication and Characterization of BHJSC That Comprise the QD-PTCDA-MWCNTs Nanocomposite
Figure 9A,B show schematics of the electron transfer between the individual components of the nanocomposite after photoexcitation of CdSe and PTCDA. A schematic of the BHJSC devices created with the QD-PTCDA-MWCNTs nanocomposite is shown in Figure 9. In the first step, photoexcited electrons were transferred from the excited energy level of the QDs to the bridging molecule PTCDA. The electron in the excited-state energy level (LUMO) of PTCDA was then transferred to the MWCNTs. The last step of electron transport in the nanocomposite is the extraction of electrons at the
cathode (gold/palladium). Electrons from the excited level of the QDs can also be transferred directly to the MWCNTs (Figure 9B). A similar photoinduced electron generation can occur in PTCDA, with the excited-state electron in PTCDA transferred to the MWCNTs. The holes generated in the QDs were transferred to the MWCNTs through the PTCDA mediator; similarly, holes formed in PTCDA can be transferred directly to the MWCNTs, as shown in Figure 9B. Thus, the MWCNTs in the nanocomposite can carry both the electrons and the holes generated in the nanocomposites after
photoexcitation. This is likely to contribute to the low photon-to-electron conversion efficiency of these nanocomposites. A thin layer of PEDOT:PSS, an electron-blocking material, was used at the anode to reduce electron transport to the anode surface.
A typical I-V curve of a BHJSC device fabricated using the QD-PTCDA-MWCNTs nanocomposite under light illumination (AM 1.5) is shown in Figure 9D. The typical values of the short-circuit current (Isc), open-circuit voltage (Voc), fill factor (FF), and photon conversion efficiency (PCE) were 0.25 mA cm−2, 0.77 V, 0.43, and 0.3%, respectively. These devices exhibited an FF comparable to the reported range of 0.4-0.7 for some organic solar cells [54]. PV cell systems comprising graphene/CdSe and graphitic carbon/CdTe showed PCEs of 1.25% and 1.36%, respectively [55,56]. The BHJSCs fabricated using a nanocomposite without PTCDA, however, displayed a much lower PCE (0.1%), suggesting that the PTCDA molecular linker plays an important role in improving charge generation and transfer between QDs and MWCNTs. Importantly, PTCDA increased the QD concentration on MWCNTs by about two orders of magnitude, as observed in the TEM measurements. Therefore, both the intimate proximity of QDs and MWCNTs through PTCDA as a bridge molecule and energetic considerations likely contribute to the enhanced charge transfer and charge transport between QDs and MWCNTs. These results are consistent with our previous studies, where a perylenediimide derivative was used as a bridging molecule for QDs and MWCNTs [36]. The typical PCE value of the QD-PTCDA-MWCNTs nanocomposite devices was ~0.3%, suggesting that only a small fraction of the incident photons on the device converted into useful photoinduced current; the major portion of the photons was converted into heat and/or other non-electrical forms. We attribute the low PCE of the devices reported here to a dominant
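The figures of merit quoted above are related by the standard expression PCE = Jsc·Voc·FF/Pin. In the sketch below, the incident power density Pin is an assumption (the nominal AM 1.5 value of 100 mW/cm²), so the resulting number is illustrative only and need not match the PCE measured under the lamp conditions described in Methods:

```python
# Standard photovoltaic figures of merit. Jsc, Voc, and FF are the device
# values quoted in the text; Pin = 100 mW/cm^2 is an assumed nominal value.

def pce_percent(jsc_ma_cm2: float, voc_v: float, ff: float, pin_mw_cm2: float) -> float:
    """Power conversion efficiency in percent."""
    p_out = jsc_ma_cm2 * voc_v * ff  # mW/cm^2, since mA * V = mW
    return 100.0 * p_out / pin_mw_cm2

eff = pce_percent(0.25, 0.77, 0.43, pin_mw_cm2=100.0)
```

The same relation, evaluated with the actual incident power of the measurement, is what connects the measured I-V curve of Figure 9D to the reported PCE.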
charge recombination mechanism. The randomly dispersed MWCNTs may carry both electrons and holes, leading to enhanced charge recombination. Rationally designing charge-transport pathways that minimize recombination by routing electrons and holes through separate, non-communicating channels is likely to enhance the PCE. Since the PCE of a device also depends on the density of photoactive components and on processing parameters, further optimization of the density of charge-generating species (for example, QDs and PTCDA) may also improve the PCE. The BHJSCs in the present studies were fabricated under ambient conditions and were not hermetically sealed from external environmental factors. The presence of oxidative agents and other reactive species can adversely affect device performance [36]. Fabricating and processing devices in an inert atmosphere and protecting them from external species is also crucial to their long-term stability. Some of these experiments are underway and will be reported in the future.
Methods
The characterization of QDs was performed using UV-Visible spectroscopy (Perkin-Elmer Lambda 25, Shelton, CT, USA), fluorescence spectroscopy (Perkin-Elmer LS-55, Shelton, CT, USA), scanning electron microscopy (Thermo Fisher Quanta 450, Hillsboro, OR, USA), and transmission electron microscopy (Hitachi 7650, Tokyo, Japan). Temperature-dependent electrical transport data were acquired using a home-built data acquisition system comprising a Janis SHI-4-1 high-vacuum closed-cycle refrigerator system (Lake Shore, Westerville, OH, USA) and a Keithley 2400 source meter (Keithley, Solon, OH, USA), both controlled by LabView code. Device characterization was carried out using an AM 1.5 solar simulator (Newport Inc., Irvine, CA, USA) with a 150 W xenon arc lamp. The measured photon flux of the xenon lamp was 130 mW/m² at 0.6 m from the source, as measured with a photodetector (Melles Griot, Carlsbad, CA, USA). Thermogravimetric analysis (TGA) was performed using a TA Instruments Q50; for TGA, the sample was heated to 900 °C at a heating rate of 20 °C/min under flowing N2. Backscattering electron (BSE) measurements of the nanocomposites were performed using energy-dispersive X-ray spectroscopy (Oxford X-Max 50, Abingdon, UK) and scanning electron microscopy (Thermo Fisher Quanta 450) by coating QD-PTCDA-MWCNTs nanocomposites on a clean glass substrate, which was thoroughly dried in an oven at 80 °C for 20 min. The dried nanocomposite film was coated with a thin conductive metal layer (~5-10 nm) using a Denton Vacuum Desk III sputter coater; the conductive coating prevents charging of the specimen and improves the signal-to-noise ratio in SEM imaging. For TEM analysis, the nanocomposites were deposited on a TEM grid, and the solvent was evaporated at 37 °C for ~2 h prior to imaging.
MWCNT Growth Procedure
The growth of the MWCNTs on a SiO2 substrate was achieved by the air-assisted chemical vapor deposition technique [57-59]. A horizontal tube furnace was purged with argon gas and heated to 790 °C in an argon environment, and a solution of ferrocene (catalyst precursor) in xylene (carbon source) was injected continuously into the furnace with a syringe pump at a constant rate of 12 mL/h. The injected solution vaporized as it entered the furnace, and the vapor was carried into the reaction zone by an argon/hydrogen gas mixture (85%/15% ratio) at a flow rate of ~400 sccm. During growth, a small amount of air (~2.5 sccm) was mixed into the reaction environment to maintain catalyst activity. The reaction time was controlled by adjusting the feeding time of the xylene/ferrocene solution. After the growth period, the quartz tube reactor was cooled to room temperature under argon. An array of densely packed, aligned MWCNTs was obtained on the SiO2 substrate in both horizontal and vertical directions.
Synthesis of Photoactive Nanocomposites
CdSe QDs were prepared using a standard synthetic procedure [60]. Octadecylamine (2.05 g) was heated in a two-neck 100 mL round-bottom flask at 150 °C for roughly 30 min to remove all water from the ODA. The flask was cooled to room temperature, and a mixture of CdO (13 mg), SA (0.10 g), and TOPO (2.05 g) was added to the reaction flask. The flask was sealed and placed under a continuous flow of argon, and the temperature was raised to 300 °C. After the reaction mixture turned pale yellow, the temperature was reduced to 70 °C, and MWCNTs (0.5 mg) suspended in dichloromethane (DCM) were added. The temperature of the QD-MWCNTs mixture was again increased to 300 °C, and Se powder (79 mg) dissolved in trioctylphosphine (1 mL) was swiftly injected. Following this, 1.0 g of powdered PTCDA was added to the reaction mixture, and the reaction was left for 48 h. The nanocomposite sample was then extracted with a glass pipette and air-cooled to room temperature, upon which the product mixture turned into a solid gel. Approximately 1 mL of chloroform was added to the nanocomposite, turning the dried solid into a turbid liquid. The test tube was filled with ethanol (~5 mL) and shaken vigorously to remove excess amine, PTCDA, and TOPO from the solution. The test tubes containing nanocomposite, ethanol, and chloroform were centrifuged at 14,000 rpm for 15 min. The hydrophobic nanocomposite precipitated as pellets at the bottom of the test tubes, and the supernatant was decanted and discarded. In most cases, three purification cycles were performed using this procedure. The nanocomposite precipitate was suspended in 2-3 mL of fresh chloroform for future use. Figure 2 shows the solution-phase UV-Vis and fluorescence spectra of the QDs and PTCDA. The formation of CdSe QDs was also confirmed by XRD measurements (Figure S1).
Summary
In this work, we synthesized and characterized the QD-PTCDA-MWCNTs nanocomposite in a one-pot synthesis for the fabrication of BHJSCs. The interaction between QDs and MWCNTs through a molecular linker (PTCDA) was studied and characterized using spectroscopic and microscopic techniques. Whereas the FTIR analysis confirmed covalent bonds between the QDs and PTCDA, π-π stacking formed between PTCDA and the MWCNTs. Together, these interactions led to an approximately two-orders-of-magnitude enhancement in the QD concentration on the MWCNTs compared with nanocomposites created without PTCDA. Spectroscopic and microscopic characterization established the nanoscale heterojunction interface in the nanocomposites. The PCE of the devices fabricated with the QD-PTCDA-MWCNTs nanocomposite was three times that of devices fabricated without PTCDA. With further optimization of the material properties and processing parameters, there are opportunities for further improvement in the performance of BHJSCs fabricated with QD-PTCDA-MWCNTs nanocomposites synthesized in a single pot.
Figure 1. Schematic of the QD-PTCDA-MWCNTs nanocomposite. MWCNTs (A) were added to a QD solution during QD synthesis (B). (C) The QD-PTCDA-MWCNTs nanocomposite was formed through the addition of PTCDA molecules, which acted as a molecular linker between MWCNTs and QDs. The ". . ." in (C) denote π-π interactions between PTCDA and MWCNTs, whereas the green oval represents the PTCDA molecule. (D) The chemical structure of PTCDA. (E) A schematic representation of the QDs.
Figure 2. UV-Visible absorption (A) and fluorescence emission (B) spectra of the PTCDA molecule in dimethyl sulfoxide (DMSO). UV-Visible absorption (C) and fluorescence emission (D) spectra of the QDs in chloroform solution.
Figure 3. FTIR characterization of the QDs, PTCDA, MWCNTs, and composite. (A) IR spectrum of PTCDA, showing the C=O stretching peak at 1770 cm −1 from the cyclic anhydride of the PTCDA molecule. (B) IR spectrum of the MWCNTs. (C) The black IR absorbance spectrum of the QDs shows multiple peaks corresponding to N-H bending, N-H stretching, and C-H bending. The red IR spectrum of the QD-PTCDA-MWCNTs nanocomposite shows C=O stretching at 1664 cm −1 from the amide bond, confirming the formation of covalent bonds between the QDs and PTCDA in the nanocomposite. (D) The proposed reaction mechanism for covalent bonding between the amine-containing QDs and the anhydride groups of the PTCDA molecule, yielding an amide linkage between the QDs and PTCDA.
Figure S2 shows a TEM image of CdSe QDs prepared by the solution-phase method but without the addition of MWCNTs and PTCDA to the reaction vessel. Figure 4A shows a TEM
Figure 4. TEM images of an MWCNT functionalized with QDs in the presence (A) and absence (B) of PTCDA. The QD diameter = 5.2 nm ± 1.4 nm (n = 22). (C) A backscattering electron (BSE) image of the nanocomposite, where the brighter areas indicate the presence of the QDs. (D) An EDS spectrum of the nanocomposites showing elemental X-ray analysis of C Kα at 0.277 keV, Fe Kα at 6.398 keV, Cd Lα at 3.133 keV, and Se Lα at 1.379 keV.

2.3. Thermogravimetric Analysis (TGA)

About 6.7 mg of nanocomposite was heated from 20 °C to 950 °C at a rate of 20 °C/min under a flow of N2 gas. TGA of the individual components was also performed to determine the decomposition rate and weight loss, providing quantitative decomposition information for each component. In these experiments, the apparent decomposition temperatures (Td) for the PTCDA and QDs were ~295 °C and 597 °C, respectively.
Figure 6. Fluorescence quenching and Stern-Volmer measurements. The emission quenching of the QDs for the QDs/MWCNTs (A) and the QDs-PTCDA/MWCNTs (B) pairs. (C) The emission quenching of PTCDA for the PTCDA-MWCNTs pair. The concentrations of QDs, PTCDA, and MWCNTs used for the quenching experiments were 50 nM, 50 nM, and 0.01% w/w, respectively. The Ksv value estimated from the I/Io-[MWCNTs] plot (insets) for QDs/MWCNTs (A) was ~7 times smaller than that of the QDs-PTCDA/MWCNTs nanocomposites (B). The increase in Ksv indicated that the presence of PTCDA enhanced fluorescence emission quenching through intimate QD-MWCNTs contact. Excitation wavelengths for
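The Ksv values quoted above come from linear fits of quenching data in the standard Stern-Volmer form, I0/I = 1 + Ksv[Q]. A minimal sketch of such a fit; the quencher concentrations and the assumed Ksv of 250 are synthetic placeholders for illustration, not the measured values from this work:

```python
import numpy as np

# Illustrative Stern-Volmer analysis: I0/I = 1 + Ksv*[Q].
# Concentrations and Ksv below are made-up placeholders.
q = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])  # [MWCNTs], % w/w
i0 = 1.0                                                  # unquenched emission
i = i0 / (1.0 + 250.0 * q)                                # synthetic data, Ksv = 250

y = i0 / i - 1.0              # Stern-Volmer ordinate minus the intercept
ksv = np.polyfit(q, y, 1)[0]  # slope of the linear fit = Ksv
print(f"Ksv ~ {ksv:.1f} per unit concentration")
```

The same slope comparison underlies the ~7-fold difference reported between the QDs/MWCNTs and QDs-PTCDA/MWCNTs pairs.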
Figure 7. (A) Schematic of the photodetector device fabricated with the QD-PTCDA-MWCNTs nanocomposite. (B) Nonlinear current-voltage behavior observed for the prepared device under an applied bias. The inset shows the log-log plot of the current-voltage curve: at lower bias the sample displays ohmic behavior, changing to space-charge-limited currents at higher bias. (C) Normalized electrical resistance versus temperature of the MWCNTs and the QD-PTCDA-MWCNTs nanocomposite plotted in the range of 190-300 K. The inset shows the natural logarithm of resistance, ln(R), versus inverse temperature (T −1 ) for the QD-PTCDA-MWCNTs nanocomposite over the same 190-300 K range.
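The ln(R) versus T −1 inset in Figure 7C implies thermally activated transport, R(T) = R0·exp(Ea/kB·T), so the slope of that plot gives an activation energy. A minimal sketch of the extraction, using synthetic placeholder resistances (the assumed Ea of 0.05 eV is illustrative, not the measured value):

```python
import numpy as np

# Sketch: activation energy from an ln(R) vs 1/T plot,
# assuming R(T) = R0 * exp(Ea / (kB * T)). Data are synthetic.
KB_EV = 8.617e-5                           # Boltzmann constant, eV/K
t = np.linspace(190.0, 300.0, 12)          # temperature range from the text, K
ea_true = 0.05                             # assumed activation energy, eV
r = 1.0e3 * np.exp(ea_true / (KB_EV * t))  # synthetic resistance, ohms

slope = np.polyfit(1.0 / t, np.log(r), 1)[0]  # slope = Ea / kB
ea_fit = slope * KB_EV
print(f"fitted activation energy ~ {ea_fit:.3f} eV")
```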
Figure 8. (A) Current-voltage characteristics of the QD-PTCDA-MWCNTs device with and without illumination (device area ~5 cm 2 , P = 130 mW/m 2 ). (B) Photocurrent-time dependence of the QD-PTCDA-MWCNTs nanocomposites plotted at various power values of the incident light. The light intensity was varied by changing the distance between the light source and the device. The inset shows the plot of photocurrent versus incident-light power, which fits a fractional power law. (C) The data show the increase in photocurrent in the temperature range of 273-350 K under a light intensity of 130 mW/m 2 .
Figure 9A,B show schematics of the electron transfer between individual components of the nanocomposite after photoexcitation of CdSe and PTCDA in the nanocomposite.A schematic of the BHJSC devices created with QD-PTCDA-MWCNTs nanocomposite is shown in Figure 9.In the first step, the photoexcited electrons were transferred from the excited energy level of the QDs to the bridging molecule PTCDA.The electron in the excited state energy level (LUMO) of the PTCDA was transferred to the MWCNTs.The last step in the electron transport in the nanocomposite is the extraction of electrons
Figure 9. (A) A schematic of charge transport in the QDs-PTCDA-MWCNTs nanocomposite. Although the MWCNTs in the nanocomposite are randomly oriented, a single MWCNT strand is depicted in (A) to guide the eye. (B) The energy diagram showing the electron and hole transport in the nanocomposite after photoexcitation of the donors. (C) A schematic of the BHJSC device fabricated using the QDs-PTCDA-MWCNTs nanocomposite. (D) A typical I-V curve of the BHJSC fabricated using the QDs-PTCDA-MWCNTs nanocomposites. The inset in (D) shows an optical photograph of a typical device. The open-circuit voltage (Voc) and short-circuit current (Isc) were 0.77 V and 0.25 mA cm −2 , respectively.
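From the Voc and Isc quoted for Figure 9D, a back-of-envelope power conversion efficiency follows from PCE = Voc·Jsc·FF/Pin. The fill factor and the incident power density below are assumed illustrative values (FF = 0.40, AM1.5 at 100 mW/cm²), not numbers reported in this work:

```python
# Rough PCE estimate from the I-V parameters quoted in Figure 9D.
# Voc and Jsc come from the text; FF and the incident power density
# are assumed values for illustration only.
def pce(v_oc, j_sc, ff, p_in):
    """Power conversion efficiency in percent.

    v_oc: open-circuit voltage (V)
    j_sc: short-circuit current density (mA/cm^2)
    ff:   fill factor (dimensionless, 0-1)
    p_in: incident power density (mW/cm^2)
    """
    return 100.0 * v_oc * j_sc * ff / p_in

eta = pce(v_oc=0.77, j_sc=0.25, ff=0.40, p_in=100.0)
print(f"estimated PCE ~ {eta:.3f} %")
```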
Table 2. Ksv values for different donor-acceptor pairs.
Search for doubly and singly charged Higgs bosons decaying into vector bosons in multi-lepton final states with the ATLAS detector using proton-proton collisions at √s = 13 TeV
A search for charged Higgs bosons decaying into W±W± or W±Z bosons is performed, involving experimental signatures with two leptons of the same charge, or three or four leptons with a variety of charge combinations, missing transverse momentum and jets. A data sample of proton-proton collisions at a centre-of-mass energy of 13 TeV recorded with the ATLAS detector at the Large Hadron Collider between 2015 and 2018 is used. The data correspond to a total integrated luminosity of 139 fb−1. The search is guided by a type-II seesaw model that extends the scalar sector of the Standard Model with a scalar triplet, leading to a phenomenology that includes doubly and singly charged Higgs bosons. Two scenarios are explored, corresponding to the pair production of doubly charged H±± bosons, or the associated production of a doubly charged H±± boson and a singly charged H± boson. No significant deviations from the Standard Model predictions are observed. H±± bosons are excluded at 95% confidence level up to 350 GeV and 230 GeV for the pair and associated production modes, respectively.
Introduction
Experimental signatures with two leptons with the same electric charge (same-charge) or multi-lepton final states are extensively exploited in searches for physics beyond the Standard Model (BSM physics) at hadron colliders. In many models, heavy BSM particles may be produced in proton-proton collisions and decay into multiple massive Standard Model (SM) electroweak gauge bosons or top quarks. Subsequent decays of these gauge bosons into final states with leptons can occur with considerable branching ratios. These final states are favourable to the search for new phenomena since the yields predicted within the SM are generally low and the experimental effects are well understood. At the Large Hadron Collider (LHC) [1], signatures with two leptons of the same charge, or three or four leptons with a variety of charge combinations, have been used by the ATLAS and CMS experiments to explore the landscape of possible SM extensions and their phenomenology [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. Among these proposed extensions, the addition to the SM sector of one weak gauge triplet of scalar fields with a non-zero vacuum expectation value of the neutral component is a compelling way to account for neutrino masses through the type-II seesaw mechanism [22]. Electroweak symmetry breaking (EWSB) is achieved by requiring the neutral components of the SM scalar doublet H and scalar triplet ∆ to acquire vacuum expectation values, v d and v t respectively. After the EWSB, the mixing between these fields results in seven scalar bosons: H ±± , H ± , A 0 (CP odd), H 0 (CP even) and h 0 (CP even). The h 0 boson is the SM Higgs boson, and a small mixing between the CP-even scalars is naturally obtained in the model. In addition, the triplet-neutrino Yukawa term provides non-zero neutrino masses proportional to the vacuum expectation value of the triplet v t .
In this type-II seesaw model, constraints from electroweak precision measurements lead to an upper bound on v t of approximately 2.5 GeV, which is significantly lower than the electroweak scale and matches the need for small values suggested by the natural association of v t with the neutrino masses. Additional, theoretical constraints are the absence of tachyonic modes, requirements for the vacuum structure and stability of the potential, and unitarity requirements. A detailed discussion can be found in ref. [29]. The theoretical and experimental constraints help to choose allowed values for the parameters of the model from which the predominant production and decay modes follow. Some type-II seesaw models also extend the weak sector by introducing two scalar triplets and righthanded gauge bosons W R (left-right symmetric models). The model studied in this analysis corresponds to the left-handed triplet discussed in refs. [30][31][32][33][34].
Two scenarios with v t equal to 100 MeV are explored in this paper. In the first scenario, the mass difference between H ±± and H ± bosons is > 100 GeV, the H ± boson being the heavier one. A large mass splitting could still be allowed despite the |m H ± − m H ±± | < 40 GeV constraint deduced from the electroweak precision data [35] if radiative corrections reduce the tree-level contribution coming from the triplet vacuum expectation value. Only the pair production of H ±± bosons via the diagram shown in figure 1a (pp → γ * /Z * → H ±± H ∓∓ ) is considered. The triplet vacuum expectation value is taken to be v t = 100 MeV such that only the doubly charged H ±± boson decay into a pair of W bosons with the same charge, H ±± → W ± W ± , is possible [28]. The leptonic decays H ±± → ℓ ± ℓ ± are suppressed with increasing v t [36, 37] and are not considered in this paper.
In the second scenario, the mass of the H ± boson is chosen to be at most 5 GeV different from the H ±± boson mass. The low mass difference ensures that the charged Higgs bosons do not decay into each other with a non-negligible branching fraction, and complies with
JHEP06(2021)146
the additional |m H ± − m H ±± | < 40 GeV constraint deduced from the electroweak precision data. In addition, the chosen mass difference between the doubly and singly charged Higgs bosons maximises the signal amplitudes. Only the associated production of H ±± and H ± bosons via the diagram shown in figure 1b (pp → W ± * → H ±± H ∓ ) is considered. The production cross sections for H ±± pair production with the same mass settings as for associated production of H ±± and H ± bosons can be large. However, this production mode is exactly the one described in the first scenario and it is not considered in the second scenario. This choice is motivated by the objective of the search, which is to study a characteristic signature with a benchmark parameter point and report its cross section. The H ±± boson decays into a pair of W bosons with the same charge, with a branching ratio of 100%. Only the bosonic decays of the singly charged bosons (H ± → W ± Z) are considered and, depending on the H ± boson mass, the branching ratio varies between 40% and 60% (see table 1). The branching fraction for the m H ±± = 300 GeV mass hypothesis is lower than for the neighbouring points because of the high dependency of this quantity on the mass of the scalar triplet ∆ [29]. Depending on the H ± boson mass, other decay modes are H ± → tb and H ± → W ± h 0 [28]. Studies at Monte Carlo generator level show that after the selection of at least two same-charge leptons or at least three leptons and no b-jet (where the b-jets are selected with 70% efficiency), the contribution from the other possible H ± decays is negligible. Similar conclusions are also reached after examining the results obtained for the various control regions used in the analysis. The effect on the mass below which a charged Higgs boson is excluded is negligible.
Pair production of H ± bosons is also possible, albeit with a much smaller production cross section than for H ±± pair production. Therefore, H ± pair production is not considered in this paper. The cross section for single H ±± production via vector-boson fusion (pp → W ± * W ± * → H ±± ) is proportional to v t , and hence negligible.
For the H ±± pair production mode, the mixing between the CP-even scalars is taken to be 10 −4 and the remaining five couplings in the potential are adjusted to obtain a given H ±± boson mass hypothesis while requiring h 0 to have a mass of 125 GeV. Similar settings are also used for the associated production mode. The branching fraction times cross-section calculation for the pair production of H ±± bosons and the associated production of H ±± and H ± bosons is performed for on-shell W and Z bosons, and therefore only the region m H ±± > 200 GeV is considered in the present analysis.
Extensive searches for leptonic decays have been performed at various experiments [38][39][40][41][42][43], excluding doubly charged H ±± bosons with masses up to about 870 GeV. The CMS collaboration published results for the H ±± → W ± W ± decay mode in the context of single H ±± production through vector-boson fusion with v t values of a few tens of GeV, for a model with two Higgs triplets [44][45][46]. In contrast, the H ±± → W ± W ± decay mode is not often investigated for v t values around 100 MeV. In this case, the difference is that single H ±± production is suppressed and only H ±± pair production is sizeable. A direct search for H ±± pair production with decays to W ± W ± pairs has been performed on a smaller data set by the ATLAS collaboration [23], validating the principle of such an approach.
ATLAS detector
The ATLAS experiment [47] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and nearly 4π coverage in solid angle. 1 It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic (EM) and hadron calorimeters, and a muon spectrometer (MS). The inner tracking detector covers the pseudorapidity range |η| < 2.5. It consists of silicon pixel, silicon microstrip and transition radiation tracking detectors; the innermost layer is 33 mm from the beamline [48][49][50]. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic energy measurements with high granularity. A steel/scintillator-tile hadron calorimeter covers the central pseudorapidity range (|η| < 1.7). The endcap and forward regions are instrumented with LAr calorimeters for EM and hadronic energy measurements up to |η| = 4.9. The muon spectrometer surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T·m across most of the detector. The muon spectrometer includes a system of precision tracking chambers and fast detectors for triggering. A two-level trigger system [51] is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to keep the accepted rate below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average depending on the data-taking conditions.
Monte Carlo event simulation
Monte Carlo (MC) event generators were used to simulate the signal and background events produced in the proton-proton collisions. For the H ±± pair production and the H ±± and H ± associated production signal processes, the events at particle level were generated with MadGraph5_aMC@NLO [53] using leading-order (LO) matrix elements (ME) and NNPDF3.0lo parton distribution functions (PDF) [54]. The events were subsequently showered using Pythia 8 [55] with a set of tuned parameters called the A14 tune [56]. Signal processes were simulated for different mass hypotheses between 200 and 600 GeV, as presented in table 1. For both the pair and associated production, the cross section decreases rapidly with the charged Higgs boson's mass, as shown in table 1. The event samples were normalised using calculations at next-to-leading order (NLO) in perturbative QCD [52]. The NLO K-factor increases the expected event yields by a factor of 1.25, independently of the mass of the charged Higgs bosons.

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η ≡ − ln tan(θ/2), and is equal to the rapidity y ≡ 0.5 ln ((E + p z )/(E − p z )) in the relativistic limit. Angular distance is measured in units of ∆R ≡ √((∆y)² + (∆φ)²). The magnitude of the momentum in the plane transverse to the beam axis is denoted by p T .

Table 1. Cross sections for the pair and associated production modes at next-to-leading order in QCD [52]. The branching ratios (B) of the charged Higgs bosons to W ± W ± or W ± Z are included in the quoted values.
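The coordinate conventions in the footnote (pseudorapidity from the polar angle, and angular distance ∆R) can be captured in a couple of small helpers. A minimal sketch, not ATLAS software; the example values are arbitrary:

```python
import math

# Minimal kinematics helpers following the conventions in the footnote:
# eta = -ln tan(theta/2), and dR = sqrt(d eta^2 + d phi^2) (pseudorapidity
# is used in place of rapidity y; they coincide in the relativistic limit).
def pseudorapidity(theta):
    """Pseudorapidity from the polar angle theta (radians)."""
    return -math.log(math.tan(theta / 2.0))

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [-pi, pi]."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two objects."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

# A track at theta = 90 degrees lies at eta ~ 0 (within float error):
print(pseudorapidity(math.pi / 2))
print(delta_r(0.0, 0.1, 0.5, -0.1))  # sqrt(0.25 + 0.04) ~ 0.5385
```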
Table 2. Summary of the event generators, parton shower models, and PDF sets used for the simulation of the background event samples; the columns list the process, generator, ME accuracy, PDF, parton shower and hadronisation model, and parameter set. The notation V is used to refer to an electroweak gauge boson W or Z/γ * . In the final column, 'default' refers to the default parameter set provided with the event generator.
A summary of the MC event generators used to simulate the background is presented in table 2 and further details are given below. The notation V is used to refer to an electroweak gauge boson W or Z/γ * . Diboson (V V ) and V γ events were simulated with the Sherpa [57] generator, the version depending on the process, including off-shell effects and Higgs boson contributions, where appropriate. The V V and V γ processes were simulated using matrix elements at NLO accuracy in QCD for up to one additional parton and at LO accuracy for up to three additional parton emissions. The electroweak production of dibosons in association with two jets (V V -EW jj) was generated using LO accuracy matrix elements. The matrix element calculations were matched and merged
with the Sherpa parton shower based on Catani-Seymour dipole factorisation [58,59] using the MEPS@NLO prescription [60][61][62][63]. The virtual QCD corrections were provided by the OpenLoops library [64,65]. The production of triboson (V V V ) events was simulated with the Sherpa v2.2.2 generator using factorised gauge-boson decays. All diboson and triboson processes were generated using the NNPDF3.0nnlo PDF set [54], along with the dedicated set of tuned parton-shower parameters developed by the Sherpa authors. The cross sections from the event generator were used for the normalisation. The production of an electroweak gauge boson or virtual photon in association with jets (V +jets) was simulated with the Sherpa v2.2.1 generator using NLO matrix elements for up to two partons, and LO matrix elements for up to four partons calculated with the Comix [58] and OpenLoops libraries. The event samples were generated using the NNPDF3.0nnlo PDF set, along with the dedicated set of tuned parton-shower parameters developed by the Sherpa authors. The V +jets event samples were normalised to a NNLO prediction.
Higgs boson production in association with a vector boson was simulated at LO with Pythia 8.186 and EvtGen [66] using the A14 tune, along with the NNPDF2.3lo PDF set. The Monte Carlo prediction was normalised to cross sections calculated at NNLO in QCD with NLO electroweak corrections for qq/qg → V H, and at next-to-leading-logarithm accuracy in QCD with NLO EW corrections for gg → ZH [67][68][69][70][71][72][73]. The production of ttH events was modelled using the Powheg Box v2 [74][75][76][77][78] generator at NLO with the NNPDF3.0nlo PDF set [54]. The events were interfaced to Pythia 8.230 using the A14 tune and the NNPDF2.3lo PDF set [54]. The decays of bottom and charm hadrons were performed by EvtGen v1.6.0. The production of ttV , tW Z and tZ events was modelled using the MadGraph5_aMC@NLO v2.3.3 generator at NLO with the NNPDF3.0nlo PDF. The events were interfaced to Pythia 8.210 (Pythia 8.212 for tW Z) using the A14 tune and the NNPDF2.3lo PDF set. The decays of bottom and charm hadrons were simulated using the EvtGen v1.2.0 program. The simulated ttH, ttV , tW Z and tZ events were normalised to cross-section calculations at NLO accuracy in QCD. In addition, NLO EW corrections were included for ttH, ttV and tW Z. Backgrounds from top-quark pair production (tt) and tW production were estimated at NLO accuracy in QCD using the hvq program [74] in Powheg Box v2. The event samples were generated using the A14 tune and NNPDF3.0nnlo PDF set. The interference between tt and tW production is neglected as it has a negligible impact on the analysis. The tt event sample was normalised to the cross-section prediction at NNLO in QCD including the resummation of next-to-next-to-leading logarithmic (NNLL) soft-gluon terms calculated using Top++2.0 [79][80][81][82][83][84][85]. The inclusive tW cross section was corrected to the theory prediction calculated at NLO in QCD with NNLL soft gluon corrections [86,87].
The production of tttt, ttt, ttW W and ttW Z events was modelled using the Mad-Graph5_aMC@NLO v2.3.3 generator at NLO with the NNPDF3.1nlo PDF [54]. The events were interfaced with Pythia 8.230 using the A14 tune and the NNPDF2.3lo PDF set. The tttt, ttW W and ttW Z contributions were normalised to theoretical cross sections at NLO in QCD including electroweak corrections [88]. The ttt production was normalised to a cross section calculated to LO in QCD.
Backgrounds such as pp → W + W + W − W − or pp → W ± W ± ZW ∓ with 2ℓ sc , 3ℓ or 4ℓ final states, which have the same signature as the considered signal, have very small production cross sections times branching ratio, and their contribution in the preselection, control and signal regions was found to be negligible.
The signal and background events were passed through the GEANT4 [89] simulation of the ATLAS detector [90] and reconstructed using the same algorithms as are used for the data. The effect of multiple proton-proton interactions in the same or nearby bunch crossings (pile-up) is accounted for using inelastic proton-proton interactions generated by Pythia 8 [91], with the A3 tune [92] and the NNPDF2.3lo PDF set [93]. These inelastic proton-proton interactions were added to the signal and background event samples and weighted such that the distribution of the average number of proton-proton interactions in simulation matches that observed in the data.
Event reconstruction
The analysis is performed in pp collision data recorded by the ATLAS detector between 2015 and 2018. In this period, the LHC delivered colliding beams with a peak instantaneous luminosity up to L = 2.1 × 10 34 cm −2 s −1 , achieved in 2018, and an average number of pp interactions per bunch crossing of 34. After requirements on the stability of the beams, the operational status of all ATLAS detector components, and the quality of the recorded data, the total integrated luminosity of the data set corresponds to 139 fb −1 [94].
Proton-proton interaction vertices are reconstructed from charged-particle tracks with p T > 500 MeV [95,96] in the ID. The presence of at least one such vertex with a minimum of two associated tracks is required, and the vertex with the largest sum of p 2 T of associated tracks is chosen as the primary vertex.
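The primary-vertex choice described above (the vertex with the largest sum of p T ² of associated tracks) reduces to a one-line maximisation. A toy sketch with made-up vertex contents, not ATLAS reconstruction code:

```python
# Sketch of the primary-vertex choice: among reconstructed vertices
# (each represented as a list of associated-track pT values in GeV),
# pick the one with the largest sum of pT^2. Contents are illustrative.
def pick_primary_vertex(vertices):
    """Return the index of the vertex with the largest sum(pT^2)."""
    return max(range(len(vertices)),
               key=lambda i: sum(pt * pt for pt in vertices[i]))

vertices = [
    [0.6, 0.9, 1.2],          # soft pile-up vertex
    [45.0, 30.0, 2.5, 0.7],   # hard-scatter candidate
    [5.0, 4.0],
]
print(pick_primary_vertex(vertices))  # 1
```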
The anti-k t algorithm [97,98] with radius parameter R = 0.4 is used to reconstruct jets up to |η| = 4.9. It uses as inputs particle-flow objects, combining tracking and calorimetric information [99]. The jets are calibrated as described in ref. [100]. Only jets with p T > 20 GeV and |η| < 2.5 are considered further. Events are removed if they contain jets induced by calorimeter noise or non-collision background, according to criteria similar to those described in ref. [101]. Jets with p T < 60 GeV and |η| < 2.4 from pile-up interactions are suppressed using a jet-vertex tagging discriminant [102].
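The anti-k t algorithm referenced above clusters the pair with the smallest distance d_ij = min(p T,i ⁻², p T,j ⁻²)·∆R_ij²/R², comparing it against the beam distance d_iB = p T,i ⁻². A toy illustration of the distance measure only (not the ATLAS implementation, which uses FastJet on particle-flow objects), with R = 0.4 as in the text:

```python
# Illustrative anti-kt distance measure with toy (pT, eta, phi) tuples.
# d_ij = min(pT_i^-2, pT_j^-2) * dR_ij^2 / R^2 and d_iB = pT_i^-2.
R = 0.4

def dij(a, b):
    """Pairwise anti-kt distance between pseudojets a and b."""
    (pt1, eta1, phi1), (pt2, eta2, phi2) = a, b
    dr2 = (eta1 - eta2) ** 2 + (phi1 - phi2) ** 2
    return min(pt1 ** -2, pt2 ** -2) * dr2 / R ** 2

def dib(a):
    """Beam distance of pseudojet a."""
    return a[0] ** -2

# Two nearby hard particles cluster before either is promoted to a jet:
hard1, hard2 = (50.0, 0.0, 0.0), (40.0, 0.1, 0.1)
print(dij(hard1, hard2) < min(dib(hard1), dib(hard2)))  # True
```

The inverse-p T ² weighting is what makes anti-k t cluster outward from hard cores, giving the regular cone-like jets used in this analysis.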
Jets containing b-flavored hadrons are identified in the region |η| < 2.5 with the DL1r b-tagging algorithm based on a recurrent neural network [103,104]. It makes use of the impact parameters of tracks associated with the jet, the position of reconstructed secondary vertices and their compatibility with the decay chains of such hadrons. The b-tagging average efficiency is 70%, as measured in tt events.
Electron candidates are reconstructed as tracks in the ID matched to energy clusters in the EM calorimeter, within |η| < 2.47 [105]. Only electrons with p T > 10 GeV and not in the transition region between the barrel and endcap calorimeters (1.37 < |η| < 1.52) are considered. The electron identification is based on a multivariate likelihood-based discriminant that uses the shower shapes in the EM calorimeter and the associated track properties measured in the ID. The electron candidates must satisfy the 'Loose' identification criteria described in ref. [105]. Signal electrons are required to satisfy the 'Tight'
identification [105] for better rejection of non-prompt electrons. The electron identification efficiency varies with increasing p T in Z → ee events, from 65% at p T = 10 GeV to 88% at p T = 100 GeV for the Tight operating point, and from 85% at p T = 20 GeV to 95% at p T = 100 GeV for the Loose operating point.
The longitudinal impact parameter of the electron track, z 0 , is required to satisfy |z 0 sin θ| < 0.5 mm, where θ is the polar angle of the track. The transverse impact parameter divided by its uncertainty, |d 0 |/σ(d 0 ), is required to be less than five. For all signal electrons there must be no association with a vertex from a reconstructed photon conversion [105] in the detector material. To further reduce the photon conversion background in the 2ℓ sc channel, additional requirements are applied to the signal electrons [4]: i) the candidate must not have a reconstructed displaced vertex with radius r > 20 mm whose reconstruction uses the track associated with the electron, ii) the invariant mass of the system formed by the track associated with the electron and the closest track at the primary or a conversion vertex is required to be larger than 100 MeV; this selection is referred to as the photon conversion veto.
For the signal electrons, the identification criteria are complemented by an isolation requirement, which is based on the energy in a cone around the electron candidate calculated using either reconstructed tracks or energy clusters. Electrons with wrongly reconstructed charge (charge-flip) are suppressed using a boosted decision tree discriminant exploiting additional tracks in the vicinity of the electron and track-to-cluster matching variables [105].
Muon candidates are reconstructed in the region |η| < 2.5 from MS tracks matching ID tracks. The analysis only considers muons with p T > 10 GeV satisfying the 'Medium' quality requirements defined in ref.
[106]. The muon reconstruction efficiency is approximately 98% in Z → µµ events. The longitudinal impact parameter of the muon track must satisfy |z 0 sin θ| < 0.5 mm and the transverse impact parameter must satisfy |d 0 |/σ(d 0 ) < 3. For signal muons the candidate must satisfy calorimeter- and track-based isolation requirements.
Non-prompt electrons and muons from the decays of b-and c-flavored hadrons are further rejected using a boosted decision tree discriminant based on isolation and secondary vertex information, referred to as the non-prompt-lepton veto [107].
To avoid cases where the detector response to a single physical object is reconstructed as two different final-state objects, e.g. an electron reconstructed as both an electron and a jet, several steps are followed to remove such overlaps, as described in ref.
[13]. The overlap removal procedure is performed using candidate leptons.
The different lepton selections used in the analysis are summarised in table 3. Three types of signal lepton requirements are used for both the electrons and muons: 'tight', 'loose', and 'loose and minimally isolated'. The tight leptons, and the loose and minimally isolated leptons, are subsets of the loose leptons. The photon conversion veto is applied on top of the tight and loose electron selection requirements, only in the 2ℓ sc channel. The Loose and FixedCutLoose isolation criteria applied for the loose and minimally isolated electrons and muons are described in refs.
JHEP06(2021)146

Table 3. The requirements applied to define the categories of candidate electrons and muons: loose (L), loose and minimally isolated (L * ) and tight (T) leptons. The overlap removal procedure is not applied for the candidate leptons. In the 2 sc channel, the photon conversion veto is required in addition to the loose and tight criteria.

The missing transverse momentum, with magnitude E miss T , is defined as the negative vector sum of the transverse momenta of all identified objects (muon candidates, electron candidates and jets) and an additional soft term [108, 109]. The soft term is constructed from all tracks that are matched to the primary vertex and are not associated with any other objects.
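The negative-vector-sum construction behind E miss T can be illustrated with a minimal sketch (hypothetical transverse momenta and azimuthal angles; not the ATLAS implementation, which uses calibrated objects and a dedicated soft term):

```python
import math

def met(objects):
    """Missing transverse momentum magnitude from a list of (pt, phi) pairs.

    E_T^miss is the magnitude of the negative vector sum of the transverse
    momenta of all contributing objects; here leptons, jets and the soft
    term all enter through the same list.
    """
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py)

# Two back-to-back objects balance each other: no missing momentum.
balanced = [(50.0, 0.0), (50.0, math.pi)]
# A single 40 GeV object implies 40 GeV of E_T^miss recoiling against it.
single = [(40.0, 1.2)]
```

An event with an undetected neutrino thus shows up as an imbalance of the visible transverse momenta.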
Event selection
Candidate events are selected for read-out using lepton triggers that require one electron or muon to satisfy identification criteria similar to those used in the offline reconstruction, isolation criteria, and a transverse momentum requirement of p T > 26 GeV [110, 111]. With increasing p T the requirements on identification and isolation become less stringent. All events must contain at least one offline tight lepton with p T > 30 GeV that triggered the event. The event selection proceeds in two steps: the preselection and the signal-region (SR) selection. The preselection is defined in table 4; the three channels (2 sc , 3 and 4 ) are defined to be mutually exclusive. Events are selected only if the absolute value of the sum of the lepton charges is two, one, and two or zero for the 2 sc , 3 and 4 channels, respectively. In the 2 sc channel, the second-highest-p T lepton is required to have p T > 20 GeV and both leptons are required to be tight. Similarly, in the 3 channel, each lepton in the pair of same-charge leptons is required to have p T > 20 GeV and to be tight.
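The mutually exclusive channel assignment by lepton multiplicity and total charge can be sketched as follows (illustrative only; the kinematic, identification and trigger requirements are omitted):

```python
def classify_channel(charges):
    """Assign an event to the mutually exclusive 2l-sc, 3l or 4l channel
    from the number of selected leptons and |sum of charges|:
    two for 2l-sc, one for 3l, and two or zero for 4l.
    Returns None when the event fails the charge-sum requirement.
    """
    q = abs(sum(charges))
    n = len(charges)
    if n == 2 and q == 2:
        return "2l-sc"
    if n == 3 and q == 1:
        return "3l"
    if n == 4 and q in (0, 2):
        return "4l"
    return None
```

For example, a same-charge electron pair lands in the 2l-sc channel, while an opposite-charge pair is rejected at this stage.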
Further preselection requirements are based on E miss T , the jet multiplicity N jets and the number of b-jets, N b-jet . The lowest E miss T value is 30 GeV in the 3 and 4 channels, and 70 GeV in the 2 sc channel; this selection helps to reduce the non-prompt-lepton, electron charge-flip and V V backgrounds. In the 2 sc (3 ) channel only events with at least three (two) jets are considered. The background from top-quark production is strongly reduced by requiring zero b-jets in the event. In order to reduce the background from Drell-Yan processes and neutral mesons, the invariant mass of same-flavour opposite-charge lepton pairs is required to be greater than 15 GeV for the 3 and 4 channels. In addition, the invariant mass of any same-flavour opposite-charge lepton pair must differ from the nominal Z-boson mass by more than 10 GeV. For the 2 sc channel, the Z-boson invariant mass veto is also applied to e ± e ± events, in order to reduce the contributions originating from electron charge misidentification.

Table 4. The preselection criteria for the 2 sc , 3 and 4 analysis channels. The leptons are required to pass the loose (L), loose and minimally isolated (L * ) or tight (T) requirements. The leptons are ordered by decreasing p T ( 1 , 2 , . . .) in the 2 sc and 4 channels, while for the 3 channel 1 and 2 denote the two same-charge leptons and 0 denotes the lepton with a charge opposite to the total lepton charge. Q denotes the charge of each lepton. SFOC refers to same-flavour opposite-charge lepton pairs. The symbol "-" means no requirement is made. The equal sign (=) is used to emphasise that the selection criterion has to be exactly the given number.
In addition to E miss T , the following variables are used to define the SRs:

• The invariant mass of all selected leptons in the event, m x , where x can be 2, 3 or 4, corresponding to the 2 sc , 3 or 4 channels.
• The invariant mass of all jets in the event, m jets . When there are more than four jets in the event, only the leading four jets are used. This variable is only used for the 2 sc channel.
• The distance in η-φ between two same-charge leptons, ∆R ± ± . It is used for the 2 sc and 3 channels. In the 4 channel, two such variables can be calculated per event, ∆R min ± ± and ∆R max ± ± , denoting the minimum and maximum values, respectively.
• The transverse momentum of the highest-p T jet, p leading jet T . This variable is used in the 3 channel.
• The transverse momentum of the highest-p T lepton, p 1 T . This variable is used in the 4 channel.
• The azimuthal distance between the dilepton system and E miss T , ∆φ ,E miss T . It is only used in the 2 sc channel.
• The smallest distance in η-φ between any lepton and its closest jet, ∆R jet . This variable is used in the 3 channel.
• The variable S, used in the 2 sc channel to describe the event topology in the transverse plane. It is defined from the spread of the φ angles of the leptons, E miss T and jets, quantified by their root mean square R.

These variables are found to discriminate between the signal and the background. They exploit both the boosted decay topology of the charged Higgs bosons and the high energy of the decay products. Signal regions are defined for each channel, as summarised in table 5. The selection criteria defining the signal regions result from a scan of the multidimensional parameter space of the discriminating variables listed above. The SRs were designed by optimising the sensitivity for the H ±± pair production mode, using the m H ±± = 200, 300, 400 and 500 GeV mass hypotheses. The same SRs are used to study the H ± associated production mode; although this approach is not optimal, it is preferred to significantly increasing the number of signal regions. Additional mass hypotheses are evaluated in the signal regions defined for the closest lower mass hypotheses, since the signal discrimination power does not vary significantly in this regime. In particular, for the H ±± pair production mode the m H ±± = 300 GeV signal regions are also used for m H ±± = 350 GeV. For the H ±± and H ± associated production mode, the m H ±± = 200 GeV, 400 GeV and 500 GeV signal regions are also used for m H ±± = 220 GeV, 450 GeV and 550 GeV, respectively. The SRs defined for the 2 sc channel are further divided into ee, eµ and µµ final states. Events in the 3 SRs are separated into two categories according to whether or not a same-flavour opposite-charge lepton pair exists in the event. This categorisation further improves the expected significance by exploiting differences in background composition and lepton-flavour composition between signal and background.
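Two of the discriminating variables above, the η-φ distance and the multi-lepton invariant mass, follow from standard formulas and can be sketched as below (massless-lepton approximation; an illustration, not the ATLAS code):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in eta-phi, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def inv_mass(leptons):
    """Invariant mass of massless four-vectors given as (pt, eta, phi)."""
    e  = sum(pt * math.cosh(eta) for pt, eta, _ in leptons)
    px = sum(pt * math.cos(phi) for pt, _, phi in leptons)
    py = sum(pt * math.sin(phi) for pt, _, phi in leptons)
    pz = sum(pt * math.sinh(eta) for pt, eta, _ in leptons)
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))
```

The phi wrapping matters: two leptons at φ = 3.0 and φ = -3.0 are close in azimuth, not 6 radians apart.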
The number of events observed in data is shown together with the expected signal and estimated background yields in section 9.
Background estimation
The background sources can be divided into two main categories. One category comprises SM events which contain only reconstructed prompt charged leptons originating from leptonic decays of W and Z bosons. The second category is formed by non-prompt leptons and charge-flip electrons. The non-prompt-lepton category refers to leptons that originate from decays of b- and c-hadrons, or single pions that mimic electron signatures. Electrons from photons that are produced in hadron decays and convert into electron-positron pairs in the beam pipe or detector material also enter this category. Lepton candidates reconstructed from these different sources share the properties of being less isolated, having larger impact parameters relative to the primary vertex and being less likely to satisfy the lepton identification criteria. The backgrounds from SM processes with prompt leptons are estimated with MC simulations, except for the background from W Z production, for which the normalisation is corrected using data in a dedicated control region. Backgrounds from non-prompt leptons and electron charge-flip are estimated using data-based methods. Background from V γ production can contribute in the SRs if the photon converts to an electron-positron pair, and is estimated using MC simulations. Background from W W production is estimated from simulation if the two W bosons have the same electric charge and from data if they have opposite electric charge, since the latter only contributes through electron charge-flip and non-prompt leptons.
Background from W Z production
The W Z process is a dominant source of background in the 2 sc and 3 SRs. To correct a mismodelling seen in the jet multiplicity distribution [112], a normalisation factor is computed and applied to the W Z background events containing two or more jets. The normalisation factor is measured in a dedicated W Z control region. This region is selected by requiring exactly three tight leptons with p T > 20 GeV, at least two jets and no b-jet in the event. Finally, there must be at least one pair of same-flavour opposite-charge leptons with a mass compatible with the Z-boson mass (|m oc − m Z | < 10 GeV). The latter criterion ensures that the W Z control region and the 3 SRs are orthogonal.
The normalisation factor is derived from a fit of a first-order polynomial, as a function of jet multiplicity, to the ratios of the data event yield (after subtracting all non-W Z contributions) to the W Z event yield predicted in MC simulation. The value of the polynomial at N jets = 0 is used to scale the predicted W Z yields and is found to be 0.83 ± 0.07, where the uncertainty includes the statistical and systematic components. The different sources of systematic uncertainty are discussed in section 8. The jet multiplicity distribution in the W Z control region is shown in figure 2a, and for illustration the distribution in the 3 preselection region is also shown in figure 2b. The normalisation factor is applied to the W Z MC contribution. The sum of estimated backgrounds agrees with data within the assigned systematic uncertainties.
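The normalisation-factor extraction can be illustrated with a first-order polynomial fit to per-bin data/MC ratios (the ratio values below are invented for illustration and do not reproduce the measured 0.83 ± 0.07):

```python
import numpy as np

# Hypothetical ratios of (data - non-WZ) to simulated WZ yields per jet bin.
n_jets = np.array([2, 3, 4, 5])
ratio = np.array([0.86, 0.82, 0.78, 0.75])

# First-order polynomial fit of the ratio versus jet multiplicity;
# its value at N_jets = 0 (the intercept) gives the normalisation factor.
slope, intercept = np.polyfit(n_jets, ratio, 1)
norm_factor = intercept
```

Scaling the simulated W Z yield by `norm_factor` then corrects the overall normalisation while the fitted slope characterises the jet-multiplicity mismodelling.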
Electron charge-flip background
In the 2 sc channel, a background contribution is expected from events with oppositecharge lepton pairs where the charge of one of the leptons is misidentified. The charge-flip background is only significant for electrons and is mainly due to interactions of the electron with the ID material.
The misidentification rate is measured using a large data sample of dielectron events originating mainly from Z → e + e − decays, selected by requiring two tight electrons with an invariant mass between 80 and 100 GeV. The sample contains mostly opposite-charge dielectron pairs, with a small fraction of same-charge pairs. The fraction of same-charge dielectron events is used to extract the charge misidentification rate as a function of the electron p T and η, using the method described in ref. [105]. This rate is found to range between 0.01% and 4%, where higher values are obtained at large rapidities due to the larger amount of material traversed by the electrons. The statistical uncertainty of this estimate varies between 2% and 26% and is taken as a systematic uncertainty in the charge misidentification rate. The background in both the opposite-charge and same-charge samples is estimated using a cubic polynomial fit to the high (100-120 GeV) and low (60-80 GeV) sidebands of the dielectron mass distribution. The uncertainty in the background is estimated by varying the sidebands and propagates to a 3% uncertainty in the estimated charge misidentification rate. The final systematic uncertainty combines all the sources mentioned.
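The sideband-interpolation idea can be sketched as follows (hypothetical smooth background shape and binning; the analysis fits the actual dielectron mass spectra):

```python
import numpy as np

# Hypothetical binned dielectron-mass spectrum: a smooth combinatorial
# background (here an exponential) with bin centres every 2 GeV.
m = np.arange(61.0, 120.0, 2.0)
bkg = 1000.0 * np.exp(-0.02 * m)

# Fit a cubic polynomial to the low (60-80 GeV) and high (100-120 GeV)
# sidebands only, then interpolate it under the Z peak (80-100 GeV).
sideband = (m < 80.0) | (m > 100.0)
coeffs = np.polyfit(m[sideband], bkg[sideband], 3)
window = (m >= 80.0) & (m <= 100.0)
estimate = np.polyval(coeffs, m[window]).sum()
truth = bkg[window].sum()
```

For a smooth background the interpolated yield under the peak closely tracks the true one; varying the sideband ranges probes the stability of the estimate.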
The charge-flip background in a given region is estimated by selecting data events with opposite-charge dilepton pairs, but otherwise identical selection, and weighting them by the probability that the charge of the electrons is misidentified. Another source of systematic uncertainty is estimated by comparing, in simulated V +jets, tt, and W W events, the number of same-charge events estimated from opposite-charge events with the prediction; it accounts for differences between the charge misidentification rates in different processes and is found to be approximately 10%. For this test, the misidentification rate was measured using Z → e + e − MC simulations, using the same method as for the measurement performed with data.
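The per-event weighting can be sketched with a small helper (illustrative rates; in the analysis the misidentification rates depend on the electron p T and η):

```python
def same_charge_weight(eps1, eps2):
    """Weight applied to an opposite-charge data event to model the
    same-charge charge-flip background, from the charge misidentification
    rates eps1 and eps2 of the two electrons.

    P(reconstructed same-charge) = eps1*(1 - eps2) + eps2*(1 - eps1);
    dividing by the complement converts the observed opposite-charge
    yield into the expected same-charge yield.
    """
    p = eps1 * (1.0 - eps2) + eps2 * (1.0 - eps1)
    return p / (1.0 - p)
```

For rates at the per-cent level the weight is close to the sum of the two rates, so the same-charge background is a small, well-controlled fraction of the opposite-charge sample.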
To estimate the electron charge-flip background in the regions defined with loose electrons, the same methodology is applied, using dedicated electron misidentification rates measured with a Z → e + e − sample selected by requiring one tight and one loose electron.
Non-prompt-lepton background
The composition of the non-prompt-lepton background in the SRs varies considerably among the analysis channels. Therefore, the methods to estimate these contributions are different for the 2 sc , 3 and 4 channels. In the 2 sc and 3 channels the non-prompt-lepton background is estimated using a fake-factor method [23], while the simulation is corrected with scale factors measured in data for the 4 channel.
Non-prompt-lepton background estimate for the 2 sc channel. The estimate of the background from non-prompt leptons assumes that these contributions can be extrapolated from a control region enriched in non-prompt leptons with a so-called fake factor.
The control region is selected using the kinematic requirements of the preselection or the signal regions, but with alternative lepton selection criteria: at least one of the selected leptons is required to satisfy the loose but not the tight lepton requirements.
The fake factors are calculated separately for electrons and muons in control regions with kinematic selections designed to enhance their content of non-prompt leptons. This is achieved by applying the preselection requirements of the 2 sc channel, except that E miss T must be lower than 70 GeV. Only events with electron and muon same-charge pairs are then used. The fake factor is defined as the ratio of the number of events in the control region with all the selected leptons required to pass the tight signal requirements to the number of events in the same region but with one of the selected leptons satisfying the alternative lepton requirements. The measurement is performed as a function of the lepton p T . The dependence of the fake factor on the lepton η was also checked and found to be negligible. The SM processes with prompt leptons and the charge-flip contributions are subtracted in the control region. For the electron and muon fake-factor measurements, the lepton with the second-highest p T is assumed to be the non-prompt one.
The measured fake factors are 0.03 ± 0.01 for electrons and muons up to p T = 40 GeV, and increase to 0.16 ± 0.05 and 0.09 ± 0.02, respectively, for electrons and muons with p T > 60 GeV. The uncertainties are statistical only. A systematic uncertainty of 20% (10%) in the electron (muon) fake factor is estimated by studying the variation of the fake factor with E miss T . For this measurement two E miss T bins are considered, < 70 GeV and > 70 GeV. This uncertainty accounts for the different compositions of the non-prompt leptons in the control region and the SRs. Another source of systematic uncertainty accounts for how often the non-prompt lepton is actually the one with the highest lepton p T and not the one with the second-highest p T , as assumed. It is estimated from generator-level studies performed with MC simulations. This source is dominant in the region with lepton p T > 60 GeV, where the uncertainty reaches 45% (80%) for electrons (muons). Uncertainties in the subtraction of the SM processes with prompt leptons (approximately 20%) and of the electron charge-flip background (15%) are also included. The overall uncertainty amounts to approximately 30% (20%) and 55% (85%) for the fake factors for electrons (muons) with 20 < p T < 60 GeV and p T > 60 GeV, respectively.
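Schematically, the fake-factor extrapolation in a single bin amounts to the following (one-bin sketch with invented yields; the analysis applies it per lepton p T bin with measured inputs):

```python
def nonprompt_estimate(n_loose_not_tight, n_prompt_mc, n_chargeflip, fake_factor):
    """Fake-factor extrapolation: data events in which one lepton passes
    the loose but fails the tight requirements are weighted by the fake
    factor, after subtracting prompt-lepton and charge-flip contamination.
    The subtracted yield is clamped at zero to avoid a negative estimate.
    """
    n_fakes_source = n_loose_not_tight - n_prompt_mc - n_chargeflip
    return fake_factor * max(n_fakes_source, 0.0)
```

With, say, 120 loose-not-tight events, 15 of them prompt and 5 from charge flips, a fake factor of 0.03 predicts 3 non-prompt events in the tight region.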
Non-prompt-lepton background estimate for the 3 channel. The same method as the one employed for the 2 sc channel is used in the 3 channel. Here the opposite-charge lepton 0 passes the loose selection, and it is assumed to be prompt, an assumption that is found to be valid in MC simulation. The control region used to calculate the fake factors uses the 3 preselection requirements, except that exactly one jet is required. Only events with electron and muon same-charge pairs are then used. The muon fake factor is found to be 0.03 ± 0.01 and the electron fake factor is found to be 0.02 ± 0.01, where the uncertainties are statistical only. A systematic uncertainty of 15% is estimated by measuring the lepton fake factor in a control region enriched in events from tt production. This uncertainty accounts for the different compositions of the non-prompt leptons in the control region and the SRs. Uncertainties in the subtraction of the SM processes with prompt leptons are found to be approximately 55% (45%) for the electron (muon) fake factor. Another 20% uncertainty comes from generator-level studies performed with MC simulations to test the assumption that 0 is a prompt lepton. The fake factors' dependency on the lepton p T is also studied, and the deviations from the nominal fake factors are within the statistical uncertainty. When all sources of systematic uncertainty are combined, the total systematic uncertainty of the electron (muon) fake factor is found to be 60% (50%).

Table 6. The selection criteria that define the control regions enriched in non-prompt leptons (Z+jets-enriched and tt-enriched) used to determine the MC scale factors for the 4 channel. The leptons are required to pass the loose and minimally isolated (L * ) requirements. The transverse mass, m T , is calculated as the invariant mass of the vector sum of the transverse momentum of the non-prompt-lepton candidate and the missing transverse momentum. The symbol "-" means no requirement is made. The equal sign (=) is used to emphasise that the selection criterion has to be exactly the given number.

Non-prompt-lepton background estimate for the 4 channel.
The scale factors are measured to be λ e HF = 0.98 ± 0.18, λ e LF = 1.34 ± 0.17 and λ µ HF = 0.94 ± 0.04, where the uncertainties are statistical only. Systematic uncertainties of 5%, 4% and 2% for the λ e HF , λ e LF and λ µ HF scale factors, respectively, are estimated by varying the jet multiplicity and the lepton p T threshold in the nominal Z+jets- and tt-enriched control regions. These uncertainties account for the different compositions of the non-prompt leptons in the control regions and the SRs. Uncertainties in the prompt-lepton subtraction are dominated by theoretical uncertainties. The total systematic uncertainties combine all the sources mentioned and are approximately 6%, 15% and 4% for the λ e HF , λ e LF and λ µ HF scale factors, respectively.
Validation
The data-based methods employed to estimate the backgrounds are validated by comparing the event yields in data with the combined predictions for these backgrounds, added to MC predictions for SM processes with prompt signal leptons. Distributions of selected variables in the 2 sc , 3 and 4 channels are shown in figures 3, 4 and 5, respectively, after the preselection requirements from table 4 are applied. Good agreement is observed in both normalisation and shape, demonstrating that the background contributions are well estimated. The expected distributions of both signal models are shown for m H ±± = 300 GeV to illustrate the discrimination power of the selected variables.
The signal contamination at the preselection level was studied for m H ±± = 300 GeV and m H ±± = 200 GeV mass hypotheses, corresponding to the H ±± pair production mode and the H ±± and H ± associated production mode, respectively. These mass hypotheses were selected because they are close to the sensitive mass range of the current analysis. A maximum value of 8.5% (2%) H ±± H ∓∓ (H ±± H ∓ ) signal contamination was found in all the individual 2 sc , 3 and 4 channels at preselection level.
Systematic uncertainties
Uncertainties in the signal and background yields arise from experimental uncertainties and from the theoretical accuracy of the prediction of the SM background yields. The experimental uncertainties arise from the luminosity determination, modelling of pile-up interactions, the reconstruction and identification of electrons, muons and jets and from the uncertainties associated with the data-based methods that are used to estimate the non-prompt lepton and charge-flip electron backgrounds.
The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [113], obtained using the LUCID-2 detector [114] for the primary luminosity measurements. Uncertainties in the modelling of pile-up interactions are estimated by reweighting the distribution of the average number of pp interactions per bunch crossing in the simulation, such that the average number of interactions changes by ±9%. The impact of these uncertainties on the background event yields estimated from MC simulation is found to be lower than 2%.
The uncertainties related to event reconstruction include the lepton [106, 115] and jet [100] energy scale and resolution uncertainties. The overall impact of these systematic uncertainties on the background from SM processes with prompt leptons and on the signal yields in the SRs is found to be lower than 10%. The dominant contribution comes from the jet energy scale component.
The uncertainties in the efficiencies of the electron [116] and muon [106] reconstruction, identification and trigger are also included. Their impact on the estimated yields in the SRs for the signal and the background from SM processes with prompt leptons is lower than 5%. The uncertainties in the b-jet identification are found to be negligible.

Uncertainties due to missing higher-order QCD corrections are evaluated by varying the renormalisation and factorisation scales independently by factors of two and one-half, and removing combinations where the variations differ by a factor of four. The uncertainties due to the PDF and the α s value used in the PDF determination are evaluated using the PDF4LHC prescription [117]. In the SRs defined for m H ±± = 300 GeV, the theory uncertainty in the background yields from ZZ and W W processes varies between 15% and 40%. For the background from V V V it is approximately 10%, for the background from V γ it is approximately 35%, and for the tZ and ttX processes it is approximately 14%. The total uncertainty in the estimated W Z yields is 9% and includes the statistical uncertainty as well as two sources of theoretical uncertainty. Systematic uncertainties due to higher-order QCD corrections are evaluated using the same prescription as for the other diboson processes, and are found to be 3%. The second source is evaluated by comparing the results obtained with the linear fit model (section 7.1) in the W Z control region, and in a region defined with the W Z control region requirements except that three jets must be in the event. This uncertainty is found to be 8.4%. To validate the assigned uncertainty, several checks are performed. The linear fit function is changed to a quadratic one to check the quality of the fit model. The fit parameterisation is studied in the W Z-enriched region and in a region defined with the W Z control region requirements except that three jets must be in the event.
The differences between the obtained results are found to be covered by the total uncertainty. The choice of parton shower model is studied in the W Z control region by comparing event samples simulated with Sherpa v2.2.1 and MadGraph5_aMC@NLO, and the differences are found to be covered by the total uncertainty. For the W Z background in regions with lower jet multiplicities, the uncertainty is estimated with the same methodology as for the other diboson processes described above.
An uncertainty of 50% is assigned to the remaining backgrounds (ttt, tttt, ttW W and V H). This large value is assigned to cover uncertainties from missing higher-order corrections and the PDF sets. Since these processes produce a larger number of jets at the first order of the perturbative expansion, they are less sensitive to parton shower modelling uncertainties.
The relative uncertainty in the background yields obtained in a fit of background to the observed data is shown in figure 6 for all SRs. The statistical uncertainties originate from the limited number of preselected and opposite-charge data events used in the fake-factor method and the charge-flip electron background estimate, respectively, as well as from the limited number of simulated events for SM processes with prompt leptons. The total uncertainty is computed using all sources of uncertainty, which are treated as uncorrelated. The uncertainties range from 10% to 30% and are dominated by the statistical uncertainties in the non-prompt-lepton estimate and the theory uncertainties. An exception is the 2 sc SR defined for m H ±± = 300 GeV, where the uncertainties from most sources are of similar size. The uncertainties associated with the charge-flip background are small in all 2 sc SRs. In the 4 channel, the statistical uncertainty of the non-prompt-lepton estimate comes from the limited number of events in the Z+jets, tt and tW MC simulation samples.
The theoretical uncertainties in the predicted signal yields arise from the parton shower model, missing higher-order corrections, and parton distribution functions. The systematic uncertainty due to the parton shower model is evaluated by comparing event samples simulated by Pythia 8 using the A14 tune with samples from Herwig using the H7-UE-MMHT underlying-event tune [118], and is found to be less than 5%. The uncertainties due to the PDFs are found to be less than 5%. The uncertainty due to missing higher-order corrections is less than 10% [52]. When these uncertainties are combined in quadrature, an overall uncertainty of approximately 10% is obtained for the signal normalisation.
Results
The observed data event yields and the corresponding background estimates in the SRs are shown in figure 7. More details of the event yields in the signal region defined for m H ±± = 300 GeV are given in table 7. No significant excess over the expected yields is observed in any of the SRs. Table 7 also includes the acceptance for the signal from pair production of H ±± bosons, A PP , and from associated production of H ±± and H ± bosons, A AP . The acceptance is defined as the number of selected reconstructed events divided by the number of events at the event generation stage, and represents the signal reduction due to phase-space acceptance, branching ratios and detector efficiency. The results for all the SRs are shown in appendix A.
The E miss T distribution is shown in figure 8 for the SRs of the m H ±± = 300 GeV signal mass hypothesis, where the selection requirement on E miss T has been removed. The other criteria summarised in table 5 used to define the SR are applied. Good agreement between data and the expected event yields is observed for low values of E miss T , demonstrating that the background contributions are also well estimated close to the SRs, where the requirements on the kinematic variables are tighter than at preselection.
The statistical interpretation is based on a likelihood ratio test [119] using the CL s method [120]. The signal strength, a free parameter in the fit, modifies the cross section of the signal hypothesis under investigation. A separate likelihood function is constructed for every signal hypothesis as the product of the Poisson probability distributions of the six channels of the corresponding signal region. Gaussian distributions are used to constrain the nuisance parameters associated with the systematic uncertainties. The widths of the Gaussian distributions correspond to the magnitudes of these uncertainties. Statistical uncertainties in the backgrounds are estimated using Poisson distributions.

Table 7. The expected background and the observed data event yields in the signal region defined for the m H ±± = 300 GeV mass hypothesis. The signal yield is for the corresponding mass point and is normalised to the luminosity of 139 fb −1 . The displayed numbers include all sources of statistical and systematic uncertainties. The overall signal acceptances A PP and A AP and the observed upper limit on extra contributions to each signal region at 95% confidence level, n 95 , are also presented. Selections with µµ, three or four leptons are not affected by the electron charge-flip background, so these contributions are denoted by "-".
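The structure of such a likelihood can be sketched in a deliberately reduced form, with a single Gaussian-constrained nuisance parameter and invented yields (the actual fit has many channels and nuisance parameters):

```python
import math

def nll(mu, theta, n_obs, s, b, sigma_b):
    """Negative log-likelihood for a signal-strength fit:
    a product of Poisson terms over signal-region channels times a
    unit-Gaussian constraint on one background nuisance parameter theta.
    mu scales the expected signal s; theta shifts the background b by
    a relative uncertainty sigma_b.
    """
    total = 0.0
    for n, si, bi in zip(n_obs, s, b):
        lam = mu * si + bi * (1.0 + sigma_b * theta)
        total += lam - n * math.log(lam) + math.lgamma(n + 1)
    total += 0.5 * theta * theta  # Gaussian constraint term
    return total
```

Minimising this function over mu and theta gives the best-fit signal strength; the CL s construction then compares the likelihood-ratio test statistic under the signal-plus-background and background-only hypotheses.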
The expected and observed upper limits on the H ±± pair production and the H ±± and H ± associated production cross sections times branching fraction at 95% confidence level (CL) for the five mass hypotheses are shown in figures 9a and 9b, respectively. They are obtained from the combination of 2 sc , 3 and 4 SRs. Assuming a linear dependence of the cross-section limit between neighbouring mass points, the observed 95% CL lower limit on the mass of the H ±± boson is 350 GeV for the pair production mode and 230 GeV for the associated production mode. To confirm the validity of the linear extrapolations close to the exclusion threshold, upper limits on the H ±± pair production cross section times branching fraction are also computed for the complementary mass hypothesis of 350 GeV, as shown in figure 9a, and are found to match the extrapolated values well. A similar test was also done for the H ±± and H ± associated production mode, and the upper limits on the cross section times branching fraction are computed for the 220 GeV, 450 GeV and 550 GeV complementary mass hypotheses. The results are shown in figure 9b, and again good matching is found.
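The linear interpolation between neighbouring mass points used to quote a mass limit can be illustrated as follows (all cross-section values are invented; the observed mass limit is where the limit curve crosses the falling theory curve):

```python
import numpy as np

# Hypothetical observed cross-section upper limits (fb) and theoretical
# predictions (fb) at the probed mass points (GeV).
masses = np.array([200.0, 300.0, 400.0, 500.0])
obs_limit = np.array([12.0, 8.0, 6.0, 5.0])
theory = np.array([40.0, 12.0, 4.0, 1.5])

# Assuming a linear dependence between neighbouring mass points, the
# excluded range ends where theory drops below the observed limit.
fine = np.linspace(masses[0], masses[-1], 3001)
diff = np.interp(fine, masses, theory) - np.interp(fine, masses, obs_limit)
mass_limit = fine[np.nonzero(diff <= 0.0)[0][0]]  # first mass no longer excluded
```

Computing the limit directly at a nearby complementary mass point, as done in the text, checks that this piecewise-linear assumption holds near the exclusion threshold.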
A tighter limit on the H ±± boson mass is obtained for the H ±± pair production mode, mainly because the cross sections times branching fraction for H ±± pair production are higher than those for associated production of H ±± and H ± . Other important reasons are the different branching ratios of H ±± H ∓ (≈ 16%) and H ±± H ∓∓ (≈ 26%) decays into 2 sc , 3 and 4 leptons, and the signal acceptance, which is higher for the pair production mode than for the associated production mode (see tables 7 and 8). The definition of common signal regions also plays a role: as discussed in section 6, the SRs are not optimal for associated production of H ±± and H ± . For similar reasons, the upper limits on the charged Higgs boson production cross sections times branching fraction are weaker for associated production of H ±± and H ± than for H ±± pair production.

Figure 9. Expected and observed upper limits on the H ±± pair production and the H ±± and H ± associated production cross section times branching fraction at 95% CL, obtained from the combination of 2 sc , 3 and 4 channels. The region above the observed limit is excluded by the measurement. The bands represent the expected exclusion curves within one and two standard deviations. The theoretical prediction [29] including the NLO QCD corrections [52] is also shown.
For associated production of H ±± and H ± , the limit on the H ±± boson mass implies a constraint on the H ∓ boson mass which is at most 5 GeV different from the H ±± mass. Upper limits on the H ±± pair (H ±± and H ± associated) production cross section times branching fraction range from 15 fb to 5 fb (40 fb to 10 fb) when the assumed H ±± mass varies from 200 GeV to 600 GeV. Observed upper limits at 95% CL on the number of BSM events for each signal region are derived using the CL s prescription and the results are shown in table 7 for m H ±± = 300 GeV and for all SRs in the appendix A.
Conclusion
A search for pair production of H ±± bosons, and for associated production of H ±± and H ± bosons, in the context of a type-II seesaw model, is presented. For the H ±± pair production mode the H ±± boson mass is at least 100 GeV lower than the H ± boson mass, while for the associated production of H ±± and H ± bosons the mass difference between the doubly and singly charged Higgs bosons is at most 5 GeV. Only the bosonic decays H ±± → W ± W ± and H ± → W ± Z are studied. Dedicated signal regions in the 2 sc , 3 and 4 channels are defined as a function of different mass hypotheses within the model to look for evidence of H ±± and H ± bosons in the √ s = 13 TeV proton-proton collision data sample collected between 2015 and 2018 with the ATLAS detector at the LHC. The data sample corresponds to an integrated luminosity of 139 fb −1 . The increase in the total integrated luminosity allowed significant improvements in the lepton fake factors measurement and the estimation of the associated uncertainties. The data are found to be in good agreement with the estimated background for all channels investigated. Combining those channels, the model considered is excluded at 95% confidence level for H ±± boson masses below 350 GeV for the pair production of H ±± bosons, and below 230 GeV for the associated
JHEP06(2021)146
production of H ±± and H ± bosons. Upper limits on the pair (associated) production cross section times branching fraction range from 15 fb to 5 fb (40 fb to 10 fb) when the assumed H ±± mass varies from 200 GeV to 600 GeV. The results obtained for the H ±± pair production scenario raise the exclusion limits beyond those from a previous, similar search by ATLAS using a smaller 13 TeV data set by approximately 130 GeV. In addition, the associated production of H ±± and H ± bosons is also explored, extending the previous search.
A Supplementary results
Non-Stationary and Resonant Passage of a System: A High-Frequency Cutoff Noise
We study non-equilibrium behaviors of a particle subjected to a high-frequency cutoff noise in terms of a generalized Langevin equation, where the spectrum of the internal noise is taken to be of the generalized Debye form. A closed solution is impossible even though the equation is linear, because the Laplace transform of the memory kernel is a multi-valued function. We use a numerical method to calculate the velocity correlation function of a force-free particle and the probability of a particle passing over the top of an inverse harmonic potential. We demonstrate nonergodicity of the second type, i.e., the velocity autocorrelation function remains non-stationary at large times. Applied to the barrier passage problem, we find and analyse a resonant phenomenon: the passing probability depends nonmonotonically on the cutoff frequency when the initial directional velocity of the particle is less than a critical value, the latter determined by the passing probability being equal to 0.5.
Introduction and Model
The well-known Debye spectrum of noise is a common expression used to study dynamic characteristics of lattices in solid-state physics [1] [2]. It has been used successfully to obtain the incoherent scattering cross section [3], the vibrational relaxation of impurities in solids [4], and the dynamics of glasses and liquids [5] [6]. Besides, many chemical reactions can be modeled by a single coordinate buffeted by a random force and subject to the corresponding memory friction, both obeying the fluctuation-dissipation theorem. Starting out from the system-plus-reservoir model, this dynamics can conveniently be described by a generalized Langevin equation (GLE) [7] [8] of the Mori form, ẍ(t) = −∫₀^t γ(t − t′) ẋ(t′) dt′ − ∂U(x)/∂x + ξ(t), with ⟨ξ(t)ξ(t′)⟩ = k_B T γ(t − t′), where U(x) is the external potential. The motion of the particle is affected by the dissipative influence of a disordered medium. The non-Ohmic model can be described by a rich variety of frequency-dependent friction mechanisms [10], which arise from the spectral density J(ω) [11]; the memory kernel is related to it by γ(t) = (2/π) ∫₀^∞ [J(ω)/ω] cos(ωt) dω. The truncated form of the spectral density of the noise is usually chosen to be low-frequency [12] or with a channel-frequency cutoff [13] [14]. If the spectrum of the noise is replaced by the generalized Debye form, the non-equilibrium properties, the fundamental variables and other properties in such a thermal fluctuation environment can be further obtained. The non-equilibrium characteristics of such a system have been investigated, but its dynamic motions still remain open questions. In the present work, the environment spectrum J(ω) takes the generalized Debye form.
J(ω) = γ ω (ω/ω̃)^(δ−1) for 0 < ω ≤ ω_s, and J(ω) = 0 otherwise. This corresponds, e.g., to the long-wavelength limit of one-dimensional acoustic phonons. For a noise originating from a coupled oscillator chain, ω_s is the Debye phonon frequency, and ω̃ denotes a reference frequency allowing the friction to have the dimension of a viscosity for any δ. The mean square displacement of a force-free particle in the non-Ohmic thermal bath is proportional to a fractional power of the time at long times, namely ⟨x²(t)⟩ ∝ t^δ. The cases 0 < δ < 1 and 1 < δ < 2 correspond to sub-Ohmic and super-Ohmic baths, which result in sub-diffusion and super-diffusion, respectively; δ = 1 is the Ohmic damping leading to normal diffusion.
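A minimal numerical sketch of the memory kernel implied by such a truncated spectrum can make this concrete. It assumes the standard GLE relation γ(t) = (2/π) ∫₀^{ω_s} [J(ω)/ω] cos(ωt) dω together with the power-law form of J(ω) above; all parameter names and values are illustrative, not taken from the paper:

```python
import numpy as np

def memory_kernel(t, delta=1.0, omega_s=0.5, omega_ref=1.0, gamma0=1.0, n=4000):
    """Memory kernel gamma(t) for a generalized Debye spectrum with a sharp
    high-frequency cutoff at omega_s: J(w) = gamma0*w*(w/omega_ref)**(delta-1)
    for w <= omega_s and zero above it. For delta < 1 the integrand is
    singular at w = 0 and would need a finer grid near the origin."""
    w = np.linspace(1e-8, omega_s, n)
    integrand = gamma0 * (w / omega_ref) ** (delta - 1.0) * np.cos(np.outer(t, w))
    return (2.0 / np.pi) * np.trapz(integrand, w, axis=-1)

t = np.linspace(0.0, 50.0, 6)
print(memory_kernel(t, delta=1.0))   # gamma(0) = 2*gamma0*omega_s/pi for delta = 1
```

For δ = 1 the cutoff makes γ(t) ∝ sin(ω_s t)/t rather than a delta function, which is the origin of the oscillatory behaviour of the correlation functions discussed in the following sections.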
In this work, we pay attention to nonergodicity of the second type, which manifests itself in the velocity autocorrelation function remaining non-stationary at large times, and to its dynamical effects. The paper is organized as follows. In Sec. II, we describe our GLE model subjected to a high-frequency cutoff noise. In Sec. III, the velocity autocorrelation functions of a force-free particle and of a harmonic particle are calculated numerically. In Sec. IV, we address a barrier-passage problem and show a resonant phenomenon. The summary is given in Sec. V.
Nonergodicity of the Second Type
Within the GLE dynamics, the criterion of non-ergodicity is that the velocity autocorrelation function C_v(t) = ⟨v(0)v(t)⟩ of a force-free particle does not vanish in the long-time limit. For such a finite spectrum, the physical dynamic properties of the particle in the free field, U(x) = 0, need to be studied first. In Figure 1, we present the velocity correlation function obtained for subdiffusion (δ = 0.5), normal diffusion (δ = 1.0), and superdiffusion (δ = 1.5) for various ω_s. This measures the importance of the nonequilibrium of the system, which causes C_v(t) to oscillate over time for large t. The oscillation of C_v(t) also strongly depends on the value of the Debye phonon frequency ω_s. Due to the absence of high frequencies, the energy exchange between the particle and the environment is not sufficient, so the system cannot reach equilibrium in the long-time limit. If the frequency cutoff ω_s is small, the low-frequency cutoff of the system becomes weaker, which helps the particle dissipate kinetic energy. On the other hand, if the frequency cutoff ω_s is large, the damping becomes bigger; the energy dissipation of the particle is increased and the energy exchange between the particle and the environment becomes larger. Thus the oscillation of C_v(t) is weaker when the frequency cutoff is increased. Meanwhile, it is worth pointing out that this nonequilibrium phenomenon is governed by the diffusion index δ, which indicates how fast the diffusion occurs. We show the dependence of C_v(t) on δ in detail in Figure 2 for a small frequency cutoff. It is seen that the amplitude of oscillation of C_v(t) becomes weaker when the diffusion is stronger. Namely, faster diffusion makes the non-equilibrium of the system less pronounced. As δ decreases, the particle's memory of its initial state gradually increases. The fast diffusion of the particles compensates for the lack of systematic dissipation. As a consequence, slow diffusion makes it harder for the system to achieve equilibrium.
For a general potential, the system will reach thermal equilibrium when a single oscillator is coupled to a finite bath of harmonic oscillators [15]. What happens if the particle is placed in a harmonic oscillator potential U(x) = (1/2)ω₀²x² with the Debye spectrum? Can the system reach thermal equilibrium at large t? In Figure 3, the velocity correlation function of the particle in a harmonic oscillator potential is presented. Obviously, the oscillation behavior is weaker when the frequency ω₀ of the harmonic oscillator potential increases.
But the system cannot reach thermal equilibrium even with the external harmonic oscillator potential included. Ref. [16] also revealed similar effects of the harmonic potential: it is demonstrated that non-equilibration emerges because of the formation of bound states in the coupled system-plus-bath, using the microscopic model of a bath as a collection of oscillators.
We now investigate nonergodicity of the second kind for a harmonic particle subjected to an internal colored noise with a generalized Debye spectrum; namely, the asymptotic value of a dynamical variable does not approach a constant.
The mean energy of the particle, E(t) = (1/2)⟨v²(t)⟩ + (1/2)ω₀²⟨x²(t)⟩, is plotted in Figure 4, where the initial displacement width and the initial velocity width are set by the initial temperature T₀ of the system. As we can see, E(t) strongly depends on the initial preparation of the particle in the limit of large times [18] [19]. Another distinct feature is that the mean energy of the particle in such a system oscillates around a fixed value in the long-time limit, similar to the behaviour already mentioned above. The stationary state cannot reach the equilibrium one in the large-time limit because of insufficient dissipation.
The Barrier-Passing Probability
For the barrier passage process, the potential U(x) around the saddle point is treated as an inverted parabola, U(x) = −(1/2)ω_b²x². The average position ⟨x(t)⟩ and the variance σ_x(t) of the particle are expressed through the response function φ_δ(t), which results from its Laplace transform, determined in turn by the Laplace transform γ̂(z) of the friction memory kernel γ(t). As usual, the average position and variance can be obtained via the Laplace transform and the residue theorem if the spectrum of the noise is continuous [23] [24]. However, for the present Debye spectrum, γ̂(z) involves an arctan function, γ̂(z) ∝ arctan(ω_s/z) [25]. In order to obtain φ_δ(t), the residue theorem would need to be used, but the characteristic equation has an infinite number of roots because arctan(z) is a multi-valued function. Thus the analytical result for the passing probability cannot be given in closed form [26]. To study the effects of the absence of high frequencies on the passing probability, numerical simulation is therefore necessary and of great importance.
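Once ⟨x(t)⟩ and σ_x(t) have been obtained numerically, the passing probability follows from Gaussian statistics alone. A small sketch, assuming the barrier top sits at x = 0 and that the second argument denotes the variance (both are conventions chosen here, not taken from the paper):

```python
from math import erfc, sqrt

def passing_probability(mean_x, var_x):
    """Probability for a Gaussian-distributed reaction coordinate to be
    found beyond the barrier top at x = 0: P = 0.5*erfc(-<x>/sqrt(2*var))."""
    return 0.5 * erfc(-mean_x / sqrt(2.0 * var_x))

print(passing_probability(0.0, 1.0))   # 0.5: mean exactly at the saddle point
```

The stationary value discussed later (the probability approaching a constant at large times) corresponds to feeding the long-time limits of ⟨x(t)⟩ and σ_x(t) into this expression.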
In order to study the thermally activated escape of a particle over a potential barrier, the average position and variance of the particle must be obtained from the above discussion. In the previous section, we have shown that the dynamical variables oscillate for such a Debye spectrum. Do these two variables of the particle still oscillate when crossing the potential barrier? How does the passing probability evolve over time? In Figure 5, we show that ⟨x(t)⟩ and σ_x(t) do not oscillate with time. For the inverted potential the system is not equilibrated, at least with respect to a collective degree of freedom such as the reaction coordinate, yet the dynamic variables do not oscillate in time. This behavior appears because the interaction of the particle with the external potential field occurs very quickly, and the reaction time of the passing process is too short for an oscillation of the dynamic variables to be observed. This implies that for an environment lacking high frequencies, e.g. a Debye spectrum, the passing probability approaches a constant in the large-time limit. This also provides a favorable basis for further research on how the truncation of high frequencies affects the passing probability.
In Figure 6, we plot the passing probability of the sub-Ohmic particle as a function of the high-frequency cutoff ω_s for different initial energies. It is seen that the probability of passage varies nonmonotonically with the truncation frequency ω_s when the initial energy of the particle is small. For ω_s = 0, i.e., without thermal fluctuations, it is clear that the particle will not be able to pass over the barrier if its initial energy is smaller than the height of the potential barrier. Once ω_s is increased to a small value, thermal fluctuations arise; as a consequence, the particle has more probability of overcoming the potential barrier. Nevertheless, further increasing the high-frequency cutoff is not conducive to the particle crossing the barrier. This can be understood through the friction of the system. For a larger high-frequency cutoff, i.e. when the vibration frequency of the environmental oscillators is very high, the damping in such an environment becomes very large and the recovery from instability is faster. Because the particle consumes much energy before crossing the barrier, the passing probability is reduced. It is worth pointing out that this nonmonotonic phenomenon can only occur when the initial energy is less than the barrier height. Obviously, the probability of passage decreases monotonically with the truncation frequency if the initial energy is larger than the barrier height.
Summary
Considering the truncation property of the generalized Debye spectrum, the nonequilibrium characteristics of the system and the dynamics of the particle have been investigated in this paper. The oscillation of the velocity autocorrelation function with time in a free field is analyzed. Remarkably, this oscillation is related to the speed of particle diffusion: slower diffusion of the particle is favorable to observing the nonequilibrium of the system. In order to analyze further the equilibrium behavior under such a finite noise spectrum, we placed the particle in an external potential, for instance a harmonic potential. For the case of a harmonic potential, we show that the system will not equilibrate when the noise spectrum is of the Debye form. As we expected, we find that making the system nonlinear restores thermal equilibration. Besides, the nonergodicity of the second kind for a harmonic particle subjected to an internal colored noise with a generalized Debye spectrum is investigated. Moreover, by using numerical simulation techniques, the stable passing probability turns out to be a nonmonotonic function of the Debye phonon frequency. This phenomenon can only be observed when the initial energy is less than the barrier height. It is due to the fact that the larger the lack of high frequencies, the smaller the corresponding damping; thus the particle can more easily overcome the barrier and run to the other side of the potential.
X.Y. Shi, Journal of Modern Physics
In order to give the particle enough time to reach equilibrium, statistical averaging is performed over an ensemble consisting of 5 × 10⁴ particles. The test particles start from x = 0 and their velocities are sampled from a Gaussian distribution with zero mean and a width set by the initial temperature.
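The ensemble-averaging procedure just described can be sketched as follows. The Markovian exponential decay used here is only a stand-in for the true GLE dynamics (integrating the GLE itself is beyond this snippet), and the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_autocorrelation(v):
    """Ensemble-averaged C_v(t) = <v(0) v(t)> from an array of velocity
    trajectories with shape (n_particles, n_steps)."""
    return np.mean(v[:, :1] * v, axis=0)

# Initial velocities sampled from a Gaussian of width sqrt(k_B*T0)
# (units with k_B = m = 1), then damped deterministically as a stand-in.
T0, n_part, n_steps, dt, gamma = 0.5, 20_000, 200, 0.05, 1.0
v0 = rng.normal(0.0, np.sqrt(T0), size=(n_part, 1))
t = np.arange(n_steps) * dt
v = v0 * np.exp(-gamma * t)        # Ohmic (memoryless) reference decay
C = velocity_autocorrelation(v)
print(C[0])                         # ~ k_B*T0 = 0.5 by equipartition
```

In the non-ergodic case discussed in the text, the tail C_v(t → ∞) would oscillate instead of relaxing to zero as it does for this memoryless reference.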
Figure 4. Time dependence of the mean energy.
The linear equation can be exactly solved, see Refs. [20] [21], and leads necessarily to a Gaussian distribution. The time-dependent passing probability is then simply given by the Gaussian integral over the region beyond the barrier [22].
Figure 5. The average trajectory (solid lines) and variance (dotted lines) as functions of time, for initial velocities v₀ = 3.0 and 1.0 from top to bottom (black solid line: v₀ = 3.0; blue solid line: v₀ = 1.0; red dotted line: v₀ = 3.0; purple dotted line: v₀ = 1.0). The average trajectory for the smaller initial velocity is plotted in the inset. The parameters used are T = 0.5, ω_s = 0.5.
Figure 6. The passing probability as a function of ω_s for various initial velocities.
Observation of nuclear dechanneling length reduction for high energy protons in a short bent crystal
Deflection of 400 GeV/c protons by a short bent silicon crystal was studied at the CERN SPS. It was shown that the dechanneling probability increases while the dechanneling length decreases with an increase of the incident angles of particles relative to the crystal planes. The observation of the dechanneling length reduction provides evidence of the particle population increase at the top levels of transverse energies in the potential well of the planar channels.
E-mail address: <EMAIL_ADDRESS> (A.M. Taratin).
When high energy charged particles enter a crystal at small angles relative to the crystal planes, θ_o << 1, their motion is governed by a crystal potential U(x) averaged along the planes [1]. If the angles are smaller than the critical angle, θ_o < θ_c = (2U_o/pv)^(1/2), where p and v are the particle momentum and velocity, respectively, and U_o is the well depth of the averaged planar potential, the particles can be captured into the channeling regime. A bent crystal can deflect channeled particles if its bend radius is larger than the critical value, R > R_c = pv/eE_m [2], where E_m is the maximum strength of the electric field in the planar channel. In a bent crystal, the particle motion is governed by the effective potential U_eff(x, R) = U(x) + x·pv/R. The averaged planar potential provides an approximate description of channeling, in which the transverse energy of particles is an integral of motion. Incoherent (multiple and single) scattering by the crystal electrons and nuclei changes the transverse energy of channeled particles and they leave the channels; that is, dechanneling occurs. The density of atomic nuclei reduces quickly with the distance x from the planes according to a Gaussian distribution, ∝ exp(−x²/2u₁²), where u₁ is the amplitude of thermal vibrations of the crystal atoms. Therefore, the dechanneling process has two stages for most channeled particles entering the crystal sufficiently far from the channel walls. In the first, slow stage, particles increase their transverse energy due to multiple scattering on the crystal electrons. The experimental data [3] have shown that a good approximation for the critical approach distance to the channel walls is r_c = 2.5u₁, where the fast dechanneling stage due to multiple scattering by atomic nuclei begins (the "nuclear corridor"). The value of the planar potential at the distance r_c determines the critical transverse energy for the stable channeling states, E_xc = U_eff(x = r_c). The dechanneling process has an exponential character in the first approximation. The dechanneling length measured in the experiment [4] is about 10 cm for 200 GeV/c protons in the (110) channels of a 44 mm long straight silicon crystal.
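As a rough consistency check of the angular scales quoted below, the critical angle can be evaluated directly from θ_c = (2U_o/pv)^(1/2). The well depth U_o ≈ 22.7 eV for the Si (110) planar potential is a commonly quoted value and an assumption here, not a number from this paper:

```python
from math import sqrt

def critical_angle_urad(U0_eV, pv_GeV):
    """Planar channeling critical angle theta_c = sqrt(2*U0/pv),
    returned in microradians; pv is taken in GeV, U0 in eV."""
    return sqrt(2.0 * U0_eV / (pv_GeV * 1e9)) * 1e6

print(critical_angle_urad(22.7, 400.0))   # ~10.7 urad for 400 GeV/c protons
```

This is consistent with the bent-crystal critical angle θ_cb(R) = 9.8 μrad quoted later in the text, which is somewhat reduced relative to the straight-crystal value.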
It characterizes the depletion of particles in the stable channeling states due to multiple scattering on the crystal electrons; this is the electron dechanneling length S_e. The dechanneling length is approximately proportional to the particle energy.
Short crystals with length L << S_e are required to study the fast mechanism of nuclear dechanneling. The crystal bend provides the angular unfolding of the dechanneling process. The first measurement of the nuclear dechanneling length was realized in the experiment [5] using a 2 mm long silicon crystal bent along the (110) planes with a bend radius R = 40 m for 400 GeV/c protons. The measured nuclear dechanneling length was about 1.5 mm, which is more than 100 times smaller than the electron dechanneling length at this proton energy.
It should be noted that for planar channeling of negative particles, nuclear multiple scattering is the main mechanism of dechanneling, because all the particles oscillate around the crystal planes. The dechanneling length for 150 GeV/c π⁻ mesons was measured in the experiment [6]. Its value, S_n ≈ 1 mm, is of the same order of magnitude as the nuclear dechanneling length for positive particles with the same energy.
The range of transverse energies between E_xc and the potential well depth E_xm determines the particle fraction undergoing dechanneling due to strong multiple scattering by the crystal nuclei. One would expect dechanneling to be faster for particles whose transverse energy is closer to the well depth E_xm. The number of such near-barrier particles should increase with an increase of the beam orientation angle relative to the planes at the crystal entrance.
In this paper, the results of the experiment at the CERN SPS on the deflection of 400 GeV/c protons by a short bent silicon crystal are considered. The analysis of different beam fractions for the crystal orientation optimal for channeling allows one to observe the reduction of the nuclear dechanneling length with an increase of the orientation angle of the considered beam fraction relative to the planes at the crystal entrance.
This range determines where dechanneling of particles occurs due to multiple scattering by the atomic nuclei of the crystal. The experimental setup was the same as described in [7]. Four microstrip silicon detectors, two upstream and two downstream of the crystal, were used to detect the particle trajectories with an angular resolution of 3 μrad, which is limited by the multiple scattering of particles in the detectors and air. A 70 × 1.94 × 0.5 mm³ silicon strip crystal with the largest faces parallel to the (110) crystallographic planes was fabricated according to the methodology described in [8,9]. The strip crystal was bent along its length and placed vertically, so that the induced anticlastic bending along the crystal width was used to deflect particles in the horizontal plane (see Fig. 2b in [7]). The beam of 400 GeV/c protons had an RMS horizontal angular divergence of σ_x = (9.34 ± 0.06) μrad. A high-precision goniometer, with an accuracy of 2 μrad, was used to orient the (110) crystal planes parallel to the beam direction. An angular scan was performed and the optimal orientation was found, which gives the maximum of the deflected beam fraction. Fig. 2 shows the intensity distribution of 400 GeV/c protons passing through the bent silicon crystal at the orientation optimal for channeling, in the deflection angle θ_x as a function of the incidence angle θ_in of particles relative to the (110) plane at the crystal entrance. The beam divergence allows one to observe the different interaction mechanisms with the crystal for the different beam fractions. Fraction 1 consists of the particles that passed through the crystal experiencing the same multiple scattering as in an amorphous substance. The particles that passed the full length of the crystal in the channeling regime and were deflected by the bend angle represent fraction 2. Fraction 3 consists of the particles deflected to the side opposite to the crystal bend due to volume reflection.
The dechanneled particles, which are found in the angular interval between the initial direction and that of the channeled fraction, represent the fraction 4.
Let us consider the different beam fractions with incident angles θ_in ± Δθ_in, where Δθ_in = 1.75 μrad. Fig. 3a shows the deflection angle distribution for the fraction with θ_in = 0; the capture (channeling) efficiency is P_ch = 77%. The peak on the left is formed by the particles that were not captured into the channeling regime. The peak is shifted to the side opposite to the crystal bend by the angle θ_VR due to volume reflection. The peak boundary at θ_VR + 3σ_VR, where σ_VR is the RMS deviation of a Gaussian fit of this peak, is shown by the dot-dashed line. The particles with deflection angles within the angular interval indicated in Fig. 3 by two dot-dashed lines are the dechanneled ones, N_dc. The particle deflection angle is determined by the distance S passed in the channeling regime, θ_xs = S/R. The total number of particles in the channeling peak and the dechanneling region represents the particles captured into the channeling regime at the crystal entrance, N_ch(0). The dechanneling probability is defined as the ratio P_dc = N_dc/N_ch(0). For the case under consideration, P_dc = 7.2%. The solid line shows an exponential fit of the central dechanneling region, which gives the dechanneling length value S_n = 1.38 ± 0.24 mm. Fig. 3b shows the distribution of deflection angles for the beam fraction with the incident angle θ_in = 8.75 μrad, which is close to the critical channeling angle θ_cb(R) = 9.8 μrad for the considered bend radius R. In this case, the channeling peak is additionally shifted because of the angle θ_in relative to the planes at the crystal entrance. The upper part of the peak becomes flatter because the number of particles with small oscillation amplitudes decreases (the simulations actually show a two-headed peak, which is also just visible here). The dechanneling probability becomes significantly higher, P_dc = 23.5%, and the dechanneling length becomes smaller, S_n = 0.81 ± 0.09 mm. The dechanneling length decreases approximately by a factor of two in the angular range considered.
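The exponential fit that yields S_n can be sketched as below. The histogram is synthetic, generated from the fitted value S_n = 1.38 mm quoted in the text, and the bend radius R = 40 m is borrowed from the earlier experiment [5] purely as an illustrative number:

```python
import numpy as np

def dechanneling_length_mm(theta_x_urad, counts, R_m):
    """Fit counts ~ exp(-S/S_n), where S = R*theta_x is the path length
    travelled in the channeling regime before dechanneling."""
    S_mm = R_m * 1e3 * theta_x_urad * 1e-6            # deflection -> length
    slope, _intercept = np.polyfit(S_mm, np.log(counts), 1)
    return -1.0 / slope

theta = np.linspace(5.0, 40.0, 12)                    # urad (synthetic bins)
true_Sn = 1.38                                        # mm, value from the text
counts = 1e4 * np.exp(-(40.0 * 1e3 * theta * 1e-6) / true_Sn)
print(round(dechanneling_length_mm(theta, counts, 40.0), 2))   # 1.38
```

With real data the fit would be restricted to the central dechanneling region, away from the volume-reflection and channeling peaks, exactly as described above.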
Simulation of the proton beam passage through the bent silicon crystal for the conditions of the considered experiment has been carried out using the CRYD model suggested in [10]. The proton trajectories were found by numerical solution of the equations of motion in the effective bent planar potential. The change of the transverse velocity of a particle due to multiple scattering on the crystal electrons and nuclei was calculated using realistic distributions of electrons and nuclei at each step along the trajectory, the step being much smaller than the spatial period of the particle oscillations in the planar channels. The calculated dependence of the dechanneling probability is shown by the dot-dashed line in Fig. 4; it is in good agreement with the experiment, the discrepancy being not larger than 10%. Fig. 6 shows the calculated particle distributions in the transverse energy E_x at the crystal entrance (1) and exit (2) for the same angles of the beam fraction orientation as in Fig. 3. In the case of θ_in = 0 (Fig. 6a) the distribution has a large peak near E_x = 0 and two small peaks at the transverse energies corresponding to the effective potential values at the channel walls, where its changes are minimal. Two dot-dashed lines show the range of transverse energies at which particles can enter the nuclear corridor of the channels. The initial distribution of particles in this range of nuclear dechanneling is approximately uniform.
(Caption to Fig. 3: The channeling peak is on the right. The distribution of dechanneled particles is located between two dot-dashed lines. The solid line shows the exponential fit, which determines the dechanneling length.)
In the case of θ_in = 8.75 μrad the initial distribution in E_x is significantly wider. The distribution maximum is at a transverse energy close to the potential well depth of the planar channel. The distribution of particles in the nuclear dechanneling range of E_x is strongly non-uniform; the particle density is maximal for transverse energies close to the potential well depth.
Simulation for protons with transverse energies from narrow layers with the width of 1 eV was made to estimate the dechanneling rate of particles with different E x from the nuclear dechanneling range (E xc , E xm ). The dechanneling lengths for E x = 15 eV and 18 eV, which correspond to the middle and the upper part of the nuclear dechanneling range, were found to be S n = 1.49 mm and 0.61 mm, respectively. The last value is close to the dechanneling length observed in the experiment for the beam fraction with the large incident angle, θ in = 10.5 μrad.
Thus, when the beam fraction enters the crystal at an angle relative to the planes close to the critical one, the population of the upper part of the nuclear dechanneling range is maximal and only those particles determine the dechanneling length value observed in the experiment. The distributions of particle transverse energies at the crystal exit clearly show that dechanneling occurs only from the nuclear corridor range.
The experiment showed that the dechanneling probability increases with an increase of the incident angle of particles relative to the planes, which can be explained by the increase of the particle population over the whole range of nuclear dechanneling. Moreover, the observation of the dechanneling length reduction provides evidence of an increase of the particle population at the top part of this nuclear dechanneling range. It is important to take these changes of the dechanneling length for protons into account in experiments on collider beam halo collimation using bent crystals.
First and second order dual phase lag equation. Numerical solution using the explicit and implicit schemes of the finite difference method
In the paper the different variants of the dual phase lag equation (DPLE) are considered. As is known, the mathematical form of the DPLE results from a generalization of the Fourier law in which two delay times are introduced, namely the relaxation time τq and the thermalization time τT. Depending on the order of development of the left and right hand sides of the generalized Fourier law into Taylor series, one can obtain different forms of the DPLE. It is also possible to consider other forms of the equation discussed, resulting from the introduction of a new variable or variables (substitution). In the paper a thin metal film subjected to a laser pulse is considered (the 1D problem). Theoretical considerations are illustrated by examples of numerical computations. A discussion of the results obtained is also presented.
Introduction
The macroscopic heat conduction model (the Fourier model) results from the assumption of instantaneous propagation of the thermal wave in the domain under consideration. It is known that this approach is not strictly correct, but in most problems related to heat conduction at the macro scale it is fully satisfactory.
Seventy years ago Cattaneo [1] formulated an equation in which the delay time (relaxation time τq) of the heat flux in relation to the temperature gradient was taken into account. This hyperbolic PDE is known as the Cattaneo-Vernotte equation. In the case of macroscopic problems the Cattaneo-Vernotte equation is used as a model of heat transfer processes for nontypical materials with a complex internal structure (e.g. biological tissue [2]).
The very high heating rates typical for the microscale heat transfer cause that the inclusion of the finite value of thermal wave velocity must be somehow taken into account.
The very popular model describing microscale heat transfer is based on the dual phase lag equation (DPLE) - e.g. [3,4]. The final form of the DPLE results from the generalized form of the Fourier law in which both the relaxation time and the thermalization time are introduced (e.g. [5]). Depending on the number of terms in the Taylor series expansion of this law, different forms of the dual phase lag equation can be obtained, in particular the first and the second order DPLE and also the mixed variants. Some simple problems described by the DPLE and supplemented by the appropriate boundary and initial conditions can be solved analytically. As an example the paper [6] can be mentioned. The solution concerns the heating of a thin metal film caused by a laser action, this action being taken into account by the introduction of an artificial internal heat source.
Numerous approximate solutions of different problems described by the first order DPLE can be found in the literature, e.g. [7][8][9]. Numerical solutions concerning the second order DPLE (based on the finite difference method) are the subject of works [10][11][12][13][14]. In the papers [12,13] the mixed variants of DPLE are also considered.
Below, two variants of modified DPLE forms will be discussed. They result from substitutions causing the appearance in the energy equation of a fragment identical to that in the classical Fourier equation. The other components can be treated as an internal volumetric heat source. The computer program based on the explicit scheme of the finite difference method is not complicated; additionally, the stability condition has a simple form. At the stage of numerical modeling the 1D problem is considered, which results from the thermal aspects of the task discussed.
Governing equations
The starting point for the formulation of the dual phase lag equation (DPLE) is the generalized Fourier law in the form q(x, t + τq) = −λ ∂T(x, t + τT)/∂x. The dual phase lag equation must be supplemented by boundary conditions. The Neumann condition used in this paper takes the form (6) or, for the second order model, (7), where qb(x, t) is a given boundary heat flux, the sign "+" corresponds to x = 0 (left boundary), while the sign "-" corresponds to x = G (right boundary), G being the metal film thickness.
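For reference, the first-order truncation referred to above can be written out; this is the standard derivation (the notation q, λ, c, τq, τT follows the paper, while the original equation numbers are not reproduced):

```latex
q + \tau_q \frac{\partial q}{\partial t}
  = -\lambda \left( \frac{\partial T}{\partial x}
  + \tau_T \,\frac{\partial^{2} T}{\partial x\, \partial t} \right),
\qquad
c\,\frac{\partial T}{\partial t} = -\frac{\partial q}{\partial x}
\;\;\Longrightarrow\;\;
c \left( \frac{\partial T}{\partial t}
  + \tau_q \frac{\partial^{2} T}{\partial t^{2}} \right)
  = \lambda \left( \frac{\partial^{2} T}{\partial x^{2}}
  + \tau_T \,\frac{\partial^{3} T}{\partial x^{2}\, \partial t} \right).
```

Retaining the second-order term in τq on the left-hand side yields the second order DPLE in the same way.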
The initial conditions are in the form (8) and, additionally for equations (4) and (5), (9), where Tp is the initial temperature and u(x) and v(x) are known functions.
Modifications of DPLE
In the paper [8] a certain modification of the first-order DPLE using a substitution technique is presented. Here we consider the second-order DPLE (as in [12]), with the energy equation extended by the introduction of internal heat sources. Let the substitution (10) be introduced, from which (11) and (12) result. Introducing (11) and (12) into (4) gives (13), where a = λ/c is the thermal diffusion coefficient. This form of the DPLE is interesting because the first two components (containing the function U) correspond to the classical diffusion equation, while the remaining ones can be treated as an internal heat source (thanks to which the formulation of the stability criterion for the explicit difference scheme used here is very simple). The boundary conditions should also be modified, in particular to the form (14); the last formula corresponds to the Neumann condition, of course. The initial condition takes the form (15) (c.f. equations (8) and (9)). Thus, at first equation (13) supplemented by boundary conditions (14) and initial condition (15) should be solved. Next, using formula (10), the temporary temperature field can be determined.
Another approach involves the introduction of two substitutions, namely (16) and (17) [13]. Introducing formula (16) into (17) one obtains the dependence (10), and then equation (4) can be written in the form (18). The last formula can be differentiated with respect to time (c.f. (17)); by introducing (19) into (21) one obtains (23). As in the case of equation (13), the first two components containing the function U correspond to the classical diffusion equation, while the remaining ones can be treated as an artificial internal heat source.
Equation (23) is supplemented by the modified boundary conditions (24) and the initial condition (15). After solving the problem (23), (24), (15), the ordinary differential equation (17) with the initial condition resulting from formula (16) is solved. Finally, the solution of the ordinary differential equation (16) with the initial condition T(x, 0) = Tp allows one to determine the temporary temperature field.
Finite Difference Method
The problems presented above have been solved using different variants of the finite difference method (explicit and implicit schemes [12,16]).
All the variants of the DPLE presented above have been modeled using in-house computer programs based on the FDM; two selected solutions will be shown and compared here. In particular, the thermal processes in a thin metal film subjected to a laser pulse are considered. The first solution applies equation (23) with the internal volumetric heat source resulting from the laser action. The energy equation is supplemented by the Neumann conditions (in the adiabatic version) and the appropriate initial conditions. At the stage of numerical modeling the explicit scheme of the FDM has been used. One can see that equation (23) has a form convenient for numerical modeling: it contains simple differential operators without mixed derivatives. The same physical problem has been solved using equation (4) and the adequate boundary-initial conditions; in this version the implicit scheme of the FDM has been applied.
The following approximation of equation (23) is proposed. The well-known stability criterion for parabolic equations determines the critical time step.
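Since the paper's own difference formulas are not reproduced in this excerpt, the following is only a minimal sketch of an explicit FDM step for an equation of the diffusion-with-source form of (13)/(23), with no-flux (adiabatic) boundaries handled by ghost points. All names, grid sizes and test values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def explicit_step(u, a, dx, dt, source=None):
    """One explicit FDM step for u_t = a * u_xx + f with no-flux boundaries.

    The stability criterion for this parabolic scheme is dt <= dx^2 / (2a).
    """
    r = a * dt / dx**2
    assert r <= 0.5, "stability criterion dt <= dx^2 / (2a) violated"
    new = u.copy()
    # interior nodes: standard three-point second-difference stencil
    new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    # adiabatic boundaries via mirrored ghost nodes
    new[0] = u[0] + 2.0 * r * (u[1] - u[0])
    new[-1] = u[-1] + 2.0 * r * (u[-2] - u[-1])
    if source is not None:
        new += dt * source
    return new
```

With adiabatic boundaries and no source, repeated application of this step drives any initial profile toward a uniform temperature, which is a convenient sanity check of the scheme.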
The FDM form of the boundary conditions is given for x = 0 and for x = G, where I is the laser intensity, tp is the characteristic time of the laser pulse, δ is the optical penetration depth, R is the reflectivity of the irradiated surface and β = 4 ln 2. For x = 0 and x = G the no-flux boundary conditions are assumed: qb(0, t) = qb(G, t) = 0.
Figure 1 compares the results obtained using the algorithms discussed. Solid lines correspond to the explicit scheme of the FDM for equation (23) with boundary conditions (24) and initial condition (15), while the symbols correspond to the implicit scheme of the FDM for equation (4) with boundary conditions (7) and initial conditions (8), (9). The figure shows the temperature histories at the points x = 0 (irradiated surface), x = 20 nm and x = 40 nm. As can be seen, the results are almost the same; for example, for the first and the second variants of the FDM, the maximum temperatures at the irradiated surface are equal to 306.536 K and 306.535 K, respectively. Figure 3 illustrates the heating/cooling curves at the irradiated surface for a shorter laser pulse tp = 0.05 ps and laser intensities of 13.7 J/m² and 50 J/m². For I = 13.7 J/m² the maximum temperatures are equal to 315.12 K and 308.28 K, respectively, while for I = 50 J/m² they are equal to 355.17 K and 330.22 K. As previously, for the higher laser intensity the differences between the solutions obtained using the first- and second-order DPLE are greater. For the other intensities: I = 100 J/m²: maximum temperatures 410.34 K and 360.44 K; I = 200 J/m²: 520.68 K and 420.88 K; I = 300 J/m²: 631.02 K and 481.33 K.
It should be noted that the results presented in Figures 2 and 3 have been obtained using the implicit scheme of the FDM.
Conclusions
In this paper second-order dual phase lag models are considered. The results of numerical computations show that the obtained solutions are practically the same, despite the final forms of the DPLE being different. The models resulting from the introduction of certain substitutions are more complicated than the classical one, but on the other hand the algorithms based on the FDM are essentially simpler. The results obtained have been compared with the solutions of the first-order DPLE, and the differences are clearly visible (see the previous section).
«Life» in Tensor: Implementing Cellular Automata on Graphics Adapters
This paper presents an approach to the description of cellular automata using tensors. This approach makes it possible to employ various frameworks for organizing scientific computations on high-performance graphics adapters, that is, to automatically build parallel software implementations of cellular automata. In our work, we use the TensorFlow framework to organize computations on NVIDIA graphics adapters. As an example cellular automaton we use Conway's Game of Life. The effect of the described approach on the cellular automata implementation is estimated experimentally.
Introduction
The use of automata in the description of the behavior of dynamic systems has been known for a long time.
The key point of this approach to the description of systems is the representation of the object under study as a discrete automatic device, an automaton (state machine or transition system).
(Shalyapina N.A., Gromov M.L. «Life» in Tensor: Implementing Cellular Automata on Graphics Adapters. Trudy ISP RAN/Proc. ISP RAS, vol. 31, issue 3, 2019, pp. 217-228.)
Under the influence of input sequences (or external factors) an automaton changes its state and produces reactions. There are many types of such automata: the Moore and Mealy machines [1], the cellular automaton [2], and others. Knowledge of the features of the object under study can provide enough information to select the appropriate type of automaton for describing the object's behavior. In some cases it is convenient to use an infinite model, but finite models are most common; in the latter case, the sets of states, input actions (or states of the environment), and output reactions are finite. Our work deals with cellular automata (CA). The theory of cellular automata began to take shape quite a long time ago; the work of John von Neumann [3] might be considered the first work of cellular automata theory. Today, a large number of studies devoted to cellular automata are known [4,5]. Note that a major part of these works is devoted to simulating spatially distributed systems in physics, chemistry, biology, etc. [6]. The goal of the simulation is to find the states of the cells of a CA after a predetermined number of CA cycles. The resulting set of states in some way characterizes the state of the process or object under study (fluid flow rate at individual points, concentration of substances, etc.). Thus, the task of simulating a certain process or object by a cellular automaton can be divided into two subtasks. First, the researcher must select the parameters of the automaton (the dimension of the grid of cells, the shape of the cells, the type of neighborhood, etc.).
Second, the behavior of the selected cellular automaton must be implemented in software. Our work is focused on this second task. In itself the concept of a cellular automaton is quite simple and the idea of a software implementation is obvious. However, the number of required calculations and their structure suggest the use of modern supercomputers with a large number of cores supporting large-block parallelism. In this case the cell field of the automaton is divided into separate blocks, which are processed in parallel and independently of each other; at the end of each processing cycle the results of the individual blocks must be combined. This problem was solved in [7] in an original way. The experimental study in [7] of the efficiency of parallelization was carried out on clusters with 32 and 768 processors. Despite the high effectiveness of this approach, it has some issues. First, it assumes that a researcher has access to a cluster. Supercomputers are quite expensive and are usually the property of some collective access center [8]. Of course, after waiting a certain time in a queue, access to the cluster is possible. But another difficulty arises: special skill is needed to write parallel programs and to organize the parallel sections correctly. This means that a certain number of experimental runs are required to debug the program before use, which in turn means waiting in the cluster queue multiple times and delays the moment of launching the actual (not debugging) experiments with cellular automata. We offer an alternative approach to the software implementation of cellular automata, based on the use of modern graphics adapters.
Modern graphics adapters are also well-organized supercomputers, consisting of several specialized computational cores and allowing parallel execution of operations. Compared to clusters, graphics adapters are available to a wide range of users, and we believe their capabilities are sufficient to implement cellular automata. In addition, there are special software development kits or frameworks (for example, TensorFlow [9]) that can exploit multi-core graphics adapters and help a researcher quickly and efficiently create a software product without being distracted by parallelizing data flows and control flows. In this paper, we demonstrate an approach to the implementation of cellular automata on graphics adapters based on TensorFlow. In order to use this tool, we propose to describe the set of states of an automaton's cells by the main data structure of this framework, namely the tensor, and then to describe the evolution of the automaton in terms of tensor operations. A well-known cellular automaton, Conway's Game of Life, is used as a working example. The paper is structured as follows. Section 2 presents the basic concepts and definitions of the theory of cellular automata. Section 3 provides a description of Conway's Game of Life, its features and rules of operation. Section 4 is devoted to a detailed presentation of the proposed approach for the software implementation of cellular automata on graphics adapters. The results of computer experiments with the implementation of Conway's Game of Life and a comparison with the results of a classical sequential implementation are presented in section 5.
Preliminaries
A Moore machine (finite, deterministic, fully defined) is a 6-tuple A = ⟨S, ŝ, I, O, φ, ψ⟩, where S is the finite nonempty set of states of the machine with a distinguished initial state ŝ ∈ S, I is the finite set of input stimuli (input signals), O is a finite set of output reactions (output signals), φ: S × I → S is a fully defined transition function, and ψ: S → O is a fully defined function of output reactions. If at some moment of time the Moore machine ⟨S, ŝ, I, O, φ, ψ⟩ is in a certain state s ∈ S and the input signal i ∈ I arrives, then the machine changes its state to s′ = φ(s, i), and the signal o = ψ(s′) appears at its output. The machine starts its operation from the initial state ŝ with the output signal ψ(ŝ). It is important to note that originally Moore defined the machine so that the output signal is determined not by the final state of the transition but by the initial one (i.e., in the definition above o = ψ(s) instead of o = ψ(s′)). However, for our purposes it is more convenient to use the definition specified here. Let ℤ be the set of integers. Consider the set of all possible pairs of integers (i, j) ∈ ℤ × ℤ. With each pair (i, j) we associate some finite set of pairs of integers Ni,j ⊆ ℤ × ℤ, called the neighborhood of the pair (i, j); pairs in Ni,j will be called neighbors of the pair (i, j). The sets Ni,j must be such that the following rule holds: if the pair (p, q) is a neighbor of the pair (i, j), then the pair (p + k, q + l) is a neighbor of the pair (i + k, j + l), where k and l are arbitrary integers. Note that the cardinalities of all neighborhoods coincide and the sets have the same structure. For convenience, we assume that all neighbors in Ni,j are enumerated with integers from 1 to |Ni,j|, where |Ni,j| is the cardinality of the set Ni,j. Then we can talk about the first, second, etc. neighbor of some pair (i, j).
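The definition above can be illustrated with a small executable sketch. The class below is only an illustration of the formalism (the names `phi`, `psi` and the parity example are our own), using the text's convention that the output is determined by the new state, o = ψ(s′):

```python
from dataclasses import dataclass

@dataclass
class MooreMachine:
    """A finite, deterministic, fully defined Moore machine ⟨S, ŝ, I, O, φ, ψ⟩."""
    phi: dict    # transition function φ: (state, input) -> next state
    psi: dict    # output function ψ: state -> output signal
    state: str   # current state, initialised to the initial state ŝ

    def step(self, i):
        # change state first, then emit the output of the NEW state (o = ψ(s′))
        self.state = self.phi[(self.state, i)]
        return self.psi[self.state]
```

For example, a two-state machine tracking the parity of the number of 1s seen so far has S = {even, odd}, I = {0, 1}, and (as for CA cells) an output function that simply returns the current state.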
If the pair (p, q) is the n-th neighbor of the pair (i, j), then the pair (p + k, q + l) is the n-th neighbor of the pair (i + k, j + l).
Consider the set of Moore machines of the form Ai,j = ⟨S, ŝi,j, Sⁿ, S, φ, ψ⟩ such that ψ(s) = s. Here i and j are integers and Sⁿ is the n-th Cartesian power of the set S. The machines corresponding to the neighbors of the pair (i, j) are called neighbors of the machine Ai,j; neighboring machines are numbered in the same way as the corresponding neighboring pairs (the first neighbor, the second, etc.). We specifically note that (i) for each machine Ai,j the set of states is the same, i.e. S; (ii) for each machine Ai,j the set of output signals coincides with the set of states, that is, also S; (iii) as an output signal the machine gives its current state; (iv) all machines have the same transition function and the same function of output reaction; (v) as an input signal, machines take tuples of states (of their neighbors), the number of elements in the tuple being equal to the number of neighbors, that is, |Ni,j|; (vi) machines differ only in their initial states. Let at a given time moment the current state of the first neighbor of the machine Ai,j be s1, the state of the second neighbor s2, ..., the state of the n-th neighbor sn, where n = |Ni,j|. Then the tuple (s1, s2, ..., sn) is the input signal of the machine Ai,j at this moment. All machines accept input signals, change their states and produce output signals simultaneously and synchronously; that is, some global clock signal is assumed. The resulting set {Ai,j | (i, j) ∈ ℤ × ℤ} of Moore machines is called a two-dimensional synchronous cellular automaton (or simply cellular automaton, CA). Each individual Moore machine of this set is called a cell. The set of states of all cells of the CA at a given time moment is called the global state of the cellular automaton at this moment.
The transition rules of cells from one state to another (the function φ), the type of neighborhood of the cells (the sets Ni,j), and the number of different possible cell states (the set S) define the whole variety of synchronous two-dimensional cellular automata. For clarity, one can draw cellular automata on the plane. For this, the plane is covered with figures. The coverage can be arbitrary, but it is of course more convenient to do it in a regular way; classic covers are equal squares, equal triangles and equal hexagons. The choice of one or another method of covering the plane is dictated by the original problem the CA is used for and the selected set of neighbors. Next, the cover figures are assigned to the cells of the cellular automaton in a regular manner. For example, let the plane be covered with equal squares, so that each vertex of each square is also a vertex of the other three squares of the coverage (fig. 1a). Choose a square of this coverage arbitrarily and associate it with the cell A0,0. Let the cell Ai,j be associated with a certain square; then we associate the cell Ai+1,j with the square on the right, the cell Ai-1,j with the square on the left, the cell Ai,j+1 with the square above, and the cell Ai,j-1 with the square below (fig. 1b). Cell states are represented by the color of the corresponding square (fig. 1c). The resulting square-based representation of a CA on a plane is the classical one, and in our work we consider only this representation. For the square-based representation of a CA, the neighborhoods shown in fig. 2 are the most common. If a given cellular automaton models a process (for example, heat transfer), then different global initial states {ŝi,j | (i, j) ∈ ℤ × ℤ} of the cellular automaton correspond to different initial conditions of the process. According to the definition of cellular automata introduced above, the set of cells is infinite.
However, from a practical point of view, especially in the case of an implementation of a cellular automaton, the set of cells has to be made finite. In this case, some of the cells lack some neighbors; for them the set of neighbors and the transition function are therefore modified. Such modifications determine the boundary conditions of the process being modeled.
Conway's Game of Life
In the 1970s, the English mathematician John Conway proposed a cellular automaton called the Game of Life [10]. The cells of this automaton are interpreted as biological cells: the state «0» corresponds to a «dead» cell and the state «1» to an «alive» one. The game uses the Moore neighborhood (fig. 2b), i.e. each cell has 8 neighbors. The rules for the transition of cells from one state to another are as follows: if a cell is «dead» and has exactly three «alive» neighbors, it becomes «alive»; if a cell is «alive» and has two or three «alive» neighbors, it remains «alive»; if a cell is «alive» and has fewer than two or more than three «alive» neighbors, it becomes «dead». For convenience, the behavior of each cell of the Game of Life can be illustrated using the transition graph (fig. 3). Despite the simplicity of the functioning of the automaton, it is an object of numerous studies, since variation of the initial configuration leads to various dynamic patterns with interesting properties. Among the most interesting are moving groups of cells, gliders. Gliders not only oscillate with a certain periodicity, but also move through the space (plane). As a result of experiments, it was established that logical elements AND, OR and NOT can be built on the basis of gliders, and therefore any Boolean function can be implemented. It was also proved that using the Game of Life cellular automaton it is possible to emulate the operation of a Turing machine.
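The transition rules listed above can be written directly as a single-cell transition function φ (a plain-Python sketch, independent of the tensor implementation described later):

```python
def next_state(cell, live_neighbours):
    """Transition rule φ of Conway's Game of Life for one cell (0 = dead, 1 = alive)."""
    if cell == 0:
        # a dead cell with exactly three alive neighbours becomes alive
        return 1 if live_neighbours == 3 else 0
    # an alive cell survives with two or three alive neighbours, otherwise dies
    return 1 if live_neighbours in (2, 3) else 0
```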
Features of Conway's Game of Life Parallel Implementation
According to our definition, the set of states of a cell is finite. Without loss of generality we can therefore assume that the set of states is the set of integers from 0 to |S| - 1, where |S| is the cardinality of the set of states. Hence the global state of the cellular automaton can be represented as a matrix A, whose element Ai,j equals the current state of the cell Ai,j; we call A the matrix of the global state of the cellular automaton. If there are no restrictions on the number of cells, then the matrix A is infinite. As has already been mentioned, the number of cells has to be limited from a practical point of view, that is, a finite subset of cells must be chosen; after that, only the selected cells are considered. The ability to describe the global state of the cellular automaton by a matrix is then determined by which cells are selected. We assume that the set of cells Ai,j with 1 ≤ i ≤ m, 1 ≤ j ≤ n is selected, where m and n are two fixed natural numbers; in this case the global state matrix is obtained naturally. Since we use the TensorFlow framework for the implementation of a CA, we should work with the concepts defined in it. The main data structure in TensorFlow is a multidimensional matrix, which in terms of this framework is called a tensor. However, in many cases such a matrix may not correspond to any tensor: a tensor in n-dimensional space must have n^(p+q) components and is represented as a (p + q)-dimensional matrix, where (p, q) is the rank of the tensor, and, for example, a 2-by-3 matrix does not satisfy these restrictions. But the convenience of data manipulation provided by the framework justifies some deviations from the strict definition of a tensor. Therefore, when talking about the software implementation of a cellular automaton using TensorFlow, we will nevertheless use the term tensor.
Note the special role of the element S22 = 0.5. This element corresponds to the cell for which the number of living neighbors is calculated. Suppose the number of living neighbors of a certain dead cell is calculated. Then the result is an integer, because the component S22 is multiplied by the state of the dead cell (which equals 0) and so S22 does not contribute to the sum. The result is half-integer when the number of living neighbors of a living cell is calculated. This is important when the cell has two living neighbors: then the dead cell must remain dead, while the living cell must live. That is, if after the convolution the counted number of living neighbors is 2 (the cell is dead and has 2 living neighbors), then in its place there should be 0 in the tensor of the global state of the cellular automaton in the next cycle; if the counted number is 2.5 (the cell is alive and has 2 living neighbors), then in its place there should be 1. Constructing a convolution of the tensor T with the kernel S, we obtain a new tensor C, where at the intersection of the i-th row and j-th column there is an element corresponding to the number of living neighbors of the cell Ai,j. Note that convolving an arbitrary tensor of size m × n with a kernel of size 3 × 3 yields a tensor of size (m - 2) × (n - 2). In order to preserve the initial dimensions of the global state tensor, we set the elements in the first and last row and in the first and last column of the global state tensor to 0 and append these zero rows and columns to the result after the convolution is completed. The appended elements in formula (3) are highlighted in gray. This suggests that some of the subsequent computations are superfluous (namely the computations on the appended elements).
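The counting trick described above can be sketched in plain NumPy. This is only a sketch: the paper uses a TensorFlow convolution layer, and the 3×3 kernel of ones with centre weight S22 = 0.5 is inferred from the description above; the function names are our own:

```python
import numpy as np

def convolve_S(T):
    """Convolve the zero-padded global-state tensor T with the 3x3 kernel of
    ones whose centre weight is 0.5. The result C[i, j] equals the number of
    live neighbours of cell (i, j), plus 0.5 if that cell itself is alive."""
    rows, cols = T.shape
    C = np.zeros((rows - 2, cols - 2))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            w = 0.5 if di == 0 and dj == 0 else 1.0
            C += w * T[1 + di:rows - 1 + di, 1 + dj:cols - 1 + dj]
    return C

def life_step(T):
    """One Game of Life cycle: a cell is alive next step iff its convolved
    value lies in [2.5, 3.5]; the zero border rows/columns are re-appended."""
    C = convolve_S(T)
    inner = ((C >= 2.5) & (C <= 3.5)).astype(T.dtype)
    out = np.zeros_like(T)
    out[1:-1, 1:-1] = inner
    return out
```

A vertical blinker, for instance, turns into a horizontal one after a single step and back again after two, which confirms that the half-weight kernel encodes exactly the classical rules.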
For a global state tensor of size m × n the amount of extra computations is (2m - 2) + (2n - 2).
Hence the proportion of extra computations relative to the useful ones is ((2m - 2) + (2n - 2)) / (m · n), which becomes negligible as m and n grow. Taking into account the agreement on the half-integer value of the number of living neighbors, the integer part of the value of a component of the tensor C determines the number of living neighbors of the cell, and the presence of a fractional part means that the cell was alive at the previous step. According to the rules of the Game of Life, the tensor C must be transformed in order to determine the global state of the cellular automaton at the next step: components with values in the range [2.5, 3.5] should take the value 1 (cells are alive), and the remaining components should become 0 (cells are dead). Among the classical operations on tensors there is no operation that would express the required transformation. However, the framework used in our work was created primarily for problems of artificial intelligence, namely for the implementation of neural networks. The data flow there is a flow of tensors (a tensor as input, a tensor as output), and the computational elements that change the data are the layers of the neural network. So, for example, in our case we use a two-dimensional convolution layer with the kernel S (formula (2)) for the convolution. Any tool for neural network implementation provides a special type of layer, activation layers (layers of non-linear transformations). These layers apply an activation function (some non-linear function) to each element of the input tensor and put the result into the output tensor. TensorFlow offers a standard set of non-linear activation functions; in addition, it is possible to create custom ones. We built our own activation function based on a function from the standard set called the Rectified Linear Unit (ReLU), defined as ReLU(x) = max(0, x) (formula (4)); its graph is shown in fig. 5. Taking into account the required transformation of the components of the tensor C described above, we propose the function presented in (5). As a result of applying the function δ to each component of the tensor C, the tensor of the global state of the cellular automaton takes the form given in formula (6).
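Formula (5) itself is not reproduced in this excerpt, so the following is only one possible construction of such a δ from ReLUs, labeled as our assumption. It is exact on the half-integer grid that the convolution produces (the values of C are always multiples of 0.5), where it acts as the indicator of the interval [2.5, 3.5]:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: ReLU(x) = max(0, x)."""
    return np.maximum(0.0, x)

def delta(x):
    """A possible form of the activation δ (our construction, not formula (5)):
    a sum of four ReLU ramps that equals 1 for x in {2.5, 3, 3.5} and 0 for
    every other multiple of 0.5, i.e. the required [2.5, 3.5] indicator."""
    return relu(2*x - 4) - relu(2*x - 5) - relu(2*x - 7) + relu(2*x - 8)
```

The design choice here is that a steep ramp of width 0.5 on each side of the interval suffices, since the convolved values can never fall strictly between adjacent multiples of 0.5.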
Thus, the software implementation of Conway's Game of Life using TensorFlow is a two-layer neural network: the first layer is convolutional, with the kernel from formula (2); the second is an activation layer with the activation function from formula (5).
Experimental results
We have implemented the described approach for the Game of Life cellular automaton in Python. Since no one in our group was familiar with TensorFlow, but we had some experience with Keras [11], the implementation was built using Keras as a wrapper over TensorFlow; Keras is a high-level interface to various low-level artificial intelligence libraries, including TensorFlow. The resulting program was run on a graphics adapter with CUDA support. For comparison with the classical implementation of the Game of Life on a uniprocessor system, we used the implementation of [12]. An R-pentomino located in the middle of the field (fig. 4) was used as the initial configuration. For small fields, the time needed to compute the states of the cellular automaton is much less than the time of transmission of information. As the field size grows, the computation time of the cellular automaton state becomes significant and the multiprocessor implementation on the graphics adapter begins to outperform the single-threaded one. Obviously, the dependence of the execution time of the programs on the "length" m of the side of the square field of the Game of Life must be parabolic: with the growth of m, the number of cells grows as m², and each cell needs to be processed once per cycle, so the number of operations must be of the order of m². According to the obtained results we constructed regression polynomials of the second degree; the regression curves are in good agreement with the experimental data (fig. 7, 8). It may seem that for a multithreaded implementation the dependency should be different. However, when the number of cells becomes much larger than the number of cores in a multi-core system (in our case the graphics adapter had 768 cores), processing is performed block by block: first one block of 768 cells, then another, etc. Thus m²/K operations are done, where K is the number of cores, which is also of the order of m².
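The second-degree regression mentioned above can be sketched with NumPy's `polyfit`. The timing data below are synthetic (generated from an assumed quadratic model purely for illustration; the paper's measured timings are not reproduced in this excerpt):

```python
import numpy as np

# side lengths m and hypothetical execution times t, generated from an
# assumed model t = a*m^2 + b*m + c (illustrative values, not measurements)
m = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
a, b, c = 2.0e-4, 5.0e-3, 0.3
t = a * m**2 + b * m + c

# least-squares fit of a degree-2 regression polynomial, as in the paper;
# polyfit returns coefficients in descending order of degree: [a, b, c]
coeffs = np.polyfit(m, t, deg=2)
```

On real, noisy timings the fitted coefficients would of course only approximate the underlying constants, and the quality of the fit can be judged from the residuals, as the paper does graphically in figs. 7 and 8.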
Conclusions
In this paper, a tensor approach to the software implementation of cellular automata is described and programmatically implemented. The approach is focused on launching programs on multi-core graphics adapters. The program is implemented in Python using TensorFlow, with Keras as an interface to TensorFlow. TensorFlow allows one to automatically generate and run multi-threaded programs on multi-core graphics adapters. The effectiveness of the developed approach was shown in a series of computer experiments, for which Conway's Game of Life was chosen. If the number of cells in the automaton is less than or equal to the number of cores, then the maximum acceleration can be observed; if the number of cells exceeds the number of cores, then the parallel sections of the program are executed sequentially. This means that for a very large playing field the dependence remains parabolic even when using a graphics adapter, which is confirmed by the regression analysis.
Superoxide dismutase SodB is a protective antigen against Campylobacter jejuni colonisation in chickens
required and are predicted to limit the incidence of human campylobacteriosis. Towards this aim, a purified recombinant subunit vaccine based on the superoxide dismutase (SodB) protein of C. jejuni M1 was developed and tested in White Leghorn birds. Birds were vaccinated on the day of hatch and 14 days later with SodB fused to glutathione S-transferase (GST) or purified GST alone. Birds were challenged with C. jejuni M1 at 28 days of age and caecal Campylobacter counts determined at weekly intervals. Across three independent trials, the vaccine induced a statistically significant 1 log10 reduction in caecal Campylobacter numbers in vaccinated birds compared to age-matched GST-vaccinated controls. Significant induction of antigen-specific serum IgY was detected in all vaccinated birds; however, the magnitude and timing of SodB-specific IgY did not correlate with lower numbers of C. jejuni. Antibodies from SodB-vaccinated chickens detected the protein in the periplasm and not in membrane fractions or on the bacterial surface, suggesting that the protection observed may not be strictly antibody-mediated. SodB may be useful as a constituent of vaccines for control of C. jejuni infection in broiler birds; however, the modest protection was observed late relative to the life of broiler birds and further studies are required to potentiate the magnitude and timing of protection.
Introduction
Campylobacter is the leading cause of foodborne diarrhoeal illness in the developed world. In the United Kingdom in 2013 there were 66,575 laboratory-confirmed cases of human campylobacteriosis [1], however for every case captured by national surveillance a further 9.3 are estimated to be undiagnosed in the community and the true incidence may therefore exceed 685,000 cases per annum [2]. The European Food Standards Agency estimated that there are nine million cases of human campylobacteriosis per year across EU27 countries, with the disease and its sequelae (including inflammatory neuropathies and reactive arthritis) causing 0.35 million disability-adjusted life years at a cost of €2.4 billion per annum [3].
Epidemiology unequivocally implicates poultry as the key source of Campylobacter affecting humans. Over 90% of laboratory-confirmed human campylobacteriosis is due to C. jejuni and source attribution studies indicate that up to 80% of such cases may be due to raw poultry meat [3]. The strategic case to control Campylobacter in farmed poultry is compelling, with a year-long UK-wide survey reporting contamination of 73% of raw chicken on retail sale [4]. Such levels are scarcely different from a UK-wide survey in 2007/8 [5]. With a recent census indicating that c. 900 million broilers are reared each year in the UK (c. 60 billion worldwide) the scale of the problem is vast. Though chilling and topical application of chlorinated water, steam, organic acids or bacteriophages can achieve modest reductions in surface contamination, control of Campylobacter in broilers prior to slaughter would substantially reduce cross-contamination in the abattoir and pathogen entry into the food chain.
Control of Campylobacter may also improve poultry welfare and productivity, as recent research indicates that C. jejuni causes prolonged inflammatory responses, damage to the intestinal mucosa and diarrhoea in some commercial broiler lines [6]. Moreover, it was reported that C. jejuni adversely affects body mass gain in broilers [7], and Campylobacter-positive birds are also more likely to exhibit digital dermatitis and signs of colibacillosis [8], though causal links have yet to be formally proven.
Previous studies indicate that various classes of recombinant Campylobacter antigens can elicit protection against colonisation in chickens, including major flagellar subunits [9,10], membrane transport proteins [11,12] and adhesins [13]. However, protection often required large quantities of antigen or was observed too late post-immunisation to be relevant to modern broiler production, where birds often enter the food chain at 6-7 weeks of age. One possible target for improved vaccines is the superoxide dismutase protein SodB. SodB influences intestinal colonisation of chickens by C. jejuni [14], and a sodB mutant was reported to be defective in entry into and survival in cultured intestinal cells [15]. Moreover, a vaccine against Helicobacter pylori based on recombinant SodB was protective in a murine model [16]. SodB has a high level of sequence conservation amongst sequenced Campylobacters (99%), unlike some candidate antigens evaluated to date. Based on these data, we chose to evaluate a SodB-based subunit vaccine in chickens.
Bacterial strains and culture methods
Escherichia coli XL1 (Stratagene, USA) was used for maintenance of plasmid constructs and E. coli Rosetta BL21 pLysE (Merck Millipore, UK) was used for protein expression. E. coli strains were grown in Luria-Bertani (LB) broth or on LB agar at 37 °C, unless otherwise indicated, with shaking at 200 rpm for liquid cultures. C. jejuni M1 was used as a source of DNA for gene cloning and as the challenge strain in vaccination experiments as described [12]. C. jejuni 11168H was used to assess the cross-reactivity and subcellular localisation of SodB. C. jejuni strains were grown on modified charcoal-cefoperazone-deoxycholate agar (mCCDA) (Oxoid, UK) or in Mueller-Hinton broth (MH; Oxoid) at 37 °C in a microaerophilic workstation (Don Whitley Scientific, UK) under a low-oxygen atmosphere (5% O2, 5% CO2 and 90% N2). Liquid cultures of Campylobacter were grown with shaking at 400 rpm using a table-top shaker (IKA, Germany) under the same low-oxygen conditions. Antibiotics were used at final concentrations of 100 µg/ml ampicillin and 34 µg/ml chloramphenicol where appropriate.
Constructs for expression of recombinant antigens
The C. jejuni M1 sodB gene was amplified using primers 5′-CGCGCGGGATCCATGTTTGAATTAAGAAAATT-3′ (forward) and 5′-CGCGCGGCGGCCGCTTATTTTACAGGGTGAAGTT-3′ (reverse). The cjaA gene was amplified using the primers 5′-GGGCTGGCAAGCCACGTTTGGTG-3′ (forward) and 5′-CCGGGAGCTGCATGTGTCAGAGG-3′ (reverse). Both genes were separately cloned as in-frame C-terminal fusions to glutathione S-transferase (GST) in the pGEX-4T1 plasmid (GE Healthcare, UK) through ligation-dependent cloning using the BamHI (5′ end) and NotI (3′ end) restriction sites, and transformed into E. coli XL1 Blue. The sodB and cjaA genes were re-amplified using the same forward primers described above, but with the reverse primers 5′-CGCGCGCGCGGTCGACTTATTTTACAGGGTGAAGTT-3′ for sodB and 5′-CGCGCGCGCGGTCGACTTAGATCTTGCCGCCCTCAATA-3′ for cjaA, and cloned via the BamHI (5′ end) and SalI (3′ end) restriction sites of the pMal-p2X plasmid (New England Biolabs, UK) to create in-frame fusions with the maltose-binding protein (MBP). The Phusion proof-reading DNA polymerase (Life Sciences, UK) was used to generate amplicons for cloning, using the two-step cycling conditions recommended by the manufacturer. All plasmid constructs were verified by dideoxy chain-termination sequencing (Source Bioscience, UK) and transformed into electrocompetent E. coli BL21 pLysE Rosetta for protein expression and production.
Expression, purification and validation of recombinant Campylobacter antigens
Cultures of 500 ml to 2 l of the E. coli BL21 pLysE Rosetta cells encoding GST- and MBP-antigen fusions were inoculated at a 1:100 dilution from a stationary-phase overnight culture and incubated for 3 h, with shaking at 200 rpm, at either 28 °C (GST-CjaA and MBP-CjaA) or 37 °C (GST, GST-SodB and MBP-SodB). Cultures were induced with either 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG; Thermo Scientific, UK; GST-CjaA, MBP-CjaA) or 1 mM IPTG (GST, GST-SodB, MBP-SodB), based on pilot studies to optimise expression and solubility. GST and MBP fusion proteins were purified using glutathione sepharose (GE Healthcare, UK) or amylose resin beads (New England Biolabs, UK), respectively, in batch format, following the manufacturers' instructions. Bound GST and MBP fusion proteins were eluted for 1 h (GST: 50 mM Tris-HCl, 40 mM glutathione, pH 8; MBP: as suggested by the manufacturer) in a volume double that of the beads. Beads were eluted three times, and the fusion protein-containing eluates were analysed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) using 10% Mini-Protean TGX gels (BioRad, UK) and silver staining (Pierce, Life Technologies, UK).
SDS-PAGE and Western blotting
Protein concentration was determined using the QuickStart Bradford Assay (BioRad, UK), following the manufacturer's protocol. Recombinant proteins were transferred to polyvinylidene difluoride (PVDF) membrane using the TransBlot Turbo system (BioRad, UK) and analysed by Western blot with anti-GST (Santa Cruz Biotech, USA) or anti-MBP antibody (New England Biolabs, UK) at 1:10,000 dilutions. Bound antibodies were detected using appropriate horseradish peroxidase (HRP)-conjugated secondary antibodies at 1:10,000 dilutions. To assess whether the cloned antigens were immunogenic following natural Campylobacter infection, pooled serum from Campylobacter-infected, non-vaccinated White Leghorn birds collected three weeks post-infection was used at a 1:100 dilution. Bound serum IgY was detected with an HRP-conjugated rabbit anti-chicken IgY (Sigma-Aldrich, UK) at a 1:3000 dilution. To analyse the subcellular localisation of SodB, a Western blot of subcellular fractions (Section 2.9) was probed with sera from GST-SodB vaccinated birds as above. Blots were developed using Clarity ECL (BioRad, UK) and autoradiography (Amersham Hyperfilm ECL, GE Lifesciences, UK).
Vaccination and challenge experiments
All procedures were conducted under Home Office project licence PPL 60/4420, according to the requirements of the Animals (Scientific Procedures) Act 1986 and with the approval of local ethical review committees. A total of 180 White Leghorn chickens, obtained on the day of hatch from a Home Office licensed breeding establishment, were used. Eggs were incubated and hatched under specified-pathogen-free conditions. Animals were housed in groups of up to 20 in colony cages. Groups were of mixed sex and birds were wing-tagged for individual identification. Water and sterile irradiated feed based on vegetable protein (DBM Ltd., UK) were provided ad libitum.
Three separate trials were conducted, each including vaccination with GST or GST-CjaA as negative and positive controls, respectively, alongside GST-SodB. Data for the GST and GST-CjaA control groups were available from an additional experiment that did not test GST-SodB concomitantly but which had an identical design to the three experiments that tested GST-SodB in parallel. The experimental design was essentially as described [12], with the following modifications: White Leghorn chickens rather than Light Sussex birds were used, and GST-CjaA was used rather than a 6xHis-CjaA fusion [12]. Briefly, twenty birds were used per experiment per group. A mechanical dispenser and high-accuracy syringes (Hamilton-Bonaduz, Switzerland) were used for both vaccinations and oral gavage. Vaccination was subcutaneous, in volumes of 50 µl on each side of the thorax, using 21 gauge needles. Antigen preparations were mixed in a 1:1 ratio with TiterMax Gold® (Sigma-Aldrich, UK) and each bird received 4.3 × 10⁻¹⁰ mole of recombinant protein, for parity with our earlier studies using 6xHis-CjaA [12]. Birds were given the primary vaccination on the day of hatch and an identical booster 14 days later, and were challenged with 10⁷ colony-forming units (CFU) of C. jejuni M1 at 28 days post-hatch (dph) by oral gavage. Starting one week after challenge, between 4 and 6 birds were removed at weekly intervals to enumerate caecal Campylobacter by plating 100 µl of 10-fold serial dilutions of caecal contents in phosphate-buffered saline (PBS) on mCCDA plates. At the same time, samples of blood and bile were collected for the measurement of humoral responses. Blood was stored at 4 °C overnight to allow coagulation, after which blood cells were pelleted by centrifugation at 3000 × g for 10 min. Serum was collected and stored at −80 °C until use.
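The serial-dilution plating described above implies a simple back-calculation from plate counts to CFU per gram. The following is a minimal sketch of that arithmetic; the colony count and the assumption of a ~1 g/ml initial homogenate are hypothetical illustrations, not values from the study:

```python
import math

# 100 ul (0.1 ml) of each 10-fold dilution is plated on mCCDA, as in
# the Methods. Colony count and homogenate density are hypothetical.
def cfu_per_gram(colonies, dilution_exponent, plated_volume_ml=0.1):
    """Back-calculate CFU/g of caecal contents from a countable plate.

    colonies          -- colonies counted on the plate (hypothetical)
    dilution_exponent -- n for the 10^-n dilution, assuming the undiluted
                         homogenate contains ~1 g of contents per ml
    plated_volume_ml  -- volume spread on the plate
    """
    return colonies * (10 ** dilution_exponent) / plated_volume_ml

count = cfu_per_gram(74, 4)         # e.g. 74 colonies on the 10^-4 plate
log10_count = math.log10(count)     # the scale used for the statistics
```

In practice only plates in the countable range (roughly 30-300 colonies) would be used for the back-calculation.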
Analysis of humoral immune responses following vaccination
Enzyme-linked immunosorbent assays (ELISAs) were carried out to measure antigen-specific serum IgY and secretory bile IgA (sIgA) against SodB and CjaA. To improve the specificity of detection of antigen-specific antibodies, MBP fusions were used as the capture antigen in these assays. The assays were done as previously described [12]; however, no blocking step was used for the measurement of serum IgY. Coating conditions were optimised using chequerboard analyses for IgY and IgA individually. To analyse serum IgY, 96-well plates were coated with 0.5 µg/ml of MBP-CjaA or 2 µg/ml of MBP-SodB. Serum was diluted 1:500 for GST-CjaA and 1:250 for GST-SodB vaccinated birds. To analyse secretory IgA (sIgA), each recombinant protein was coated at a concentration of 1 µg/ml and a 1:250 bile dilution was used.
Immunofluorescence microscopy
For assessment of subcellular localisation of SodB within C. jejuni 11168H cells pooled serum from SodB-vaccinated chickens collected before challenge was used at a 1:500 dilution to stain Campylobacter cells bound to poly-l-lysine treated glass cover slips. Where indicated, bacterial cells were permeabilised with 10% (v/v) Triton X-100 for 30 min. A goat anti-chicken IgY conjugated to AlexaFluor-488 secondary antibody (Abcam, UK) was used for detection at a 1:500 dilution. Cover slips were mounted on glass slides using Prolong Gold (Life Technologies). Images were captured using fluorescent and light microscopy on a Leica DML (Leica, Germany) microscope.
Generation of subcellular fractions of C. jejuni
For the preparation of the periplasmic fraction of C. jejuni, an osmotic shock procedure was used as described [17], with modifications given in Supplementary Material. The outer membrane and inner membrane preparations were made as described [18] with modifications as given in Supplementary Material. Subcellular fractions of C. jejuni 11168H were used as preparations of high purity were already available from other work.
Statistical analysis
Statistical analyses were performed using Minitab 17 (Minitab, UK). Individual caecal Campylobacter counts were logarithmically transformed and the arithmetic mean of the transformed counts was calculated. Significant reductions compared to control groups were determined using post-hoc Dunnett tests following fitting of a second-order hierarchical general linear model (GLM) that took into account interactions between time of sample collection and treatment group. The first two outliers in each group, identified by the GLM as having high residuals of over 2.5 log10 CFU/g, were removed from the data. To test whether humoral immune responses were significantly induced relative to control birds, the mean OD450 reading was calculated and a two-tailed Student's t-test was used to detect significant increases in antibody levels. Antigen-specific fold changes in serum IgY OD450 in individual birds were calculated by dividing the OD450 measure for each vaccinated bird by the average of the control group at each sampling time-point. Correlations between serum IgY levels and caecal Campylobacter counts in individual birds were assessed by fitting a linear regression to the data. P values of ≤0.05 were considered significant.
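The log10 transformation and the mean-reduction arithmetic underlying these comparisons can be illustrated outside Minitab. This sketch uses hypothetical counts (not the study's data) and does not reproduce the GLM/Dunnett analysis itself:

```python
import math

# Hypothetical caecal counts (CFU/g) for a control and a vaccinated group.
# Only the log10 transformation and mean-reduction arithmetic are shown;
# the hierarchical GLM and post-hoc Dunnett tests are not reproduced.
control    = [2.0e8, 5.0e7, 1.3e8, 8.0e7]
vaccinated = [9.0e6, 4.0e6, 2.0e7, 6.0e6]

def mean(xs):
    return sum(xs) / len(xs)

log_control    = [math.log10(x) for x in control]
log_vaccinated = [math.log10(x) for x in vaccinated]

reduction = mean(log_control) - mean(log_vaccinated)  # log10 reduction
fold_change = 10 ** reduction                         # equivalent fold drop
```

Note that the arithmetic mean of log-transformed counts corresponds to the geometric mean on the original CFU/g scale, which is why differences of log means are reported as log10 reductions.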
Recombinant protein purification and validation
The preparations of GST and GST-SodB used in all vaccination trials and a typical preparation of GST-CjaA (from the first vaccination trial) are shown in Fig. 1A. In Western blots using GST-specific antibody the GST-SodB preparation was detected as a single species whereas the GST-CjaA preparation appeared to contain a species of the size of GST only as well as the dominant GST-CjaA fusion protein (Fig. 1B). A similar pattern was observed previously when CjaA was expressed as a fusion to TetC [12]. MBP fusions were similarly validated with an anti-MBP antibody (data not shown). To determine if the proteins are recognised during C. jejuni infection, pooled serum from unvaccinated chickens challenged with C. jejuni M1 was used for Western blots. The serum reacted to GST-CjaA but not GST-SodB or GST alone (Fig. 1C), implying that of the antigens tested only CjaA is naturally immunogenic following Campylobacter infection with the M1 strain, at least at the limit of detection of the method. However, owing to use of denaturing SDS-PAGE only linear epitopes would be detected.
Vaccination of chickens with recombinant SodB elicits a statistically significant reduction in caecal Campylobacter colonisation
We evaluated the impact of vaccination of chickens with GST-SodB on protection against homologous challenge with C. jejuni in three independent trials. In these trials, GST-CjaA was tested concomitantly, as CjaA was previously demonstrated to be protective when given as a 6xHis-tagged recombinant protein [12]. GST alone was given as a negative control. No adverse effects of vaccination were noted in any of the experimental animals and no obvious clinical signs were induced by the challenge with C. jejuni M1 in any of the birds. Caecal Campylobacter loads were determined at post-mortem examination at weekly intervals following challenge, as longitudinal sampling by cloacal swabbing is less reliable for obtaining viable counts [19]. Across 3 biological replicates (4-6 birds sampled at each time point per group, per replicate), the groups vaccinated with GST-SodB had a significantly different course of caecal Campylobacter colonisation compared to both the GST (P = 0.001) and GST-CjaA (P < 0.001) vaccinated groups tested across 4 biological replicates (Fig. 2A). Post-hoc Dunnett's tests indicated that the 1.3 log10 reduction observed at 56 dph in the GST-SodB vaccinated group compared to the GST vaccinated group was significant (P < 0.001). At 49 dph a reduction of 0.75 log10 was also observed compared to the GST group; however, this was not statistically significant (P = 0.19). Significant reductions were observed in the GST-SodB vaccinated group compared to the GST-CjaA vaccinated group at both 49 (P = 0.048) and 56 dph (P < 0.001). No reductions were observed in the GST-CjaA vaccinated birds relative to those given GST alone at any time interval.
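For intuition, log10 reductions convert to fold changes as ten raised to the reduction; a one-line check of the effect sizes reported above:

```python
# Converting the reported log10 reductions into approximate fold changes.
fold_56 = 10 ** 1.3    # ~20-fold lower caecal load at 56 dph
fold_49 = 10 ** 0.75   # ~5.6-fold at 49 dph (not statistically significant)
```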
Both SodB-and CjaA-based vaccines induced antigen-specific antibody responses following vaccination
To measure antigen-specific humoral responses, serum IgY and bile IgA responses against C. jejuni M1 antigens in vaccinated birds were quantified by ELISA. Campylobacter antigens were expressed as MBP fusions to separate responses against the C. jejuni antigens from those against the GST fusion partner. A significantly higher level of antigen-specific serum IgY was induced in the GST-SodB and GST-CjaA vaccinated groups compared to GST vaccinated groups at all time-points measured (Fig. 2). However, no significant induction of antigen-specific bile IgA was detected in either of the vaccinated groups at any of the time points (Fig. 2), similar to previous observations [12]. Further, at the level of individual birds the magnitude of antigen-specific serum IgY responses did not correlate with caecal Campylobacter counts for either of the antigens (Fig. 3).
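The per-bird fold-change and correlation checks can be sketched as follows. All OD450 readings and counts here are hypothetical placeholders, and a simple Pearson coefficient stands in for the linear regression fit used in the paper:

```python
# Hypothetical per-bird data: antigen-specific serum IgY fold change
# (bird OD450 / mean OD450 of GST controls at that time point) versus
# caecal load, illustrating the correlation check in the Methods.
control_od = [0.21, 0.18, 0.24, 0.20]
baseline = sum(control_od) / len(control_od)

bird_od    = [0.83, 0.62, 0.45, 0.94, 0.55]
fold       = [od / baseline for od in bird_od]
log_counts = [6.1, 7.4, 6.8, 5.9, 7.0]        # log10 CFU/g, hypothetical

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

r = pearson_r(fold, log_counts)
```

An r close to zero with these per-bird data would match the paper's conclusion that IgY magnitude did not track caecal load.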
SodB is an intracellular protein in C. jejuni cells
The subcellular localisation of SodB within C. jejuni 11168H cells was determined by Western blotting of subcellular fractions and by immunofluorescence microscopy, in order to assess the likelihood of directly neutralising antibodies playing a role in protection following vaccination. Sera from GST-SodB vaccinated birds detected SodB only in the periplasmic fraction of C. jejuni 11168H, and not in the outer or inner membrane fractions (Fig. 4A). The purity of the fractions was demonstrated by Western blotting with αCapA (against an outer membrane auto-transported adhesin [20]) or αMfrA (against a periplasmic fumarate reductase subunit associated with an inner membrane complex [21]; Fig. 4A). As a cytoplasmic fraction was not examined, we cannot be certain that SodB exists only in the periplasm of C. jejuni. Immunofluorescence microscopy using serum from GST-SodB vaccinated birds detected specific staining of permeabilised C. jejuni cells, but not of non-permeabilised cells (Fig. 4B), supporting a lack of surface exposure of SodB, at least within the limit of detection of the method. Secretion of SodB was not assessed in this study. SodB lacks a predicted signal peptide for secretion in Campylobacter and no evidence of other iron-based superoxide dismutases being secreted in bacteria is available in the literature.
Discussion
Towards the aim of developing a vaccine to control Campylobacter in its primary reservoir, we evaluated the efficacy of purified GST-SodB in reducing Campylobacter colonisation in chickens. The vaccine reduced caecal colonisation by approximately 1 log10 at 49 and 56 dph relative to GST-vaccinated birds; however, only the reduction at 56 dph proved to be statistically significant. The GST-CjaA vaccine was not protective, unlike in our previous study [12], which could be due to changes in the design of the experiments. For example, the line of birds used was White Leghorn rather than Light Sussex, and the fusion partner and affinity purification processes were different. Further, the existence of a truncated form of GST-CjaA (Fig. 1A and B) meant that the molar quantity of CjaA received by the GST-CjaA vaccinated birds was lower than when the 6xHis-CjaA fusion was evaluated, and the current study used the Gold version of TiterMax® as adjuvant, which is further optimised for the promotion of humoral responses. The lack of a protective effect using GST-CjaA should not cast doubt on the use of GST fusion vaccines for control of Campylobacter: previous studies have demonstrated protection against colonisation and clinical symptoms using GST-PorA in a murine model of campylobacteriosis [22], and protection against colonisation through the use of a combined GST- and 6xHis-tagged FlpA vaccine in a chicken model [13].
Though the GST-SodB-based vaccine was protective, both immunofluorescence microscopy of whole C. jejuni cells and Western blotting of subcellular fractions indicated the absence of SodB from the bacterial surface. This is consistent with the subcellular localisation of SodB within E. coli [23], and indicates that directly neutralising antibodies binding to the bacteria may not play a major role in protection. This is supported by the observation that bacteria were not agglutinated when mixed 1:1 (v/v) with serum from GST-SodB vaccinated birds collected at the time of challenge (data not shown). Furthermore, despite the significant induction of humoral responses by the SodB- and CjaA-based vaccines compared to GST-vaccinated birds, levels of antigen-specific serum IgY did not correlate with caecal Campylobacter counts in individual birds (Fig. 3), nor was the peak of antigen-specific IgY coincident with the timing of the protective effect (Fig. 2). In addition, both vaccines failed to induce detectable antigen-specific biliary IgA at any of the time intervals studied. Further characterisation of the nature and consequences of cell-mediated and humoral responses in protection against Campylobacter colonisation will help to refine the design of vaccines and inform the selection of adjuvants. Vaccination of transgenic chickens lacking the Ig heavy-chain J segment [24] would allow the role of antibody in vaccine-mediated protection to be formally established.
Quantitative risk assessments predict that even a relatively modest hundred-fold reduction of Campylobacter on broiler carcasses could reduce the incidence of human disease due to chicken consumption by 30-fold [25]. Even though protective vaccines against Campylobacter in chickens have been described, they each present drawbacks that hinder field application. Some are costly to produce [26], others pose the challenge of attenuated live vectors that persist at the point of entry into the food chain [11,12], and others require very high doses to obtain protection [13]. The GST-SodB vaccine described herein has the advantages of high sequence conservation and high solubility in aqueous medium, making it easy to produce and deliver under experimental conditions. However, a successful field vaccine is likely to require vectoring due to benefits in cost and ease of use. Our study shows proof-of-potential for anti-Campylobacter vaccination using SodB in chickens and adds an additional protective antigen to the limited repertoire of those described in the literature to date.
Conflict of interest statement
None of the authors have any conflicts of interest. Zoetis did not participate directly in the design and evaluation of the vaccines described.
Adult Limbal Neurosphere Cells: A Potential Autologous Cell Resource for Retinal Cell Generation
The corneal limbus is a readily accessible region at the front of the eye, separating the cornea and sclera. Neural colonies (neurospheres) can be generated from the adult corneal limbus in vitro. We have previously shown that these neurospheres originate from neural crest stem/progenitor cells and that they can differentiate into functional neurons in vitro. The aim of this study was to investigate whether mouse and human limbal neurosphere cells (LNS) could differentiate towards a retinal lineage both in vivo and in vitro following exposure to a developing retinal microenvironment. In this article we show that LNS can be generated from adult mice and aged humans (up to 97 years) using a serum-free culture assay. Following culture with developing mouse retinal cells, we detected retinal progenitor cell markers, mature retinal/neuronal markers and sensory cilia in the majority of mouse LNS experiments. After transplantation into the sub-retinal space of neonatal mice, mouse LNS cells expressed photoreceptor-specific markers, but no incorporation into host retinal tissue was seen. Human LNS cells also expressed retinal progenitor markers at the transcriptional level, but mature retinal markers were not observed in vitro or in vivo. These data highlight that mouse corneal limbal stromal progenitor cells can transdifferentiate towards a retinal lineage. Complete differentiation is likely to require more comprehensive regulation; however, the accessibility and plasticity of LNS make them an attractive cell resource for future study and ultimately therapeutic application.
Introduction
Retinal diseases are the leading cause of untreatable blindness worldwide. These conditions include age-related macular degeneration (AMD) and a wide spectrum of inherited retinal diseases. Irreversible visual impairment arises from a gradual loss of the light-sensing neurons, the photoreceptors, and/or their supportive cells, the retinal pigment epithelium (RPE). Unlike lower vertebrates, adult mammals cannot regenerate retinal neurons. The visual disability caused by these diseases carries a formidable clinical and socioeconomic burden in western countries [1].
Cell based therapies are an attractive approach to treat retinal disease [2]. They offer the potential to restore functional vision. Recent studies have demonstrated that transplanted photoreceptor precursor cells can form synaptic connections with host retina and improve visual function in animal models of retinal degeneration [2][3][4]. However, identifying practical cell sources to generate sufficient functional cells for transplantation remains challenging. Utilizing embryonic or fetal tissue is difficult due to limited resources, ethical issues or risks of tumour formation [5]. In addition, transplant rejection may occur due to chronic immune responses. This has been observed after transplantation with a 90% loss of integrated allogeneic photoreceptors by 4 months, and nearly 100% loss at 6 months [6]. Therefore immune-matched autologous cell resources have considerable advantages.
Autologous somatic cells can be genetically reprogrammed into induced pluripotent stem cells (iPSCs), an embryonic stem cell-like state, and then differentiated into cells of all three germ layers, including a retinal lineage with the production of photoreceptors and RPE cells [7]. These iPSC-derived cells have been transplanted into animal models of retinal degeneration and have shown promising results [8,9]. However, with this differentiation method a risk of tumour formation remains due to contamination with undifferentiated cells [10]. Recently, a new 3D culture method has successfully produced a larger number of 'integration-competent' photoreceptor cells from ESCs, in which the process of differentiation recapitulates the in vivo development of the optic cup [11,12]. This 3D culture protocol is also based on Matrigel, a solubilised basement membrane derived from murine sarcomas. It contains undefined xenogenic growth factors, which prevents the protocol from producing clinical-grade transplantable retinal cells. Hence, potential adverse effects still need to be carefully addressed prior to iPSC-based cell therapy.
Adult stem/progenitor cells are an attractive alternative autologous cell resource. Studies have shown the plasticity of these cell types. They can be induced to transdifferentiate toward lineages other than that of their origin [13][14][15]. Certain cell types can also de-differentiate into multipotent progenitor cells that give rise to cells that express retinal specific markers. This includes ciliary body (CB) epithelium and retinal Müller glial (MG) cells, although their potential remains controversial [16][17][18][19][20][21]. In addition, routine safe and practical surgical techniques do not exist to harvest them. Therefore they are unlikely to be a practical autologous cell resource in the immediate future.
In contrast, the corneal limbus is a readily accessible area, where the superficial layers are amenable to tissue harvesting. Several groups have reported generation of neural colonies (neurospheres) from the cornea/limbus by neurosphere assay [22,23]. This utilises a well-defined suspension culture system and is thus more appropriate for the derivation of cells for clinical application. Zhao et al. showed that rat limbal cells cultured as neurospheres expressed photoreceptor-specific markers following co-culture with neonatal retinal cells; the co-culture condition provides a photoreceptor-promoting microenvironment [15]. However, it remains unknown whether LNS from other species, particularly from humans and mice, can give rise to retinal-like cells. Their ability to generate photoreceptor-like cells in vivo and to integrate into host retina is yet to be proven. In addition, the number of adult stem/progenitor cells normally decreases with age. It is thus important to investigate whether LNS can be cultured from aged human eyes and used as an autologous cell resource in age-related diseases. Here, we investigate LNS derived from mice and humans to extend the knowledge of limbal cells to other species.
We have previously conducted a comprehensive characterization of mouse LNS regarding their self-renewal capacity, origin and ultrastructure, and shown that neurospheres derived from the corneal limbus are neural crest-derived limbal stromal stem/progenitor cells. For the first time, we demonstrated that functional neural-like cells can be derived from neural crest-derived limbal cells [24]. The aim of this study is now to investigate whether mouse and human limbal neurosphere cells (LNS) can differentiate into retinal-like cells both in vivo and in vitro after exposure to a developing retinal microenvironment.
Animals
The use of animals in this study was in accordance with the ARVO statement for the use of animals in Ophthalmic and Vision Research and the regulations set down by the UK Animals (Scientific Procedures) Act 1986. The protocol was approved by the UK Home Office. All surgery was performed under isoflurane inhalation anaesthesia, and every effort was made to minimize suffering.
Male C57BL/6 mice were maintained in the animal facility of the University of Southampton. Adult mice (6-8 weeks old) were used for corneal limbal cell culture, differentiation, and transplantation studies. Postnatal (PN) day 1-3 mice were used for isolation of retina to provide a conditioned retinal development environment in vitro and as recipients for sub-retinal transplantation of LNS cells.
Cell culture
Human limbal tissues consented for research use were requested from the Corneal Transplant Service Eye Bank in Bristol (CTS Eye Bank, http://www.bristol.ac.uk/clinicalsciences/research/ophthalmology/tissue-bank/eye-bank/). The study was approved by the Southampton & South West Hampshire Research Ethics Committee (A). The use of human fetal retinas followed the guidelines of the Polkinghorne Report, and was approved by the Southampton & South West Hampshire Local Research Ethics Committee. Written informed consent from the donor or the next of kin was obtained for use of human samples in this research.
Immunocytochemistry/immunohistochemistry
Cells were fixed with 4% paraformaldehyde (PFA, pH 7.4, Sigma-Aldrich) for 15-20 min at 4 °C. Cells or tissue slides were permeabilized and blocked with 0.1 mM phosphate-buffered saline (PBS) supplemented with 0.1% Triton X-100 (Sigma-Aldrich) and 5% donkey blocking serum (DBS, Sigma-Aldrich) for 0.5-1 h at room temperature (rt), prior to addition of primary antibodies. Specific IgG secondary antibodies (Alexa Fluor 488- or 555-conjugated, 1:500; Invitrogen) were incubated at rt for 1-2 h. Negative controls omitted the primary antibody. Nuclei were counterstained with 10 ng/ml 4′,6-diamidino-2-phenylindole (DAPI, Sigma-Aldrich). Images were captured using a Leica DM IRB microscope or a Leica SP5 confocal laser scanning microscope (Leica Microsystems UK Ltd, Buckinghamshire, UK). To quantify the percentage of cells expressing a particular phenotypic marker, the number of positive cells was determined relative to the total number of cells (DAPI-labelled nuclei). A total of 500-1000 cells from 6 random fields were counted per marker. The antibodies used are listed in Table S1 in File S1.
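The marker quantification step reduces to pooled counting arithmetic across fields; a minimal sketch with hypothetical (positive, total) field counts that respect the 500-1000 cells per marker stated above:

```python
# Six random fields, each a (marker-positive, DAPI-total) pair; all
# numbers are hypothetical illustrations, not data from the study.
fields = [(70, 170), (65, 160), (80, 180), (60, 150), (72, 170), (68, 170)]

positive = sum(p for p, _ in fields)
total    = sum(t for _, t in fields)
percent_positive = 100 * positive / total   # % of cells expressing the marker
```

Pooling counts before dividing (rather than averaging per-field percentages) weights each field by the number of nuclei it contains.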
Transmission electron microscopy (TEM)
Samples were fixed with 0.1 M sodium cacodylate (Sigma-Aldrich), 3% glutaraldehyde (Sigma-Aldrich), 4% PFA and 0.1 M PIPES buffer (Sigma-Aldrich) for 15 min. After rinsing in 0.1 M PIPES buffer, samples were post-fixed in 1% buffered osmium tetroxide (1 h, Sigma-Aldrich) and block-stained in 2% aqueous uranyl acetate (20 min, Sigma-Aldrich). Following dehydration through a graded ethanol series up to 100%, the samples were embedded in TAAB resin (TAAB Laboratories, Berkshire, UK). Gold sections were cut on a Leica OMU 3 ultramicrotome (Leica), stained with Reynolds lead stain and viewed on a Hitachi H7000 transmission electron microscope equipped with an SIS Megaview III digital camera (Hitachi High-Technologies Corporation, Berkshire, UK).
Reverse transcription-polymerase chain reaction (RT-PCR)
Total RNA was isolated and cDNA synthesis was performed according to the manufacturers' instructions using an RNeasy Plus kit (Qiagen, West Sussex, UK) and High Capacity cDNA Reverse Transcription Kits (Applied Biosystems, Cheshire, UK). cDNA was amplified using gene specific primers (Table S2 in File S1) with step cycles (denaturing for 30 sec at 94°C; annealing for 30 sec at 60°C; and extension for 30 sec at 72°C, for 35 cycles unless indicated otherwise). Electrophoresis was performed on a 1.5% agarose gel. Real time quantitative PCR experiments were performed using a Rotor-gene 6000 (Qiagen, Manchester, UK). Primers and FAM-labeled probes were designed and manufactured by PrimerDesign (PrimerDesign Ltd, Southampton, UK; Table S3 in File S1). Other PCR reagents and the amplification protocol were obtained from the same commercial provider. Samples were analysed in duplicate and normalised to Gapdh expression level by the 2^−ΔΔCt method.
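The 2^−ΔΔCt normalisation mentioned above can be sketched as below. The Ct values in the example are illustrative, with Gapdh as the reference gene as in the text:

```python
# Relative expression by the 2^-ddCt method: normalise the target gene's
# Ct to the reference gene (Gapdh) within each condition, then compare
# the treated sample against the control condition.

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    dct_sample = ct_target_sample - ct_ref_sample     # dCt, sample
    dct_control = ct_target_control - ct_ref_control  # dCt, control
    ddct = dct_sample - dct_control                   # ddCt
    return 2.0 ** (-ddct)                             # fold change

# Illustrative Ct values: target amplifies 2 cycles earlier in the
# sample than in the control, with identical Gapdh Ct values.
fold = ddct_fold_change(24.0, 18.0, 26.0, 18.0)  # -> 4.0
```

A two-cycle reduction in ΔCt corresponds to a 2² = 4-fold increase in relative expression, since amplification is assumed to double the product each cycle.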
Cell transplantation
LNS cells at passages 3-5 were used for transplantation. Following transduction with a lentiviral-eGFP vector (kind gift from Professor Andrew Dick, University of Bristol) at a multiplicity of infection (MOI) of 5, green fluorescent protein (GFP) was observed in over 95% of LNS cells 72 hours post-transduction. LNS cells were dissociated into single cells with Accutase (Sigma-Aldrich), washed twice with PBS and resuspended at a concentration of 4,000-10,000 cells/µl in DMEM media (Invitrogen). P1-3 mice were subjected to inhalation anaesthesia using 50% isoflurane (Sigma-Aldrich) mixed with 50% oxygen. Animals received cell transplants (0.8-1.0 µl) via a transscleral injection into the subretinal space using a 34 gauge hypodermic needle (Hamilton, Switzerland) connected to a Hamilton syringe (Hamilton). Needle insertions were made tangentially through the lateral superior sclera, and cells were injected slowly to produce retinal detachments. Mice were sacrificed 2-3 weeks after transplantation. After enucleation, the eyes were fixed with 4% PFA in PBS and cryoprotected in 20% sucrose before embedding in OCT (TissueTek, Thatcham, UK). Cryosections (16 µm thick) were cut and affixed to poly-L-lysine coated slides (Thermo Scientific, Hertfordshire, UK).
Statistical Methods
All results are presented as mean ± SEM (standard error of the mean) unless otherwise stated; n represents the number of replicates. Statistical comparisons were made using a Student's t-test or one-way analysis of variance (ANOVA) with a significance threshold of p<0.05. GraphPad Prism Software (GraphPad, San Diego, USA) was used for statistical analysis.
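As a minimal sketch of the summary statistic used above (mean ± SEM, where SEM is the sample standard deviation divided by √n):

```python
import math
import statistics

def mean_sem(values):
    """Return (mean, SEM) of a list of replicate measurements.

    SEM = sample standard deviation / sqrt(n).
    """
    n = len(values)
    mean = statistics.fmean(values)
    sem = statistics.stdev(values) / math.sqrt(n)
    return mean, sem

# Illustrative replicates, not data from the study
m, s = mean_sem([31.0, 29.5, 33.5])
```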
Generation of neurospheres from adult mouse and human limbal cells
We previously demonstrated that neural colonies (neurospheres) can be generated from mouse adult corneal limbus in serum free medium in the presence of mitogens [24].
By using the same culture system, we sought to enrich the neural stem-like cells from both adult mouse and aged human corneal limbus. Limbal tissue was harvested from adult mice (6-8 weeks, Fig. S1 in File S1, File S2) and aged donors (72-97 years of age), and cultured in the serum free neurosphere culture system. The mouse and human limbal sphere-clusters started forming on day 5 and day 7 in vitro, respectively. Approximately 100-120 LNS, ranging from 50-150 µm in diameter, were generated from individual aged human eyes after 10-14 days (Fig. 1). The numbers were significantly lower than those generated from single young adult mouse eyes (392±18, p<0.001). This may be due to the age of the human donor eyes as well as low cell viability after 5-28 days of tissue storage at the local eye bank. We examined the phenotype of the cells within the human LNS. As revealed by immunocytochemistry, human limbal sphere clusters expressed neural stem cell markers, including the transcription factor Sox2 and the intermediate filament protein nestin. The proportions of Sox2 and nestin positive cells were 31.2±10.2% and 34.8±2.2% respectively (Fig. 1). This is similar to our previous findings in mouse LNS [24]. Co-expression of both markers was observed in 26.3±9.7% of cells.
Expression of photoreceptor specific markers in mouse LNS cells following co-culture with developing retinal cells
To promote differentiation of LNS into photoreceptor-like cells, mouse LNS cells were co-cultured with neonatal retinal cells. As previously described, cultured PN1-3 murine retinal cells release diffusible rod-promoting factors, which can promote stem cell differentiation or transdifferentiation towards rod photoreceptors [25,26]. Cell inserts with a semi-permeable membrane were utilised to avoid cell contamination. Mouse LNS cells formed a monolayer following withdrawal of mitogens, with cells displaying neural morphologies. Expression of the retinal progenitor cell markers Pax6 and Lhx2 was observed by RT-PCR after 2-4 days of co-culture. Following 7-10 days in co-culture, mouse LNS cells were immunopositive for the photoreceptor specific marker Rhodopsin (13±3%) (Fig. 2). Rhodopsin positive cells had a dendritic morphology, with staining in both the cell body and cytoplasmic processes. Under control conditions (without co-culture), few Rhodopsin positive cells were detected (1.19±0.39%). Statistical analysis showed a significant difference between the groups in the presence and absence of co-culture (P<0.001, unpaired t-test). Expression of both Rhodopsin and Rhodopsin kinase was also detected at the RNA level by RT-PCR (Fig. 2), although the expression level was significantly lower than in native neonatal retinal tissues (<1%, P<0.001, ANOVA). Approximately 8-10% of cells exhibited strong immunoreactivity to Syntaxin3, a major component of synapses within the retina (Fig. 2) [27].
Differentiated LNS cells displayed ultrastructural changes following co-culture. We previously reported that cellular junctions such as gap junctions and immature adherens junctions were found in LNS [24]. TEM revealed loss of junctions within the LNS and the presence of non-motile primary cilia (Fig. 2A, arrow) after differentiation. The cilia noted in co-cultured LNS were identified as sensory cilia, consisting of an axoneme of nine doublet microtubules but lacking the central pair of microtubules that is involved in ciliary motility [28]. Although not specific to retinal lineage cells, this subtype of sensory cilia is present in photoreceptor and RPE cells.
Potential of mouse LNS cells differentiation to retinal lineage in vivo
To investigate the differentiation of mouse LNS cells in vivo, eGFP-expressing donor cells at passages 3-5 were transplanted into the sub-retinal space (SRS) of P1 wild-type C57BL/6 mice. P1 mice were selected as hosts since their retinas were undergoing rod photoreceptor genesis, which produces an optimal microenvironment to stimulate cell differentiation and integration [25].
Following injection of a suspension of dissociated LNS cells, identified by their eGFP tags, grafted mouse LNS cells were found in the SRS or the vitreous (Fig. 3). Donor cells did not appear to migrate into the host retina. Mouse LNS cells located in the SRS showed small round cell bodies. Immunohistochemical analysis using photoreceptor specific antibodies against rhodopsin, recoverin and syntaxin3 demonstrated expression of these markers in eGFP cells, indicating differentiation of mouse LNS cells along a photoreceptor lineage in vivo. This concurs with our observations following in vitro co-culture using mouse LNS cells. Interestingly, mouse LNS cells located in the vitreous cavity incorporated into the lens epithelium. However, photoreceptor markers were not detected in these eGFP-tagged LNS cells (Fig. 3), possibly due to the ''non-permissive'' environment of the vitreous cavity.
Potential of human LNS cell differentiation towards retinal lineages in vitro and in vivo
To investigate the potential of human LNS cells to transdifferentiate towards retinal-like cells, both neonatal mouse retinal cells and early human developing retinal cells were used for the co-culture assay. In addition, the previously reported extrinsic factors Shh/Taurine/RA were utilised to promote cell differentiation to retinal cells [15,29]. Low levels of Lhx2 and Pax6 were detected in all samples co-cultured with P1 mouse retinal cells or under Shh/Taurine/RA conditions. The retinal homeobox gene (Rx) was expressed in 50% of the above samples. In contrast, human LNS cells co-cultured with early developing human retinal cells, or in control conditions where only differentiation media was applied, did not express Lhx2 or Pax6 (Fig. 4A, C-E). Mature photoreceptor specific markers such as Rhodopsin were not detected in human LNS cells at either the transcript or protein level, as shown by RT-PCR and immunocytochemical analysis.
We further investigated whether the in vivo permissive environment of the neonatal mouse retina could induce human LNS cells to differentiate into retinal-like cells. Following transduction with eGFP, human LNS were transplanted into the SRS of wild-type PN1-3 mice. The human xenografts survived for 25 days in the absence of immunosuppression (the longest observation time). They formed cell clusters and displayed dendritic processes; however, their processes did not extend towards the ONL but mainly spread horizontally. Adjacent to the site where human LNS cells were grafted, host retinas displayed an upregulation of glial fibrillary acidic protein (GFAP) and an aberrant retinal structure. This was more severe than in recipient retinas transplanted with allogeneic mouse LNS cells, and the histology suggested formation of a glial scar due to an immune response to the xenograft. As with co-cultured human LNS cells, photoreceptor markers such as rhodopsin and recoverin were not detected in any grafted human eGFP-LNS cells (10 eyes, Fig. 4). This suggests that human LNS cells failed to differentiate towards a photoreceptor lineage following transplantation into the SRS of developing mouse eyes, and implies that more comprehensive intrinsic and extrinsic regulation is required to drive human LNS cell differentiation towards retinal-like cells.
Discussion
We have previously demonstrated that adult mouse LNS are neural crest-derived limbal stromal stem/progenitor cells [24] that can generate functional neural-like cells in vitro [24]. We now report that these cells have the potential for differentiation towards a retinal lineage. Following co-culture with neonatal retinal cells, LNS cells express retinal progenitor cell markers such as Lhx2 and Pax6, and mature retinal specific markers including Rhodopsin and Rhodopsin Kinase, with approximately 10% of cells expressing these markers. We further investigated whether LNS can be generated from aged human eyes, and whether they have a similar potential to generate retinal cells. We generated LNS from aged human limbal tissue from donors up to 97 years of age. The upregulation of retinal progenitor markers such as Lhx2, Pax6 and Rx was noted following culture in permissive conditions in vitro. To the best of our knowledge, this is the first evidence showing that human LNS have the plasticity to express retinal progenitor markers. However, mature photoreceptor markers were not observed. The lack of further cell maturation suggests that more comprehensive intrinsic and extrinsic regulation is needed than for mouse cells. Extrinsic factors released by PN1-3 mouse retinal cells may be insufficient to promote further differentiation of human cells towards a mature retinal cell lineage.
We also co-cultured human LNS with fetal week (Fwk) 7-8 human retinal cells. LNS did not appear to differentiate towards a retinal lineage in this circumstance. This may be because retinal tissue/cells at the gestational stage of Fwk 7-8 have not started rod genesis [30]. Our observation is consistent with a previous report that rod-promoting activity is only observed in retinal cells at the peak of rod genesis, and not at an early developmental stage or in adult retinal cells [25]. Due to ethical concerns, we were unable to access later-stage human fetal retinal tissues. However, it is encouraging that human LNS expressed retinal progenitor markers when exposed to several defined factors including Shh, Taurine and RA. Shh has been shown to be involved in the formation of the ventral optic cup, the specification of dorso-ventral polarity in the optic vesicle, and the governing of ocular morphogenesis [31]. Besides specification of the eye field during embryonic development, Shh has also been implicated in the control of retinal development in vertebrates [32,33] and is required for the maintenance of retinal progenitor cell proliferation [34]. Another factor, RA, plays an important role in early eye development as well as in the differentiation, maturation and survival of photoreceptors [35]. Similar to the effect of co-culture with neonatal (P1-3) mouse retinal cells, the combination of Shh, Taurine and RA promoted upregulation of retinal progenitor markers in human LNS. This suggests that defined culture conditions may replace the use of animal tissue in the future.
We did not observe LNS cell migration or integration into the host retina following sub-retinal transplantation into neonatal mice. Cell integration into the retina remains challenging. Despite being derived from the same origin as the neural retina, iris or CB derived cells have also shown limited ability for retinal integration [36]. The proportions of cells which integrate into embryonic retinal explants or retinas from degenerate animal models are small. Studies using retinal progenitor cells from embryonic retina have also shown little integration into host retina, although mature retinal phenotypes have been observed following sub-retinal transplantation [37,38]. MacLaren et al. investigated the optimal cell resource for functional integration into adult retina [2,6]; the cells which migrated and integrated were shown to be post-mitotic rod precursor cells. The ontogenetic stage of transplanted cells is therefore important for successful cell integration. Grafted LNS cells in this study were not fully committed post-mitotic cells, which may explain why cell integration was not observed. The host microenvironment is also essential for inducing cell differentiation and migration. In a study involving transplantation of IPE derived cells [39], the grafted cells expressed the photoreceptor specific marker rhodopsin when transplanted into the SRS of embryonic chicken (E5) eyes; in contrast, they did not express rhodopsin or other neural markers when transplanted into the vitreous cavity. This is in accordance with our observation that LNS cells transplanted into the vitreous do not express photoreceptor markers.
LNS display plasticity, the potential to cross the tissue/germ layer boundary and generate cells other than those of their origin [15,24]. However, LNS have limited potential to generate photoreceptor-like cells. The highest rhodopsin expression level noted using LNS derived cells was <3% of that observed using neonatal mouse retinal tissue. Reports on other ocular stem-like/progenitor cells also show limited success in the generation of photoreceptors, regardless of cell origin. Recently, two independent groups showed that CE-derived cells failed to give rise to photoreceptor cells [16,17]. Retinal neurosphere cells derived from neonatal mice also had a low efficiency (1-2%) in the generation of rhodopsin positive cells during spontaneous differentiation [40].
It has been suggested that cell reprogramming is likely to be needed for robust photoreceptor cell production. LNS cells would be an optimal cell resource for reprogramming and/or transdifferentiation and subsequent retinal repair: they are readily accessible, highly proliferative and multipotent ocular stem cells. iPSCs have been generated from mouse and human somatic cells by ectopic expression of four transcription factors: OCT4, SOX2, c-Myc and KLF4. Due to risks such as insertional mutagenesis and tumour formation, it is desirable to use the minimal number of transcription factors and to eliminate oncogenic factors [41][42][43]. This goal can be achieved through optimal selection of candidate cell resources. For example, Kim et al. generated iPSCs from adult mouse and human neural stem cells by ectopic expression of the single transcription factor Oct4 [41][42][43]. As we previously demonstrated [24], a diverse range of neural stem markers, including Sox2, is detected on LNS cells. The multipotent capability of limbal stroma-derived stem/progenitor cells has been reported by different research groups [22,23,[44][45][46]. Dravida et al. showed that stem cells derived from human corneal-limbal stroma expressed the ESC marker SSEA-4 (stage specific embryonic antigen-4) and other stem cell markers important for maintaining an undifferentiated state [45]. Therefore, LNS cells may become an ideal cell resource for single-factor reprogramming and subsequent retinal repair due to their existing stem/progenitor cell properties, multipotency and plasticity.
In summary, these data demonstrate the potential of mouse and human LNS to differentiate into retinal lineages in vitro and in vivo. The regulation of human LNS differentiation to a retinal lineage appears to be more demanding than for mouse LNS cells. As a readily accessible progenitor cell resource that can be derived from individuals up to 97 years of age, limbal neurosphere cells remain an attractive cell resource for the development of novel therapeutic approaches for degenerative retinal diseases.
Supporting Information
File S1 Table S1, Primary antibodies used for immunocytochemical analysis. Table S2, Primer sequences used for phenotypic analysis and expected product sizes.
"Biology",
"Medicine"
] |
Visualization of flow past square cylinders with corner modification
This article presents results for flow past a single square cylinder and past two square cylinders of the same and different sizes with corner modifications, for varying spacing ratios. The experimental work is conducted in a recirculatory channel filled with water. A set of aluminum discs is made to rotate to create the flow in the test section, and a motor is used to vary the water speed. Fine aluminum powder is used as a tracer medium. It is observed that the vortex shedding frequency decreases when a second cylinder is placed downstream of the first. For cylinders of the same size, the width of the eddy between the cylinders increases with increasing spacing ratio. When the spacing ratio is increased to 6, the flow past each cylinder behaves like that past a single square cylinder. If the upstream square cylinder is smaller than the downstream one, the eddy between the cylinders is smaller than that downstream of the second cylinder; if the upstream cylinder is larger, the eddy between the cylinders is larger than that downstream of the second cylinder.
INTRODUCTION
Over the last 100 years, flow around a circular cylinder has been the subject of intense research, mainly due to its engineering significance for structural design and flow-induced vibration. In recent years, such studies have received a great deal of attention as a result of increasing computer capabilities and improvements in experimental measurement techniques. More work has been done on flow past a circular cylinder than on flow past a square cylinder. Flow past a square cylinder resembles flow past a circular cylinder as far as instabilities are concerned. In the case of flow past a circular cylinder, separation occurs due to the adverse pressure gradient in the downstream direction, resulting in back and forth movement of the separation point on the cylinder surface. In the case of flow past a square cylinder, however, the location of flow separation is fixed at the upstream corners of the cylinder due to the abrupt geometrical change. Nowadays, high-rise buildings are not isolated but situated close to other buildings. These structures can extract energy from the surrounding flow and undergo flow-induced oscillation under certain circumstances. When one of the bodies is subjected to oscillation, the interference becomes more complex and depends on oscillation frequency and amplitude. Oscillation of a cylinder significantly influences the wake on the downstream side of the bluff bodies. When the Reynolds number exceeds a critical value, vortex shedding occurs in the wake, resulting in a significant pressure drop on the rear surface of the body. Vortex shedding occurs over a wide range of Reynolds numbers, causing serious structural vibrations and resonance. Comparatively little attention has been directed towards flow past multiple cylinders. The fluid flow interference between two cylinders placed one behind the other has been the subject of considerable research.
Such a two-body arrangement has many engineering applications, such as twin-conductor transmission lines, two parallel suspension bridges and ocean structures. Flow interference between the bodies depends on various factors such as body geometry, spacing ratio, reduced velocity, supports and end conditions of the arrangement. For a square cylinder, the separation points are fixed, but no systematic study is available in the literature on the effect of rounding or chamfering the corners of a square cylinder, or of multiple cylinders, on the flow structures. Therefore, an experimental and numerical study is conducted on the influence of rounding and chamfering the corners on flow past a square cylinder. Fig. 1 shows the geometry of flow past a square cylinder. In the current work, the simulation of vortex shedding behind a square cylinder is considered. The top, bottom and inflow boundaries must be placed at an adequate distance from the square cylinder so that the boundary conditions applied there have no adverse effect on the flow around the cylinder. The extent of the computational domain is of great importance: a poorly chosen domain leads to inaccurate results. Too small a domain cannot capture the entire flow field influenced by the cylinder, so essential effects would be missed; furthermore, the blockage ratio should be under 0.01. Based on the literature on experimental and numerical simulations of flow past a square cylinder, the top, bottom and inflow boundaries are located 6.5D from the square cylinder. Correspondingly, to limit the effect of the outflow condition on the flow in the region of the cylinder, the computational domain is extended to 30D downstream of the cylinder.
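The domain sizing described above (lateral and inflow boundaries 6.5D from the cylinder, outflow boundary 30D downstream) can be sketched numerically. This is an illustrative check, not the paper's code; the blockage-ratio helper simply reports D divided by the domain height so it can be compared against the stated criterion:

```python
# Computational domain extents for a square cylinder of side D,
# using the distances quoted in the text.

def domain_extents(D, upstream=6.5, lateral=6.5, downstream=30.0):
    """Return (length, height) of the domain in the same units as D."""
    length = (upstream + downstream) * D + D  # inflow + cylinder + outflow
    height = 2 * lateral * D + D              # cylinder between lateral walls
    return length, height

def blockage_ratio(D, height):
    """Frontal blockage: cylinder side over domain height."""
    return D / height

D = 1.0
length, height = domain_extents(D)
br = blockage_ratio(D, height)   # 1/14 for 6.5D lateral clearance
```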
GEOMETRY AND BOUNDARY CONDITIONS
The boundary conditions for flow past a square cylinder are shown in Fig. 2. The inlet boundary condition is a uniform velocity U=1, V=0, and the outlet boundary condition sets the pressure to zero. No-slip boundary conditions are applied on the walls of the cylinder, and a symmetry boundary condition is applied to the lateral upper and lower boundaries. These boundary conditions are maintained for all the remaining cases.
Figs. 3-13 show the geometries of square cylinders of the same and different sizes with chamfered and rounded corners.
In all the above cases, the upstream cylinder is fixed and spacing ratios of 2, 4 and 6 are used. The geometries are not drawn to scale.
EXPERIMENTAL ARRANGEMENT
This section describes the flow visualization equipment. The experiment is performed for flow past a single square cylinder with corner modification. The experiments are then repeated for two square cylinders of the same and different sizes with corner modification, arranged in tandem at spacing ratios of 2, 4 and 6. Fig. 14 and Fig. 15 show the line diagram and the experimental setup of the flow visualization equipment. The setup consists of a three-phase induction motor with speed control, a recirculatory water tank and aluminum discs; a DSLR camera, aluminum powder, flash lights and the experimental models are also used. The recirculatory tank, measuring 2.6 m in length, 1.5 m in breadth and 0.13 m in depth, is filled with water. The test section is 380 mm wide. A pair of aluminum discs with appropriate spacing between them is made to rotate, acting as paddles and thereby creating the flow in the recirculatory tank. The discs are connected to a variable-speed motor, so an extensive range of flow speeds in the test section can be achieved. At higher velocities the water surface becomes wavy; for the investigations, a speed is chosen at which such waves do not occur. Fine aluminum powder is used as a tracer medium in the recirculatory water tank to reveal the flow pattern as it passes the experimental models. The perimeter of the square cylinder and the density and viscosity of the water are known. To find the velocity of the water, a tuft is made to travel a specified distance and the time is noted using a stopwatch; the procedure is repeated 5 times and the average velocity is calculated.
The flow visualization experiment is conducted by keeping a stationary model in the flowing fluid in the recirculatory water tank, with the side face of the corner-modified square cylinder facing the oncoming flow. The experiment is conducted for a square cylinder with corner modification, and then repeated for two square cylinders of the same and different sizes with corner modification arranged in tandem. The upstream cylinder is kept stationary while the downstream cylinder is moved to spacing ratios (S/D) of 2, 4 and 6; the Reynolds numbers used are 3483 and 3732, within the range 3000 to 4000, for all cases. The centre distance between the two cylinders is designated S, and the characteristic length of the square cylinder is designated D. Two lamps are used to provide proper lighting of the fluid flow behind the experimental models. A digital single-lens reflex (DSLR) camera is placed at a suitable height above the cylinders to record the flow pattern between the cylinders and downstream of the second cylinder.
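The velocity measurement and the Reynolds number described above can be sketched as follows. The timed runs and the water properties used below are illustrative values, not the study's measurements:

```python
# Estimate the flow velocity from repeated tuft timings over a known
# distance, then form the Reynolds number Re = rho * V * D / mu.

def mean_velocity(distance_m, times_s):
    """Average of distance/time over the repeated runs."""
    return sum(distance_m / t for t in times_s) / len(times_s)

def reynolds(rho, velocity, D, mu):
    """Re = rho * V * D / mu (all SI units)."""
    return rho * velocity * D / mu

times = [5.2, 5.0, 5.1, 5.3, 4.9]        # five timed runs (s), illustrative
V = mean_velocity(0.5, times)            # tuft travels 0.5 m per run
Re = reynolds(1000.0, V, 0.040, 1.0e-3)  # water properties, D = 40 mm
```

With these illustrative numbers Re comes out near the 3000-4000 range quoted in the text; in the experiment the motor speed is what sets V, and hence Re.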
Nine experimental models are used, made of wood at a height of 130 mm. Each is rigidly fixed with nuts and bolts to a mild steel base plate 50 mm in diameter and 10 mm thick. The dimensions of the experimental models are: − larger square cylinder = 40×40 mm, − smaller square cylinder = 25×25 mm, − square cylinder with corners chamfered by 5 mm = 40×40 mm, − square cylinder with corners rounded by 5 mm = 40×40 mm.
RESULTS AND DISCUSSION
Figs. 16-36 show the streamlines for a single square cylinder and for two square cylinders of the same and different sizes with corner modifications, along with flow visualization photographs at SR = 2 and 6. For a square cylinder, the flow separates from the two sharp corners of the front face. Eddies form alternately on either side of the square cylinder in the downstream. As the flow forms a clockwise eddy, it rushes past the top of the square cylinder faster than the flow across the bottom. The square cylinder is a bluffer body than the square cylinders with chamfered or rounded corners; the vortex formation region is therefore significantly broader and longer, and the separation points are fixed, either at the leading edge or the trailing edge. This makes the flow diverge further and creates a wide wake. When Re is increased, the clockwise eddy rushes past the top of the square cylinder much faster than the flow across the bottom. The flow is uniform and symmetrical upstream of the corner-modified square cylinder. The eddies grow in size as they move away from the cylinder up to a certain distance, then gradually die out, and the flow becomes uniform again as in the upstream, in all cases under investigation. The flow visualization images of the flow pattern around the corner-modified square cylinder obtained from the experiment are at a different scale from the numerical work, and the vortices form farther downstream than in the images obtained from the numerical simulations. From the experimental and numerical work it may be said that vortex shedding behind the square cylinder takes more time than behind the square cylinder with rounded corners, with the chamfered-corner cylinder lying in between. Consider first two square cylinders of the same size at spacing ratios 2 and 6.
The flow is uniform and symmetrical upstream of the square cylinder in all cases under investigation. The flow separates from the two sharp corners of the front face of the upstream cylinder. Since the separation points of a square cylinder are fixed at the front and rear edges, the flow diverges further and creates a wider wake downstream. When the spacing ratio is 2, the alternate eddies formed between the cylinders are smaller than those formed downstream of the second cylinder, because the second cylinder suppresses eddy formation between the cylinders. The frequency of eddy formation between the cylinders is also lower than downstream of the second cylinder, because the distance between the two cylinders is very small. In the case of spacing ratio 6, however, the eddy between the square cylinders is elongated, and eddies form alternately on either side of both the upstream and downstream cylinders. As the flow forms a clockwise eddy, it rushes past the top of the square cylinder faster than the flow across the bottom because the body is bluffer; the vortex formation region is therefore significantly broader and longer. When Re is increased, the clockwise eddy rushes past the top of the square cylinder faster than the flow across the bottom in all cases under investigation. For two square cylinders of the same size with chamfered corners at SR = 2, the alternate eddies formed between the cylinders shed quite fast, and a single large eddy forms downstream of the second cylinder, since the square cylinders are chamfered at the corners. At SR = 6, a fully formed eddy appears at the bottom and the vortex folds up at the top of the upstream cylinder between the cylinders, and a large eddy forms behind the downstream cylinder, because both the front and rear edges are chamfered at the corners.
The vortex formation region is therefore significantly less broad and shorter. When Re is increased, the eddy moves across the chamfered-corner square cylinders faster than the flow across the bottom. For two square cylinders of the same size with rounded corners at spacing ratio 2, the alternate eddies shed at a very fast rate both between the cylinders and downstream of the second cylinder, since the cylinder corners are rounded. In the case of spacing ratio 6, the vortex formation region is significantly narrower and shorter; when Re is increased, since there are no fixed separation points at the front and rear edges, the vortex shedding behind the cylinders is much faster. For two corner-modified square cylinders of different sizes with the larger cylinder upstream and the smaller downstream, at SR = 2 the eddies between the cylinders are bigger than those downstream of the second cylinder. For two corner-modified square cylinders of different sizes with the smaller cylinder upstream and the larger downstream, at SR = 2 the eddies between the cylinders are smaller than those downstream of the second cylinder. In all cases under investigation, when the spacing ratio is increased to 4, the size and frequency of the eddies formed between the cylinders and behind the second cylinder lie between those for spacing ratios 2 and 6. At SR = 6, the flow over the upstream and downstream cylinders behaves almost as over single cylinders, because the distance between the cylinders is larger; nevertheless, changes in the vortex formation region around the cylinders can be seen across SR = 2, 4 and 6.
The flow visualization images obtained during the experiments are shown alongside the streamlines obtained by numerical simulation, representing the flow patterns.
CONCLUSIONS
Flow around a single square cylinder with corner modification was studied experimentally and simulated numerically, and experiments and simulations were also carried out for pairs of square cylinders of the same and different sizes with corner variations at spacing ratios of 2, 4, and 6. The following conclusions have been drawn. Vortex shedding behind the plain square cylinder takes longer than behind the square cylinder with rounded corners, while the cylinder with chamfered corners lies in between. The vortex shedding frequency decreases when a second cylinder is placed downstream of the first. For same-size cylinders, the width of the eddy between the cylinders increases with the spacing ratio. As the spacing ratio increases to 6, the flow past each cylinder behaves like that past a single square cylinder. If the upstream cylinder is smaller than the downstream cylinder, the eddies between the cylinders are smaller than those behind the second cylinder; if the upstream cylinder is larger, the eddies between the cylinders are larger than those behind the second cylinder.
Interactive Geological Data Visualization in an Immersive Environment
Underground flow paths (UFPs) often play an important role when geologists illustrate geological data, especially in revealing stratigraphic structures, which can help domain experts in their exploration of petroleum information. In this paper, we present a new immersive visualization tool to help domain experts better illustrate stratigraphic data. We use a visualization method based on a bit-array 3-D texture to represent stratigraphic data. Our visualization tool has three major advantages: it allows flexible interaction on the immersive device; it enables domain experts to obtain their desired UFP structure through quadratic surface queries; and it supports different stratigraphic display modes, as well as flexible switching between and integration of geological information. Feedback from domain experts has shown that, compared to existing UFP visualization tools in the field, our tool better supports the scientific exploration of stratigraphic data, so that experts in geology can gain a more comprehensive understanding and a more effective illustration of the structure and distribution of UFPs.
Introduction
Geological refraction data, referred to here simply as geological data, help illustrate geological formations using measurements of how geological waves propagate through different parts of the subsurface. Scientific data visualization based on geological data, especially geological volume visualization, plays an integral role in oil and gas exploration. Compared with traditional volume data, such as CT-scan and other medical volume data, geological data suffer from noise due to current limitations in survey technology, which hinders clear interpretation of stratigraphic data by experts in the field. According to the domain knowledge provided by our collaborating geologists, the underground flow path (UFP) plays an important role in understanding stratigraphic structure.
The quadratic surface-based query can help experts explore the structure of UFPs and local regions. Domain-specific languages (DSLs), such as set operator expressions, make quadratic surface queries highly customizable. Liu et al. [1,2] provided methods to illustrate 3-D seismic data and obtain visualization results that reveal the distribution and geostructures of UFPs. They also designed lightweight compilers based on set data visualization [3,4] to parse DSLs. Their interactive design was a great inspiration; following it, we designed a compiler that reveals the UFP by parsing set operator expressions.
Although there are many tools [1,2] for analyzing stratigraphic data, our collaborating geologists report that existing graphical geological data visualization methods still pose several challenges for interactive exploration. First, the user's observation is limited because traditional display devices are restricted by screen size. Most traditional geological data visualizations are presented on PC platforms, which denies users an immersive experience and intuitive mid-air interaction during data exploration. Second, with traditional interaction tools such as 2-D screens, keyboards, mice, and multi-touch devices, users are limited to operating in 2-D space and have trouble with depth-direction movement and hybrid 3-D interactions during data exploration [5]. Third, to help geologists study the structure and UFP distribution of geological data, the tool must offer different types of interaction and different stratigraphic display modes, for example armchair and palisade display modes, and sometimes combinations of them, i.e., cross-shaped display modes [2].
Moreover, the domain experts also require the integration of UFP results with various stratigraphic display methods, and even with other query results, such as quadratic surface queries based on set operator expressions. In addition, there is a need to switch efficiently between different stratigraphic display modes. In response to these challenges, we propose an immersive stratigraphic data visualization tool for creating and customizing petroleum data explorations. We designed a new immersive interface for visualizing seismic data, aiming to help domain experts explore the data intuitively and gain insight through Touch Handle interaction. The main advantages of our tool are as follows.
First, our tool enables users to explore data in an immersive environment. Compared with existing works [6,7] that explore stratigraphic data in a completely virtual environment, we blend data visualization with the real environment, taking full advantage of immersive devices.
Second, according to the needs of domain experts in petroleum data exploration, we designed three stratigraphic display modes in one environment: armchair, palisade, and cross-shaped display modes.
Third, the quadratic surface query based on set expressions allows users to freely explore stratigraphic data of different shapes, matching the actual needs of stratigraphic data exploration.
We tested our tool on three datasets provided by the Northwest Branch of the China Institute of Petroleum Exploration and Development. According to the peculiar structure of each, we refer to them as planar (Stratigraphic Dataset I), flat and long (Stratigraphic Dataset II), and vertical (Stratigraphic Dataset III). Rendering tasks were assigned to immersive devices so that users could explore the geological data in the immersive environment, which offers many opportunities for free human interaction with 3-D objects and visualizations. The evaluation of the design and implementation includes user studies and case studies that show the usability and effectiveness of the proposed tool. In addition, the different ways of displaying strata in the immersive environment help domain experts explore geological strata from different perspectives and scales.
In this paper, we first review the background of this work in Section 2 and present our approach in Section 3, which describes its main parts in detail: Section 3.1 covers the interaction design of the immersive visualization tool, Section 3.2 the interactive integration and switching, Section 3.3 the different stratigraphic display modes in the immersive environment, and Section 3.4 the quadratic surface-based queries performed on the underlying data in the immersive environment. In Section 4, we present and discuss the results, including feedback from the domain experts, most of which was positive. We conclude in Sections 5 and 6.
Background and Related Work
In this paper, we review the related work on applications of immersive visualization, devices of immersive visualization, data-centric transfer functions, geological data illustration, and visualization to show the background of our work.
Applications of Immersive Visualization
Immersive visualization can be customized for various visualization types; currently, the most common types that can be built include bar-shaped bodies, chair-shaped bodies, and crosshair functions, and we use a technique called join-and-complement to perform queries. Augmented Reality (AR) and Virtual Reality (VR), which are usually used as data-driven presentation media for exploration and experience, can achieve better spatial perception and increase authenticity. Narrative visualization in immersive environments [8] can help users better understand domain knowledge and provides an intuitive experience of interactively exploring scientific data. With large screens, more examples can be displayed and zoomed in, and immersive technologies can be explored in combination with AR [9] or VR [7]. There are many immersive interaction tools, including handles, mice, and keyboards; this kind of interaction promotes user exploration because it is convenient and natural with an immersive display. Scientific visualization mainly focuses on the visualization of three-dimensional (3-D) phenomena and is an interdisciplinary application field in science.
Data-Centric Transfer Functions
A transfer function is an interactive analysis method that lets users assign RGBA values to different hardness values, so as to obtain the final interpretation effect [10]. In terms of dimensionality, data-centric transfer functions can be classified into one-dimensional, two-dimensional, and higher-dimensional forms. Igouchkine et al. [11] implemented a 1-D transfer function that can assign different properties to subregions and their boundaries. Liu et al. [1,12] used 1-D transfer functions to visualize geological volume data.
To help users explore clusters and evaluate selected features, Ma et al. [13] proposed a semi-automatic transfer function (TF) scheme in the 1D/2D TF domain. Compared with the traditional scheme, this scheme can effectively understand and evaluate the selected features.
With the development of science and technology, more data contain multiple variables, so other parameterized expressions are needed. Wang et al. [14] proposed a method that fits the data distribution with a Gaussian mixture model and can also generate a Gaussian transfer function. Volume data of different colors can be treated as multi-variable volume data. Wang et al. [15] designed a parallel volume rendering method to visualize large-scale volume data. Ebert et al. [16] used color-distance gradient dot-product transfer functions to help users find the desired tissue information. Furthermore, an interactive dynamic coordinate system [17] can project multi-variable data into 2-D space so that users can select features. In addition, Guo et al. [18,19] put forward a transfer function design interface based on the combination of dimensional projections and parallel coordinates. Moreover, Lawton et al. [6] proposed turning the visualization of seismic data in the experimental environment into a customizable 3-D data visualization. Similarly, to ensure the construction of an effective 3-D geological model, Amorim et al. [20] mimic expert interpretation and create geological structure models from sketches, avoiding the shortcomings of separate modeling. Rocha et al. [21] also used a new sampling strategy to enhance different geological attributes. High-dimensional composition also has many advantages: it can provide not only goals and backgrounds but also the best number and combination of attributes [22].
Geological Data Illustration
Geological data description can be divided into three categories: horizon extraction, fault detection, and geological data interpretation. Horizon extraction usually uses surface detection technology to detect horizons in 3-D geological data. A geological horizon indicates a change in rock properties and plays a central role in geoscience interpretation [23]. Based on surface detection technology, Faraklioti et al. [24] used six-connectivity to detect horizon fragments and proposed an automatic recognition algorithm that can analyze the layers in seismic volume data. In addition, a combination of 2-D and 3-D minimum-cost path and minimum-cost surface tracking has been introduced to extract horizons with little user input. Sketches can also help extract geological horizons from geological data [25,26]. 3-D geological discontinuities, or faults, have important applications in 3-D structural and stratigraphic analysis. Geological coherence and faults are derived from the geological data itself, so they are not affected by interpreter or automatic-picker biases [27]. Conventional amplitude time slices are usually helpful for viewing faults; a more robust coherence algorithm based on similarity was then proposed [27], which reduces the mixing of upper- or lower-layer features. Automatic fault detection algorithms have also been developed using a Highest Confidence First (HCF) merging strategy [28] and double Hough transforms [27], and a more efficient geological fault detection system was further developed using a graphics processor [29]. In addition, attributes based on coherence and texture can significantly improve the efficiency and quality of 3-D Ground-Penetrating Radar (GPR) interpretation [30], especially for complex data collected across active fault zones. Geological interpretation is of great significance for revealing the structural information in the data.
To improve the annotation of geological structures, Patel et al. [31][32][33] used deformed textures, line transfer functions, texture transfer functions, and various graphical methods to track the horizon and interpret geological data. They further put forward new techniques of knowledge-aided annotation and computer-aided interpretation of geological data for oil and gas exploration [33]. Domain knowledge about the structure and topology of geological features in geological data can also guide dynamic surfaces into these features [34]. Sketch-based methods can greatly improve the interactivity of illustrations. For example, Natali et al. [35] proposed a sketch-based method to create 3-D illustrative models for geological textbooks; they also designed a method based on the composition of two synchronized data structures for processing and rendering [36]. Volume rendering is one of the most common methods for interactive rendering of geological data, providing a great deal of information about geological continuity and statistics to support analysis [37]. Gradient-based methods are sensitive to high-frequency noise, so a real-time gradient-free method [23] was proposed that produces results similar to high-quality global illumination. Slice-based methods are used to extract channels and salt domes from geological data. In addition, Liu et al. [38] presented a volume-slicing method that extracts the UFP through interactive slicing, but it is semi-automatic: it still requires user participation, which reduces extraction accuracy and exploration efficiency.
Geological Data Visualization
We program a domain-specific visualization system that extracts stratigraphic structures by seed-point tracing. It also enables exploration of geological graphic data through human operation of VR equipment. Users can adjust the recommended seeds by fine-tuning them with visual interactions. Seeds are generated automatically by a kernel-function-based density gradient computation, and link information obtained through a weighting algorithm is used to construct a graph. By sorting the nodes in the graph, we improve the extracted UFP structures and give users a more intuitive picture. Compared with traditional measures, our approach is as follows. First, we use continuous scale-space theory [39-41] to analyze the multi-scale features of the recommended seeds. Based on this theory, we propose a novel kernel-function-based density gradient computation approach that recommends multiple seeds [2] automatically. Users then explore the structure of the extracted UFP intuitively by fine-tuning the seeds. To explore local areas efficiently, we also use a quadratic-surface distance query scheme [42].
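The idea of recommending seeds at kernel-density maxima can be sketched as follows. This is a minimal 1-D illustration under our own assumptions, not the paper's 3-D implementation: a Gaussian kernel density is estimated over sample positions, and grid points where the numeric density gradient flips from positive to non-positive (local maxima) are recommended as candidate seeds.

```python
# A minimal sketch of kernel-based seed recommendation (1-D, hypothetical
# parameters), not the authors' implementation.
import numpy as np

def gaussian_kde_1d(samples, grid, bandwidth=1.0):
    """Density on `grid` from a sum of Gaussian kernels centred on `samples`."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

def recommend_seeds(samples, grid, bandwidth=1.0):
    """Grid points where the density gradient flips + to - (local maxima)."""
    density = gaussian_kde_1d(np.asarray(samples, float), grid, bandwidth)
    grad = np.gradient(density, grid)
    maxima = (grad[:-1] > 0) & (grad[1:] <= 0)
    return grid[:-1][maxima]

grid = np.linspace(0, 10, 501)
# Two sample clusters -> roughly two recommended seeds, near x=2 and x=7.1.
seeds = recommend_seeds([2.0, 2.1, 1.9, 7.0, 7.2], grid, bandwidth=0.5)
```

In the actual system the density is computed over 3-D voxel positions and the recommended seeds are then fine-tuned interactively by the user.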
Interactive Geological Data Illustration
To help geologists and other oil exploration experts better understand the distribution of UFPs and related information, we propose a method for interactively describing 3-D stratigraphic data and other exploration data on an immersive device. Figure 1 shows the overall flow of our method. First, we obtained a number of different stratigraphic exploration datasets from petroleum exploration experts. The transfer function adjusted on the PC is transferred to the immersive device through a cloud server. The dataset is then preprocessed, and the original geological exploration data are sliced to generate a 3-D texture visualization on the GPU of the immersive device. The bit-array-based 3-D texture is used for slice query interaction in different modes, so that the structure of each stratum can be observed. Simultaneously, the quadratic surface-based query method allows experts to further explore geological data at different stratigraphic levels; for example, oil exploration experts can better determine the position of oil-bearing data. Different stratigraphic display modes are then used to visualize the extracted surfaces, wells, or other features. Finally, several illustration results are blended together to reveal the structure and distribution of UFPs. Our proposed tool thus consists of three steps. First, the preprocessed data are imported into the immersive device, and the transfer function (TF) is imported into the HMD through the cloud server. Then the geological volume data are rendered on the GPU by volume rendering; different types of interaction can be performed on slice data, the UFP can be further extracted by the quadratic surface-based query, and different stratigraphic display patterns or any combination of them can be obtained. Finally, all the illustrative results are blended to better reveal the structure and UFP distribution.
Interaction Design of the Immersive Visualization Tool
We use the Oculus Quest as the immersive device in our experiments. Movement of the view is controlled by the Oculus Quest tracking system, and other interaction methods are realized through the Touch Handle and our event-response scripts.
Basic Interactions. Based on the Oculus Touch Handle, we offer some basic interactions. To interact with the rendering result, we bind basic interactions such as rotation, translation, and scaling (zoom in and zoom out) to the buttons and joystick on the Touch Handle (Figure 2). We have also added more flexible interaction methods: for example, the user can adjust the thickness of the stratum slices by clicking buttons "A" and "B" on the handle, as shown in Table 1. Our tool detects the trajectory strokes of the Touch Handle, taking full advantage of the multiple degrees of freedom of the immersive environment. Users can adjust the position and interval of the armchair or palisade by drawing strokes left and right or back and forth (Figure 3).
Transfer Function Design. The traditional WIMP-based color selection interface is ill-suited to natural interaction techniques in an immersive environment, and users cannot select colors efficiently there [43]. The domain experts we consulted also said they were more familiar with the traditional WIMP-based color picker on the PC; we therefore implemented a transfer function editor on the PC, equipped with keyboard and mouse. We edit the transfer function of the scientific volume data online on the PC in a visual steering scheme, because the traditional 2-D transfer function editor is easy to manipulate in a 2-D visualization space. Users can add control points by selecting specific intensity values in the editor and then set the color and opacity values for those control points. The transfer function editor includes an adjustment view (Figure 4, left) and a preview view on the immersive device (Figure 4, right). The stratigraphic display modes we use are adopted from a previous article [1].
(Figure 4: the designed transfer function (TF) of the immersive volume visualization, plotting opacity against intensity; the volume boundary and two regions are rendered using the first and second peaks of the TF.) The adjustment view is designed on a grid coordinate system. The x-axis stands for the intensity value of the volume data, and the y-axis represents the opacity of that intensity. Users can alter the opacity per unit grid on the y-axis, and can select an appropriate color from the color table dialog box by clicking the middle mouse button. Users then adjust the control points in the grid coordinate system of the transfer function editor, forming various peaks, to change the color and opacity assigned to a given intensity value of the volume data. After obtaining a suitable preview, users can save the transfer function data and render immersively with the data shared to the immersive device. As a result, users can adjust an appropriate transfer function that reflects the stratigraphic information in a collaborative mode: two people collaborate, one modifying the control points in the GUI on the PC while the other wears the HMD and gives real-time feedback on the rendering result.
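The behaviour of such a control-point editor can be sketched as a piecewise-linear 1-D transfer function. This is a minimal illustration, not the editor's actual implementation; the control points below are hypothetical.

```python
# A minimal sketch of a 1-D transfer function: control points map an
# intensity value to RGBA, and values between control points are linearly
# interpolated. The control points are hypothetical examples.
import numpy as np

# (intensity, R, G, B, A) control points, sorted by intensity.
control_points = np.array([
    [0.0,   0.0, 0.0, 1.0, 0.0],   # low intensity: fully transparent blue
    [128.0, 0.0, 1.0, 0.0, 0.4],   # mid intensity: semi-opaque green
    [255.0, 1.0, 0.5, 0.0, 1.0],   # high intensity: opaque orange
])

def transfer_function(intensity: float) -> np.ndarray:
    """Piecewise-linear RGBA lookup between the control points."""
    xs = control_points[:, 0]
    return np.array([np.interp(intensity, xs, control_points[:, c])
                     for c in range(1, 5)])

rgba = transfer_function(64.0)  # halfway between the first two points
```

Moving a control point in the editor corresponds to changing one row of this table, after which the HMD re-renders with the new lookup.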
Interaction Organization and Switch
All the geological survey data we use are provided by geologists who collaborate with us. The original geological data is obtained by reflection of seismic waves and is then integrated into volume data.
From discussions with experts in geology, we need an intuitive and convenient interaction design to extract the UFP within a certain range in the immersive environment. Domain experts also need a variety of interactive stratigraphic display methods in 3-D volume illustration, and the interaction needs to be as effective as possible. We therefore proposed a method with immersive stratigraphic display modes, comprising armchair, palisade, and cross-shaped display modes. This method allows the user to observe the distribution of UFPs from multiple views in the immersive environment, and it supports integrating the UFP results into the various stratigraphic display modes. Moreover, we summarize from the experts' feedback some general immersive interaction designs, which require the system to meet two requirements. First, different types of interaction can occur simultaneously in different strata or voxels. Second, different immersive interaction results can be assigned to a given voxel.
In this paper, a bit-array-based 3-D texture is used to organize the different interactions, such as the different display modes and quadratic surface queries. The results of the different interactions are integrated into a display value for each voxel. The display value, either 0 or 1, indicates whether the voxel is displayed. A 3-D texture is designed to organize all interaction types assigned to the geological volume.
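The per-voxel display value can be sketched as follows. This is a CPU-side illustration under our own assumptions (toy volume size, a palisade-style slab mask, and one sphere query), not the tool's GPU code: each interaction contributes a boolean mask, and a voxel is displayed only if every active interaction keeps it.

```python
# A sketch of the bit-array display values: one mask per interaction,
# combined by logical AND into a per-voxel 0/1 display value. The volume
# dimensions and masks are hypothetical.
import numpy as np

shape = (8, 8, 8)                        # toy volume
slice_mask = np.zeros(shape, dtype=bool)
slice_mask[:, :, ::2] = True             # palisade-style mode: every other slab

z, y, x = np.indices(shape)
# Sphere query of radius 3 centred at (4, 4, 4).
query_mask = (x - 4)**2 + (y - 4)**2 + (z - 4)**2 <= 9

display = slice_mask & query_mask        # 1 = rendered, 0 = hidden
packed = np.packbits(display.ravel())    # bit-array form, 1 bit per voxel
```

On the GPU the packed form keeps the texture small: an 8x8x8 volume needs only 64 bytes, and the ray caster skips any voxel whose bit is 0.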
Different Stratigraphic Display Modes
To meet the needs of different domain experts, the visualization system supports different stratigraphic display modes, i.e., armchair, palisade, and cross-shaped display modes, and sometimes a combination of them. Figure 3 shows the three stratigraphic display modes used in the visualization system. Users can adjust the start position of a group of stratigraphic layers, the depth of a single layer, and the interval between layers. With these adjustments, the distribution of the UFP and other objects the users are interested in can be revealed more clearly.
Users can also integrate the UFP results into the various stratigraphic display modes. The straightforward way to display geological data in different modes is to cut the original volume data before rendering, but this is tedious and inefficient because the immersive device must process the data frequently; in the worst case, the data must be reprocessed frame by frame during rendering. It is also inflexible, making it difficult to switch between stratigraphic display modes, because the results of the different modes differ greatly, as shown in Figure 3.
To make the use of different stratigraphic display modes more flexible and customizable, we designed a slice parameter equation determined by the start position of a slice, the interval between slices, and the thickness of each slice, as shown in Equation (1), where K gives the spacing between the Kth slice and the starting slice and W gives the rendering range of each slice. In our method, the slice equation is a general form obtained after transformation; it is simply the specification that defines the slice group. The user can change the slice parameters through the interactions of the different stratigraphic display modes, such as the start position of the front slices, the interval between slices, and the depth of each slice.
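The slice-group membership test described above can be sketched as follows (a minimal interpretation of the stated parameters, not Equation (1) itself, which is not reproduced here): a coordinate p belongs to the kth slice when start + k*interval <= p < start + k*interval + thickness.

```python
# A sketch of the repeating slice-group test. Parameter names (start,
# interval, thickness) follow the description in the text; the numeric
# values are hypothetical.

def in_slice_group(p: float, start: float, interval: float,
                   thickness: float) -> bool:
    """True if coordinate p falls inside any slice of the group."""
    if p < start:
        return False
    offset = (p - start) % interval   # position within the repeating period
    return offset < thickness

# Slices of thickness 2 every 10 units, starting at 5: [5,7), [15,17), ...
visible = [p for p in range(0, 30)
           if in_slice_group(p, start=5, interval=10, thickness=2)]
```

Because membership is a pure function of the coordinate and three parameters, switching display modes only changes the parameters fed to this test rather than re-cutting the volume data.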
As described in Section 3.1, 3-D texture based on bit-array helps to support different stratigraphic display modes.
Quadratic Surface Query and Exploration
To extract UFP information within a certain range, we define several quadratic surfaces for exploration, using them as a visual metaphor for oil reservoirs. Following the experts' recommendations, we pre-define the size and position of five spherical surfaces. Users can also add query spheres of their own, and the position of a query sphere and the parameters of the quadratic surface can be modified by scripting. Each query sphere has a specific meaning, e.g., the coverage area mapped to a real oil reservoir. As shown in Figure 5b, we designed a virtual keyboard in the immersive environment where the user can click buttons with the joystick to compose set operation expressions. The geology experts we consulted told us that single-operator set queries (e.g., A∪C) are common in geological exploration, but queries combining multiple operators are also needed. Following their suggestions, we use the five most common operators, which cover the majority of geological exploration query requirements. Figure 5a shows the immersive results of queries made by setting several spherical surfaces in a specific range via the virtual keyboard. Besides the five surfaces A-E, the virtual keyboard includes an Add button (to add a new quadric surface), set operator symbols such as intersection (to apply a set operation to the surface set), a delete key, and an enter key (to execute the query defined by the set operation expression). In the immersive environment, users click the virtual keyboard with the handle, input a specific set operation expression, and query the UFP within the corresponding quadric surfaces. Algorithm 1 gives the details of the quadratic surface query implementation; the bit-array-based 3-D texture makes querying the surface set efficient.
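Evaluating an expression such as A∪C over the query spheres can be sketched as follows. This is a minimal illustration, not the tool's expression compiler: each sphere maps positions to boolean membership, and union/intersection combine the memberships. The sphere centres, radii, and sample points are hypothetical.

```python
# A sketch of set-operation queries over pre-defined spheres. All geometry
# below is hypothetical placeholder data.
import numpy as np

spheres = {  # name -> (centre, radius)
    "A": (np.array([10.0, 10.0, 10.0]), 5.0),
    "C": (np.array([18.0, 10.0, 10.0]), 5.0),
}

def inside(name, points):
    """Boolean membership of each point in the named query sphere."""
    centre, r = spheres[name]
    return np.sum((points - centre)**2, axis=1) <= r**2

points = np.array([[10.0, 10.0, 10.0],   # centre of A
                   [18.0, 10.0, 10.0],   # centre of C
                   [14.0, 10.0, 10.0],   # overlap region of A and C
                   [30.0, 30.0, 30.0]])  # outside both

union = inside("A", points) | inside("C", points)          # A ∪ C
intersection = inside("A", points) & inside("C", points)   # A ∩ C
```

The complement and difference operators follow the same pattern (`~` and `& ~`), and the resulting boolean array is exactly what gets written into the bit-array 3-D texture for rendering.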
In our method, we treat query processes with different surfaces as different interactions. First, we perform the set operation on the pre-defined query spheres. Then, the GPU of the immersive device generates a 3-D texture map from the query results and renders it in the immersive environment with a ray casting algorithm. When looking at the quadratic surface-based query results in the immersive environment, users can identify the query spheres associated with the results. This makes it convenient to explain each quadric surface query and any combination of multiple surface queries, such as their union, intersection, or complement.
Implementation
We used three different stratigraphic datasets provided by experts to evaluate the effectiveness and practicability of our method. The experiments were run on the Oculus Quest immersive device, which is equipped with a Snapdragon 835 processor, 4 GB of RAM, and an OLED screen with a 72 Hz refresh rate. The rendering component of the immersive stratigraphic data visualization was developed with GPU rendering and the CG libraries. The proposed method was tested on the three datasets provided by field scientists, i.e., Dataset I, Dataset II, and Dataset III.
After data preprocessing and integration, the whole dataset is rendered on the immersive device. We integrate the results of all interactive operations, including quadric surface queries and the three stratigraphic display modes, into a bit-array-based 3-D texture. The immersive device then renders the dataset by checking the display value of each voxel of the 3-D texture, obtains the color and opacity of each voxel from the pre-defined transfer function, and blends them into the final rendering result. Algorithm 1 shows the quadric surface query process in the immersive environment, and Equation (1) gives the slicing equation of the stratigraphic display modes.
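The final blending step can be sketched as standard front-to-back alpha compositing along a ray. This is a minimal CPU illustration of that general technique, not the tool's GPU shader; the sample colors are hypothetical.

```python
# A minimal front-to-back "over" compositing sketch: each displayed sample
# contributes its transfer-function color weighted by the remaining
# transparency. Sample values are hypothetical.
import numpy as np

def composite(rgba_samples):
    """Front-to-back alpha compositing of (r, g, b, a) samples along a ray."""
    color = np.zeros(3)
    alpha = 0.0
    for r, g, b, a in rgba_samples:
        weight = (1.0 - alpha) * a           # remaining transparency
        color += weight * np.array([r, g, b])
        alpha += weight
        if alpha >= 0.99:                    # early ray termination
            break
    return color, alpha

ray = [(1.0, 0.5, 0.0, 0.5),   # semi-opaque orange sample in front
       (0.0, 0.0, 1.0, 1.0)]   # opaque blue sample behind it
color, alpha = composite(ray)
```

Early ray termination is what makes the bit-array masks pay off at render time: hidden voxels contribute no samples, so rays through masked regions finish sooner.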
We recorded the data preprocessing time in the HMD and the display frame rate of the interface when the HMD ran the three stratigraphic datasets, as shown in Table 2. The data preprocessing time includes loading the data into the HMD, loading the transfer functions, and loading the query sphere information into the 3-D texture, as shown in Figure 6. Because of the limited processing performance of the HMD used in our tests, it currently takes a long time to process the data and runs at a low frame rate, which affects the user's exploration efficiency and experience to some extent. We also found some residual noise in the results of the quadric surface query, which interferes with the user's exploration of the target data. Table 2. Three stratigraphic datasets are used to test the processing performance of our tool in the HMD; the data preprocessing time and the display frame rate in the HMD show the performance of our tool on the different stratigraphic data.
Case 1: Stratigraphic Dataset I
The first case is an immersive exploration of Dataset I. The user transmits stratum Dataset I to the VR device. After rendering on the immersive device, the user can first view the overall geological information of the stratum. Then, the user enters a set operation expression on the virtual keyboard and clicks the query button to retrieve the stratigraphic information matching the result of the set operation. The result of this first example is shown in Figure 7. The user queries the union of query sphere A and query sphere C to determine whether the range of river channels covered by the reservoirs mapped by A and C contains oil. The user can judge the intensity of the river channel in the area by comparing the river's color with the customized transfer function, so as to evaluate the difficulty of oil extraction. The color scheme of the Dataset I color map visualization is specified in the transfer function editor: orange indicates high intensity that is difficult to mine, blue indicates low intensity that is easy to mine, and full transparency marks river channel information that does not need to be rendered or queried, i.e., noise. We can set the color and transparency so that the types of data we want to view are clearer. In addition, users can combine the different stratum display modes shown in Figure 3 with the quadric surface query results shown in Figure 7a to obtain the stratigraphic data interpretation results shown in Figure 7c. Users can also perform basic interactions with the geological data (such as rotation and zoom). For example, by tilting the "Thumbstick" described in Figure 2, the user can rotate the rendering result to view the river channel data of stratum Dataset I from other perspectives.
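The color scheme just described (orange for hard-to-mine channels, blue for easy ones, transparent for noise) can be illustrated with a toy transfer-function lookup. The thresholds and RGBA values below are invented for this sketch; in the tool itself the transfer functions are edited interactively on the PC.

```python
def transfer_function(value):
    """Map a channel-intensity sample in [0, 1] to an RGBA tuple.

    Illustrative thresholds only: out-of-range noise is fully transparent,
    low intensity renders blue (easy to extract), high intensity orange (hard).
    """
    if value < 0.1:                 # noise: do not render or query
        return (0.0, 0.0, 0.0, 0.0)
    if value < 0.5:                 # low intensity: blue, semi-transparent
        return (0.2, 0.4, 1.0, 0.4)
    return (1.0, 0.6, 0.1, 0.9)     # high intensity: orange, nearly opaque

rgba = transfer_function(0.8)       # an intense channel sample
```

During rendering, each voxel's color and opacity would be fetched this way and composited into the final image.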
In order to accurately reflect the stratigraphy profiles, the collected data can be displayed in different stratigraphic display modes to show the profiles required by the domain experts. The stratigraphy profiles obtained through interaction in this way are clear and easy to understand: they display the complex geological structures intuitively and support queries for UFP information. The combination of the different stratum display modes with the quadric surface query results allows the user to see the overall UFP distribution clearly, as well as stratigraphy profiles at different positions and of different thicknesses.
Case 2: Stratigraphic Dataset II
The second case is an immersive exploration of the flat strip stratigraphic Dataset II. As before, users can transfer PC-edited transfer functions and stratigraphic Dataset II to the immersive device. The query spheres can be pre-defined in a script and placed by users at locations that contain important riverway information. In this case, users explore the intersection of query spheres A and B. The red and brown river channel information in Figure 8 is mainly distributed in the strata falling within both query sphere A and query sphere B. Additionally, the river channel structure is visualized through the transfer function, which makes the deep underground river channels clearer. Users can explore the detailed substructure of a particular range by means of intersection operations. For example, if the user is only interested in a small portion of the river channel information, the corresponding observations can be made across multiple river channels, as shown in Figure 8. In these circumstances, the visual clutter generated by other strata can be removed, so users can easily explore critical river channel information without interference from noise.
Case 3: Stratigraphic Dataset III
The third case for immersive exploration is the vertical stratigraphic Dataset III. This volume data mainly shows information about the stratigraphy at depth, which has a more complex structure than in the two cases above. In this case, we chose to use multiple set operations in the expression query. Further, to obtain a better rendering, we chose to show the main river structures in blue and purple for immersive exploration, as shown in Figure 9c. As represented in Figure 9d, the user combined the cross-shaped display mode of this stratigraphic data with the results of the expression query B∩!A for a synthetic interpretation of the stratigraphic information.
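An expression such as B∩!A can be evaluated over boolean voxel masks with a small interpreter. The grammar assumed below (single-letter sphere names, the operators ∪, ∩ and !, no parentheses, left-to-right evaluation) is an assumption made for this sketch, not the tool's actual parser.

```python
import numpy as np

def eval_query(expr, spheres):
    """Evaluate a set-operation expression such as 'B∩!A' over boolean masks.

    spheres: dict mapping single-letter names to boolean voxel arrays.
    Assumed grammar: ∪ union, ∩ intersection, ! complement of the next name.
    """
    tokens = expr.replace('∪', '|').replace('∩', '&')
    result, op, negate = None, None, False
    for ch in tokens:
        if ch in '|&':
            op = ch
        elif ch == '!':
            negate = True
        else:                       # a sphere name
            mask = spheres[ch]
            if negate:
                mask = ~mask
                negate = False
            if result is None:
                result = mask
            elif op == '|':
                result = result | mask
            else:
                result = result & mask
    return result

# tiny 1-D stand-in for two query-sphere masks
A = np.array([True, True, False, False])
B = np.array([True, False, True, False])
out = eval_query('B∩!A', {'A': A, 'B': B})   # voxels inside B but outside A
```

In the tool, the resulting mask would be folded into the bit-array 3-D texture that drives the rendering.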
Domain Experts Feedback
We consulted two domain experts from the Northwest Branch of the China Institute of Petroleum Exploration and Development, who discussed the work with us and gave suggestions frequently throughout the project. These experts were already familiar with some of the previous PC-based work on petroleum data exploration. They evaluated whether our data exploration approach in an immersive environment would give them a better and more intuitive understanding of the data by comparing it with traditional methods. The domain experts also provided some practical requirements for oil exploration and assessed whether our overall approach can meet those requirements.
The domain experts in geology gave us much positive feedback. They said that our tool provides an effective method for geologists to study geological phenomena and observe petroleum in 3-D space, and that the visualization techniques describe complex geological structures more accurately and intuitively.
User Study
In addition to this informal feedback, we conducted a user study to assess the usability and utility of our tool. We recruited 10 participants majoring in geography from the university. Each participant was asked to experience the pre-loaded petroleum datasets in the immersive environment, the three formation display modes, and the quadric surface query results based on set operation expressions. We encouraged them to freely interact with and explore the data through the Touch Handles, with investigators nearby to provide any necessary operating instructions. Lastly, we administered a questionnaire and conducted a brief interview to collect comments about our tool. Most of the feedback was positive, which demonstrates the usability of the proposed approach, as shown in Figure 10.
Discussion
Our approach to petroleum data visualization in an immersive environment effectively helps domain experts to explore stratigraphic data, better reveal geological structures, and illustrate the distribution of the geological materials they are interested in. In our approach, exploring data through mid-air interaction gives users a high degree of freedom in their operations. Since mid-air interaction can easily cause fatigue, we reduce the variety of hand motions required by combining the interaction with the Touch Handle buttons. Feedback from the domain experts who tested our tool is mostly positive; however, there are still some limitations that require further refinement and improvement.
First, in the quadric surface query process, our tool only allows the user to pre-set the location information of the query surface in a script and then import it into the immersive device for interactive exploration. Compared with some traditional methods that require much unnecessary interaction, our query function based on set-operator expressions lets users freely query the stratigraphic information of interest. However, there are still barriers to setting up the surface information; for example, altering the parameters via scripting is not intuitive. For this reason, we plan to add a preview model for the quadric-surface-based query in the immersive environment, enabling users to adjust the settings directly with the Touch Handle.
In addition, our tool currently only supports virtual reality. VR still limits the sense of data authenticity for the user, so we would like the tool to support AR in future work. AR overlays digital imagery on the real-world environment, combined with motion tracking and visual feedback. In this way, the interaction between the device and reality would be strengthened in realistic scenarios, enhancing the authenticity and immersion of scientific data exploration.
A higher frame rate makes data exploration smoother and increases the user's exploration efficiency. Two main factors currently limit the frame rate: on the one hand, the limited processing performance of the immersive devices used leads, to a certain degree, to a low frame rate; on the other hand, the ray casting algorithm currently used in the immersive visualization has high complexity. Later work can improve the frame rate by optimizing this ray casting algorithm.
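One standard way to lower ray-casting cost, early ray termination, is sketched below: front-to-back compositing along a ray stops once the accumulated opacity saturates, so occluded samples are never evaluated. The sample values and cutoff are illustrative assumptions; this is not the renderer used in the paper.

```python
def composite_ray(samples, alpha_cutoff=0.99):
    """Front-to-back compositing of (color, alpha) samples along one ray.

    Stops early once accumulated opacity is nearly 1, since the remaining
    samples can no longer change the pixel.
    """
    color_acc, alpha_acc, steps = 0.0, 0.0, 0
    for color, alpha in samples:
        weight = (1.0 - alpha_acc) * alpha
        color_acc += weight * color
        alpha_acc += weight
        steps += 1
        if alpha_acc >= alpha_cutoff:   # early ray termination
            break
    return color_acc, alpha_acc, steps

# an opaque sample early on terminates the ray after 2 of 100 steps
samples = [(0.5, 0.4), (0.8, 1.0)] + [(0.1, 0.1)] * 98
color, alpha, steps = composite_ray(samples)
```

In dense stratigraphic volumes, where many rays hit opaque strata quickly, this kind of optimization can skip most of the per-ray work.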
Finally, we can also add visual customization tools that allow the user to freely annotate the visualization results. For example, a lasso tool could help users select the oil data they need to display. Other components such as brushing, erasing, keying, sketching, and drawing would all be conducive to data illustration.
Conclusions
Our tool currently supports immersive exploration of different stratigraphic datasets. Compared with traditional PC-based stratigraphic visualization tools, our tool integrates data visualization with immersive virtual environments. We use the proposed bit-array-based 3-D textures to organize the interactions with the geological data, which helps field scientists better understand the distribution of UFP or other geological materials of interest. Users can transfer query spheres and stratigraphic datasets edited on a PC to the immersive device. The transfer function visualizes the structure of the river channels, making the deep subsurface channels much clearer. In addition, users can switch between the different stratigraphic display modes and enter specific set-operator expressions to query the stratigraphic structures of interest, and the different types of queries can be combined to form the final interpreted visualization. Finally, the geological data visualization system also supports interactive slice-and-dice analysis, allowing users to explore the data through Touch Handle interaction.
Discussions were held with the collaborating domain experts and much positive feedback was received; however, the tool still has some limitations. For example, our work does not currently support AR, so we cannot bring the technology into the real world, which reduces the adaptability of our work and diminishes the authenticity of the user experience.
We will take the experts' suggestions and continue to improve and extend our work in the future to address the issues discussed in Section 5, including the modification of parameter information for quadric surfaces.
"Geology",
"Computer Science"
] |
Component Alignment in Total Knee Replacement
Introduction
Total knee replacement (TKR) is one of the most commonly performed surgical interventions, providing substantial relief from pain and improvement in functional disability in patients with knee arthritis [1]. Although the survival of primary TKRs is excellent, with 95% survival at 10 years for most implants [2], approximately 20% of those who undergo TKR are not satisfied with the outcome at the one-year assessment [3]. Restoration of knee alignment is one of the main determinants of successful outcomes after TKR [4]. Implant malalignment following primary TKR has been reported to be the primary reason for revision in 7% of revised TKRs [5].
Alignment axes
The vertical axis is defined as a vertical line extending distally from the center of the pubic symphysis on a normal weight-bearing anteroposterior radiograph. It is used as the reference line from which the other axes are determined [5].
The mechanical axis of the lower extremity is determined by drawing a line from the center of the femoral head to the center of the ankle joint, which slopes approximately 3° from the vertical axis. The anatomic axis of the lower extremity is defined in relation to the intramedullary canals. The femoral anatomical axis (FAA) is determined by a line drawn from proximal to distal in the intramedullary canal, bisecting the femur; the tibial anatomical axis (TAA) is created by a line drawn from proximal to distal in the intramedullary canal, bisecting the tibia [4].
On anteroposterior evaluation, the anatomic and mechanical axes of the femur differ in inclination by 5° to 7°, whereas the anatomic and mechanical axes of the tibia correspond with each other (Figure 1). The hip-knee-ankle (HKA) angle, also known as the mechanical axis of the lower limb, is commonly defined as the angle between the mechanical axis of the femur and the mechanical axis of the tibia [6].
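The HKA angle defined above can be computed from digitised landmark coordinates, as in the following sketch. The 2-D coordinate convention (x lateral, y pointing distally) and the landmark inputs are assumptions for illustration, not a validated radiographic method.

```python
import math

def axis_angle(p_from, p_to):
    """Orientation (degrees from vertical) of the line p_from -> p_to."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dx, dy))

def hka_angle(femoral_head, knee_center, ankle_center):
    """Hip-knee-ankle angle: angle between the femoral mechanical axis
    (femoral head -> knee center) and the tibial mechanical axis
    (knee center -> ankle center). 0 = neutral; the sign convention
    here is arbitrary."""
    femoral = axis_angle(femoral_head, knee_center)
    tibial = axis_angle(knee_center, ankle_center)
    return femoral - tibial

# collinear landmarks give a neutral (0 deg) mechanical alignment
angle = hka_angle((0.0, 0.0), (1.0, 40.0), (2.0, 80.0))
```

Shifting the knee center laterally off the hip-ankle line produces a non-zero deviation, which is what the ±3° criterion discussed below is measured against.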
Orthopedics and Rheumatology
Open Access Journal ISSN: 2471-6804
Coronal alignment
One of the goals of TKR is restoration of the overall mechanical axis of the lower limb, the HKA angle. The tibiofemoral axis (TFA) can be determined on short anteroposterior radiographs of the knee when full-length weight-bearing views are not available; the acceptable alignment is between 7° and 9° of valgus [7]. According to Fang et al., increased failure rates were noted in TKRs with varus malalignment (TFA < 2.5°) and valgus malalignment (TFA > 7.5°) [7]. Knees aligned within ±3° of the neutral mechanical axis had better International Knee Society scores (KSS) and Short Form 12 (SF-12) scores at 6 weeks, 3 months, 6 months, and 12 months after surgery [8].
The optimal distal femoral cut is made at 2°-7° of valgus to the FAA to achieve optimal mechanical alignment. Femoral component placement at > 8° of valgus relative to the FAA has resulted in five times higher rates of failure [9], and coronal alignment of the femoral component at > 8° or < 2° of valgus with respect to the FMA was associated with implant failure [10]. The tibial component should be placed in neutral alignment (90°) in the coronal plane with maximum bone coverage, which is achieved by a proximal tibial cut at 90° to the mechanical axis of the tibia [9,11]. Tibial components in > 3° of varus had an increased risk of medial bone collapse [12]. According to Kim et al. [11], no knees in the neutrally aligned group required revision of the components, whereas 3.4% of varus-malaligned tibial components required revision at an average follow-up of 15.8 years.
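The coronal-plane thresholds cited above can be collected into a simple bookkeeping check. The function below merely restates the review's cited ranges (overall axis within ±3° of neutral, femoral component 2°-8° of valgus, tibial varus under 3°); it is an illustration, not clinical guidance.

```python
def check_coronal_alignment(hka_deviation, femoral_valgus, tibial_varus):
    """Flag coronal-plane values outside the ranges cited in the text.

    hka_deviation: degrees from neutral mechanical axis (signed).
    femoral_valgus: femoral component valgus in degrees.
    tibial_varus: tibial component varus in degrees.
    Returns a list of warning strings (empty if all values are in range).
    """
    warnings = []
    if abs(hka_deviation) > 3:
        warnings.append('overall mechanical axis beyond +/-3 deg of neutral')
    if not 2 <= femoral_valgus <= 8:
        warnings.append('femoral component outside 2-8 deg valgus')
    if tibial_varus > 3:
        warnings.append('tibial component beyond 3 deg varus')
    return warnings

ok = check_coronal_alignment(1.0, 5.0, 0.0)   # all values within the cited ranges
```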
Sagittal Alignment
The ideal femoral component position in the sagittal plane is considered to be 0-3° of flexion [10]. Hyperextension of the femoral component apparently increases the risk of osteolysis [13], and it may create a notch in the anterior femoral cortex, which can increase the potential risk of a supracondylar fracture [14]. In a study of 95 patients, Lustig et al. [15] found that sagittal alignment of the femoral component greater than 3.5° from the mechanical axis increased the relative risk of mild flexion contracture at one-year follow-up by 2.9 times.
The desired tibial sagittal alignment for most prosthesis types is a posterior slope between 0° and 7° [10]. Tibial malalignment in the sagittal plane (< 0° or > 7°) had a failure rate of 4.5%, compared with 0.2% in the neutrally aligned group [11]. The proximal tibia should never be resected in an anterior slope, as this would impair the posterior flexion space and could cause instability [16].
Rotational Alignment
Many anatomical landmarks are used to determine the rotational alignment of the femur. The transepicondylar axis (TEA) of the femur is regarded as the gold-standard axis for establishing the rotational alignment of the femoral component during TKR [8,17,18]; other methods include the trans-sulcus axis (TSA), also known as Whiteside's line [19], gap balancing (GB), and the posterior condylar axis (PCA). The femoral component should not be implanted in internal rotation with respect to the TEA; rather, it should be placed in 2-5° of external rotation relative to the TEA [11,20].
It is accepted that placement of the femoral component in approximately 3° to 5° of external rotation relative to the posterior condylar axis improves patellar tracking. Externally rotating the femur by 3-4° may be accurate in most patients; however, in a valgus knee with hypoplastic lateral femoral condyles, every 1 mm of asymmetrical cartilage erosion can change the femoral rotation by approximately 1° if rotational alignment is guided by the PCA [20]. A femoral component in excessive external rotation increases the medial flexion gap and may lead to symptomatic flexion instability; external rotation of the component by as little as 5° from the TEA increases shear forces on the patellar component [17,18]. Bell et al. [20] reported that internal rotation of the femoral component (> 3° of internal rotation relative to the TEA) was a significant etiological factor for pain after TKR. A femoral component in flexion creates lift-off of the anterior flange and patellar impingement, while excessive extension of the femoral implant displaces the extensor mechanism anteriorly, increasing retinacular tension and patellofemoral compressive force [21].
Many anatomical landmarks are used for rotational alignment of the tibial component, including the medial border of the tibial tubercle axis (TTA) [22,23], the transverse axis of the tibia, the medial third of the tibial tubercle [22,24], and the malleolar axis [23]. There is no gold-standard landmark for rotational alignment of the tibial component. Relying only on the TTA has been associated with malpositioning of the tibial component [25], so a combination of reference points may reduce errors in component position [17]. Tibial component malrotation is more common and typically more severe than femoral component malrotation [26]. Internal rotation of the tibial component by > 9° relative to the TTA caused pain and functional deficit, whereas there was no pain in patients with external rotational errors [18]; similar results were found by Bell et al. [20].
Conclusion
The goal of successful TKR is to achieve accurate alignment of the components. Malaligned components may lead to pain, poor functional outcome, and impaired stability of the joint [20,26-29]. The acceptable parameters for accurate alignment are: a knee aligned within ±3° of the neutral mechanical axis [8]; coronal alignment of the femoral component between 2° and 8° of valgus with respect to the FMA [10]; a tibial component placed in neutral alignment (90°) in the coronal plane with maximum bone coverage [9,11]; femoral component sagittal positioning of 0-3° of flexion; and a tibial posterior slope between 0° and 7° for most prosthesis types [10]. The femoral component should not be implanted in internal rotation with respect to the TEA; it should be placed in 2-5° of external rotation relative to the TEA [11,20]. There is no gold standard for measurement of tibial component rotation, but excessive internal rotation of the tibial component, measured relative to the tibial tubercle, can lead to knee pain [10].
"Medicine",
"Engineering"
] |
Foam evolution in a processed liquid solution
Foam formation in a carbonated solution undergoing pouring and decompression is investigated with the use of high-speed imaging. Operational conditions similar to those encountered in industrial bottling processes are applied to inspect the mechanisms that control the foaming behaviour in practical filling applications. The evolution of the foam column during pressure release is analysed in quantitative terms by extracting the foam thickness from the images. The bubble dynamics inside the solution, and the destabilization processes on the foam column are seen to have a paramount effect on the observed foam evolution trend. The contributions to foam formation given by the nuclei entrained in the bulk liquid and by the bubble-generating sites on the container walls are finally distinguished and discussed.
Introduction
Foams frequently arise in industrial applications involving filling and pouring of liquids, such as food processing, casting of molten metals, and pharmaceuticals and chemicals manufacturing. In those processes, the formation of foam is an unwanted effect, since it retards the production process and can compromise the product quality. Consequently, finding suitable strategies able to inhibit the inception of foam is of primary importance in those fields.
It is now well known that in practical situations the presence of pre-formed gaseous nuclei is an essential factor for a solution to foam [1]. These nuclei can be small enough to escape optical detection [2], and may be introduced into the liquid by several mechanisms, including bubble entrainment by liquid-liquid impacts [3-6] and bubble entrapment along the container walls during filling [7,8]. The growth of nuclei to macroscopic sizes may be activated by pressure reduction or temperature increase, which can drive the system into a supersaturated state [1]. The turbulent dynamics generated inside the container can also trigger the growth of nuclei by cavitation [9]. Once they reach a critical size, bubbles migrate towards the free surface of the liquid, where their aggregation gives rise to a column of foam. On the other hand, foam decays spontaneously over time, its deterioration being dictated by the kinetics of four processes: disproportionation, gas diffusion, drainage and coalescence [10-12].
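The growth of a nucleus under pressure reduction can be illustrated to first order with the ideal-gas (Boyle's law) relation, neglecting surface tension, gas diffusion and supersaturation effects; this is a rough sketch of the mechanism, not a model used in the paper, and the example pressures are assumptions (5.4 bar absolute corresponds to the 4.4 bar gauge used later in the experiments).

```python
def diameter_after_decompression(d0_mm, p0_bar, p1_bar):
    """Isothermal growth of a spherical gas nucleus when pressure drops.

    From V1 = V0 * p0 / p1 it follows that d1 = d0 * (p0 / p1)**(1/3).
    Surface tension and mass transfer are deliberately ignored, so this
    only illustrates the order of magnitude of the effect.
    """
    return d0_mm * (p0_bar / p1_bar) ** (1.0 / 3.0)

# a 1 mm nucleus after venting from 5.4 bar absolute to ambient pressure
d1 = diameter_after_decompression(1.0, 5.4, 1.0)
```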
The mechanisms governing foam growth and stability, as well as their influencing parameters, have been extensively studied in different research areas for many years. However, most of the published studies consider one single phenomenon at a time, and are usually performed under simplified conditions. For these reasons, the exact incidence of the various processes on the production of foam is still quite unclear; this is especially true in the context of the filling processes, where a variety of mechanical and fluid dynamical events are interrelated. The present research aims at clarifying for the first time the contributions given by the totality of the foaming mechanisms to foam formation in a carbonated liquid, as a consequence of the pressure-filling process and the subsequent decompression phase. For this purpose, a series of experiments was carried out on a carbonated, optically transparent commercial soft drink, poured into a bottle with the use of a testing filling machine. High-speed imaging was adopted to inspect the phenomena occurring inside and above the solution, which was processed under realistic conditions of pressure and flow rate. The comparison between the images obtained on the foam column and inside the solution allowed the identification of the main phenomena at the basis of foam development. The recorded image sequences were also analysed in quantitative terms and the trend of the foam level during decompression was reconstructed.
Experimental set-up
In order to reproduce the fluid dynamical conditions encountered in practical filling applications, a testing filling machine for the bottling of carbonated beverages was employed. The machine transfers the liquid product from a pressurised reservoir to the container at controlled conditions of pressure and flow rate, their values being those typically used in the beverage industry. The container used for the tests was a brand-new 1.5 l plastic bottle. Since a swirl-type valve was used for filling, a bottle with a cylindrical body was chosen, having an almost constant diameter (d = 88 mm), no ribbing and a smoothly curved shoulder, to prevent accidental detachment of the annular filling jet from the container walls. The working fluid was carbonated on site to 4.3 volumes of CO2 and maintained at a fixed temperature of 17.5 °C. Experiments were performed with the following filling procedure: first, the internal pressure of the bottle was equated to the reservoir pressure, set at 4.4 bar gauge. This was achieved by introducing carbon dioxide into the bottle while venting the air it contained to the atmosphere. Next, the liquid was injected into the bottle at a specific constant flow rate. The bottle was filled only partially, i.e. to just 1 l, to prevent foam overflows and allow an accurate measurement of the foam level. After the filling was completed, a short settling period was allowed and then the pressure was gradually released. The decompression was operated in two steps, by consecutively opening a snift valve connected to ambient air. The opening time of the second snift (∆t2) was three times that of the first snift (∆t1) and equalled the intermediate rest period (∆trest) between the two decompressions. The actual duration of each step cannot be provided, as it is protected by a non-disclosure agreement. At the end of each filling sequence, the bottle was emptied, rinsed and dripped.
High-speed imaging was used for the recording of foam and bubble dynamics during decompression. The imaging system was arranged as sketched in figure 1.a. Two NanoSense MkIII CMOS cameras from Dantec Dynamics were positioned on one side of the filling machine for the simultaneous observation of the flow phenomena in distinct regions of the bottle. Two different configurations were implemented. In the first configuration (configuration A), one camera was zoomed in the bottle headspace to capture foam evolution, while the other one monitored the rising bubbles close to liquid/foam interface. The cameras were fitted with a zoom lens (AF Nikkor 28-80 mm) and a macro lens (Micro Nikkor 60 mm f/2.8), respectively. In the second configuration (configuration B), both the cameras were focused inside the liquid at different heights. In this case, they were equipped with macro lens to inspect bubble dynamics in the whole filled volume at the maximum achievable spatial resolution. Magnifications up to 100 µm/pixel were obtained. Image distortions in the liquid due to the bottle cylindrical geometry were minimized by encasing the bottle into a square-sided Plexiglas container. The optical compensation box, shown in figure 1.b, was filled with water for the matching of the refractive indexes.
Image sequences were recorded at 200 fps, which was the maximum rate compatible with the duration of the experiments. Acquisitions started at the beginning of the decompression sequence, triggered by a signal transmitted by the filling machine to the cameras. The microcontroller Arduino Uno was employed for the signal handling. A maximum synchronization error of 6.6 μs was measured, including the contribution of the cables. Three 27.2 W/m LED tubes were used as the light sources in backlight configuration. The tubes were positioned close to the test container to provide intense illumination. A uniform distribution of light was obtained by interposing a light diffuser between the LED sources and the compensation box.
Data reduction
The extraction of qualitative and quantitative information on foam evolution and bubble behaviour was performed by processing the acquired images with the software ImageJ (NIH, USA). Images collected inside the liquid were examined by direct visual inspection; no automated detection techniques could be applied because of the very high bubble concentration. The size of the bubbles was evaluated by approximating the nuclei with ellipsoids, from whose volume the equivalent spherical diameter was calculated. Sub-pixel accuracy could be obtained with this method.
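The ellipsoid-to-equivalent-sphere conversion mentioned above amounts to taking the cube root of the product of the axis lengths. The sketch below assumes a spheroid whose third axis equals the minor one, since the exact axis convention used by the authors is not stated.

```python
def equivalent_diameter(d_major_mm, d_minor_mm):
    """Equivalent spherical diameter of a bubble fitted with a spheroid.

    Assumes the third (unseen) axis equals the minor one; the sphere of
    equal volume then has d_eq = (d_major * d_minor**2)**(1/3).
    """
    return (d_major_mm * d_minor_mm ** 2) ** (1.0 / 3.0)

# an elongated bubble: equivalent diameter lies between the two axes
d_eq = equivalent_diameter(2.0, 1.0)
```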
For the estimate of the foam thickness, a procedure based on the extraction of intensity profiles was implemented. Images acquired in the bottle headspace were first filtered to suppress noise, and then reduced to a region with an almost uniform background. The profiles of the pixel intensities were drawn for the frames of interest, by plotting the horizontally averaged grey value along the vertical direction of the selected area. Each curve was analysed to identify the position of the upper and lower foam interface according to pre-determined mathematical criteria. The thickness of the foam was thus derived, with a maximum error of 9%.
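The intensity-profile procedure for estimating the foam thickness can be sketched as follows. The interface criterion used here (crossing the midpoint of the intensity range) is an assumption for illustration; the paper applies pre-determined mathematical criteria that are not detailed.

```python
import numpy as np

def foam_thickness(image, threshold=None):
    """Estimate foam thickness (in pixel rows) from a backlit headspace image.

    Mirrors the described procedure: average grey values along each row,
    then locate the upper and lower foam interfaces where the profile
    crosses a threshold (foam appears dark against the backlight).
    """
    profile = image.mean(axis=1)               # horizontally averaged grey value
    if threshold is None:
        threshold = 0.5 * (profile.min() + profile.max())
    dark = np.where(profile < threshold)[0]    # rows occupied by foam
    if dark.size == 0:
        return 0
    return dark[-1] - dark[0] + 1

# synthetic frame: bright background with a dark foam band on rows 10-19
img = np.full((40, 60), 200.0)
img[10:20, :] = 50.0
thickness_px = foam_thickness(img)             # multiply by mm/pixel for mm
```

With the stated 9% maximum error, the physical thickness would follow from multiplying the pixel count by the calibrated spatial resolution.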
Results and discussion
To allow the comparison between acquisitions recorded in different configurations, a preliminary repeatability analysis was carried out. The test was repeated twenty times with the cameras set in configuration A. The observed foam behaviour was qualitatively the same in all the cases; moreover, the maximum foam thickness was obtained at the same time instant with an error of 3%, its measured values showing a variation of 5%. Hence, the process can be considered reproducible.
The experiment was then performed again in both configurations. The foaming phenomena visualized inside and above the liquid during decompression are detailed in the following section. Next, the trend of the foam level, as reconstructed from a whole recorded sequence, is presented and discussed.
Foam and bubble dynamics
The inspected foam and bubble dynamics were divided into five main stages, described here in chronological order in terms of the dimensionless parameter t' = t / ∆t1. Exemplifying image sequences are provided, reporting i) in the first column, foam evolution as captured by the camera focused on the bottle headspace in configuration A, and ii) in the second and third columns, what was observed at the same time instant in the upper and lower portions of the liquid, respectively, with the cameras set in configuration B. The beginning of the first decompression step is taken as the reference time.
Early growth.
At the beginning of the decompression stage, foam has not developed yet. As shown in figure 2.a, just a collar of bubbles is occasionally present, formed by the nuclei introduced by the impinging filling jet and already risen to the liquid free surface. Further clustering is prevented by the tail-end of the jet which is still descending along the bottle walls. The liquid bulk is populated by a large amount of bubbles having different size and distribution (figures 2.b and 2.c). They all originate from entrainment processes taking place during filling. Bubble diameters ranging from 260 µm, which is the minimum perceivable size, to 4.83 mm have been measured. The biggest bubbles are found in the upper portion of the fluid, in rapid ascension driven by buoyancy. Below this region, bubble mean diameter gradually decreases downwards. Sub-millimetric nuclei prevail in the bottom of the bottle: such small bubbles are more affected by the liquid flow field, and are easily carried down deep into the fluid where they are retained even for long times.
Within the first four tenths of the first decompression step, all the entrained bubbles with diameters larger than 2 mm before decompression reach the free surface. Their aggregation gives rise to the first layer of foam, displayed in figure 2.d. Meanwhile, the gaseous nuclei gradually increase in size owing to the pressure reduction. Inside the liquid, this alters the motion of the bubbles: as the buoyancy force increases, the nuclei already pointing upwards are accelerated towards the free surface, while those entangled in the fluid flow become able to overcome drag. This second effect occurs once a critical diameter is attained, which depends on the residual agitation level of the liquid. At this stage of the process, the critical diameter is on the order of one millimetre: a sudden change in direction has been observed for bubbles with diameters of 1.27 ± 0.12 mm in the upper portion of the fluid (figure 2.e) and of 1.11 ± 0.08 mm in the lower portion (figure 2.f). The slight difference between the two regions indicates that the fluid velocity field is still non-uniform.
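The critical diameter discussed above follows from a balance between buoyancy and drag. A rough order-of-magnitude sketch, assuming a water-like liquid, a constant drag coefficient of order one, and negligible gas density (none of these values are given in the text, so this is not the authors' model):

```python
import math

def terminal_rise_velocity(d, drag_coeff=1.0, d_rho=1000.0, rho=1000.0, g=9.81):
    """Terminal rise speed (m/s) of a bubble of diameter d (m), from
    buoyancy (pi/6 d^3 d_rho g) balanced against quadratic drag."""
    return math.sqrt(4.0 * g * d * d_rho / (3.0 * drag_coeff * rho))

def critical_diameter(u_flow, drag_coeff=1.0, d_rho=1000.0, rho=1000.0, g=9.81):
    """Diameter (m) at which the rise speed matches a residual flow speed."""
    return 3.0 * drag_coeff * rho * u_flow**2 / (4.0 * g * d_rho)

u = terminal_rise_velocity(1.2e-3)  # rise speed of a ~1.2 mm bubble
d_c = critical_diameter(u)          # recovers 1.2 mm by construction
```

With these assumptions a 1.2 mm bubble rises at roughly 0.1 m/s, so the observed millimetre-scale critical diameter corresponds to a residual agitation speed of that order.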
Bubble enlargement produces, as an additional consequence, the appearance of free nuclei that were previously too small to be detected. Furthermore, at nearly t' = 0.4, gaseous cavities entrapped on the internal bottle walls also become visible. Hence, the liquid seems pervaded by an increasing number of bubbles, as can be observed by moving down the images in the second and third columns of figure 2. Such a trend continues in the next few instants (0.4 < t' < 0.8). In this interval no further pre-existing bubbles come into view, but a few nucleation sites on the walls activate and start to release bubbles a few hundred microns in diameter, at frequencies of up to 24 bubbles per second.
The population of nuclei in the liquid also changes in size distribution. Since bigger bubbles rise more rapidly, the lower portion of the bottle is soon occupied only by nuclei of homogeneously small size, whose diameter grows in time up to nearly 0.6 mm. The upper portion of the bottle houses a more varied population, with bubbles arriving at the liquid-foam interface with diameters of up to 2.3 mm. Fed by the continuous arrival of bubbles, the foam column keeps growing (figure 2.g). During expansion, deterioration processes begin to affect the top layers of the foam: bubbles burst when an unstable size is reached, as a consequence of the pressure decrease or of coalescence between adjacent nuclei. A cloud of rising nuclei then forms; at this later stage the critical diameter is lower, i.e. 0.69 ± 0.11 mm, and identical throughout the fluid, suggesting that the flow field has attenuated and levelled out. Besides the nuclei responsible for its generation, the cloud also incorporates i) entrained bubbles, typically of bigger size, already moving towards the free surface, ii) bubbles previously released from the sites on the walls, iii) entrained bubbles even smaller than the critical size, dragged into the wake of the rising bubbles, and iv) bubbles torn away from the solid surface by the passage of the cloud. While ascending, the nuclei continue to enlarge and the cloud thickens. Since the path of the nuclei is increasingly obstructed, their speed decreases, and it takes about 1.4 ∆t1 for the cloud to rise completely. Hence, the process completes at t' = 2.2 (third row of figure 3), that is, well after the end of the first decompression step, extending over the first third of the intermediate rest period. The images in the first column of figure 3 demonstrate that the cloud ascent promotes a significant expansion of the foam column. However, foam growth is not constant.
On the one hand, it is interrupted by the coalescence and burst of the bubbles composing the upper (and older) layers of the foam, like those appearing in figure 3.d. By contrast, the new layers formed by the bubbles carried by the cloud are more compact and homogeneous, and thus more stable. On the other hand, the liquid-foam interface is not fixed: initially, the gaseous phase developing inside the liquid makes the interface shift upwards. As the cloud evolves, the rising nuclei start to crowd at progressively lower levels, and the interface is brought back to nearly its starting height.
First collapse.
With the passage of the cloud, the liquid is emptied of all the nuclei introduced by entrainment. The nucleation sites already identified on the bottle walls continue to release bubbles during the remaining part of the intermediate rest period (2.2 < t' < 4.0), but at progressively slower rates. Further sites, located between those visible in figures 3.h and 3.i, become active in this stage. These generally produce few bubbles, whose size and frequency of release are strongly influenced by interactions with other nuclei. Owing to the scarce supply of new bubbles, the foam behaviour is dominated by decay. The homogeneous structure of the foam hinders its immediate deterioration. With passing time, drainage and coalescence induce structural changes in the whole foam column, involving fusion, deformation and rearrangement of bubbles, as can be observed by comparing figures 3.g and 4.a. As a result, bubble skins become thinner and weaker, ultimately causing the rupture of the top layers of the foam.
Second growth and collapse.
The second decompression step, operated at t' = 4.0, is instantly perceived as a sudden but moderate swelling of the bubbles in the foam. The foam structure is thereby further destabilized and subsequently exposed to faster decay. Indeed, figures 5.a and 5.d show that the foam height drops, ultimately assuming, at t' = 5.3, levels similar to those observed in the very early stages of the process (figure 2.d). The new pressure decrease also affects the bubbles inside the liquid, including those bound to the bottle walls. As can be seen by comparing figures 5.b and 5.c with figures 4.b and 4.c, the bubble diameter increases rapidly, by up to 30% in the first three tenths of the second snift. The persistence of a supersaturated state in the liquid promotes the growth of the nuclei in the following instants as well. Consequently, more bubbles become able to overcome adhesion and detach from the solid surface. Large bubbles from the bottom of the bottle, in turn, stimulate the release of other nuclei by bumping into them during ascent. In addition, nucleation sites activated during the first snift recover their original release frequencies. All these factors produce a gradual increase in the rate of bubbles arriving at the liquid-foam interface (figure 5.b). This tendency culminates at approximately t' = 4.8, after which no apparent variations in the characteristics of the emerging bubbles are observed.
Final equilibrium.
When t' > 5.5 the foam height stops decreasing. Any decay of the existing layers is compensated by the massive arrival of new bubbles, as large as those responsible for the generation of the first layer of foam. Hence, the foam column is progressively deprived of the residual bubbles from the first decompression step, which are replaced by freshly originated nuclei of uniform size. This process contributes to creating an equilibrium state that lasts until the end of the process, as proved by figure 6. From figure 6.c it is also evident that, despite the increasing buoyancy force and the collisions with rising nuclei, clusters of bubbles are held on the bottle walls. These may give an additional delayed contribution to the foam, as they can be activated by mechanical agitation once the bottle is detached from the filling machine.
Plot of the foam trend
To understand how the above foaming dynamics reflect on the amount of foam produced, the image sequence relative to configuration A was analysed in quantitative terms. The foam thickness was extracted from 86 frames using the procedure described in section 2.2. Measurements were taken more frequently when rapid changes in the foam column were observed, so as to reconstruct the whole trend with the best accuracy. The resulting graph of the foam thickness versus time is displayed in figure 7. Boundaries between the different stages of the decompression sequence are also indicated. The graph shows that the foam level grows rapidly during the first half of the first decompression step, fostered by the rise of the largest entrained bubbles. The growth is not immediate: a short latency time is needed to allow the aggregation of the initial layer of foam. Later, the foam continues to expand at a slower rate. The change in slope should be attributed to variations occurring in the population of the ascending nuclei, such as their mean diameter and rising velocity. The bubbles carried by the rising cloud then produce a further increase in the foam thickness. Bumps are present in this segment of the curve, caused by the occasional burst of bubbles at the top of the column. Though the cloud has not completely risen, at t' = 1.8 the foam reaches its maximum thickness, i.e. 43 mm, corresponding to 26% of the filled volume of liquid. The subsequent rupture of a large bubble, originated by fusion from those visible in figure 3.d, produces a sudden drop in the foam height. Next, the level is partially restored by the tail nuclei of the cloud as they eventually reach the liquid-foam interface.
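A thickness-extraction step of this kind can be illustrated with a toy version of such a procedure (the actual method of section 2.2 is not reproduced here): threshold a vertical intensity profile through the foam column and count the bright band of pixels. The pixel-to-millimetre calibration factor below is hypothetical.

```python
import numpy as np

def foam_thickness_px(column, threshold):
    """Extent in pixels of the bright (foam) band in a vertical
    intensity profile; returns 0 if no pixel exceeds the threshold."""
    idx = np.flatnonzero(column > threshold)
    if idx.size == 0:
        return 0
    return int(idx[-1] - idx[0] + 1)

# Synthetic profile: dark headspace, bright foam band, darker liquid.
profile = np.concatenate([np.full(40, 20), np.full(86, 200), np.full(120, 35)])
px = foam_thickness_px(profile, threshold=128)
mm = px * 0.5  # hypothetical calibration of 0.5 mm per pixel
```

Repeating this over selected frames of the sequence would yield a thickness-versus-time curve of the kind plotted in figure 7.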
With the arrival of the cloud (t' = 2.2), the production of foam induced by the first decompression step ceases. The foam level then starts to decrease, despite the arrival of some residual bubbles from the bottle walls. Initially (2.2 < t' < 3.2), the decay is almost imperceptible, impeded by the compact structure of the foam. Only a few bubbles burst, their size being too small to produce a significant reduction of the column. Approaching the end of the intermediate rest period, the number and size of the exploding bubbles gradually increase, promoted by the advancing deterioration processes. However, this translates into a lowering of just 4 mm of the foam height. A more drastic collapse of the foam column is observed after the snift valve is opened again: following the modest peak at t' = 4.2, attributable exclusively to a swelling of the existing layers, the thickness of the foam decreases by 18 mm within an interval of 1.1 ∆t1. The decrease is not continuous but occurs in steps, as it is related to simultaneous ruptures of multiple large bubbles.
The last portion of the curve is characterized by the convergence of the foam thickness to a constant value, i.e. 13 mm. This can be considered the equilibrium value in the balance between the further deterioration of the foam, on the one hand, and the renewed production of bubbles, on the other, under the conditions achieved with the second decompression step.
Conclusions
The foaming phenomena occurring in a carbonated beverage during bottling have been examined using high-speed imaging. It is ascertained that, with the adopted decompression sequence, nuclei entrained in the bulk liquid during filling play a major role in the thickening of the foam. Changes occurring in the population of the rising bubbles are reflected in the growth rate of the foam level. In particular, the development of a cloud, produced by a global deviation of bubble trajectories, determines the achievement of the maximum thickness. Other factors participate in defining the initial growth of the foam, including the conditions at the free surface as decompression initiates and the onset of foam destabilization processes. These latter phenomena strictly depend on the structure of the foam and influence its subsequent evolution. The end of the process is dominated by the competition between the deterioration of the existing layers and the release of bubbles from sites located on the bottle walls, which allows the establishment of an equilibrium foam height.
Numerical Study of a Multi-Layered Strain Sensor for Structural Health Monitoring of Asphalt Pavement
Crack initiation and propagation vary the mechanical properties of asphalt pavement and alter its designed function. As such, this paper describes a numerical study of a multi-layered strain sensor for the structural health monitoring (SHM) of asphalt pavement. The core of the sensor is an H-shaped Araldite GY-6010 epoxy-based structure with a set of polyvinylidene difluoride (PVDF) piezoelectric transducers in its center beam, which serve as the sensing unit, and a polyurethane foam layer at its external surface, which serves as a thermal insulation layer. The sensor is coated with a thin layer of urethane casting resin to prevent corrosion by moisture. As a proof-of-concept study, a numerical model is created in COMSOL Multiphysics to simulate the sensor-pavement interaction, in order to design the strain sensor for the SHM of asphalt pavement. The results reveal that the optimum thickness of the middle polyurethane foam is 11 mm, with a center beam length/wing length ratio of 3.2. The simulated results not only validate the feasibility of using the strain sensor for SHM (traffic load monitoring and damage detection), but also allow the design geometry to be optimized to increase the sensor sensitivity.
Introduction
Crack initiation and propagation vary the mechanical properties of the pavement and further alter its designed function [1]. To date, optical fibers [2], conventional strain gauges [3], and sometimes metal-foil-type gauges [4] are commonly used for Structural Health Monitoring (SHM) applications. Although conventional strain gauges show good reliability, they are rarely used in asphalt materials due to the challenges of harsh installation conditions, high temperatures (up to 164 °C), and pressures (around 290 ksi) [5,6]. Optical fibers are relatively expensive.
Piezoelectric materials are materials that generate electrical charges when they are mechanically deformed. To date, piezoelectric materials, such as piezoceramic material (Lead Zirconate Titanate, PZT) and piezoelectric plastic material (PVDF), have been widely used in research and in practice as sensors for dynamic applications in SHM and energy harvesting [7][8][9][10], since piezoelectric-based sensors have strong piezoelectric effects and wide bandwidth. However, piezoceramic material always suffers from saturation due to its high piezoelectric coefficient, and it is also far too brittle to sustain high strain. Piezoelectric plastic materials, such as PVDF, offer the advantages of high sensitivity, good flexibility, good manufacturability, small distortion, low thermal conductivity, high chemical corrosion resistance, and heat resistance [11,12]. As such, PVDF was chosen as the key sensing material for this multi-layered strain sensor. However, due to the harsh installation conditions of asphalt pavement, particularly the high temperature (up to 164 °C) and pressure (around 290 ksi), specific packaging needs to be designed for the PVDF thin film to survive construction.
To this end, this work proposes a multi-layered strain sensor to overcome the installation challenges in asphalt pavement and to provide a reliable SHM approach for asphalt pavement. The core of the sensor is an H-shaped Araldite GY-6010 epoxy structure with a set of PVDF piezoelectric transducers in its center beam and a polyurethane foam layer at its external surface. A thin layer of cast urethane resin coating and an Araldite GY-6010 epoxy frameset are added to enhance the overall sensor stiffness and to prevent the sensor from being damaged in the field by compaction. The H-shape is adopted from the conventional strain gauge [13]. As a proof-of-concept study, a numerical stress deflection model was created to simulate the pavement-sensor interaction for the design of the strain sensor for the SHM of asphalt pavement. A heat transfer simulation is conducted in COMSOL to determine the thickness of each layer. As a result, the chosen thickness of the middle foam layer is 11 mm. Another finite element analysis was conducted to study the center beam length/wing length ratio and to validate the sensor's capability of capturing crack initiation after packaging. Figure 1 depicts the multi-layered strain sensor used in this work for the SHM of asphalt pavement. The key sensing unit is an 80 mm × 18 mm × 1.55 mm PVDF piezoelectric thin film [13]. To better protect the PVDF thin film, three layers of protection, which are respectively the internal mechanical protection layer, the middle thermal insulation layer, and the outermost corrosion protection layer made of urethane casting resin, are built on the external surface of the PVDF thin film. For the internal mechanical protection layer, the packaging material chosen is Araldite GY-6010 epoxy [5]. According to the material parameter sheet of Araldite GY-6010 epoxy, it has a tensile modulus of around 2.67 GPa and high tensile and flexural strengths above 27.56 MPa, which are similar to those of other epoxies.
Its thermal conductivity is relatively low, generally only 0.2 W·m−1·K−1. In addition, polyurethane foam is chosen as the material for the middle thermal insulation layer due to its excellent thermal insulation performance; the thermal conductivity of polyurethane foam is only 0.022 W·m−1·K−1 [14]. Considering the stress distribution of the sensor embedded underneath the pavement, the H-shape has been adopted in this paper. The thickness of the outer corrosion protection layer is less than 1 mm, which is negligible compared with the thicknesses of the other two layers, and it will not be discussed in this paper. The substitute was not included in the analysis.
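The dominance of the foam layer in the thermal design can be seen directly from the stated conductivities: with a per-unit-area conduction resistance R'' = L/k, an 11 mm foam layer resists conduction an order of magnitude more than the 10 mm epoxy layer. A minimal check, using only values quoted in the text:

```python
def conduction_resistance(thickness_m, k):
    """Per-unit-area conduction resistance R'' = L / k, in K*m^2/W."""
    return thickness_m / k

r_epoxy = conduction_resistance(0.010, 0.2)    # 10 mm epoxy, k = 0.2 W/m/K
r_foam = conduction_resistance(0.011, 0.022)   # 11 mm foam,  k = 0.022 W/m/K
ratio = r_foam / r_epoxy                       # foam resists ~10x more
```

This is why the parameter swept in the heat-transfer study below is the foam thickness rather than the epoxy thickness.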
Sensor Configuration
As mentioned above, the thermal insulation material used in this study is polyurethane foam, which has a thermal conductivity of 0.022 W·m−1·K−1 [14]. Several heat conduction simulations are conducted to determine the thickness of the foam. To simplify the finite element model (FEM), a two-dimensional FEM is created in COMSOL Multiphysics, as shown in Figure 2b, according to the conduction Equations (1) and (2).
where Q is the heat content (J), k is the thermal conductivity of each material used in the simulation (W·m−1·K−1), q is the local heat flux density (W·m−2), ρ is the density of each material (kg·m−3), c_p is each material's specific heat capacity (J·kg−1·K−1), ∇T is the temperature gradient (K·m−1), and T and t represent the temperature and the time, respectively. In the FEM, the thickness of the polyurethane foam is varied in the range of 5 mm to 12 mm with a 1 mm increment. Meanwhile, the thicknesses of the asphalt concrete and the internal mechanical protection layer (the Araldite GY-6010 epoxy layer) are set as 100 mm and 10 mm, respectively. The thickness of the internal mechanical protection layer is an estimated value based on the desired mechanical strength of the sensor. The boundary conditions of the model are set as shown in Figure 2a. The left boundary of the asphalt is directly in contact with air, whose temperature is set as a constant room temperature of 298.15 K. Meanwhile, the right boundary of the asphalt is in contact with the strain sensor, and its temperature is initialized at 437.15 K [15]. Finally, according to the literature, the average cooling time of asphalt pavement is 39 min [15]; the heat transfer time in this study is therefore set as 39 min. As a result, the maximum output temperature (the temperature between the middle mechanical protection layer and the PVDF thin film) should be no more than 333.15 K (the item labeled in red in Figure 2a). In Figure 2a,b and Figure 3, the X axis and Y axis represent the directions along the sensor height and length, respectively.
As shown in Figure 3, with the increase of the foam thickness, the output temperature drops correspondingly. If the desired output temperature is 333.15 K (60 °C), the minimum thickness of the foam should be 11 mm. It can be clearly seen from Figure 4 that the output temperature decreases dramatically after the thermal insulation layer (foam layer).
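The layer-sizing study can be mimicked, very roughly, with a 1D explicit finite-difference version of the conduction equations. This is a sketch, not the COMSOL model: the layer conductivities and thicknesses follow the text, but the densities, heat capacities, exponentially cooling hot-side boundary, and insulated inner boundary are all assumptions. It nevertheless reproduces the qualitative conclusion that a thicker foam layer lowers the peak temperature reaching the PVDF side.

```python
import math
import numpy as np

def peak_inner_temperature(foam_mm, t_end=39 * 60, dx=1.0e-3, dt=0.5):
    """Peak temperature (K) at the inner (PVDF-side) face of a foam+epoxy
    stack whose outer face follows an assumed exponentially cooling hot
    asphalt boundary. Densities, heat capacities, and the 780 s cooling
    time constant are illustrative assumptions, not values from the paper."""
    n_f, n_e = round(foam_mm * 1e-3 / dx), round(10e-3 / dx)
    k = np.r_[np.full(n_f, 0.022), np.full(n_e, 0.2)]          # W/(m*K)
    rho_c = np.r_[np.full(n_f, 40.0 * 1400.0),                 # foam rho*c_p
                  np.full(n_e, 1200.0 * 1000.0)]               # epoxy rho*c_p
    k_half = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])           # face values
    T = np.full(n_f + n_e, 298.15)                             # room temperature
    peak, t = T[-1], 0.0
    while t < t_end:
        t_hot = 298.15 + 139.0 * math.exp(-t / 780.0)          # cooling asphalt
        g_hot = k[0] * (T[0] - t_hot) / (dx / 2.0)             # hot-side face
        g = np.r_[g_hot, k_half * np.diff(T) / dx, 0.0]        # insulated back
        T += dt * np.diff(g) / (rho_c * dx)                    # explicit update
        peak = max(peak, T[-1])
        t += dt
    return peak

p_thin, p_thick = peak_inner_temperature(5), peak_inner_temperature(11)
```

Sweeping `foam_mm` from 5 to 12 in this sketch mirrors the parameter scan described for the FEM; under these assumptions the 11 mm stack keeps the inner face markedly cooler than the 5 mm one.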
Finite Element Model
After determining the thickness of the middle thermal insulation layer, the next important task is to determine the optimal ratio of the wing length to the center beam length. With the optimal ratio, the sensor is expected to reach its highest sensitivity at the lowest material cost. Considering the pressure on the road, the static performance of the sensor under a car wheel load (4900 N) is simulated and analyzed. In this simulation, the length of the center beam of the H-shape, LC, is first fixed at 160 mm, and the strain is observed while varying the wing length, LW, in the range of 20 mm to 50 mm with a 10 mm increment. After confirming the wing length, LW, another simulation is carried out to determine the final center beam length/wing length ratio by fixing the wing length at 50 mm and varying the center beam length in the range of 80 mm to 200 mm with a 20 mm increment.
Three-Point Bending Test
In this simulation, a three-point bending test is utilized to analyze the sensor design. As shown in Figure 5, two concrete supports are used at both ends of the bottom of the asphalt pavement beam. The size of the asphalt concrete beam is 300 mm × 130 mm × 100 mm; the asphalt thickness of 100 mm is taken from the study by Alavi et al. [16]. A force of 4.9 kN is applied to the middle region of the top surface of the asphalt pavement beam, over a contact area of 200 mm × 130 mm. The resulting pressure applied on the top of the beam is thus comparable to the contact pressure of an ordinary car tire on the ground.
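As a quick sanity check on the tire-pressure equivalence, the average pressure of the 4.9 kN load over the 200 mm × 130 mm patch can be computed directly; a passenger-car tire is typically inflated to roughly 200 kPa, so a result of the same order supports the claim.

```python
def contact_pressure(force_n, length_m, width_m):
    """Average contact pressure (Pa) of a load over a rectangular patch."""
    return force_n / (length_m * width_m)

# 4.9 kN spread over the 200 mm x 130 mm loading area from the text.
p_pa = contact_pressure(4900.0, 0.200, 0.130)  # ~1.9e5 Pa, i.e. ~190 kPa
```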
In addition, the same analysis is used to validate the feasibility of using the sensor to detect pavement cracks. The damage was introduced by making a crack at the middle of the bottom of the asphalt pavement beam. To be consistent, the total height of the asphalt pavement beam is still 100 mm (D = 100 mm). As the crack depth, DC, increases, the measured sensor strain should change correspondingly. As such, the crack depth, DC, is varied in the range of 0 mm to 100 mm with a 10 mm increment. In the FEM, the elastic modulus, density, and Poisson's ratio of the asphalt pavement beam were 1200 MPa, 2.6 g·cm−3, and 0.35, respectively. Figure 6 shows that as the wing length, LW, increases, both the vertical and the horizontal strain increase. However, when the wing length, LW, reaches 50 mm, the vertical strain curve begins to flatten and stabilizes at around 101 µε. Meanwhile, the horizontal strain first shows a gentle trend and then a sharp upward trend. Therefore, according to the variation trend of the vertical strain curve, 50 mm can be determined as the suitable wing length. After determining the wing length, the simulation varied the length of the center beam, LC, with the wing length, LW, fixed. The simulation results are shown in Figure 7; as the length of the center beam increases, the two strain curves increase gradually until the center beam length, LC, reaches 160 mm. Beyond 160 mm, the horizontal strain remains essentially unchanged, but the vertical strain curve drops slightly in the range of 160 mm to 180 mm. Both the vertical and horizontal strains then show a peak at 190 mm. When the length reaches 200 mm, the two curves decrease sharply, indicating that the performance of the H-shape can no longer be guaranteed. As such, 160 mm is chosen as the final center beam length.
In other words, the optimal ratio of the center beam length (160 mm) to the wing length (50 mm) is 3.2.
Result and Discussion
The simulation results of crack detection are shown in Figure 8. It can be clearly seen from the figure that when the notch depth increases to 50 mm, the two strain curves reach their peaks. From 50 mm to 90 mm, the two curves drop slightly; after 90 mm, the strain values decline further. Therefore, it can be concluded that the H-shaped sensor can detect the state of the crack by observing either the vertical or the horizontal strain.
Conclusions
In this paper, a unique sensor package is designed to detect cracks at the bottom of the asphalt concrete of a road. The selection of the packaging materials, their thicknesses, and the shape of the package were all simulated to determine the packaging dimensions. Although the package design of the piezo-sensor has been simulated by many groups, and its feasibility has been supported by the results, there are still many problems to be solved when considering various road and environmental loads. The main problems are as follows: the manufacturing process of the H-shape is complex and time-consuming; whether the physical and chemical properties of the H-shape surface are suitable for its working environment requires further verification; only a few types of structures can be analyzed; and the H-shape may not be the best packaging shape. Future research is needed to find better shapes to replace the H-shape. Numerical simulations will help in choosing the ideal stress condition, since valid data can be difficult to collect in actual use.
Characterization of the Glehnia littoralis Non-specific Phospholipase C Gene GlNPC3 and Its Involvement in the Salt Stress Response
Glehnia littoralis is a medicinal halophyte that inhabits sandy beaches and has high ecological and commercial value. However, the molecular mechanism of salt adaptation in G. littoralis remains largely unknown. Here, we cloned and identified a non-specific phospholipase C gene (GlNPC3) from G. littoralis, which conferred lipid-mediated signaling during the salt stress response. The expression of GlNPC3 was induced continuously by salt treatment. Overexpression of GlNPC3 in Arabidopsis thaliana increased salt tolerance compared to wild-type (WT) plants. GlNPC3-overexpressing plants had longer roots and higher fresh and dry masses under the salt treatment. The GlNPC3 expression pattern revealed that the gene was expressed in most G. littoralis tissues, particularly in roots. GlNPC3 localized mainly at the plasma membrane, and partially at the tonoplast. GlNPC3 hydrolyzed common membrane phospholipids, such as phosphatidylserine (PS), phosphatidylethanolamine (PE), and phosphatidylcholine (PC). An in vitro enzymatic assay showed salt-induced activation of total non-specific phospholipase C (NPC) activity in GlNPC3-overexpressing A. thaliana plants. Plant lipid profiling showed a significant change in the membrane-lipid composition of A. thaliana GlNPC3-overexpressing plants compared to WT plants after the salt treatment. Furthermore, downregulation of GlNPC3 expression by virus-induced gene silencing in G. littoralis reduced the expression levels of some stress-related genes, such as SnRK2, P5SC5, TPC1, and SOS1. Together, these results indicate that GlNPC3 and GlNPC3-mediated membrane lipid changes play a positive role in the response of G. littoralis to a saline environment.
INTRODUCTION
Glehnia littoralis Fr. Schmidt ex Miq. is a medicinal and edible plant in the Umbelliferae family. Glehnia littoralis is rich in coumarins, coumarin glycosides, phospholipids, and polysaccharides (Yuan et al., 2002). The peeled and dried roots and rhizomes of G. littoralis are commonly used as a traditional Chinese herbal medicine for moistening the lungs, removing phlegm, relieving cough, curing gastrointestinal disorders, recovering from surgery, and immunoregulation, as well as for their anti-inflammatory properties (Huang et al., 2012; Li et al., 2020). Moreover, the tender leaves of G. littoralis are edible as a vegetable. Therefore, the demand for G. littoralis as a clinical medication and health care product is always high. Glehnia littoralis is a perennial halophyte that grows on sandy beaches in Northern Pacific countries and regions, such as eastern China, Japan, the Korean Peninsula, Russia, and the United States (Li et al., 2018). A previous study elucidated how G. littoralis adapts to high-salinity environments by analyzing its anatomical and morphological characteristics, such as the secretory trichomes and thick cuticle cover on leaves (Voronková et al., 2011), but the molecular mechanism of salt adaptation in G. littoralis remains largely unknown.
The phospholipid bilayer of the plasma membrane is the first barrier to the external environment. Phospholipases, including phospholipase A (PLA1, sPLA2, and pPLA), phospholipase C (PLC), and phospholipase D (PLD), hydrolyze membrane phospholipids and affect the structure and stability of the cellular membrane, as well as signaling responses to environmental stimuli (Wang, 2005; Scherer et al., 2010; Hong et al., 2016). The products of phospholipases, such as phosphatidic acid (PA) and inositol 1,4,5-trisphosphate, are cellular messengers that regulate various biological processes (Testerink and Munnik, 2005; Wang et al., 2006; Tang et al., 2007). PLD and PLC/diacylglycerol kinase (DGK) pathways play a major role in the stress-induced generation of PA. The activities of the phospholipases are induced rapidly, coupled with a highly dynamic level of PA, in response to hormones as well as abiotic and biotic stimuli (Bargmann and Munnik, 2006; Li et al., 2006; Peters et al., 2010). PA acts as a cellular mediator by binding to regulatory proteins, anchoring these proteins to the membrane, modulating their catalytic activities, and altering membrane structure (Testerink and Munnik, 2011). PA-interacting proteins include various kinases, transcription factors, phosphatases, and other target proteins in cellular processes (Zhang et al., 2004, 2012; Yu et al., 2010; Kim et al., 2019; Shen et al., 2019, 2020). Non-specific phospholipase C (NPC) is a subtype of PLC that hydrolyzes common membrane phospholipids, such as phosphatidylcholine (PC), phosphatidylethanolamine (PE), and phosphatidylserine (PS), to produce sn-1,2-diacylglycerol (DAG) and the corresponding phosphorylated headgroup. DAG is phosphorylated by DGK to generate PA. Unlike PI-PLC (another PLC subtype), which is found widely in animals, plants, and bacteria, NPC is found only in bacteria and plants and has distinct evolutionary features (Nakamura, 2014; Nakamura and Ngo, 2020).
In the model plant Arabidopsis thaliana, six NPCs have been identified, most of which have been reported to be involved in multiple biological processes. NPC4 is associated with the plasma membrane, and the recombinant NPC4 protein shows enzymatic activity toward PC and PE (Nakamura et al., 2005). NPC4 is involved in the plant response to abscisic acid (ABA), auxin, phosphate deficiency, hyperosmotic conditions, and salt and Al stresses (Nakamura et al., 2005; Gaude et al., 2008; Peters et al., 2010; Wimalasekera et al., 2010; Kocourková et al., 2011; Yang et al., 2021). NPC4 and NPC3 are important in brassinolide-mediated signaling in root development (Wimalasekera et al., 2010). NPC4 is also involved in phosphosphingolipid hydrolysis and remodeling under phosphate deficiency during root growth. NPC1 plays a vital role in plant heat tolerance (Krčková et al., 2015). NPC1 is localized at secretory pathway compartments, such as the endoplasmic reticulum or Golgi apparatus, as is NPC2. NPC2 expression is suppressed after Pseudomonas syringae attack, and NPC2 is involved in plant immune responses, such as PTI, ETI, and SA signaling (Krčková et al., 2018). The double mutant npc2npc6 displays a lethal homozygous phenotype and a defect in the heterozygous gametophyte (Ngo et al., 2018). NPC5 is a cytosolic protein. NPC5 and its derived DAG mediate lateral root development under salt stress, and NPC5 is involved in galactolipid accumulation during phosphate limitation (Gaude et al., 2008; Peters et al., 2014). NPC6 is present in both chloroplast and microsomal fractions, but not in cytosolic or nuclear fractions (Cai et al., 2020). NPC6 promotes seed oil content and enhances PC and galactolipid turnover to TAG (Cai et al., 2020).
The important roles of NPCs in other plant species have gradually been revealed following the NPC findings in A. thaliana. For example, Cai et al. (2020) reported that BnNPCs affect seed mass and yield, and that BnNPC6.C01 is positively associated with seed oil content in oilseed rape. In rice, five OsNPCs have been identified. The cytosolic and membrane-associated OsNPC1 modulates silicon distribution and secondary cell wall deposition in nodes and grains, affecting mechanical strength and seed shattering (Cao et al., 2016). OsNPC6 is involved in mesocotyl elongation in rice: OsNPC6-overexpressing plants exhibit a shorter mesocotyl than the wild-type (WT) and the npc6 mutant. Yang et al. (2021) also showed that all five OsNPCs localize to the plasma membrane. These studies suggest that plant NPCs are involved in numerous biological processes, although the functional properties of most plant NPCs remain to be explored. In the current study, we cloned and identified GlNPC3 from G. littoralis and show that it plays a positive role in the salt stress response.
Plant Materials and Growth Conditions
The G. littoralis plants used in this study were obtained and grown as described by Li et al. (2018). The seedlings were subjected to salt (200 mM NaCl), drought (20% PEG 6000), or hormone [100 μM ABA or 100 μM methyl jasmonate (MeJA)] stress treatments for 0, 6, or 24 h. The shoots and roots were sampled separately at different time points in each treatment.
Arabidopsis thaliana Columbia (Col-0, WT) and transgenic seeds were sown on Murashige and Skoog (MS) medium with vitamins (1% sucrose and 1% agar), supplemented with 75 mM NaCl for the salt stress treatment. The plants were grown vertically (for root growth) in a growth chamber under a 14-h-light (23°C)/10-h-dark (20°C) photoperiod. Root lengths were measured using ImageJ software (NIH, Bethesda, MD, United States).
In silico Characterization of GlNPC3
The coding sequence (CDS) of GlNPC3 was amplified from G. littoralis cDNA using the GlNPC3-F1 and GlNPC3-R1 primers. The locations of exons and introns were defined using GSDS v2.0, and protein domains were identified with the Pfam database. The GlNPC3 upstream promoter sequence was amplified using the Genome Walking Kit (Takara, Dalian, China) with the SP1, SP2, and SP3 primers. Plant regulatory elements in the GlNPC3 promoter region were identified with the PlantCARE database. All primers are listed in Supplementary Table S1. The gene, protein, CDS, and 2,070-bp promoter sequences of GlNPC3 are shown in Data Sheet 1.
Vector Construction and Plant Transformation
To generate the GlNPC3-overexpressing construct (pSuper::GlNPC3), the CDS was amplified using the GlNPC3-F2/GlNPC3-R2 primers. The product was cloned into the XbaI/SalI-digested pSuper1300 binary vector, a pCAMBIA1300 derivative containing the Super promoter (Ni et al., 1995; Cao et al., 2013). To generate the PromoterGlNPC3::GUS construct, the 2,070-bp promoter sequence was amplified using the Pro-GlNPC3-F/Pro-GlNPC3-R primers and cloned into the PMV plant binary vector (Hua et al., 2021). The primers are listed in Supplementary Table S1.
Arabidopsis thaliana transformation was performed according to the floral dip method (Clough and Bent, 1998). The harvested seeds were selected on MS medium containing 25 mg/L hygromycin until the T3 generation. The homozygous lines were used for the experiments.
Subcellular Localization of GlNPC3
The GlNPC3 CDS was inserted into the pSuper1300GFP vector using the GlNPC3-F3 and GlNPC3-R3 primers to generate the pSuper::GFP-GlNPC3 recombinant plasmid, in which GFP is translationally fused to the N-terminus of GlNPC3 under the control of the Super promoter. A plasma membrane (PM) marker (AtCBL1n-OFP) and a tonoplast marker (AtCBL6-OFP) were used as described by Batistič et al. (2010). The plasmids were introduced into Agrobacterium tumefaciens (GV3101) for transient expression in tobacco (Nicotiana benthamiana), and tobacco leaves were infiltrated as described previously (Li et al., 2017). Fluorescence was observed using an LSM 780 confocal microscope (Zeiss, Jena, Germany).
GUS Staining
Histochemical staining for GUS expression in transgenic A. thaliana was performed according to the method of Jefferson (1987).
Protein Expression and Enzyme Activity Assays
The GlNPC3 CDS was cloned into the pCold I vector (Takara) with a His-tag at its N-terminus and transformed into E. coli BL21 (DE3). Isopropyl β-D-thiogalactopyranoside (0.1 mM) was added and the culture was incubated at 15°C for 24 h to induce expression of the His-GlNPC3 fusion protein. The protein was purified with TALON ® Metal Affinity Resin (Clontech, Palo Alto, CA, United States) according to the manufacturer's protocol.
GlNPC3 activity was assayed as described previously (Nakamura et al., 2005; Peters et al., 2010) with some modifications. The purified His-GlNPC3 protein or total plant protein was incubated for 1 h at 37°C in a 500 μl reaction mixture (50 mM Tris-HCl, pH 7.3, 50 mM NaCl, and 5% glycerol) containing a sonicated micellar suspension of 200 μM fluorescent head-group-labeled lipid substrate. The lipid substrates 18:1 NBD-PS (810198C), 18:1 NBD-PE (810145P), and 18:1 Cy5-PC (850483C) were obtained from Avanti Polar Lipids, Inc. (Alabaster, AL, United States). The reaction was stopped by vigorous vortexing with 1 ml of ethyl acetate and 0.75 ml of 0.45% NaCl. The mixture was centrifuged at 1,500 g for 10 min, and the supernatant (water-soluble phase) was carefully transferred to a new tube without disturbing the middle (protein) layer or the lower organic (lipid) phase. Fluorescence of the water-soluble phase was measured using a microplate reader (SpectraMax iD5; Molecular Devices, San José, CA, United States) with the following parameters: NBD, excitation 460 nm/emission 534 nm; Cy5, excitation 648 nm/emission 662 nm.
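As a companion to the protocol above, the sketch below shows how raw plate-reader fluorescence readings of the kind this assay produces could be reduced to relative hydrolytic activities. It is a hypothetical illustration, not code or data from this study: the empty-vector (His-tag only) reaction is treated as background, activities are normalized to the PC substrate, and all readings are invented values.

```python
# Hypothetical reduction of plate-reader fluorescence to relative activity.
# Background (empty-vector control) is subtracted per substrate, then all
# corrected signals are normalized to a chosen reference substrate.

def relative_activity(sample_rfu, background_rfu, reference_key="PC"):
    """Background-subtract each substrate's signal, then normalize so that
    the reference substrate's activity equals 1.0."""
    corrected = {s: max(sample_rfu[s] - background_rfu[s], 0.0)
                 for s in sample_rfu}
    ref = corrected[reference_key]
    return {s: v / ref for s, v in corrected.items()}

# Illustrative readings (arbitrary fluorescence units, invented):
his_glnpc3 = {"PS": 5200.0, "PC": 2700.0, "PE": 1150.0}
empty_vec  = {"PS": 200.0,  "PC": 200.0,  "PE": 150.0}

rel = relative_activity(his_glnpc3, empty_vec)
# With these invented numbers, PS comes out near 2x PC and 5x PE,
# matching the trend later reported for Figure 6A.
```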
RNA Extraction and Quantitative Real-Time PCR
RNA extraction and quantitative real-time PCR (RT-qPCR) were performed according to protocols described previously (Li et al., 2020). The primers for RT-qPCR are listed in Supplementary Table S1.
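The relative expression values reported throughout this paper are of the kind typically obtained with the standard 2^-ΔΔCt (Livak) calculation. The sketch below assumes that method (the cited protocol of Li et al., 2020 may differ in detail) and uses GlCYP2, the internal control named in the Figure 2 legend; the Ct values are invented for illustration.

```python
# Hypothetical sketch of the standard 2^-ddCt (Livak) relative-expression
# calculation commonly used for RT-qPCR; Ct values below are illustrative.

def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of target vs. a calibrator sample, normalized to a
    reference gene (here assumed to be GlCYP2)."""
    d_ct = ct_target - ct_ref              # normalize treated sample
    d_ct_cal = ct_target_cal - ct_ref_cal  # normalize calibrator (e.g. 0 h)
    return 2.0 ** -(d_ct - d_ct_cal)

# Illustrative: GlNPC3 vs. GlCYP2, salt-treated (24 h) vs. untreated (0 h).
fold = rel_expression(ct_target=22.0, ct_ref=18.0,
                      ct_target_cal=25.0, ct_ref_cal=18.0)
# d_ct = 4, d_ct_cal = 7, so fold = 2^3 = 8-fold induction
```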
Virus-Induced Gene Silencing in G. littoralis
Virus-induced gene silencing (VIGS) was performed as described previously (Liu et al., 2002). A 409-bp fragment of GlPDS (phytoene desaturase gene) was amplified from G. littoralis cDNA using the PDS-F/PDS-R primers and cloned into pTRV2 as a positive control. Similarly, 199- and 200-bp fragments within the GlNPC3 CDS were cloned into pTRV2 using the vigs-F1/vigs-R1 and vigs-F2/vigs-R2 primers. The fragments were designed using the SGN VIGS tool (Fernandez-Pozo et al., 2015). The recombinant vectors were transferred into A. tumefaciens strain GV3101. Suspensions of A. tumefaciens containing the recombinant vectors were mixed with a suspension of A. tumefaciens containing pTRV1 (helper vector) for infiltration. The silencing effect was monitored in infiltrated leaves after 14 days by RT-qPCR. The photobleaching phenotype of VIGS-PDS plants was used as an indicator of VIGS efficiency. All primers are listed in Supplementary Table S1. The fragments used for the VIGS assay are listed in Data Sheet 2.
Cloning and Identification of GlNPC3
The phospholipase-mediated lipid signaling pathway plays an important role in the abiotic stress response in plants. Based on the G. littoralis salt-related transcriptome data obtained in our previous study (Li et al., 2018), we cloned an NPC gene that responded positively to salt treatment in G. littoralis. The gene contains an open reading frame of 1,563 bp encoding a protein of 520 amino acids (Figure 1A). Phylogenetic analysis revealed a close relationship to Daucus carota DcNPC3; we therefore named this gene GlNPC3 (Supplementary Figure S1). GlNPC3 contains a phosphoesterase domain with the three motifs commonly present in plant NPCs (Figures 1B,C). To better understand the function of GlNPC3, the 2,070-bp GlNPC3 promoter sequence was cloned, and the main regulatory cis-elements in the promoter region were analyzed using the PlantCARE database (Figure 1C; Data Sheet 1). Several of these elements are involved in plant hormone and abiotic stress responses, including the light-responsive G-box, GT1-motif, TCT-motif, and GATA-motif; the hormone-responsive ABRE, CGTCA-motif, and TGA-motif; and the stress-responsive MYB, MYC, and STRE elements.
GlNPC3 Plays a Positive Role in the Salt Stress Response
We performed RT-qPCR to assess changes in GlNPC3 expression when G. littoralis seedlings were exposed to ABA, MeJA, NaCl, or osmotic stress (PEG). GlNPC3 responded positively to these treatments in shoots and roots, and its expression increased continuously within 24 h of salt treatment (Figure 2). The trends in GlNPC3 expression determined by RT-qPCR were consistent with our previous RNA-seq data (Li et al., 2018). Because a stable G. littoralis genetic transformation system has not been established, we transformed the recombinant plasmid (pSuper::GlNPC3) into A. thaliana (Col-0) to verify the biological function of GlNPC3 in the salt stress response. Homozygous transgenic lines were isolated, and exogenous GlNPC3 expression was evaluated by reverse transcription PCR (Supplementary Figure S2); GlNPC3 was overexpressed in transgenic lines OE1 and OE2. The OE lines showed no obvious phenotype compared with WT plants under normal growth conditions. However, when the plants were grown on medium supplemented with 75 mM NaCl, the main roots of the OE lines were significantly longer than those of the WT (Figure 3). Soil-cultivated plants were also exposed to salt: four-week-old plants were watered with Hoagland's culture solution containing 100 mM NaCl, and this concentration was maintained for 10 days. Under the salt treatment, fresh and dry masses were higher in the OE lines than in the WT, and WT leaves were smaller than those of the OE lines (Supplementary Figure S3). These results suggest a positive role for GlNPC3 in salt tolerance.
GlNPC3 Expression Pattern and Localization
We examined the tissue expression pattern and subcellular localization of GlNPC3 to further understand its function in G. littoralis. RT-qPCR revealed that GlNPC3 was expressed in most tissues, including roots, rhizomes, leaves, flowers, pedicels, leaf sheaths, the rachis, and petioles (Figure 4A). GlNPC3 was most highly expressed in roots, at a level more than three times that in other tissues (Figure 4A). A GUS construct driven by the GlNPC3 native promoter (PromoterGlNPC3::GUS) was transformed into A. thaliana. As shown in Figures 4B-H, GUS staining occurred in the primary and lateral roots, hypocotyl, leaves, stamens, and stems. PromoterGlNPC3::GUS was highly expressed in leaves at the eight-leaf stage, but little expression was detected at the four-leaf stage or in old leaves (Figures 4B,C,E), indicating that GlNPC3 expression varies spatiotemporally. Moreover, PromoterGlNPC3::GUS expression in the roots of transgenic A. thaliana seedlings also responded to exogenous NaCl, mannitol, and hormone treatments (Supplementary Figure S4). To explore the subcellular localization of GlNPC3, the GFP-GlNPC3 fusion protein was co-expressed with a PM marker or a tonoplast marker tagged with orange fluorescent protein (OFP) in N. benthamiana leaves (Batistič et al., 2010). The distribution of green and orange fluorescence indicated that GlNPC3 localized predominantly at the plasma membrane, with some signal at the tonoplast (Figure 5).
Biochemical Characterization of GlNPC3
To determine the biochemical characteristics of GlNPC3, the His-tagged protein (His-GlNPC3) was expressed in E. coli and subjected to an in vitro enzymatic assay (Figure 6A, inset). GlNPC3 activity was assayed using fluorescent head-group-labeled phospholipid substrates (NBD-PS, NBD-PE, and Cy5-PC); hydrolysis releases a soluble fluorescent head group that is easily detected with a microplate reader. The His-GlNPC3 fusion protein displayed roughly twice the hydrolytic activity toward PS as toward PC, and five times that toward PE (Figure 6A). As a negative control, the His-tag protein alone (empty vector) displayed little activity toward these substrates (Figure 6A). Next, we extracted total protein from WT and GlNPC3-overexpressing lines (OE1 and OE2) for activity assays using the PS substrate. The WT and OE lines showed similar NPC activity under control conditions, and NPC activity was significantly induced by salt treatment (Figure 6B). After salt treatment, total NPC activity was higher in the OE lines than in the WT. These results suggest that GlNPC3 contributes to overall NPC activity in plants during the salt stress response.
Overexpression of GlNPC3 Alters Membrane-Lipid Composition in Transgenic Plants
Membrane lipids affect membrane stability and are involved in various cellular responses to abiotic and biotic stresses. To investigate the mechanism underlying GlNPC3-mediated lipid metabolism in response to salt stress, we determined the lipid composition of transgenic A. thaliana using ESI-MS/MS-based lipid profiling (Figure 7).

FIGURE 2 | Relative expression levels of GlNPC3 under different stress conditions in shoots (A) and roots (B). Glehnia littoralis seedlings were subjected to salt (200 mM NaCl), drought (20% PEG 6000), or hormone (100 μM ABA or 100 μM MeJA) treatments for 0, 6, or 24 h. Shoots and roots were harvested at the indicated times for RNA extraction. GlCYP2 was used as the internal control. Data represent means ± SD of three biological replicates (three pooled plants each; three technical replicates per biological replicate). Different letters indicate statistically significant differences within each treatment (p < 0.05, Duncan's multiple range test).

Under normal conditions, there were no significant differences in the levels of PC, PE, phosphatidylinositol (PI), PS, phosphatidylglycerol (PG), DAG, and PA between WT and GlNPC3-overexpressing (OE1) seedlings (Figures 7A-H). After 45 min of salt treatment, the lipid composition changed to varying degrees (Figure 7). The levels of PC, PS, and PG in OE1 decreased markedly compared with the WT. DAG content increased significantly in the WT after salt treatment, whereas no significant change occurred in OE1 (Figure 7G). NPCs generally hydrolyze glycerolipids to generate DAG, which DGKs then phosphorylate to PA. We therefore analyzed PA content and found no significant difference in total PA between salt-treated and non-treated plants. Furthermore, a molecular species analysis detailed the lipid composition of the plants (Supplementary Figure S5); the molecular species with higher mass spectral signals included the 34:3 species (Supplementary Figure S5). The changes in the major lipid molecular species (Supplementary Figure S5) were largely consistent with the summed totals (Figure 7).
Downregulation of GlNPC3 Affects the Transcript Levels of Some Stress-Related Genes
It was difficult to obtain transgenic G. littoralis plants, so we used VIGS to transiently silence GlNPC3 in G. littoralis leaves for rapid functional analysis. First, the endogenous G. littoralis phytoene desaturase gene (GlPDS), whose silencing causes photobleaching, was used as a positive control to assess VIGS efficiency (Supplementary Figure S6). The photobleaching phenotype was distinct from senescence (Supplementary Figure S6A).
Photobleaching was confined to infiltrated leaves, perhaps because of the unique morphology of G. littoralis, which has extremely short rhizomes and long petioles whose bases expand into sheaths. Thus, only infiltrated leaves were used for RNA extraction and RT-qPCR analysis. We then selected two interfering fragments (VIGSNPC3-1 and VIGSNPC3-2) of the GlNPC3 CDS for silencing, and the specificity of gene silencing was assessed in infiltrated leaves (Supplementary Figures S6B-D).
GlNPC3 expression was reduced by almost 50% and 70% in the infiltrated leaves of VIGSNPC3-1 and VIGSNPC3-2 plants, respectively (Supplementary Figure S6B). We also examined two GlNPC3 homologs; their expression was not reduced as markedly as that of GlNPC3 (Supplementary Figures S6C,D). RT-qPCR analysis was then performed to investigate whether the GlNPC3-mediated lipid changes were accompanied by a gene expression response. Most of the stress-related genes tested were significantly induced by NaCl treatment (Figure 8). In particular, transcription of GlSnRK2 (comp37685_c0_seq1), GlP5CS (comp33363_c0_seq1), GlWRKY (comp25557_c0_seq2), GlSOS1 (comp30905_c0_seq3), and GlCIPK (comp35199_c0_seq4) increased more than 3-fold in response to salt stress, while that of GlTPC1 (comp35393_c0_seq6) increased more than 10-fold. The induction of most tested genes was significantly weaker in VIGSNPC3-1 and VIGSNPC3-2 leaves (Figure 8).
DISCUSSION
Glehnia littoralis is a medicinal halophyte that grows in coastal habitats, and its saline adaptability is closely associated with the accumulation of active medicinal ingredients. Shu et al. (2019) showed that accumulation of furocoumarins in G. littoralis was increased by NaCl stress, yet few studies have addressed the salt tolerance mechanism of this species. In our previous work, we performed a comprehensive transcriptome analysis of the G. littoralis response to salt stress and obtained a large number of differentially expressed genes involved in basic metabolism, secondary metabolism, transport, and signal transduction (Li et al., 2018). Here, we cloned and identified one such differentially expressed NPC gene, GlNPC3 (Figure 1). RT-qPCR showed that GlNPC3 expression increased continuously upon salt treatment (Figure 2), and characterization of GlNPC3-overexpressing A. thaliana showed that GlNPC3 overexpression enhanced salt tolerance. We therefore performed a series of experiments to elucidate the involvement of GlNPC3 in the salt stress response of G. littoralis. Similar to most NPCs, GlNPC3 contains a conserved phosphoesterase domain necessary for esterase and NPC catalytic activity (Figure 1 and Supplementary Figure S1). In A. thaliana, AtNPC1, AtNPC2, and AtNPC6 contain a signal peptide, which AtNPC3, AtNPC4, and AtNPC5 lack (Pokotylo et al., 2013). GlNPC3 contains no signal peptide and clustered with AtNPC3, AtNPC4, and AtNPC5 in the phylogenetic analysis. The 50-100 amino acids at the C-terminus are the most variable region among NPC subfamilies and may account for the different molecular functions and subcellular localizations of NPC isoforms (Pokotylo et al., 2013). The expression changes under the different treatments showed that GlNPC3 responds to various stresses (Figure 2).
Our analysis of the GlNPC3 promoter sequence also revealed a variety of stress-related regulatory elements, suggesting that GlNPC3 may be subject to several transcriptional regulatory processes. Moreover, expression of the GUS reporter driven by the GlNPC3 promoter in A. thaliana roots was upregulated within 2 h after NaCl, mannitol, or hormone treatments (Supplementary Figure S4). However, GlNPC3 expression did not increase at 6 and 24 h after ABA treatment in G. littoralis roots (Figure 2B). We suppose that the response of GlNPC3 to ABA in G. littoralis may have occurred earlier or later than the time points analyzed.

FIGURE 6 | (A) Data are mean ± SD of three measurements. Inset, immunoblotting of the His-GlNPC3 fusion protein with an anti-His antibody; the arrow indicates the protein. (B) Total non-specific phospholipase C (NPC) activities of WT and transgenic A. thaliana. Total protein was extracted from A. thaliana seedlings exposed or not to 75 mM NaCl for 24 days. Fluorescent head-group-labeled phosphatidylserine (PS; 200 μM) was used as the substrate. Data represent means ± SD of three biological replicates. Different letters above each bar indicate significant differences (p < 0.05, Duncan's multiple range test).
In previous studies, AtNPCs were expressed at different levels in roots (Peters et al., 2010), and some are involved in regulating root development. For example, AtNPC2 and AtNPC6 are required for root growth in Arabidopsis (Ngo et al., 2019), AtNPC4 is involved in the root response to salt stress (Kocourková et al., 2011), and AtNPC5 mediates lateral root development during salt stress (Peters et al., 2014). Our results showed that GlNPC3 was expressed at the highest level in G. littoralis roots, followed by flowers (Figure 4). Expression of the GUS reporter driven by the native GlNPC3 promoter in A. thaliana exhibited a similar pattern, with some differences: for example, PromoterGlNPC3::GUS was highly expressed in the stem of A. thaliana, whereas G. littoralis has extremely short rhizomes in which expression was similar to that in the petiole. Moreover, the roots of GlNPC3-overexpressing A. thaliana were longer than those of the WT under salt treatment (Figure 3). The roots of G. littoralis, which are deeply embedded in beach sand, are not only the main medicinal organ but may also play a positive role in salt tolerance. Therefore, GlNPC3 may be related to the root response to salt stress.
GlNPC3 localized mainly at the plasma membrane and partially at the tonoplast, which differs from other NPCs (Figure 5). Both AtNPC1 and AtNPC2 are localized at the endoplasmic reticulum and Golgi apparatus (Krčková et al., 2015; Ngo et al., 2018), and AtNPC2 is also found at the plastid (Ngo et al., 2018). AtNPC4 is associated with the plasma membrane, whereas AtNPC5 localizes to the cytosol (Gaude et al., 2008; Peters et al., 2014). The subcellular localization of NPC isoforms is generally related to their enzyme activity and biological function (Nakamura and Ngo, 2020). The localization of GlNPC3 suggests that it may hydrolyze phospholipids at the plasma membrane and tonoplast and may participate in lipid-mediated signaling or lipid metabolism.
Previous studies have reported that NPCs hydrolyze a variety of common lipids, such as PC, PE, and PS (Pokotylo et al., 2013). Changes in NPC substrates and products affect intracellular lipid signaling messengers and the composition and remodeling of membrane phospholipids, and are involved in a series of cellular responses (Nakamura et al., 2005; Gaude et al., 2008; Krčková et al., 2015; Ngo et al., 2018). We analyzed GlNPC3 catalytic activity using fluorescent head-group-labeled lipids as substrates. In contrast to previously reported NPCs, the hydrolytic activity of GlNPC3 toward PS was higher than that toward PC in vitro (Figure 6A); however, the PS content in planta was significantly lower than that of PC and PE (Figures 7A,B,D). Because the in vitro assay used a single molecular species (18:1) for each substrate, a detailed analysis of phospholipids in the plants was needed. NPC enzyme activity increased in GlNPC3-overexpressing plants after salt treatment (Figure 6B), so we determined the changes in phospholipid content of the WT and OE1 under salt stress (Figure 7). Common phospholipids, such as PC, PS, and PG, decreased significantly in OE1 compared with the WT under NaCl treatment, suggesting that these lipids may be hydrolyzed by GlNPC3 in planta. Direct measurement of the DAG product showed a significant increase in the WT at 45 min after salt treatment, but no significant difference between WT and OE1. Previous studies showed that DAG is phosphorylated to PA under hyperosmotic stress (Arisz et al., 2009; Munnik and Testerink, 2009; Nakamura, 2014); however, no increase in PA was observed at 45 min after salt treatment. Why did the OE plants have fewer substrates during NaCl treatment, yet no significant increase in DAG and PA levels?
On one hand, the lipid molecules involved in hydrolysis and synthesis are not limited to the classes we analyzed. For example, PC also contributes to DAG-mediated triacylglycerol (TAG) synthesis (Bates et al., 2009; Lu et al., 2009), and Cai et al. (2020) found that AtNPC6 promotes the conversion of polar glycerolipids and galactolipids [monogalactosyldiacylglycerol (MGDG) and digalactosyldiacylglycerol (DGDG)] to TAG. Conversely, DAG is converted to MGDG and DGDG by MGDG synthase (MGD) and DGDG synthase (DGD). TAG and galactolipids were not measured in this study, so we speculate that the OE plants may channel polar glycerolipid turnover through DAG into TAG during salt treatment, thereby limiting the accumulation of DAG and PA. Under normal conditions, although the difference in DAG content between WT and OE1 was not significant, OE1 still contained more DAG than the WT (average 5.47 vs. 3.78 nmol/mg DM) and fewer total polar lipids (average 10.49 vs. 11.82 nmol/mg DM; Figures 7F,G). These data suggest that GlNPC3 overexpression contributed to DAG generation under normal conditions. Together, the lipid profiles suggest that GlNPC3 overexpression alters the proportions of different membrane lipids and thus the physical properties of the membrane, conferring salt tolerance.
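The averages quoted above can be turned into a simple compositional ratio. The arithmetic below uses only the values stated in the text (nmol/mg DM); treating DAG relative to total polar lipids as a derived metric is our illustrative choice, not an analysis from the paper.

```python
# Compositional shift implied by the averages quoted in the text:
# under normal conditions, OE1 carries more DAG per unit of total polar
# lipid than the WT (values in nmol/mg DM, from the text above).
wt  = {"DAG": 3.78, "total_polar": 11.82}
oe1 = {"DAG": 5.47, "total_polar": 10.49}

wt_ratio  = wt["DAG"] / wt["total_polar"]    # ~0.32
oe1_ratio = oe1["DAG"] / oe1["total_polar"]  # ~0.52
```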
On the other hand, timing may have affected the measured PA and DAG contents: a significant salt-induced difference between the WT and OE lines may occur at time points other than those sampled. PA is metabolized rapidly in plants, so its measurement requires appropriate time points and rigorous handling, and the extraction and determination of PA are complicated and costly (Welti et al., 2002; Shadyro et al., 2004; Nguyen et al., 2017). Given the spatiotemporal complexity of monitoring lipids in living plant cells and tissues, real-time quantitative and visual methods are needed. For example, Potocky et al. (2014) constructed a PA biosensor by fusing the Spo20p PA-binding domain to YFP, which reported PA dynamics in transiently transformed tobacco pollen tubes by confocal laser scanning microscopy. Subsequently, Li et al. (2019) developed a PA-specific biosensor, PAleon, based on Förster resonance energy transfer (FRET), which monitors and visualizes PA concentration and dynamics in plant cells under abiotic stress. Such biosensors have revealed the spatiotemporal dynamics of PA in plants. Similarly, biosensors for DAG, PS, PtdIns(4,5)P2 (phosphatidylinositol 4,5-bisphosphate), PtdIns4P (phosphatidylinositol 4-phosphate), and PtdIns3P (phosphatidylinositol 3-phosphate) have been used successfully in plant cells (Vermeer et al., 2006, 2009, 2017; van Leeuwen et al., 2007; Thole et al., 2008; Yamaoka et al., 2011; Simon et al., 2014). These biosensors could enable real-time monitoring of lipid dynamics and provide new insight into lipid metabolism in G. littoralis.
Finally, we analyzed the expression of stress-related genes in WT and VIGS-silenced G. littoralis plants to investigate whether the GlNPC3-mediated lipid changes were associated with gene expression. The expression of SOS1 (salt overly sensitive 1), TPC1 (two-pore channel 1), SnRK2 (Ser/Thr protein kinase), P5CS (Δ1-pyrroline-5-carboxylate synthetase), CIPK (CBL-interacting protein kinase), and WRKY transcription factor genes in G. littoralis was significantly upregulated under salt treatment but changed little after VIGS-induced GlNPC3 silencing (Figure 8). These genes play pivotal roles in multiple stress responses in model plants: the vacuolar ion channel TPC1 is responsible for a reactive oxygen species-assisted Ca2+ wave during the salt stress response (Evans et al., 2016), and the SOS pathway gene SOS1 plays a key role in Na+ export (Zhu, 2001, 2016). Combined with the membrane localization of GlNPC3, these findings suggest that GlNPC3-mediated lipid changes may be accompanied by stress-related gene responses that affect G. littoralis salt tolerance. In conclusion, we cloned and identified an NPC (GlNPC3) from the medicinal halophyte G. littoralis and uncovered a potentially functional connection between GlNPC3-mediated phospholipid signaling and the salt stress response. Further investigation is needed to determine the role of GlNPC3 in lipid remodeling and how it is regulated by upstream mediators during the stress response. These results will inform studies of plants living in special environments.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
LL and YZ designed the experiments. LL performed the experiments, analyzed the data, and wrote the manuscript. YB provided support for protein expression and enzyme activity assays. NL and QC provided support for plant growth. XQ helped in bioinformatics analysis. HF, XY, and DL provided the materials and technical support. CL and YZ revised the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This study was supported by grants from the National Natural Science Foundation of China (No. 31800272).
Frontiers in Plant Science | www.frontiersin.org
Project engineering management evaluation based on GABP neural network and artificial intelligence
The BP (backpropagation) neural network is among the most representative computational methods in machine learning; with its powerful nonlinear mapping, data induction, and feature recognition capabilities, it has become one of the most widely used techniques in industry. This paper links the practical activities of project engineering management with modern machine learning, and proposes a new engineering management activity model based on a genetic-algorithm-optimized BP (GABP) neural network. Building on artificial intelligence technology, the paper discusses neural learning theory in depth and in combination with genetic algorithms, systematically describes the models used in this study on that theoretical basis, and points out the structural defects of the BP neural network itself; good generalization ability is pursued in order to minimize the impact of subjective factors on evaluation results. Although many software packages specializing in construction project management are currently on the market, their service and demand scope is relatively narrow: for example, construction project management software can typically manage only a single construction project. In practice, unreasonable time arrangements also often occur in links such as personnel scheduling and material distribution, which not only wastes material resources but also reduces the effectiveness of labor.
To this end, starting from the goal of improving the efficiency of engineering management projects, this paper designs a general-purpose engineering management system based on artificial intelligence technology and neural networks, and validates it through extensive practical analysis.
Introduction
In real life, engineering projects often involve large investments that are difficult to coordinate with the investments of other projects. These characteristics make the business processes of project engineering management cumbersome, and the amount of information a project generates is huge (Sharon et al. 2011). Traditional engineering management software and extensive management models have become increasingly unable to meet construction and management needs, and more and more enterprise project managers are looking for a more scientific and effective solution that is closer to practice (Wasiak et al. 2011). Existing project management software has been functionally supplemented and systematically optimized to meet enterprise needs (Lechler and Dvir 2010): it must cover the various information management requirements of the project management process, collect and process project engineering data with more reasonable methods, and provide enterprises and project decision-makers with more effective information and auxiliary solutions to improve management efficiency (Chou 2011). This article mainly carries out the following design work. First, on the basis of extensive research and reference to relevant domestic and foreign design experience, the specific design methods and implementation ideas of the key project management engineering system of this article are proposed (Spalek 2013). Second, in designing the system, this paper expands its application scope, improves the accuracy of the system model, and, drawing on problems encountered in the experiments, elaborates the technical framework in detail in order to achieve better experimental results (Hu et al. 2015).
Third, the non-functional application requirements faced by the relevant Chinese departments in implementing key project management applications are analyzed and discussed in depth, mainly through UML use-case diagrams and use-case requirement tables (Memon et al. 2010). Fourth, taking research results from actual applications as the theoretical basis, detailed functional planning and specific design are carried out for the related functional requirements. The system design mainly covers the overall software architecture, the overall functional structure, characteristic design descriptions, the detailed structure of the main functions, and the overall design of the system database (Gong et al. 2012). A three-tier B/S management model is adopted, and the main task of the management system is to realize business functions such as system management, analysis and evaluation, and key tax-related financial statements (Bailing et al. 2016). The author focuses on the management functions: the detailed design of the different system management functions and their interaction sequences. Finally, the article elaborates the specific design and application of the management system functions and shows the operating interfaces implemented for the various management functions.
Related work
The literature introduces the background and significance of GABP neural network recognition technology and its development trends in related research fields (Feng 2019), including systems such as LPR (license plate recognition) and their research development at home and abroad. The literature also covers detection models based on the gray-scale integrated GABP neural network (Wang 2021), focusing on edge-location detection in gray-scale images, wavelet-transform detection, color image segmentation, texture-based feature analysis of gray-scale objects, and edge detection based on mathematical morphology; on this basis, the key experiments in this article mainly use the Roberts edge detection operator to detect edges directly in gray-scale images (Yang and Xu 2011). The literature further describes the experimental software environment of this article, including the software platform, processor, installation packages and memory, the experiment runner and operating system, and the software platform needed for the system experiments (Hundhausen et al. 2011).
Using software examples and videos, system personnel analyzed in detail the process and usage of the experimental software in this paper (Eghtedari and Mahravan 2021). According to the actual business conditions and work needs of local taxation departments at all levels in a certain city, the literature first designs and analyzes the business processes of the city's key project engineering design system, and then designs the business functions of the management system according to actual requirements (Gavin 2011). Following the functional design requirements, the management system was divided into three functions: performance evaluation of the business database, statistical reports for tax-related departments, and management of the city's key projects. The literature takes the company's relevant technology and national key construction project management experience as its research basis and integrates business requirements with analysis results to design the system in detail (Barrett et al. 2016): first the system architecture and functional structure are designed, and then, based on the various functional structures, each sub-function of the system is designed in detail and completed within the specified time. The literature uses Java, MySQL and other technologies to experiment on the core modules of the entire system.
The test results show that the functions of the designed national key project management system basically meet the needs of current enterprises; its performance is good, the interface is simple and clear, and it is convenient for practical operation (Bony 2010). The literature first designed the software architecture of the system, together with the overall functional structure diagram and overall type diagram, and then researched and realized each main function. For the application database design, E-R diagrams are used directly in the system design. According to the actual on-site application, three layers are introduced in detail: the software application layer, the view management layer, and the data center management layer.
GABP neural network
The mean filter replaces each pixel by the average over a template consisting of the pixel to be filtered and several of its neighbouring pixels. For a template $S_{xy}$ of $mn$ points centred on $(x,y)$, the filtered output is

$$\hat f(x,y)=\frac{1}{mn}\sum_{(i,j)\in S_{xy}} g(i,j),$$

where $g$ is the input image; the average of the template replaces the value of the central coordinate node.

The output of the median filter in a two-dimensional sliding template is

$$\hat f(x,y)=\operatorname*{median}_{(i,j)\in S_{xy}}\{g(i,j)\}.$$

The Gaussian filter is used to smooth various types of signals and is frequently used in science and experiments; this paper also uses a Gaussian filter when processing the images of license plate characters. Its probability density is

$$G(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$$

The expectation $\mu$ of the Gaussian function determines the position of the distribution; when $\mu=0$ and $\sigma^2=1$ the function is the standard normal Gaussian distribution.

Two fitting forms of the Gaussian function, the one-dimensional and the two-dimensional Gaussian function, are the simplest examples used in practice. (1) One-dimensional Gaussian function: it can equivalently be written as

$$f(x)=a\exp\!\left(-\frac{(x-b)^2}{2c^2}\right),$$

where $a$, $b$ and $c$ control the height, centre and width of the peak. Using the Gaussian (error-function) integral over the whole real line,

$$\int_{-\infty}^{\infty}\exp\!\left(-\frac{(x-b)^2}{2c^2}\right)\,dx=\sqrt{2\pi}\,c,$$

so for $a=1/(\sqrt{2\pi}\,c)$ the function is the probability density of a normally distributed random variable with expected value $\mu=b$. (2) Two-dimensional Gaussian function:

$$G(x,y)=\frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right).$$

The bilateral filter is a nonlinear edge-preserving digital image filter whose basic working principle is similar to that of Gaussian filtering; its outstanding advantage is that it preserves edges in digital images more effectively than a plain Gaussian filter. The spatial term of the bilateral filter depends on the Euclidean distance between the current point $(x,y)$ and the filter centre $(x_c,y_c)$,

$$d\big((x,y),(x_c,y_c)\big)=\sqrt{(x-x_c)^2+(y-y_c)^2},$$

and the range term is again a Gaussian function, this time of the difference in pixel values. The Savitzky-Golay filter is a smoothing method that fits a low-order polynomial to the data within a sliding window by least squares. Its biggest technical advantage is that, while effectively filtering out noise, it preserves the shape and width of the original signal. The least-squares smoothing of the signal is illustrated by Fig. 1.
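As a minimal sketch of the mean and median filters described above, applied to a hypothetical 1-D signal rather than an image to keep the example short: the window average spreads an impulse-noise spike, while the window median removes it entirely.

```python
# Sliding-window mean and median filtering on a 1-D signal (illustrative data).
from statistics import median

def mean_filter(signal, k=3):
    """Replace each sample by the average of its k-point neighbourhood."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def median_filter(signal, k=3):
    """Replace each sample by the median of its k-point neighbourhood."""
    half = k // 2
    return [median(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

noisy = [1, 1, 9, 1, 1]       # a single impulse-noise spike at index 2
smoothed = mean_filter(noisy)     # spike is spread out over neighbours
cleaned = median_filter(noisy)    # spike is removed entirely
```

This illustrates why median filtering is preferred for impulse (salt-and-pepper) noise, while mean or Gaussian filtering suits additive noise.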
Artificial intelligence
The neuron model includes an external bias which, depending on whether it is positive or negative, increases or decreases the net input to the activation function. A neuron $k$ can be expressed by two equations:

$$u_k=\sum_{j=1}^{m} w_{kj}x_j, \qquad y_k=\varphi(u_k+b_k),$$

where $x_1,\dots,x_m$ are the inputs, $w_{kj}$ are the synaptic weights, $b_k$ is the bias, $\varphi(\cdot)$ is the activation function and $y_k$ is the output. The bias applies an affine transformation to the output of the linear combiner in the model,

$$v_k=u_k+b_k,$$

and is an external parameter of artificial neuron $k$. When $\varphi$ is the step function, the corresponding output is $y_k=1$ if $v_k\ge 0$ and $y_k=0$ otherwise. Commonly used nonlinear activation functions are the S-shaped (sigmoid) functions and the radial basis function shown in Fig. 2. The graph of a sigmoid function is S-shaped and strictly increasing; the following two forms are commonly used: (1) the logistic function $\varphi(v)=1/(1+e^{-av})$ and (2) the hyperbolic tangent function $\varphi(v)=\tanh(v)$. Such functions are smooth and asymptotic, and they preserve monotonicity.
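A minimal sketch of the neuron model just described, assuming the logistic activation; the input, weight and bias values below are illustrative only.

```python
import math

def logistic(v):
    """Logistic (sigmoid) activation: smooth, strictly increasing, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def neuron_output(x, w, b):
    """Neuron k: linear combiner u = sum(w_j * x_j), affine shift by the
    bias b, then the activation function."""
    u = sum(wj * xj for wj, xj in zip(w, x))
    return logistic(u + b)

y = neuron_output([0.5, -1.0], [0.8, 0.2], b=0.1)   # one forward pass
```

With zero inputs and zero bias the linear combiner output is 0, so the logistic activation returns exactly 0.5, which is a convenient sanity check.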
(3) Radial basis functions. Different neural network structures and processing models follow different learning rules in their training; by using these rules to control the interconnection weights between the neurons, the network realizes its learning.
The Hebbian rule is due to the American neuropsychologist Donald Hebb, who, on the basis of conditioned-reflex learning mechanisms in neurophysiology, proposed a rule for how the interconnection strength between neurons changes: the connection between two neurons strengthens when they are active together. In algebraic form,

$$\Delta w_{kj}=\eta\,y_k x_j,$$

where $\eta$ is the learning rate. The delta rule, also called the error-correction learning rule, instead relies on the feedback at the output node to change the weights; it can be derived by minimizing the squared error between the expected output $d_k$ and the actual output $y_k$. The adjustment formula for the connection weight is

$$\Delta w_{kj}=\eta\,(d_k-y_k)\,x_j.$$

Design, implementation and testing of project engineering management evaluation system
System function requirement analysis
In the overall construction of a large city, the municipal government must directly assume responsibility for supervising the construction process and managing operations across the whole city. The economic construction of a large or medium-sized city is inseparable from all kinds of construction projects, and government managers need to strengthen the management and supervision of these large construction projects in a timely and effective manner. Because there are many types of construction projects and many development and construction units, the staff need a dedicated construction project management system to manage each project together with its related construction units and personnel. A large construction institution has jurisdiction over many large engineering projects; its managers need to grasp and use all the basic information for managing these projects accurately and, at the same time, control the specific work schedule of each project for which they are responsible, so they also need a complete large-scale project schedule management information system to manage project progress scientifically and effectively. Large engineering projects involve many personnel, heavy tasks, complicated documents, and large amounts of funds. The required system must not only manage this basic information about people, funds, technologies, and project documents effectively, but also provide reasonable personnel deployment plans, material delivery plans, and solutions for fire monitoring.
Starting from the actual business situation of engineering projects, this paper summarizes the general needs of various engineering projects and formulates a system suitable for managing them. This article divides the system into seven main functional modules, as shown in Fig. 3.
Design principles
When designing the project management system, the following design principles were followed. The scalability of the project management system is mainly manifested in the presentation layer, the application service layer, and the back-end data layer.
(1) Application layer: in the application layer, the network client provides services for users, and the corresponding specialized service equipment is the web server. When the number of internet requests made by users rises sharply, web servers can be added to cope with the increase. The biggest advantage of a network load-balancing cluster is that even if a host in the cluster fails, the cluster can still provide users with uninterrupted service. (2) Back-end data layer: the back-end data layer provides good data scalability in three forms: scaling in, scaling up, and scaling out. Scaling in reduces the hardware required while the software runs, meaning a higher degree of scalability can be provided under the same hardware environment. Scaling up means adding one or more computers to the system; as software and hardware are added, the system can be quickly deployed on the existing platform to meet the needs of customers who continue to scale out and in.
The fault tolerance of construction project management system is mainly embodied in the design of data fault tolerance and the design of software fault tolerance.
(1) Data fault tolerance design: the project management system uses message middleware to improve data fault tolerance. The message middleware ensures the effective use of network bandwidth through techniques such as an improved sliding window, which not only greatly improves the efficiency of information transmission but also enhances the stability of the network. If a message transmission is interrupted, the middleware drops the broken transmission line, reconnects, and then continues publishing.
(2) Software fault tolerance design: code error detection, error recording, and error recovery. The system records error information in a log file, and the system operator can view these error logs directly.
Good security is very important for an operating system. During normal use, maintenance, and updating, the entire system must remain highly stable. The security of the engineering project management system mainly covers user identification, access control, security auditing, data security and confidentiality, session management, real-time alarms for abnormal events, and file backup and data recovery.
User identification: the project management system provides a login supervision module for identification and verification to ensure the safety of project users.
Access control: the system defines different user roles, mainly managers, government personnel, senior institution leaders, project leaders, and general project constructors. A user can have only one role, and can view and operate only the pages that role owns.
Security audit: the project management system provides security audit services covering all users. An audit record includes the event date and time, event type, access IP, timestamp, event description, and event result. Users' private information involved in audit items must be displayed in encrypted form.
Information and data confidentiality: the MD5 algorithm is used to protect the security of business data. Confidentiality mainly covers the file system and database storage.
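A minimal sketch of hashing business data with MD5 as the paper specifies; the payload is hypothetical. Note that MD5 is cryptographically broken, so a modern deployment would normally prefer salted SHA-256 or a dedicated password-hashing scheme for anything secret.

```python
import hashlib

def md5_digest(payload: bytes) -> str:
    """Hex digest of the payload. MD5 is used here only to mirror the
    paper's choice; it is unsuitable for protecting secrets today."""
    return hashlib.md5(payload).hexdigest()

fingerprint = md5_digest(b"business data")   # 32-character hex string
```

The digest is deterministic, so it can serve as a quick integrity fingerprint for stored records even where it should no longer be trusted for confidentiality.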
Session management: the system has an automatic interrupt function; when the user performs no operation for a period, the server automatically terminates the current session. The system also limits the maximum number of concurrent sessions a user may hold.
From the perspective of system architecture, the system consists of four parts: server, database, PC-side project management platform, and client, using a B/S architecture. The web back-end module uses a layered back-end framework, cached service files use Redis caching technology, and the project management system's client interfaces are implemented with Vue.
Database design
In the design of this large-scale project management information system, each entity has many attributes, so the attributes each entity needs are described in the logical design of the database. The entity-relationship diagram for each database the system implements is shown in Fig. 4.
Through logical structure analysis of the database information of the project management system, the field names, data types, and lengths of the project data in each table are described one by one. The project information table is mainly used to display and store the basic information of each engineering project, including the project name, project year, etc.; the specific table structure is shown in Table 1.
The personnel information table is mainly used to display and store personnel information related to the project, including the names of project-related personnel and their roles; see Table 2.
The construction unit information table is used to record and manage information about the construction unit, including the unit name and the unit's contact person; the specific format is shown in Table 3.
The progress record table mainly displays and manages project progress information; the specific structure is shown in Table 4.
The project history table stores information about a project's history, including the related items and personnel; the specific format is shown in Table 5.
Design and implementation of key modules
When the user clicks the project list module, the system enters the project list main page. The person in charge of a single project can directly see only the basic information of the projects bound to him. This module contains many interfaces; here we explain in depth the project list search interface, the project catalog entry interface, and the project statistics interface.
(1) Project list search interface: the search interface is implemented jointly by the search method, used by administrators, and the busSearch method, used by ordinary users. The query takes as its main parameters the basic information fields of the construction project, the construction unit, the personnel, and the construction workflow. The search function is mainly realized through database stored procedures. (2) Project catalog entry interface: this interface supports both the entry of individual projects and the batch entry of multiple projects at the same time. It accepts as input parameters the project's main data category, its data source, and so on; entry types include manual entry, pictures and documents, Excel files, and database sources. (3) Project statistics interface: this interface takes a project parameter as input and returns the project's relevant statistical data. In practice, a construction project with a particularly large volume of work is often decomposed into several sub-projects, and if a sub-project is itself still large it may be decomposed again. When counting the data of each project, a prominent problem then appears: the parent project has no real data of its own, and its figures must be accumulated from the data of its child projects. A tree structure can thus be formed between parent and child projects, as shown in Fig. 5.
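The parent/child roll-up just described can be sketched as a recursive aggregation over a project tree; the project ids and figures below are hypothetical.

```python
def rollup(children, own_value, node):
    """Total figure for `node`: its own value (0 for a pure parent)
    plus the roll-up of all of its child projects."""
    return own_value.get(node, 0) + sum(
        rollup(children, own_value, c) for c in children.get(node, []))

children = {"P": ["S1", "S2"], "S2": ["S2a"]}    # hypothetical project tree
own_value = {"S1": 100, "S2": 10, "S2a": 40}     # per-project figures
total = rollup(children, own_value, "P")         # parent aggregates children
```

Because the parent "P" carries no figure of its own, its total is entirely the sum over its subtree, matching the accumulation rule in the text.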
The main function of the intelligent module is to read messages from the sensor devices, parse the monitoring data, and obtain temperature change data that is automatically displayed in real time for a project. When the temperature of an object in a specific area exceeds a certain measured value, it is likely to cause a fire; to help avoid fire accidents, temperature sensors are installed in each specific area of the project. The temperature sensors record the temperature changes of objects in each area in real time. When the temperature somewhere is high, the project leader can easily view the terrain around the project on a map in order to determine an evacuation route. All monitored data are displayed as histograms. The intelligent monitoring module mainly provides the following interfaces: (1) Sensor message reading interface: this interface communicates automatically with the sensor devices; after communication succeeds, it reads messages and writes them directly to the system database. A time point and time period for reading and saving data messages can be configured for this timed interface.
(2) Message parsing interface: the parsing interface reads the message content directly from the database, parses it, and writes the result into the pm_fire table, which stores temperature and time values.
(3) Modeling interface: the main purpose of this interface is to fit a linear regression equation to the stored temperature and time data and use the regression equation to predict the temperature trend.
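A minimal sketch of the modeling interface's regression step, assuming ordinary least squares on hypothetical temperature/time readings.

```python
def fit_line(ts, temps):
    """Ordinary least squares for temp = a*t + b on paired readings."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(temps) / n
    a = (sum((t - mt) * (y - my) for t, y in zip(ts, temps))
         / sum((t - mt) ** 2 for t in ts))
    b = my - a * mt
    return a, b

# Hypothetical sensor readings: temperature rising 2 degrees per time step.
a, b = fit_line([0, 1, 2, 3], [20.0, 22.0, 24.0, 26.0])
predicted = a * 4 + b   # extrapolated temperature at the next time step
```

As the conclusion of the paper notes, real readings rarely satisfy a strictly linear relationship, so such an extrapolation should be treated as a rough early-warning signal rather than an exact forecast.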
Project engineering management evaluation system test
For a system to run stably and safely, a detailed testing process is essential. To provide a reliable and safe system for the project's users and technicians, testing starts from two dimensions: device functionality and system performance. Unit testing checks and verifies the smallest functional units of the software, mainly by writing test functions that exercise the source code: a test function calls a function in the source code and checks whether the realized behavior matches the expected behavior. The background software of this system is developed with the Spring Boot framework, and interface testing uses the Swagger UI tool integrated with Spring Boot. The software environment configuration for the tests is shown in Table 6.
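The unit-testing approach just described (a test function calls a source-code function and compares the result with the expected value) can be sketched with Python's unittest; the system itself is Java-based, and the function under test here is hypothetical.

```python
import unittest

def project_total(costs):
    """Hypothetical source-code function under test: total cost of a project."""
    return sum(costs)

class ProjectTotalTest(unittest.TestCase):
    """Each test calls the source function and checks the realized
    behaviour against the expected value."""
    def test_matches_expected(self):
        self.assertEqual(project_total([10, 20, 5]), 35)

    def test_empty_project(self):
        self.assertEqual(project_total([]), 0)

suite = unittest.TestLoader().loadTestsFromTestCase(ProjectTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

`result.wasSuccessful()` reports whether every expected/actual comparison passed, which is the consistency check the text refers to.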
The test hardware environment configuration is shown in Table 7. Practical testing of the project showed that the security functions and key functional designs of the project management system basically achieve the expected goals.
Conclusion
This article carried out functional and performance tests on the project management system. The research process also made several shortcomings clear. (1) In the intelligent fire monitoring module of the engineering project, the linear temperature regression model is used to predict the fire temperature automatically. In many actual situations the predicted temperature and the actual temperature do not fully satisfy this linear regression relationship, so many predictions are needed and some errors may result.
(2) In the enterprise material circulation and transportation management module, when calculating the minimum travel distance between two projects, the Baidu map service is generally called with the project's place name to find the corresponding location, from which the shortest travel distance is obtained directly. The located project site may still be inaccurate, in which case the computed minimum distance does not represent the real shortest travel distance between the two places, so the calculated distance between two engineering projects becomes inaccurate.
(3) The dispatch module for professional and technical personnel estimates the working-ability value of each technician, and these estimates may not be very accurate. If there is a large deviation in a technician's working dates, the optimal solution is not necessarily unique, so all events that may occur in these situations must be considered as a whole in order to choose the best solution for each situation.
Funding This paper was supported by Natural Science Basic Research Program of Shaanxi: research on the causes of public rejection in reclaimed water reusing projects based on EEG experiment (Program No.2022JM-429).
Data availability Data will be made available on request.
Declarations
Conflict of interest The authors declare that they have no conflict of interests.
Ethical approval This article does not contain any studies with human participants performed by any of the authors. | 6,935.6 | 2023-04-06T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
An EOQ Model for Varying Items with Weibull Distribution Deterioration and Price-dependent Demand
In this paper, we develop an instantaneous replenishment policy for deteriorating items with price-dependent demand. The demand and deterioration rates are continuous, differentiable functions of price and time, respectively. A variable proportion of the items deteriorates per unit time, and shortages are permissible and completely backordered. We develop a policy with price-dependent demand under profit maximization; the net profit per unit time is a concave function. The model is illustrated with a numerical example.
Introduction
An optimal replenishment policy depends on the ordering cost, the inventory carrying cost and the shortage cost. An important problem confronting a supply manager in any modern organization is the control and maintenance of inventories of deteriorating items. Fortunately, the rate of deterioration is very small for items like steel, hardware, toys and glassware, so there is little need to consider deterioration when determining the economic lot size for such items. Deterioration is defined as change, damage, decay, spoilage, obsolescence and loss of utility or marginal value of a commodity that reduces its usefulness from the original. The demand rate is assumed to be constant in deterministic inventory models. Covert and Philip [1] relaxed the assumption of a constant deterioration rate by representing the distribution of the time to deterioration with a two-parameter Weibull distribution. Philip [2] generalized this model by assuming a three-parameter Weibull distribution. Misra [3] adopted a two-parameter Weibull deterioration distribution to develop an inventory model with a finite rate of replenishment. These investigations were followed by works of several researchers, such as Shah and Jaiswal [4], Aggarwal [5], and Roy-Chaudhury and Chaudhuri [6]. It has been empirically observed that the failure and life expectancy of many items can be expressed in terms of a Weibull distribution, which has prompted researchers to represent the time to deterioration of a product by a Weibull distribution. The model of Ghare and Schrader [7] was extended by Covert and Philip [1], who obtained an EOQ model with a variable rate of deterioration by assuming a two-parameter Weibull distribution. Later, several other researchers, such as Tadikamalla [8], Mukhopadhyay et al. [9,10], and Chakrabarty et al. [11], developed economic order quantity models. Therefore, a realistic model is one in which the deterioration rate is treated as a time-varying function.
Many theoretical papers assume that the deterioration rate follows the Weibull distribution; particular attention to this topic was given by Goyal and Giri [12]. Most research on deteriorating inventory has assumed a constant rate of deterioration, whereas the Weibull distribution can represent stock whose deterioration rate varies with time. Wagner and Whitin [18] first considered an inventory model for goods that deteriorate at the end of a prescribed storage period, and Ghare and Schrader [7] revised the economic order quantity model by considering exponential decay in inventory. Shah [19] extended this work to the case of three-parameter Weibull distribution deterioration, and Goel and Aggarwal [20] considered a model with a varying rate of deterioration. The present paper deals with units of a product in stock that are subject to deterioration; in this direction, investigations were carried out by several researchers, such as Chakrabarty et al. [11], Chen and Lin [13], Ghosh and Chaudhuri [14], Mahapatra and Maiti [15], Mondal et al. [16] and Wu and Lee [17].
In real-life situations, the retailer's lot size is affected by the demand for the product, and the demand depends on the product's price; the problems of determining the retail price and the lot size are therefore inter-dependent. Kim et al. [21] studied joint price and lot-size determination for deteriorating products with a constant rate of deterioration. Wee [22] studied the joint pricing and replenishment policy for a deteriorating inventory with a price-elastic demand rate that declines over time. Abad [23] considered the dynamic pricing and lot-sizing problem for perishable goods under partial backlogging of demand, modelling the backlogging phenomenon with a new approach in which customers are considered impatient. In today's competitive market, marketing policies and conditions such as price variations and advertisement of an item change its selling rate. It is commonly seen that a lower selling price increases the selling rate whereas a higher selling price has the reverse effect; hence the selling rate of an item depends on its selling price, and the selling-rate function must be decreasing with respect to the selling price. Several researchers, such as Ladany and Sternlieb [24], Goyal and Gunasekaran [25], Luo [26], Weng [27], Subramanyam and Kumaraswamy [28] and Das et al. [29], developed models with price variations for deteriorating items. Incorporating this situation, we develop a policy with price-dependent demand under profit maximization. Since the failure and life expectancy of many items can be expressed in terms of the Weibull distribution, we consider three-parameter Weibull distribution deterioration. Shortages are permitted and are completely backordered.
Assumptions and Notations
The following assumptions and notations are considered:

3. The distribution of the time to deterioration follows a three-parameter Weibull distribution, and the deteriorated units are not replaced during a given cycle.
4. The inventory level, replenishment quantity, demand and deterioration are continuous functions of time.
5. The replenishment quantity and period are constant for each cycle.
6. Units are available to satisfy demand immediately after their replenishment.
7. The cost of a deteriorated unit is constant and equal to the unit cost c.
8. The demand during the stock-out period is completely satisfied by the next replenishment.
9. T is the cycle length, Q is the order quantity per cycle, T1 is the duration of the inventory cycle during which there is positive inventory, and (T − T1) is the duration of the inventory cycle during which stock-out occurs.
10. a, b are positive constants; s is the unit selling price; D(s) is the demand rate, a function of s.
11. c is the unit cost (c < s); the fixed production cost per cycle, the inventory holding cost and the shortage cost per unit backordered per time-unit are all positive constants.
12. I1(t) denotes the time-varying inventory level over the cycle segment [0, T1]. NP(s, T1) is the net profit per unit time of the inventory system; T*, T1*, s*, Q* and NP* denote the optimal values of the cycle length, inventory holding time, selling price, order quantity and net profit, respectively.
In this article, the item is assumed to be replenished every T time-units. Shortages are completely backordered and satisfied by the next replenishment. The behavior of the inventory system is depicted in Fig. 1: a review period T is divided into two sub-periods, where T1 is the period during which the system has on-hand inventory and (T − T1) is the period during which the system has shortages. The rate of deterioration-time relationship for the three-parameter Weibull distribution is shown in Fig. 2. The figure shows that the three-parameter Weibull distribution is suitable for items with any initial value of the rate of deterioration, including items that start deteriorating only after a certain period of time.
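The rate-time relationship plotted in Fig. 2 follows from the standard three-parameter Weibull formulas; a minimal Python sketch of these quantities (symbols α, β, γ as in the text; numeric parameter values in any usage are purely illustrative):

```python
import math

def weibull_density(t, alpha, beta, gamma):
    """Three-parameter Weibull pdf:
    f(t) = alpha*beta*(t - gamma)^(beta - 1) * exp(-alpha*(t - gamma)^beta), t > gamma."""
    if t <= gamma:
        return 0.0
    return alpha * beta * (t - gamma) ** (beta - 1) * math.exp(-alpha * (t - gamma) ** beta)

def weibull_cdf(t, alpha, beta, gamma):
    """Three-parameter Weibull cdf: F(t) = 1 - exp(-alpha*(t - gamma)^beta), t > gamma."""
    if t <= gamma:
        return 0.0
    return 1.0 - math.exp(-alpha * (t - gamma) ** beta)

def deterioration_rate(t, alpha, beta, gamma):
    """Instantaneous deterioration rate z(t) = f(t) / (1 - F(t))
    = alpha*beta*(t - gamma)^(beta - 1); zero before the location parameter gamma,
    so deterioration starts only after time gamma."""
    if t <= gamma:
        return 0.0
    return alpha * beta * (t - gamma) ** (beta - 1)
```

Depending on β, z(t) is decreasing (β < 1), constant (β = 1, the exponential-decay case of Ghare and Schrader [7]) or increasing (β > 1).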
Model Development
A typical behavior of the inventory system in a cycle is depicted in Fig. 1. At the beginning of the cycle, the inventory level is at its maximum Imax. During the interval [0, T1], the inventory is depleted by the combined effects of demand and deterioration. At T1, the inventory level reaches zero, and all of the demand during (T − T1) is backlogged; the total number of backlogged items is replaced by the next replenishment. The distribution of the time to deterioration of the items follows a three-parameter Weibull distribution. The instantaneous rate of deterioration of the non-deteriorated inventory at time t can be obtained from z(t) = f(t)/(1 − F(t)), where f(t) is the probability density function and F(t) = 1 − exp(−α(t − γ)^β) is the cumulative distribution function; thus the instantaneous rate of deterioration of the on-hand inventory is z(t) = αβ(t − γ)^(β−1). The density function, shown in Fig. 2, may exhibit a decreasing, constant or increasing rate of deterioration; it is clear from Fig. 2 that the three-parameter Weibull distribution is suitable for items with any initial value of the rate of deterioration and for items that start deteriorating only after a certain period. The inventory level of the system over the period [0, T] is described by differential equations with the boundary conditions I1(T1) = 0 and I2(T1) = 0. Using the condition I1(0) = Imax, the solutions of equations (3) and (4) give the inventory level in each sub-period, from which the loss of stock due to deterioration follows. The total average cost per unit time K of the system consists of the deterioration cost, the replenishment cost and the backordering cost; substituting the inventory solutions into equation (8) yields equation (9). When there is no shortage, T1 = T, and for constant demand the model reduces accordingly.
Putting these values into Eq. (9) gives the corresponding cost expressions; for constant demand with β = 1, and for the no-shortage case T1 = T with β = 1, Eq. (9) simplifies further. The net profit per unit time NP(s, T1) is obtained by deducting the average cost per unit time K from the revenue per unit time. To maximize the net profit, two cases are formulated.

Case I: Optimization of s and T1 with T prescribed. Setting the first partial derivatives of NP with respect to s and T1 to zero yields Eqs. (16) and (17), two simultaneous equations in s and T1, provided that the sufficient (second-order) conditions (18) and (19) are satisfied. These conditions are verified for a linear price function of demand, so it can be concluded that (18) and (19) hold at the optimal values of s and T1. With the optimal values of s and T1, the net profit can be evaluated from equation (9), and the optimal replenishment lot size Q* follows.

Case II: Optimization of s and T1 with T as a decision variable
If T is not prescribed, then for each candidate T the optimal values of s and T1 can be found from Eqs. (16) and (17); in general, Eqs. (18) and (19) can only be solved numerically, with the help of a computer algorithm, for a given set of parameter values. For a given T, the optimal values of s and T1 are found simultaneously from Eqs. (18) and (19), and the process is repeated over other values of T until the best T, with its associated s* and T1*, is obtained. Table 1 shows that the net profit increases as the deterioration increases, whereas the selling price varies only slightly; the rise in the optimal solution is due to the no-shortage constraint in the present model.
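The Case-I/Case-II search just described can be illustrated with a toy numerical routine. The profit function below is an illustrative stand-in, not the paper's Eq. (9): it assumes a linear demand D(s) = a − bs with simple quadratic holding and shortage terms, and every parameter name and value is hypothetical.

```python
def demand(s, a, b):
    """Assumed linear price-dependent demand D(s) = a - b*s (illustrative)."""
    return max(a - b * s, 0.0)

def net_profit(s, T1, T, a, b, c, h, sh, A):
    """Stand-in NP(s, T1): margin minus holding, shortage and setup costs
    per unit time -- NOT the paper's exact expression."""
    D = demand(s, a, b)
    margin = (s - c) * D                            # revenue net of unit cost
    holding = h * D * T1 ** 2 / (2.0 * T)           # carrying stock over [0, T1]
    shortage = sh * D * (T - T1) ** 2 / (2.0 * T)   # backorders over [T1, T]
    setup = A / T                                   # fixed replenishment cost per unit time
    return margin - holding - shortage - setup

def best_s_T1(T, a, b, c, h, sh, A, ns=200, nT1=100):
    """Case-I style search: with T prescribed, maximize NP over a grid of (s, T1)."""
    best = (float("-inf"), None, None)
    for i in range(1, ns):
        s = c + i * (a / b - c) / ns                # prices between unit cost and choke price
        for j in range(1, nT1 + 1):
            T1 = j * T / nT1
            val = net_profit(s, T1, T, a, b, c, h, sh, A)
            if val > best[0]:
                best = (val, s, T1)
    return best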
Conclusion
We have developed an instantaneous replenishment policy for Weibull-deteriorating items with price-dependent demand, in which shortages are allowed and completely backordered. Models with price-dependent demand are surprisingly few, while time-varying demand abounds in inventory models, even though the selling price of an item can significantly affect its demand: selling price is the main criterion for a consumer who goes to the market to buy a particular item. The principal feature of the model is a deterministic demand rate that is assumed to be a function of the selling price. The inventory cycle (or holding time) and the selling price are optimized simultaneously to maximize the net profit; Aggarwal and Jaggi [30] developed their model for decaying inventory with this approach. A simple heuristic is implemented to derive the best replenishment time interval for maximum net profit. From Table 1 it is remarkable that the selling price of an item changes only slightly, or remains quite stable, as the deterioration rate changes. The three-parameter Weibull distribution deterioration considered here is suitable both for optimizing the model and for controlling the inventory. In many realistic situations stock-out is unavoidable due to various uncertainties, and there are many situations in which the profit on the stored item exceeds its backordering cost; the consideration of shortages is therefore economically desirable in these cases.
In reality, the retailer's lot size is affected by the demand for the product, and the demand depends on the product's price, so the problems of determining the retail price and the lot size are inter-dependent. The vendor must have some idea of the buyer's behavior, such as the response to shortages and to price. It should be noted that, to maximize profit, a vendor can either increase the price, shorten the replenishment cycle or shorten the inventory holding time to counteract the greater loss due to a higher deterioration rate. The selling rate must be a decreasing function of s, since a lower price causes a higher selling rate and vice versa; in the numerical example, the selling rate is taken as a linear function of s. The model shows how deterioration patterns influence the scheduling policy and the price.
The three-parameter Weibull density function, following Philip [2], is f(t) = αβ(t − γ)^(β−1) exp(−α(t − γ)^β) for t > γ. In Case II, the optimum values of s and T1 with T prescribed are obtained as in Case I, and the process is continued with other values of T until the best T, with its associated s* and T1*, is found; using relation (A3), condition (A7) is verified, and the concavity of the net profit is shown in Appendix A.
Table 1. Optimum solutions obtained using Case II.
The First Complete Mitochondrial Genome of the Genus Litostrophus: Insights into the Rearrangement and Evolution of Mitochondrial Genomes in Diplopoda
This study presents the complete mitochondrial genome (mitogenome) of Litostrophus scaber, the first mitogenome of the genus Litostrophus. The mitogenome is a circular molecule with a length of 15,081 bp. The proportion of adenine and thymine (A + T) was 69.25%. The gene ND4L used TGA as the initiation codon, while the other PCGs used ATN (N = A, T, G, or C) as initiation codons. More than half of the PCGs used T as an incomplete termination codon. The transcription direction of the L. scaber mitogenome matched Spirobolus bungii, in contrast to most millipedes. Novel rearrangements were found in the L. scaber mitogenome: trnQ-trnC and trnL1-trnP underwent short-distance translocations, and the gene block rrnS-rrnL-ND1 moved to a position between ND4 and ND5, resulting in a novel gene order. The phylogenetic analysis showed that L. scaber is most closely related to S. bungii, followed by Narceus magnum. These findings enhance our understanding of the rearrangement and evolution of Diplopoda mitogenomes.
Introduction
With more than 12,000 nominal species and almost 80,000 predicted species [1], the Diplopoda are a diverse and widely distributed group of arthropods [2]. Diplopoda are often colloquially called millipedes, but the majority possess only a few hundred pairs of legs or fewer [1]. As soil-dwelling animals, millipedes are highly important components of terrestrial ecosystems [3], playing ecological roles in humus development [3-5] and contributing to the decomposition of plant litter by breaking down fallen leaves and maintaining soil fertility [6]. Millipedes are often used as bioindicators of soil quality; owing to their small size and the diversity and abundance of arthropod groups, they may reflect trends in species richness and community composition more accurately than vertebrates [1,2]. Many studies of the ecological functions of millipedes have therefore been conducted [7-9]. Despite millipedes' ubiquity, little research has been conducted on their evolutionary relationships. The classical taxonomy of millipedes is based on morphology and has defects; meanwhile, there is limited research on the population-level genetics of millipedes, leading to problems in the classification of this group [10,11]. Our understanding of millipedes' higher-level relationships, ecology, behavior, physiology, and genomic composition is also limited.
The mitochondrion is a basic organelle of eukaryotic cells, with independent and conserved genetic characteristics [12]. A typical animal mitochondrial genome (mitogenome) is a circular structure 13-17 kbp in length, usually comprising 37 genes, namely 13 protein-coding genes (PCGs), 22 transfer RNA genes (tRNAs) and 2 ribosomal RNA genes (rRNAs), plus a D-loop region [13]. In arthropods, the mitogenome is structurally more diverse, and atypical mitogenomes occur: for example, the mitogenomes of certain species in the order Coleoptera are significantly larger than those of other studied arthropods [14], and the mitogenomes of some species in the order Anoplura have been observed to split into minicircular chromosomes [15]. In recent years, mitogenomes have been widely used in studies of molecular systematics [16,17], population genetics [18,19], and molecular evolution [20,21]. In comparison with individual mitochondrial genes, which are susceptible to loss [22,23], mitogenomes provide greater accuracy when inferring phylogenetic relationships [24]; for example, there is evidence indicating that the mitochondrial cox1 gene is not suitable for inferring the phylogeny of many metazoans due to its low levels of polymorphism [25]. Among the characteristics of mitogenomes, gene rearrangements also provide insights into phylogenetic relationships [26], and the comparison of gene arrangements has significant potential to help resolve some of the most fundamental branches of multicellular animal phylogeny [27]. Only recently have molecular phylogenetic techniques been used to reconstruct millipedes' phylogenetic relationships. Myriapoda species are model organisms for studying the relationship between gene rearrangements and phylogenetic analysis due to their high rates of rearrangement [28]. Despite this potential, research on millipede mitogenomes has remained limited [29,30]: at the time this paper was written, few millipede mitogenomes had been uploaded to the NCBI website, and some of these sequences may have annotation issues. To conduct a comprehensive and systematic phylogenetic analysis of the class Diplopoda, a large number of complete mitogenomes of its subordinate species are needed [31,32]. The lack of mitogenome data has led to insufficient research on the evolutionary significance of Diplopoda gene rearrangements and their phylogenetic relationships.
Litostrophus scaber, a common millipede species in China, belongs to the family Pachybolidae in the order Spirobolida. In this study, the mitogenome of L. scaber was assembled and characterized. We describe its genome size, nucleotide composition, codon usage, and gene rearrangements. To further investigate the use of complete or near-complete mitogenomes to enhance the effectiveness of phylogenetic analyses, we conducted phylogenetic analyses based on the 13 PCGs to determine the phylogenetic position of L. scaber. Our study aims to shed light on the diversity, evolution, gene rearrangement, and phylogenetic relationships of Diplopoda species.
Specimen Collection and DNA Extraction
The specimen used in this study was collected from Seven Star Park (110.31° E, 25.27° N) in Guilin, Guangxi, China. After species diagnosis based on the morphological features given in previous research [2] and the distribution area provided by the Global Biodiversity Information Facility website (GBIF, available at https://www.gbif.org, accessed on 28 May 2023), the specimen was stored in a −80 °C freezer at the Nanjing Forestry University Animal Molecular Evolution Laboratory. The collection of the specimen was reviewed and approved by Nanjing Forestry University, and the specimen was collected and studied in accordance with Chinese laws. Total genomic DNA was extracted using a FastPure Cell/Tissue DNA Isolation Mini Kit (Vazyme, Nanjing, China) and stored at −20 °C for the follow-up investigation.
Sequence Analysis
The complete genomic library of L. scaber was constructed on the Illumina platform (Personal, Shanghai, China), and sequencing was performed using next-generation sequencing with an insert size of 300 bp (about 2 Gb of raw data). To generate clean data, low-quality sequences were removed. Then, 22,035,594 reads with a GC content of 31.69% were assembled into a complete mitogenome using Geneious Prime v2023.1.2 software. The mitogenome of Spirobolus bungii (accession number: MT767838.1) was used as a template for the assembly, with the medium sensitivity/speed option. A consensus sequence was constructed using a 99% base-call threshold.
The initial analysis of the mitogenome was based on DNASTAR Lasergene 7.1 and the MITOS Web Server (available at https://usegalaxy.eu/root?tool_id=toolshed.g2.bx.psu.edu/repos/iuc/mitos/mitos/1.1.1%20galaxy0, accessed on 6 June 2023) [33]. Lasergene 7.1 was used for sequence alignment and gene recognition, and the MITOS Web Server was used to locate the RNA genes. The PCGs were predicted using both MITOS and the CD-Search tool on the NCBI website (available at https://www.ncbi.nlm.nih.gov/, accessed on 9 June 2023). The corrected mitogenome of L. scaber was submitted to GenBank (accession number: OR139892.1). The gene map was generated using the CGView Server (available at https://cgview.ca/, accessed on 17 June 2023). BLASTN was used for a mitogenomic comparison of 17 species, and BRIG was used to produce a graphical map of the BLASTN results [34]. The composition skew was calculated as AT-skew = (A − T)/(A + T) and GC-skew = (G − C)/(G + C) [35]. DnaSP was used to analyze nucleotide diversity (Pi) [36]. MEGA X was used to calculate the relative synonymous codon usage (RSCU) and the non-synonymous (Ka) and synonymous (Ks) substitution rates. The ggplot2 and aplot packages, executed in R v4.3.1, were used to generate the corresponding figures. The secondary structures of the 22 tRNA genes were predicted using the ViennaRNA Web Services (available at http://rna.tbi.univie.ac.at/forna/, accessed on 23 June 2023) to generate colored secondary structures [37].
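The skew formulas above are straightforward to apply directly to a sequence; a minimal sketch:

```python
def at_gc_skew(seq):
    """AT-skew = (A - T)/(A + T) and GC-skew = (G - C)/(G + C),
    as defined in [35]; assumes the sequence contains at least one
    A/T and one G/C base."""
    s = seq.upper()
    a, t, g, c = (s.count(x) for x in "ATGC")
    return (a - t) / (a + t), (g - c) / (g + c)
```

For the reported whole-mitogenome composition (A 34.09%, T 35.16%, G 21.74%, C 9.01%), these formulas reproduce the stated AT skew of about −0.016 and GC skew of about 0.41.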
Phylogenetic Analysis
A total of 23 mitogenomes with credible annotations were selected for the phylogenetic analysis, representing 17 genera, 12 families, and 6 orders; the accession numbers and taxonomic information are presented in Table 1. A centipede species, Cermatobius longicornis, was used as the out-group. All operations were completed within the PhyloSuite v1.2.3 software package [38]. Sequences of the 13 PCGs were extracted using PhyloSuite and then used for the phylogenetic analyses. The 'Normal' or 'Codon' alignment and the 'auto' strategy of MAFFT v7.313 were used for multiple gene alignment [39]. MACSE was used to optimize the alignments using the classic Needleman-Wunsch algorithm [40]. Sequence pruning in the form of triplet codons was performed using the 'Codon' pattern in Gblocks [41], and the RNA sequences were trimmed using the 'automated1' mode in trimAl [42]. The processed PCG and RNA sequences were concatenated in PhyloSuite. With BIC as the criterion, partition analysis was performed for IQ-TREE and MrBayes using ModelFinder's edge-unlinked mode, with each PCG divided into three codon partitions using the codon mode (3 sites) [43]. The best-fit models obtained with ModelFinder for both software packages are presented in Table S1. IQ-TREE was used to reconstruct the ML tree with 1000 bootstraps [44]. The BI tree was reconstructed using MrBayes 3.2.6 with four Markov chains (three hot chains and one cold chain); two independent runs of 1,000,000 generations were conducted, with sampling every 1000 generations, and the first 25% of samples were discarded as burn-in to reduce simulation error [45]. Phylogenetic trees were visualized and edited using the Interactive Tree of Life Web Server (iTOL, available at https://itol.embl.de, accessed on 10 July 2023).
Mitogenomic Structure and Comparison
The mitogenome was a typical circular, double-stranded molecule, 15,081 bp in length, slightly longer than those of other species in the order Spirobolida. It included 13 PCGs, 22 tRNAs, 2 rRNAs, and one D-loop region. Four PCGs (ND4L, ND4, ND1, and ND5), nine tRNAs (L1, P, Q, C, V, L2, H, F, and Y), and the two rRNAs were transcribed on the majority strand (J-strand), similar to S. bungii in the same family [30] (Table 2, Figure 1). The base composition of the mitogenome was A 34.09%, T 35.16%, G 21.74%, and C 9.01%; the whole mitogenome was biased toward A and T because of the high A + T content (69.25%). In addition, the AT skew was negative (−0.016) and the GC skew was positive (0.41). Two gene overlaps were identified, one between ND4L and ND4 (six bp in length) and the other between ATP6 and ATP8 (six bp in length). The A + T content, AT skew, and GC skew within Spirobolida exhibit significant variation (Figure 2): the A + T content ranges from 56.7% (S. bungii) to 68.04% (L. scaber), the AT skew ranges from −0.155 (L. scaber) to −0.131 (N. annularus), and the GC skew ranges from −0.060 (N. annularus) to −0.016 (L. scaber). The between-order analysis showed that the order Julida [46] and the order Polydesmida had T skews and G skews, while the other orders had the same skew as the order Spirobolida [47]. The BLAST results revealed that L.
scaber exhibited a high level of similarity to other millipedes at the protein level (Figure 3). The protein-level similarity ranged from 73.58% (Appalachioria falcifera) to 79.77% (N. annularus). Specifically, L. scaber exhibited the highest similarity with N. annularus, followed by Chaleponcus netus and Prionopetalum kraepelini. It is noteworthy that these similarities do not align consistently with the established phylogenetic relationships; this discrepancy might be attributed to gene rearrangement, which appears to have a significant impact on the process of systematic evolution [48,49]. The charts generated from the BLAST results using BRIG indicated that the COX1 gene in millipedes was the most highly conserved, displaying the highest consistency, while the consistency of ATP8, ND6, ND2, and ND4L was lower, a result that aligns with findings from previous studies [30,31].
PCGs
The total PCG length of the L. scaber mitogenome was 10,905 bp, accounting for 72.31% of the entire mitogenome, similar to other millipede species. Specifically, 12 PCGs used ATN (N = A, T, G, or C) as the initiation codon, and the nonstandard initiation codon TGA was observed in ND4L. Unusual initiation codons have been reported previously, including ND1 of Antrokoreana gracilipes, which starts with GTG, and ATP8 of Anaulaciulus koreanus, which starts with TTA [46,47]. Seven PCGs (COX2, COX1, ND5, ND6, CYTB, COX3, and ATP6) used T as an incomplete termination codon, and ND1 used TA as an incomplete termination codon; such truncated termination codons are also found in other arthropods [46,47] and might be completed to TAA or TAG for normal function [50]. The remaining PCGs used TAA/TAG as termination codons. The RSCU values of 13 millipede species from 12 families of 6 orders are summarized in Figure 4. Overall, the five amino acids with the highest usage were Leu, Gly, Val, Phe, and Ile, and the four most common codons were UUU (Phe), AUU (Ile), UUA (Leu), and AUA (Met). The number of codons translated by these species ranged from 3628 to 3684. Codons ending in A/U were used significantly more often than codons ending in C/G, reflecting the strong AT bias at the third codon position, a finding consistent with previous studies on the class Myriapoda [51,52]. Analogously, the biased use of A + T nucleotides was reflected in the codon frequencies.
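RSCU compares each codon's observed count with the count expected if all synonymous codons for its amino acid were used equally. A minimal sketch with a small illustrative subset of synonymous families (the analysis in the text used MEGA X over the full invertebrate mitochondrial code):

```python
# Illustrative subset of synonymous codon families
# (invertebrate mitochondrial code, where AUA encodes Met, not Ile).
FAMILIES = {
    "Phe": ["UUU", "UUC"],
    "Ile": ["AUU", "AUC"],
    "Gly": ["GGU", "GGC", "GGA", "GGG"],
}

def rscu(codon_counts):
    """RSCU(c) = observed(c) / (family total / family size).
    A value of 1 means the codon is used exactly as often as expected
    under uniform synonymous usage; > 1 means it is over-represented."""
    out = {}
    for aa, codons in FAMILIES.items():
        total = sum(codon_counts.get(c, 0) for c in codons)
        if total == 0:
            continue  # amino acid absent; RSCU undefined
        expected = total / len(codons)
        for c in codons:
            out[c] = codon_counts.get(c, 0) / expected
    return out
```

An AT-biased third position, as reported above, shows up as RSCU values above 1 for A/U-ending codons such as UUU.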
Pi and P-distance analyses were performed to elucidate the variation and evolution of the PCGs (Figure 5). Within the order Spirobolida, sliding-window analysis revealed variation in Pi among the PCGs. The average Pi of each gene ranged from 0.201 (COX1) to 0.422 (ATP8), with ATP8, ND6, and ND2 having the highest Pi values of 0.422, 0.422, and 0.374, respectively. Conversely, COX3, COX2, and COX1 had the lowest Pi values of 0.271, 0.252, and 0.201, respectively, in line with the genetic distances between the PCGs. This implies that ATP8,
ND6, and ND2 are fast-evolving genes, while COX3, COX2, and COX1 are slow-evolving genes within the order Spirobolida. However, the analysis across different orders yielded somewhat different results: between orders, ND3, ND6, and ATP8 were identified as fast-evolving genes, whereas COX3, COX2, and COX1 were again the slow-evolving genes. Overall, the results of our analysis were similar to those of previous studies [28,52].
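The sliding-window Pi analysis (performed here with DnaSP) amounts to averaging pairwise p-distances within windows along an alignment; a minimal sketch, assuming gap-free aligned sequences of equal length:

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Pi: mean pairwise proportion of differing sites across aligned sequences."""
    def p_dist(x, y):
        diffs = sum(1 for a, b in zip(x, y) if a != b)
        return diffs / len(x)
    pairs = list(combinations(seqs, 2))
    return sum(p_dist(x, y) for x, y in pairs) / len(pairs)

def sliding_window_pi(seqs, window, step):
    """Pi computed in successive windows along the alignment,
    in the style of a DnaSP sliding-window analysis."""
    length = len(seqs[0])
    out = []
    for start in range(0, length - window + 1, step):
        chunk = [s[start:start + window] for s in seqs]
        out.append((start, nucleotide_diversity(chunk)))
    return out
```

Genes whose windows consistently show high Pi (here ATP8, ND6, ND2) are the fast-evolving candidates; consistently low-Pi genes (COX1-COX3) are the conserved ones.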
To analyze the evolutionary rate of the PCGs, we assessed the average Ka/Ks values (Figure 5). Under neutral protein-level evolution, non-synonymous and synonymous substitution rates should be equal, giving a Ka/Ks ratio of 1; a ratio below 1 indicates purifying or stabilizing selection, which resists change, while a ratio above 1 implies positive or Darwinian selection, which drives evolutionary change. Within the order Spirobolida, all PCGs had an average Ka/Ks value of less than 1, ranging from 0.038 (COX1) to 0.423 (ND4L), indicating purifying selection. Among the orders, the average Ka/Ks values ranged from 0.206 (CYTB) to 6.695 (COX2), suggesting that ND4, ND5, ATP6, and COX2 had experienced positive selection. COX2 exhibited a Ka/Ks value much higher than 1; this high value may be attributable to the limited number of species used in the analysis. Different candidate markers can be selected for different taxonomic categories, and an analysis of the average Ka/Ks values between orders could potentially be used to select candidate markers for molecular biological identification across different orders.

Figure 4. RSCU results for 13 species of Diplopoda, including the orders Glomeridesmida, Julida, Spirobolida, Spirostreptida, Platydesmida, and Polydesmida. Different colors correspond to different third codons.
Pi and P-distance analyses were performed to elucidate the variation and evolution of the PCGs (Figure 5). Within the order Spirobolida, sliding window analysis revealed variation in Pi among the PCGs. The average Pi of each gene ranged from 0.201 (COX1) to 0.422 (ATP8), with ATP8, ND6, and ND2 having the highest Pi values of 0.422, 0.422, and 0.374, respectively. Conversely, COX3, COX2, and COX1 had the lowest Pi values of 0.271, 0.252, and 0.201, respectively, in line with the genetic distances between the PCGs. This implies that ATP8, ND6, and ND2 are fast-evolving genes, while COX3, COX2, and COX1 are slow-evolving genes within the order Spirobolida. However, the analysis across different orders yielded different results: among orders, ND3, ND6, and ATP8 were identified as fast-evolving genes, whereas COX3, COX2, and COX1 were generally slow-evolving. Overall, the results of our analysis were similar to those of previous studies [28,52]. To analyze the evolutionary rate of the PCGs, we assessed the average Ka/Ks values (Figure 5). Under the assumption of neutral evolution at the protein level, Ka and Ks should be equal, resulting in a Ka/Ks ratio of 1. A Ka/Ks ratio below 1 indicates purifying or stabilizing selection, which suggests resistance to change, whereas a ratio above 1 implies positive or Darwinian selection, which drives evolutionary change. Within the order Spirobolida, all PCGs had average Ka/Ks values of less than 1, ranging from 0.038 (COX1) to 0.423 (ND4L), indicating purifying selection. Among the orders, the average Ka/Ks values ranged from 0.206 (CYTB) to 6.695 (COX2), suggesting that ND4, ND5, ATP6, and COX2 had experienced positive selection. COX2 in particular exhibited a Ka/Ks value much higher than 1; this may be attributable to the limited number of species used for the analysis. Different candidate markers can be selected for different taxonomic categories, and an analysis of the average Ka/Ks values between orders could potentially be used to select candidate markers for molecular biological identification across different orders.
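The Ka/Ks interpretation above follows the standard convention and can be expressed as a small classifier. This is an illustrative sketch only; the function name and the neutrality boundary at exactly 1 are ours:

```python
def classify_selection(ka_ks):
    """Interpret a Ka/Ks ratio under the standard convention:
    < 1 purifying (stabilizing) selection, = 1 neutral evolution,
    > 1 positive (Darwinian) selection."""
    if ka_ks < 0:
        raise ValueError("Ka/Ks must be non-negative")
    if ka_ks < 1.0:
        return "purifying"
    if ka_ks == 1.0:
        return "neutral"
    return "positive"

# Values reported in the text: COX1 within Spirobolida vs. COX2 among orders
print(classify_selection(0.038))  # purifying
print(classify_selection(6.695))  # positive
```

In practice a tolerance band around 1 (rather than exact equality) would be used to call neutrality, since estimated ratios are never exactly 1.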
RNA Genes and Non-Coding Regions
The length of the tRNA genes ranged from 54 to 69 bp, and the arms of most tRNAs formed classic Watson-Crick pairs. Moreover, 23 U-G wobble pairs, which have specific functions, were found in the tRNAs [53], as is common in invertebrates [54] (Figure 6). The rrnS gene was located between trnV and trnC, with a length of 757 bp and a C + G content of 28.79%. The rrnL gene (length: 1031 bp) was located between trnV and trnL, with a C + G content of 28.42%. The rRNA region of the new mitogenome was 1788 bp in length, accounting for 11.86% of the whole mitogenome.
The longest non-coding region (length: 459 bp) was flanked by trnI and trnL. This region is responsible for regulating transcription and replication. As the D-loop region of arthropods always has a high A + T content [55], this area could be identified as the putative D-loop region based on its high A + T content (71.46%) compared to the other mitochondrial genes of L. scaber. There were also 17 intergenic regions in the mitogenome, ranging in length from −6 to 169 bp, with the longest interval found between rrnL and trnL. Specific sequences within these spacers may serve as binding sites for transcription termination factors [56].
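Identifying a putative D-loop from its elevated A + T content amounts to a simple base-composition calculation, sketched below. The example sequence is invented for illustration; only the percentage logic mirrors the analysis in the text:

```python
def at_content(seq):
    """Return the A + T fraction of a nucleotide sequence, as a percentage."""
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    return 100.0 * sum(seq.count(base) for base in "AT") / len(seq)

# A non-coding region whose A + T content clearly exceeds the genome-wide
# average (69.25% here) is a candidate D-loop, as with the 71.46% region above.
region = "ATTATAATTGCATTAATTATTAAT"  # hypothetical AT-rich stretch
print(round(at_content(region), 2), "% A+T")
```

A real analysis would slide this calculation along the annotated non-coding regions and compare each value against the whole-mitogenome average.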
Gene Rearrangement
Comparisons of gene arrangements are an important tool for resolving deep phylogenetic relationships [57]. In millipedes, gene rearrangements have occurred both within and between orders (Figure 7). The PCGs of L. scaber shared a similar transcription direction with those of S. bungii, in contrast to most millipedes [58]. Mitogenome arrangements vary significantly across the class Diplopoda, with rearrangements involving both the RNA genes and the PCGs. In terms of minor rearrangements, trnQ-trnC and trnL1-trnP underwent short-distance movements, resulting in the formation of trnQ-trnC-trnL1-trnP gene clusters. Rearrangements at the RNA level are common in arthropods [13,59], but these clusters are novel in millipedes. In terms of major rearrangements, the arrangement in L. scaber can be explained by a single rearranged gene block (rrnS-rrnL-ND1) moving to a position between ND4 and ND5 relative to S. bungii and Narceus annulans. The tandem duplication-random loss (TDRL) model may explain this arrangement [60]. According to this model, specific DNA segments are duplicated in tandem during replication and the duplicates are subsequently lost at random, resulting in either restoration of the original genomic organization or a rearrangement [61]. To gain a deeper understanding of the evolutionary implications of gene arrangements in Diplopoda, further research on mitochondrial genomes from a wider range of taxa is necessary.
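The major rearrangement invoked above, the block rrnS-rrnL-ND1 relocating to a position between ND4 and ND5, can be modeled as a block transposition on a gene-order list. The surrounding gene order below is deliberately simplified and illustrative, not the full mitogenome order:

```python
def move_block(order, block, after_gene):
    """Remove a contiguous gene block from `order` and reinsert it
    immediately after `after_gene` (a TDRL-style net transposition)."""
    i = order.index(block[0])
    assert order[i:i + len(block)] == block, "block must be contiguous"
    rest = order[:i] + order[i + len(block):]
    j = rest.index(after_gene) + 1
    return rest[:j] + block + rest[j:]

# Simplified ancestral-like order (S. bungii-like) vs. the derived order
ancestral = ["ND2", "COX1", "rrnS", "rrnL", "ND1", "ND4", "ND5", "CYTB"]
derived = move_block(ancestral, ["rrnS", "rrnL", "ND1"], "ND4")
print(derived)  # ['ND2', 'COX1', 'ND4', 'rrnS', 'rrnL', 'ND1', 'ND5', 'CYTB']
```

Under the TDRL model this single net move would arise from a tandem duplication of the region followed by random loss of one copy of each duplicated gene.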
Phylogenetic Analysis
Because of the limited mitogenome sequences available for the class Diplopoda, we included only 22 millipede species from 17 genera of Diplopoda, in addition to the newly sequenced species, in the phylogenetic analyses; we selected C. longicornis as the outgroup in the phylogenetic analysis of the class Diplopoda (Figure 8). The results from BI and ML exhibited striking similarities.
According to the current taxonomy, it is hypothesized that the orders Julida, Spirostreptida, and Spirobolida belong to the superorder Juliformia [62], but the specific relationships between these three orders remain controversial. A study based on mitogenomic analysis suggested that Julida and Spirostreptida have a sister-group relationship [63], while a morphological study suggested that Spirobolida and Julida have a sister-group relationship [60]. Studies based on mitochondrial genes, such as COX1 and rrnS, yield divergent perspectives [64,65]. The results obtained here were similar to those of previous phylogenetic studies based on the mitogenome. More mitogenomes will contribute to our understanding of the phylogenetic relationships between millipedes. Furthermore, our results showed a sister-group relationship between N. annularus and S. bungii, which did not agree with the result of Zuo Q. [28]. This might be due to the high levels of variation in the species selected for the two analyses.
Conclusions
In this study, we presented the newly sequenced mitogenome of L. scaber, the first representative mitogenome from the genus Litostrophus. The total length of the L. scaber mitogenome was 15,081 bp, with an A + T content of 69.25%. Mitogenome arrangements vary significantly within the class Diplopoda, and novel arrangements were found in the L. scaber mitogenome with the formation of the trnQ-trnC and trnL-trnP gene clusters. The phylogenetic analysis indicated that L. scaber is sister to S. bungii. The results obtained here were similar to those reported in previous phylogenetic studies based on the mitogenome, and more mitogenomes will contribute to our understanding of the phylogenetic relationships between millipedes. Our study offers new insights into the evolution and phylogenetic relationships of Diplopoda.
Figure 1. Circular map of the mitogenome of Litostrophus scaber. The outer ring represents genes encoded on the majority strand (J-strand), and the inner ring represents genes encoded on the minority strand (N-strand). Genes are shown in different colors.
Figure 2. Three-dimensional scatter plots of the AT skew, GC skew, and A + T content of 16 diplopod mitochondrial genomes. Balls of different colors correspond to different orders.
Figure 3. Graphical map of the BLAST results showing the nucleotide identity between the Litostrophus scaber mitochondrial genome and those of 16 other diplopod species. Different colors correspond to different species.
Figure 4. RSCU results for 13 species of Diplopoda, including the orders Glomeridesmida, Julida, Spirobolida, Spirostreptida, Platydesmida, and Polydesmida. Different colors correspond to different third codons.
Figure 5. Variation in mitochondrial genes and the evolutionary characteristics of Diplopoda. (A) Sliding window analysis within the order Spirobolida, revealing the nucleotide diversity (Pi). (B) P-distance and Ka/Ks values of mitochondrial gene sequences within the order Spirobolida, revealing its evolutionary characteristics. (C) Sliding window analysis among Diplopoda, revealing Pi values. (D) P-distance and Ka/Ks values of mitochondrial gene sequences among Diplopoda, revealing their evolutionary characteristics.
Figure 6. Secondary structure of the 22 tRNA genes from the Litostrophus scaber mitochondrial genome.
Figure 7. Comparison of mitogenome arrangements between different Diplopoda species. (A) Gene orders of the different types of mitogenome arrangement for the species used in this study. Gene segments are not drawn to scale. (B) The hypothetical process of the transposition of the gene block rrnS-rrnL-ND1 in the TDRL model. "X" indicates the partially random loss of duplicated genes.
Figure 8. ML (A) and BI (B) trees based on the nucleotide datasets of 13 PCGs from the mitogenomes of 23 species. All bootstrap values of the branches are indicated.
Table 1. List of complete mitogenomes used in this study.
Table 2. Gene annotations of the complete mitogenome of L. scaber.
Inverse Mixed-Solvent Molecular Dynamics for Visualization of the Residue Interaction Profile of Molecular Probes
To ensure efficiency in drug discovery and development, the application of computational technology is essential. Although virtual screening techniques are widely applied in the early stages of drug discovery research, the computational methods used in lead optimization to improve activity and reduce the toxicity of compounds are still evolving. In this study, we propose a method to construct the residue interaction profile of a chemical structure used in lead optimization by performing "inverse" mixed-solvent molecular dynamics (MSMD) simulations. In contrast to constructing a protein-based atom interaction profile, we constructed a probe-based protein residue interaction profile using MSMD trajectories. It provides the profile of the preferred protein environments of probes without requiring co-crystallized structures. We assessed the method using three probes: benzamidine, catechol, and benzene. The residue interaction profile of each probe obtained by MSMD was a reasonable physicochemical description of general non-covalent interactions. Moreover, comparison with X-ray structures containing each probe as a ligand showed that the map of the interaction profile matches the arrangement of amino acid residues in the X-ray structure.
Introduction
Drug development is a cost-intensive and time-consuming process. The cost is estimated to be more than USD two billion, and it may take 10-15 years for a new drug to reach the market [1]. To reduce costs, computational techniques have been applied in various drug development studies, and many studies have successfully discovered new therapeutic compounds using these techniques [2][3][4][5][6][7][8][9][10]. In the early stages of drug discovery, virtual screening (VS) techniques such as docking simulations are frequently used to discover seed compounds [11][12][13]. Kelly et al. showed that high-throughput virtual screening (HTVS) produced more than 10-fold higher hit rates compared to traditional high-throughput screening (HTS) [14]. Computational methods for lead optimization to improve the activity of compounds have also been proposed, such as MP-CAFEE [15], free energy perturbation (FEP) [16,17], and quantum mechanical (QM) methods [18]. These methods focus on estimating the binding affinity of given candidate compounds at considerable computational cost; however, methods that guide or propose the next substituent of hit compounds in the lead optimization phase are still evolving.
To improve activity and selectivity, structural optimization that is strongly and specifically directed at the target protein is necessary. Therefore, the mechanism by which proteins recognize compounds and the prediction of protein-ligand binding modes are important in lead optimization. Several studies have comprehensively analyzed interaction patterns from protein-ligand complex structures registered in the Protein Data Bank (PDB) [19]. Imai et al. focused on 14 polar and aromatic amino acid side chains and carried out contact analysis for protein-ligand complex crystal structures in the PDB [20]. Wang et al. generated 4032 fragments from 71,798 ligands and obtained fragment-residue interaction profiles [21]. Furthermore, Kasahara et al. reported that 63.4% of ligand atoms exhibited one or more interaction patterns, 25.7% interacted without patterns, and the rest had no direct interaction [22]. However, these interaction patterns have been analyzed only from known databases, and the analysis is limited to well-characterized functional groups. In addition, the interaction patterns of the functional groups investigated in these studies were observed while the groups were bound to the target as part of a compound, and the protein environments matched by functional group fragments themselves have not been directly investigated.
Mixed-solvent molecular dynamics (MSMD) simulations involve MD in the presence of explicit water molecules mixed with probe molecules or functional group fragments, and have been applied, for example, to hotspot detection [23,24] and binding site identification [25,26]. MSMD considers the flexibility of proteins and can discover hotspots where probes can bind. These hotspots indicate the protein environments preferred by a specific probe. Thus, by analyzing the interaction patterns between the probe and the protein environments sampled by MSMD, it is possible to analyze the binding ability and interaction pattern of an individual probe, rather than of a functional group that is part of a compound. In particular, by applying drug-like probes with few reported crystal structures to MSMD, the environments of the protein binding sites to which such unique probes bind can be sampled.
In this article, we propose "inverse" MSMD for analyzing a probe's preferred interaction patterns. First, MSMD simulations with 15 diverse proteins were performed to sample the various protein residue environments preferred by the probe. The residue environments were then integrated into a residue interaction profile, which was subsequently visualized. We assessed the proposed analysis using three probes with an aromatic ring: benzamidine, catechol, and benzene. Their residue interaction profiles provided a physicochemical account of general non-covalent interactions, such as electrostatic interactions, hydrogen bonding interactions, and amide-π stacking interactions. Moreover, the profiles were consistent with experimental co-crystallized structures, which supports the ability of the proposed method to detect the actual interaction patterns of functional groups. This is the first proposal and demonstration of inverse MSMD.
Preparation of Proteins
The selection of proteins used in MSMD sampling is crucial for obtaining the various residue environments used to construct a residue interaction profile. We chose 15 proteins previously selected by Soga et al. [27], as they collected the proteins with diversity in mind. A list of the proteins is shown in Table 1. All proteins were pre-processed using the following procedure: the Protein Preparation Wizard and Prime [28] in Schrodinger Suite 2020-3 (Schrodinger, Inc., New York, NY, USA) were used to fill in missing loops, side chains, and atoms for all of the selected proteins. N- and C-termini were capped using N-methyl amide (NME) and acetyl (ACE) capping groups, respectively. Subsequently, ligands, co-factors, and additive molecules were removed. Hydrogens were placed with consideration of hydrogen bonding and ionization states at pH 7 with PROPKA [29]. Water molecules with fewer than two hydrogen bonds to the protein in the crystal structures were removed, followed by structure optimization with the OPLS3e force field. Note that the N-terminus of glycoside hydrolase (PDB ID: 1H4G) and the C-terminus of endo-1,4-β-xylanase A precursor (PDB ID: 1E0X) were removed before this procedure because of a non-standard amino acid and missing main-chain atoms, respectively.
Preparation of Probes
The probe of interest was then pre-processed. The restrained electrostatic potential (RESP) procedure in the Antechamber module of AmberTools18 [30] was employed to fit the partial charges to the electrostatic potential, which was calculated using Gaussian 16 Rev B.01 [31]. First, all probe structures were optimized at the B3LYP/6-31G(d) level. Then, the electrostatic potentials were calculated at the HF level using the optimized structures. The centers of the electrostatic potentials were placed at the center of each atom. Additional force field parameters for the probes were derived using the general AMBER force field 2 (GAFF2), unless otherwise stated.
Mixed-Solvent Molecular Dynamics (MSMD)
We conducted MSMD using the protocol of EXPRORER [32]. It is worth noting that the initial positions of the probes affect the results, especially in short MD simulations, and this initial position dependency influences the convergence of the results of the analysis. To achieve efficient sampling, the following protocol was independently performed 20 times with different initial probe coordinates. The procedure was divided into three steps, as described below.
Initial System Generation
First, the probes were randomly placed around the protein at a concentration of 0.25 M using PACKMOL 18.169 [33]. The high concentration enables effective sampling of residue environments. Second, the system was solvated with water using the LEaP module of AmberTools18. The Amber ff14SB force field and the TIP3P model [34] were used for the protein and water molecules, respectively. Additionally, a Lennard-Jones force field term with the parameters (ε = 10⁻⁶ kcal/mol; Rmin = 20 Å) was introduced only between the centers of the probes to prevent their aggregation.
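The anti-aggregation term above is an ordinary Lennard-Jones interaction with a very shallow well (ε = 10⁻⁶ kcal/mol) and a large Rmin (20 Å), which yields a weak, long-range repulsion between probe centers. A sketch of its evaluation in the AMBER Rmin convention follows; this is our own illustration of the functional form, not the authors' simulation code:

```python
def lj_energy(r, eps=1e-6, r_min=20.0):
    """Lennard-Jones energy in kcal/mol, AMBER Rmin convention:
    V(r) = eps * ((Rmin/r)**12 - 2*(Rmin/r)**6)."""
    x = (r_min / r) ** 6
    return eps * (x * x - 2.0 * x)

# Positive (repulsive) when probe centers come close, so probes stay apart;
# the well depth at r = Rmin is only -eps, i.e. negligible attraction.
print(lj_energy(5.0) > 0)   # True
print(lj_energy(20.0))      # -1e-06
```

Because ε is tiny, the term barely perturbs the physics of probe-protein binding while still discouraging probe clustering at the simulated concentration.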
Minimization, Heating, and Equilibration
After the construction of the initial structures, the systems were minimized for 200 steps using the steepest descent algorithm with harmonic position restraints on the solute heavy atoms (force constant, 10 kcal/mol/Å²), and then minimized for a further 200 steps using the steepest descent algorithm without any position restraints. After minimization, each system was heated gradually to 300 K during 200 ps constant-NVT MD simulations with harmonic position restraints on the solute heavy atoms (force constant, 10 kcal/mol/Å²). During the subsequent 800 ps constant-NPT MD simulations at 300 K and 10⁵ Pa, the force constants of the position restraints were gradually reduced to 0 kcal/mol/Å². The P-LINCS algorithm [35] was used to constrain all bond lengths involving hydrogen atoms, which allowed the use of 2 fs time steps. Temperature and pressure were controlled using a stochastic velocity-rescaling (V-rescale) algorithm [36][37][38] and a Berendsen barostat [39], respectively. Simulations were performed using GROMACS 2019.4 [40]. The ParmEd module [41] was used to convert the AMBER parameter/topology file format to that used by GROMACS.
Production Run
After equilibration, 40 ns constant-NPT MD simulations at 300 K and 10⁵ Pa without position restraints were performed. All settings were the same as in the equilibration step, except that a Parrinello-Rahman barostat [42] was used instead of a Berendsen barostat. Snapshots were taken every 10 ps during the 20-40 ns interval; thus, 2000 snapshots were produced per MSMD simulation.
Inverse MSMD: Construction of Residue Interaction Profile
The workflow for constructing a residue interaction profile from the MSMD simulation is shown in Figure 1. The detailed procedure is described in this section.
Determination of Preferable Protein Surfaces
The concentration of probes in the MSMD simulation was unrealistically high, which could cause artificial interaction profiles, even though such concentrations are sometimes used in other MSMD protocols [43]. To omit such artifacts, we limited the residue environment sampling based on the binding preference of the probes. First, the spatial probability distribution map (PMAP) was calculated using the following procedure: atoms in the snapshots were binned into 1 Å × 1 Å × 1 Å grid voxels, and the voxel occupancy of probe heavy atoms was calculated. To focus on the protein surface, V was defined as the set of voxels within 5 Å of the protein atoms, and the values at voxels v ∉ V were discarded. Then, the values were scaled such that the summation over the voxels in V was 1.0.
Probes were placed uniformly throughout the system, resulting in an underestimation of the probability at deep pockets where access is difficult. Thus, to reduce this underestimation, in the second step the largest value among the 20 PMAPs generated from the independent trajectories was stored for each voxel in V. The product was named max-PMAP. Even for a deep pocket that a probe binds strongly but rarely approaches, a considerable value of voxel v of max-PMAP was observed if the binding occurred at least once. Note that the summation over the voxels V of max-PMAP was greater than 1.0, although it originated from a probability distribution. Finally, we defined the preferable protein surface of a probe as the voxels with max-PMAP values equal to or greater than 0.2, a threshold determined by visual inspection. Note that the preferable protein surface includes surfaces exposed to solvent as well as deeper binding pockets (Figure 2). These regions indicate the protein surfaces where the probe stably exists.
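The two-step PMAP/max-PMAP construction described above can be sketched in plain Python. This is a toy-scale sketch under simplifying assumptions: real maps cover the full protein grid, are built from thousands of MD snapshots, and the voxel keys come from actual coordinates:

```python
def pmap(coords, surface_voxels):
    """Occupancy of probe heavy atoms on a 1 Å grid, restricted to voxels
    within 5 Å of the protein (`surface_voxels`) and normalized to sum to 1."""
    counts = {}
    for x, y, z in coords:
        v = (int(x), int(y), int(z))
        if v in surface_voxels:  # discard voxels off the protein surface
            counts[v] = counts.get(v, 0) + 1
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()} if total else {}

def max_pmap(pmaps, threshold=0.2):
    """Voxel-wise maximum over independent runs; voxels whose max-PMAP value
    is >= threshold define the preferable protein surface."""
    m = {}
    for p in pmaps:
        for v, val in p.items():
            m[v] = max(m.get(v, 0.0), val)
    surface = {v for v, val in m.items() if val >= threshold}
    return m, surface

# Two independent "runs", each finding the probe in a different pocket
surf_voxels = {(1, 1, 1), (2, 2, 2)}
p1 = pmap([(1.2, 1.3, 1.1)], surf_voxels)
p2 = pmap([(2.5, 2.5, 2.5)], surf_voxels)
m, preferable = max_pmap([p1, p2])
print(sum(m.values()))  # 2.0: max-PMAP can sum to more than 1, as noted above
```

Taking the voxel-wise maximum (rather than the mean) is what rescues rarely visited deep pockets: one binding event in one run is enough to mark the voxel.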
Extraction of Residue Environments at Preferable Protein Surfaces
Next, the protein residue environments around the probe molecules were extracted. As previously mentioned, we sampled poses on the preferable protein surfaces of the probe. For each snapshot, probes located on the preferable protein surfaces were detected, and each such probe was extracted together with the amino acid residues around it. Here, we defined "amino acid residues around the probe" as protein residues with at least one heavy atom within 4 Å of any heavy atom of the probe molecule (Figure 3).
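The 4 Å heavy-atom criterion for "amino acid residues around the probe" reduces to a pairwise distance check. The coordinates and residue names below are invented for illustration; a real implementation would read heavy-atom coordinates from the MD snapshots (e.g., via Biopython, which the authors' scripts use):

```python
import math

def residues_near_probe(residues, probe_atoms, cutoff=4.0):
    """Return names of residues with at least one heavy atom within `cutoff`
    Å of any heavy atom of the probe. `residues` maps name -> [(x, y, z)]."""
    def close(a, b):
        return math.dist(a, b) <= cutoff
    return [name for name, atoms in residues.items()
            if any(close(a, p) for a in atoms for p in probe_atoms)]

probe = [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0)]  # two heavy atoms of the probe
residues = {
    "ASP35": [(2.0, 1.0, 0.0)],   # ~2.2 Å from the probe: included
    "LYS120": [(8.0, 8.0, 8.0)],  # far away: excluded
}
print(residues_near_probe(residues, probe))  # ['ASP35']
```

For the full trajectories this check is repeated for every snapshot in which the probe sits on the preferable protein surface.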
Description of Spatial Statistics for Each Type of Residue
Finally, a residue interaction profile, i.e., a set of spatial distributions of residues around a probe, was constructed with the following steps. All the sampled residue environments were aligned by referring to the designated atoms of the probe molecule. The Cβ atoms of each type of residue were spatially binned into 1 Å × 1 Å × 1 Å grid voxels. The residue types used in this study are listed in Table 2. Note that we used Cβ atoms rather than side-chain atoms (e.g., the nitrogen atoms of Lys and the aromatic carbon atoms of Phe) because our aim was to analyze the protein environment, and the direction of the tips of the side chains changes easily.
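After alignment to the probe's reference frame, binning the Cβ atoms of each residue type reduces to a per-type 3-D histogram. A minimal sketch follows; the residue-type labels abbreviate the grouping of Table 2, and the coordinates are invented:

```python
from collections import defaultdict

def bin_cbeta(environments, voxel=1.0):
    """Count Cβ positions per residue type on a `voxel` Å grid.
    `environments` yields (residue_type, (x, y, z)) pairs that are already
    aligned to the probe's reference frame."""
    profile = defaultdict(lambda: defaultdict(int))
    for rtype, (x, y, z) in environments:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        profile[rtype][key] += 1
    return profile

env = [("acidic", (3.2, 0.5, -1.1)),
       ("acidic", (3.7, 0.9, -1.3)),
       ("hydrophobic", (-2.4, 1.1, 0.2))]
prof = bin_cbeta(env)
print(prof["acidic"][(3, 0, -2)])  # 2: both acidic Cβ atoms fall in one voxel
```

Rendering each per-type grid as an isosurface around the probe gives the visual residue interaction profile described in the Results.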
Implementation
The scripts used to generate a residue interaction profile of a probe were implemented in Python with Biopython [44]. The implementation is included in the GitHub repository EXPRORER_MSMD (https://github.com/keisuke-yanagisawa/exprorer_msmd, accessed on 22 April 2022) under the MIT license.
Results
Figure 3. Definition of amino acid residues around the probe. Yellow residues are "amino acid residues around the probe". Amino acid heavy atoms within 4 Å from any heavy atom of the probe molecule (red molecule) are shown as balls.
We tested the proposed method with benzamidine, catechol, and benzene. Ring systems are key scaffold components in medicinal chemistry [45]; therefore, we selected these probes with a ring system as examples of available probes. Note that since the GAFF2 force field incorrectly parameterized the amidino group of benzamidine, we manually assigned "nc" and "cc" GAFF2 atom types for nitrogen and carbon atoms of the amidino group, respectively, to maintain planarity of the functional group. Additionally, structural alignment of the probes was performed with all carbon atoms for benzamidine and with all heavy atoms for catechol and benzene. Figure 4 shows the residue interaction profile of benzamidine. Benzamidine has a basic group, and the residue interaction profile correctly and clearly depicted the position of acidic residues. Furthermore, profiles of multiple types of residues were detected around the amidino group, indicating hydrogen bonds between the main chains and the amidino group (Figure 5). For the phenyl group, profiles of acidic, hydrophilic, and hydrophobic residues were detected in the vertical direction of the phenyl group. This suggests that the protocol captured amide-π stacking [46]. Therefore, these profiles can provide a physicochemical account of general non-covalent interactions, and the results demonstrate the validity of the proposed method.
Figure 4. The residue interaction profile of benzamidine. Green, cyan, gray, blue, and red meshes indicate profiles of hydrophobic, hydrophilic, aromatic, basic, and acidic residues, respectively. The molecule structure is that of benzamidine.
Catechol: Interaction Analysis of Hydroxy Groups
Catechol exists as a substructure in several ligands, such as dopamine. It has two hydroxy groups and a phenyl group. Although, unlike benzamidine, it does not carry any net charge, its hydroxy groups can form hydrogen bonds, resulting in stable binding to proteins.
The residue interaction profile of catechol is shown in Figure 6. Interestingly, the acidic group showed clear localization compared to the other residue groups. This indicates the possibility of detecting not only obvious interactions but also non-intuitive interactions. Further analysis is provided in the Discussion section. Additionally, the profiles of the phenyl group were similar to those of benzamidine; however, the areas of localization were wider than those of benzamidine. Ghanakota et al. showed that the wider localization of probe atoms can be converted to entropic terms [47]. Thus, the present observations may indicate weaker binding of a catechol substructure to proteins.
Figure 6. The residue interaction profile of catechol. Green, cyan, gray, blue, and red meshes indicate profiles of hydrophobic, hydrophilic, aromatic, basic, and acidic residues, respectively. The molecule structure is that of catechol.
Benzene: Interaction Analysis of Phenyl Group Itself
Benzamidine and catechol both have phenyl groups, and their phenyl groups show clear residue interaction profiles. On the other hand, benzene did not display a profile in the vertical direction (Figure 7). Instead, weak profiles were detected in the horizontal direction. This indicates that interaction through only a single amide-π stacking is insufficiently stable at the surface of proteins.
Figure 7. The residue interaction profile of benzene. Green, cyan, gray, blue, and red meshes indicate profiles of hydrophobic, hydrophilic, aromatic, basic, and acidic residues, respectively. The molecule structure is that of benzene.
Comparison to Co-Crystallized Structures
We compared the constructed residue interaction profiles with crystal structures to further verify the appropriateness of the method.
Benzamidine
To demonstrate the validity of the residue interaction profile obtained by MSMD, the crystal structures of kinase CK2 (1LPU) and trypsin (2ZPS) with the residue interaction profile are shown in Figure 8. These crystal structures include benzamidine and can be compared to the preferred residue environment obtained by wet experiments. In the residue interaction profile of benzamidine obtained by MSMD, acidic residues are widely present near the amidino group. In kinase CK2 and trypsin, Glu81 and Asp170 are located near the profile of acidic residues. In kinase CK2, Val53, Ile66, and Ile174 are in the profile of hydrophobic residues above and below the aromatic ring of benzamidine. Notably, the profile of hydrophilic residues above and below the aromatic ring was suggested by MSMD. Ser171 of trypsin is located near the aromatic ring of benzamidine, which overlaps with the profile of hydrophilic residues. These two examples of X-ray structures, which are not included in the set of proteins for MSMD simulation, suggest that the residue interaction profile of a probe is reasonable and has generalization performance for any protein.
Catechol
To validate the residue interaction profile of catechol obtained by MSMD, we compared it with X-ray structures that included catechol molecules. Figure 9 shows the X-ray structures of catalase and protocatechuate 3,4-dioxygenase, including catechol. On superimposing the residue interaction profile of catechol over the X-ray structure, Asp110 of catalase is in the profile of acidic residues. The hydrophobic residue interaction profile above and below the aromatic ring contains Leu150 and Leu181 of catalase and Leu35 of protocatechuate 3,4-dioxygenase. In addition, the crystal structure of protocatechuate 3,4-dioxygenase showed that Lys355 overlapped with the interaction profile of the basic residues. The amino acid residues around catechol in these X-ray structures are consistent with the residue interaction profile obtained by MSMD and support the simulation results. Again, these two proteins were not included in the set of proteins for MSMD simulation, which indicates the generalization performance of the profile and of the proposed method itself.
Detection of Aromatic Residues' Profile
Despite several agreements between the X-ray structures and residue interaction profiles obtained from MSMD simulation, the profiles of aromatic residues were not sufficiently detected, even though the same threshold was used for all residue types. One possible reason is the size of the side chains. Aromatic side chains have a relatively large structure compared to other types of residues, leading to a wider placement of the Cβ atoms. In this study, we aimed to show the residue-based interaction profile; thus, we focused on the Cβ atoms, which are common among almost all residues. However, it is also important to know the interaction profiles of specific atoms of side chains. The implementation already has the functionality to generate the profiles of any specific atoms, as well as of Cβ atoms. An exemplary interaction profile of aromatic rings is shown in Figure S1 in the Supplementary Materials. Although it is a preliminary visualization and the signal of the profile is not stronger than that of the Cβ atoms of acidic residues, the visualization will suggest π-π stacking more directly and will be informative for lead optimization.
Consideration of Binding Stability
In this study, we sampled residue environments at preferable protein surfaces to omit artifacts of MSMD simulation. The above results indicate that the sampled residue environments were informative; however, preferable protein surfaces can be classified into two types:
1. Strong binding affinity between a probe molecule and the protein surface, which allows a single probe molecule to bind stably to the surface.
2. Frequent access of probe molecules to the protein surface, which makes multiple probe molecules bind to the protein surface alternately.
The aim of this sampling was to obtain a residue interaction profile in which a probe molecule binds stably. The protein surfaces of the first type were suitable for this purpose. The surfaces of the second type might be important in providing access to the binding site, but the binding affinity with the probe may not necessarily be strong.
To obtain more appropriate residue interaction profiles, we tried to filter out residue environments whose probes were not stably situated. Here, we defined a stable probe molecule as one that had moved less than 3 Å from its position 500 ps earlier. The selection of stable samples omitted 77.4% of the residue environments of benzamidine, 88.0% of those of catechol, and 94.5% of those of benzene. However, Figure 10 reveals only a slight difference with and without filtering. Although filtering slightly enhanced the localization of residues, the results indicated that there was no significant difference between stable surfaces and accessible surfaces.
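The stability filter described above (a probe counts as stable if it moved less than 3 Å from its position 500 ps earlier) can be sketched as follows; the trajectory data and frame spacing are hypothetical.

```python
import math

# Keep a frame only if the probe's position has moved less than CUTOFF
# relative to its position LAG_PS earlier in the trajectory.

CUTOFF = 3.0      # Angstroms
LAG_PS = 500.0    # look-back time in picoseconds

def stable_frames(trajectory, dt_ps, cutoff=CUTOFF, lag_ps=LAG_PS):
    """trajectory: list of (x, y, z) probe positions, one per frame.
    dt_ps: time between frames in ps.
    Returns indices of frames whose probe moved < cutoff over the lag."""
    lag = int(round(lag_ps / dt_ps))   # lag expressed in frames
    kept = []
    for i in range(lag, len(trajectory)):
        if math.dist(trajectory[i], trajectory[i - lag]) < cutoff:
            kept.append(i)
    return kept

# Hypothetical frames every 100 ps; the probe drifts away after frame 6.
traj = [(0.0, 0.0, 0.0)] * 6 + [(0.5, 0.0, 0.0), (8.0, 0.0, 0.0)]
print(stable_frames(traj, dt_ps=100.0))   # -> [5, 6]
```

In practice the probe position would be its centroid after the structural alignment step, and periodic boundary conditions would need to be handled before measuring displacement.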
Substituent Evaluation with Residue Interaction Profiles
As mentioned in the Introduction, structural optimization that is strongly and specifically directed at a target protein is necessary in lead optimization. Residue interaction profiles can help the optimization step by suggesting whether a substituent is feasible for a protein binding pocket space. For instance, superimposing the interaction profile on a co-crystallized structure indicates how well the existing substituent matches the protein surface environment (Figure S2). Furthermore, residue interaction profiles of dozens of probes enable the establishment of a recommendation system by computationally substituting an initial compound and superimposing the profiles of the corresponding substituents or probes. Therefore, constructing residue interaction profiles of many probes with variation will enhance the impact on lead optimization processes.
Conclusions
In this article, we proposed inverse MSMD for analyzing a probe's preference of interaction patterns. Unlike analysis of known data, such as X-ray, cryo-EM, and NMR structures, the method can process arbitrary probes, and the results are free from any chemical context of molecules. We assessed the method using benzamidine, catechol, and benzene, resulting in good agreement with the experimental structures. This method indicates where protein surfaces provide suitable binding sites for a probe, and this, in turn, can be applied to lead optimization by suggesting substituents based on the vacant spaces of a binding pocket. More precisely, for a target protein surface where the next substituents on hit compounds are placed, the method provides information on which substituents are better in accordance with the residue interaction profiles of multiple probes or possible substituents. The next step in our method will involve proposing a quantitative metric of how well the residue environment and the residue interaction profile match. Furthermore, the construction of a residue interaction profile database is effective for the computational lead optimization process because the profiles can be applied to arbitrary proteins.
Preservation and Recycling of Crust during Accretionary and Collisional Phases of Proterozoic Orogens: A Bumpy Road from Nuna to Rodinia
Zircon age peaks at 2100–1650 and 1200–1000 Ma correlate with craton collisions in the growth of supercontinents Nuna and Rodinia, respectively, with a time interval between collisions mostly <50 Myr (range 0–250 Myr). Collisional orogens are of two types: those with subduction durations <500 Myr and those ≥500 Myr. The latter group comprises orogens with long-lived accretionary stages between the Nuna and Rodinia assemblies. Neither orogen age nor the duration of either subduction or collision correlates with the volume of orogen preserved. Most rocks preserved date to the pre-collisional, subduction (ocean-basin closing) stage and not to the collisional stage. The most widely preserved tectonic setting in Proterozoic orogens is the continental arc (10%–90%, mean 60%), with oceanic tectonic settings (oceanic crust, arcs, islands and plateaus, serpentinites, pelagic sediments) comprising <20% and mostly <10%. Reworked components comprise 20%–80% (mean 32%) and microcratons comprise a minor but poorly known fraction. Nd and Hf isotopic data indicate that Proterozoic orogens contain from 10% to 60% juvenile crust (mean 36%) and 40%–75% reworked crust (mean 64%). Neither the fraction nor the rate of preservation of juvenile crust is related to either the collision age or the duration of subduction. Regardless of the duration of subduction, the amount of juvenile crust preserved reaches a maximum of about 60%, and 37% of the volume of juvenile continental crust preserved between 2000 and 1000 Ma was produced in the Great Proterozoic Accretionary Orogen (GPAO). Pronounced minima occur in the frequency of zircon ages of rocks preserved in the GPAO: at 1600–1500 Ma in Laurentia, 1700–1600 Ma in Amazonia, and 1750–1700 Ma in Baltica. If these minima are due to subduction erosion and delamination, as in the Andes over the last 250 Myr, approximately one third of the volume of the Laurentian part of the GPAO could have been recycled into the mantle between 1500 and 1250 Ma.
This may have enriched the mantle wedge in incompatible elements and water, leading to the production of felsic magmas responsible for the widespread granite-rhyolite province of this age. A rapid decrease in global Nd and detrital zircon Hf model ages between about 1600 and 1250 Ma could reflect an increase in the recycling rate of juvenile crust into the mantle, possibly in response to partial fragmentation of Nuna.
Introduction
Although it is now well established that U/Pb ages from zircons show an episodic distribution with age [1][2][3][4][5][6], the origin of this distribution is still not understood and is a subject of ongoing debate. Although the ages of peaks vary with geographic location and also between igneous and detrital zircon populations, major peak clusters on a global scale are recognized at 2700, 1900, 1000, 600 and 300 Ma [4]. The standard explanation of these peak clusters has been that they represent periods of enhanced production of continental crust together with remelting and reworking of older crust [7][8][9]. However, as pointed out by several investigators in recent years, there is a striking correlation of major peaks with the assembly of supercontinents, and this has led to the suggestion that the peaks are not really crustal production peaks, but crustal preservation peaks [4,10,11].
Major orogens fall into one of two groups: accretionary and collisional [12]. A third type of orogen, the intracratonic orogen (such as the Petermann orogen in Australia), is not important in the production or preservation of juvenile crust and is not considered here. As ocean basins close, accretionary orogens are active on one or both continental margins, and final closure is often marked by a continent-continent collision producing a collisional orogen. Although crust can be destroyed (by uplift, erosion, subduction, etc.) during continental collisions, it is during this stage that most crust is also preserved. What is not well known is how, when, and where crust is preserved in collisional orogens. Hawkesworth et al. [11] suggested that preservation of igneous rocks reaches a maximum soon after collision begins, whereas others have suggested that pre-collisional igneous rocks from the ocean-basin closing stage are the dominant rocks preserved [13]. Geochemical data from oceanic basalts and subduction-related volcanics have been interpreted to support new growth of continental crust during the collisional orogenic stage [14]. At the other extreme are those who have made a case for losses of continental crust by delamination during the collisional stage [15]. Another feature of zircon age spectra that is not well understood is the broad minimum in ages between times of supercontinent formation. Do these minima represent true lows in juvenile crustal production, or could they be due to enhanced recycling of crust into the mantle, and thus also be related to crustal preservation? Still another problem in understanding continental growth is the amount of material underplated beneath continents by magmatic processes. This material is generally mafic and does not produce significant contributions to detrital zircon populations, and hence can be significantly underestimated in crustal growth models.
In this paper, the focus is on some of these questions relative to the time period of 2200 to 1000 Ma, which involves the formation of two supercontinents: Nuna at 1900–1800 Ma and Rodinia at 1200–1000 Ma. There are 50 orogens that play a role in the formation of one or both of these supercontinents. Data for these orogens are compiled in Appendix 1 and shown on diagrammatic reconstructions of the supercontinent Nuna.
Orogens and the Growth of Supercontinents
Orogen evolution can be considered in two stages: the onset of subduction and the onset of collision (Figures 1–3; Appendix 1). The onset of subduction is a maximum age for the onset of closing of an ocean basin, since subduction may begin before actual closing of an ocean basin begins. Available data indicate that post-Archean age peaks ≤2100 Ma are controlled by data from both collisional phases and accretionary (subduction) phases of orogens, whereas those >2100 Ma reflect chiefly or entirely the onset of subduction (such as the Birimian-Transamazonian, Magondi-Kheis, Sutam and Trans-North China orogens) (Appendix 1). As previously recognized [3,4], collisional peaks between 2100 and 1900 Ma correlate with the growth of Nuna, whereas those at 1200–1000 Ma correspond to the growth of Rodinia (Figure 1). The earliest phase of Nuna assembly is recorded by collisions in the Luizian, West Congo, Birimian-Transamazonian, Limpopo, Volga-Don, Eburnean, and Magondi-Kheis orogens between 2150 and 2050 Ma, and much of this action occurred in the Congo, West Africa and Tanzania cratons (Appendix 1; Figure 3). What appears to be an orogen of local extent in the Borborema province of eastern Brazil may actually represent one of the oldest Nunian collisions at 2350 Ma. Most of the action in the assembly of Nuna, however, occurred in a relatively short period of time between 1900 and 1800 Ma. The two orogens with collisional onsets around 1600 Ma are in Australia and Antarctica (Olarian and Kararan; Figure 3), and appear to record the final amalgamation of Nuna. If Nuna fragmented, it occurred in the time interval of 1500–1300 Ma, before or overlapping with the Albany-Fraser and Musgrave collisions in Australia at 1345–1330 Ma. There is a suggestion of a short-lived minimum at about 1350–1200 Ma in both the zircon age spectra and in model Nd and Hf ages, which could correspond to the fragmentation (or more likely, partial fragmentation) of Nuna (Figures 4 and 5; [4]). Grenvillian collisions began at 1200 Ma, marking the onset of the formation of Rodinia. The last stages in the formation of Rodinia are recorded by the Eastern Ghats collision in India at 1085 Ma and the Kibaran collision in East Africa at 1000 Ma.
The time interval between collisions ranges from zero to 250 Myr. Most intervals are <50 Myr, with a mean of 26 Myr and a median of 10 Myr. As shown in Figure 6, there is no apparent relationship between the time interval between collisions and collision age. There are four notably long time intervals (>100 Myr): Borborema to Luizian (250 Myr), Albany-Fraser to Eastern Ghats (220 Myr), Penokean-Yavapai-Mazatzal to Musgrave (130 Myr) and Nimrod-Ross to Olarian (110 Myr). As expected, the shortest times between collisions (<10 Myr) correlate with either the growth of Nuna at 1900–1800 Ma or the growth of Rodinia at 1200–1000 Ma.
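The interval statistic described above (gaps between successive collision-onset ages, summarized by mean and median) can be sketched as follows; the ages below are hypothetical illustrations, not the compiled values from Appendix 1.

```python
from statistics import mean, median

# Gaps between successive collision-onset ages. Ages are in Ma and decrease
# toward the present, so sorting oldest-first makes each gap a simple
# difference in Myr.

def collision_intervals(onset_ages_ma):
    """onset_ages_ma: collision-onset ages in Ma, in any order.
    Returns the list of gaps (Myr) between successive collisions."""
    ages = sorted(onset_ages_ma, reverse=True)   # oldest first
    return [older - younger for older, younger in zip(ages, ages[1:])]

ages = [2100, 1900, 1890, 1880, 1850, 1200, 1100, 1000]  # hypothetical, Ma
gaps = collision_intervals(ages)
print(gaps)                      # -> [200, 10, 10, 30, 650, 100, 100]
print(mean(gaps), median(gaps))
```

With clustered supercontinent-assembly ages like these, the median gap sits well below the mean, which mirrors the paper's observation (median 10 Myr versus mean 26 Myr) that a few long intervals dominate the average.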
Orogen Durations
To better understand how and when crust is preserved in orogens, it is useful to examine the duration of both the subduction (ocean-basin closing) and collisional stages, estimates of which are compiled in Appendix 1. The onset of subduction is equated with the ages of the oldest arc volcanic and plutonic rocks, and the onset of collision with the oldest syntectonic granitoids and structures associated with a continent-continent collision. The terminations of subduction and collision are more difficult to estimate, since these processes end diachronously as colliding continents or terranes are annealed to each other and delamination ceases. This termination is equated with the oldest post-tectonic plutons, dykes and structures, which gives a maximum age for the end of collision. Duration of subduction is estimated as the difference between the age of the onset of subduction and the onset of collision, which is a maximum estimate for the duration of closing of an ocean basin, since subduction associated with the accretionary stage may have started before the actual closing of the ocean basin. Subduction duration ranges from about 20 to 900 Myr (mean = 192 Myr) and collision duration from 20 to 170 Myr (mean = 54 Myr) (Figure 7). Only eight examples have collisional durations ≥100 Myr, whereas about 30% have collisional durations ≤30 Myr. In terms of subduction duration, orogens fall into two groups: those with subduction durations <500 Myr (mean = 125 Myr) and those with durations of ≥500 Myr (mean = 720 Myr) (Figure 7).
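The duration bookkeeping above, together with the preservation rate used later in the paper (juvenile crust volume preserved per year of subduction), can be sketched as follows; the orogen record and juvenile-crust volume below are hypothetical, not values from Appendix 1.

```python
# Ages are in Ma and decrease toward the present, so durations are simple
# differences: subduction duration = subduction onset - collision onset,
# collision duration = collision onset - collision termination.

LONG_LIVED_MYR = 500  # threshold separating the two subduction-duration groups

def durations(subduction_onset_ma, collision_onset_ma, collision_end_ma):
    """All arguments in Ma. Returns (subduction_myr, collision_myr, group)."""
    subduction = subduction_onset_ma - collision_onset_ma
    collision = collision_onset_ma - collision_end_ma
    group = "long-lived (>=500 Myr)" if subduction >= LONG_LIVED_MYR else "<500 Myr"
    return subduction, collision, group

def preservation_rate(juvenile_km3, subduction_myr):
    """Juvenile crust preserved per year of subduction, in km^3/yr."""
    return juvenile_km3 / (subduction_myr * 1e6)

sub, col, group = durations(1900, 1150, 1050)    # hypothetical GPAO-like orogen
print(sub, col, group)                           # -> 750 100 long-lived (>=500 Myr)
print(preservation_rate(3.0e7, sub))             # -> 0.04
```

Because the subduction onset is a maximum age for ocean-basin closing, both the subduction duration and any rate derived from it are maximum/minimum estimates, respectively.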
Only five examples of the ≥500 Myr group are recognized: Penokean-Yavapai-Mazatzal, Makkovikian-Labradorian, Baltica, Amazonia, and Xiong'er. The first four of these comprise the Great Proterozoic Accretionary Orogen (GPAO), which may be the longest-lived orogen of all time, parts of it enduring for at least 900 Myr (Appendix 1). All of the orogens with a long subduction stage began as accretionary orogens during the amalgamation of Nuna and did not terminate in collision until Rodinia formed at 1200–1000 Ma. It is noteworthy that the subduction duration of most Proterozoic orogens is longer than the lifespan (oldest igneous rock age minus accretion age) of typical terranes in these orogens (50–100 Myr; [18]). This means that, on average, arc magmatism in terranes continues after terrane docking.
Also given in Appendix 1 are estimates of the areas of preserved orogens today. Most orogens have preserved areas between 10⁵ and 10⁶ km², with a mean of about 5 × 10⁵ km² (Figure 8). It is noteworthy that neither orogen age nor the duration of either subduction or collision correlates with area preserved.
Tectonic Settings Preserved in Orogens
In order to better estimate the amount of juvenile crust preserved in orogens, it is important to identify the tectonic settings preserved in orogens. Tectonic settings are inferred from rock associations and geochemical features as described in previous papers [18][19][20]. The areal abundances of various settings are estimated from geologic maps at varying scales. Preserved tectonic settings represent an array of older and contemporary crust that survived subduction during closing of an ocean basin. They include both oceanic and continental tectonic regimes as represented by supracrustal and plutonic rocks. Greenstones (supracrustal assemblages in which mafic volcanics dominate) are perhaps most definitive in identifying ancient tectonic settings. The distribution of Proterozoic greenstones preserved in orogens, grouped into arc and non-arc settings, is shown in Figure 9. Although both types of greenstones are preserved throughout the Proterozoic, those associated with the formation of Nuna between 2100 and 1650 Ma are more frequently preserved than those of other times. The exception is a peak at about 1300 Ma, which is defined by greenstones and associated plutons from two areas in the Eastern Grenville (Central Metamorphic Belt and the Adirondacks) and one in the Namaqua orogen in southern Africa. Unlike for Nuna, very few greenstones that accompanied the assembly of Rodinia between 1200 and 1000 Ma are preserved, and the reason for this is an important outstanding question.
Tectonic settings preserved in orogens are summarized in Appendix 1. On average, the most abundant preserved setting is the continental arc, ranging from 10% to 90% by volume (mean = 60%) (Figure 10). In most orogens, remnants of continental arcs comprise from 60% to 70% of exposed rocks, but in a few examples, such as the Limpopo, Usagaran-Tanzania, Ubendian, New Quebec, and Angara orogens, they comprise ≤20%. Some of these orogens may involve a large transpressive subduction component, such that arc magmatism was minimal. In contrast to continental arcs, oceanic arcs comprise a very small amount of orogens (<10%). This is in agreement with the results of Condie and Kroner [21], who suggest that oceanic arcs are not major components of continental growth. Other oceanic tectonic settings such as oceanic crust (including ophiolites), serpentinites, pelagic sediments, and oceanic islands and plateaus are also rarely preserved in most orogens. Collectively, they comprise <20% and often <10% of rocks preserved in orogens, with an average value of only 8%. One outstanding exception is the Trans-Hudson orogen in Canada, in which nearly 40% of oceanic terranes are preserved in the central part of the orogen. The Eburnean and Birimian-Transamazonian orogens in West Africa and South America are also unusual in that they comprise about 30% oceanic terranes. Both reworked crust and microcratons also occur in orogens. Reworked crust comprises chiefly Archean components that occur as some combination of basement, accreted terranes, and sediments. Reworked components generally comprise between 20% and 80% of orogens, with an average of 32%, based on Nd and Hf isotope studies (Appendix 1; Figure 10). Microcratons, such as the Sask craton in the Trans-Hudson orogen, are difficult to identify without geophysical data, because they are often not exposed at the surface. Thus, microcratons may be more abundant than suggested by the data in Appendix 1.
Four orogens are unusual in that they comprise about 80% reworked components (Limpopo, Usagaran-Tanzania, Angara, and Ubendian). It is possible that these four orogens involved largely transpressive collisions with minimal amounts of concurrent arc magmatism.
Juvenile Crust Preserved in Orogens
There is a large database of Nd and Hf isotopes and geologic maps upon which the distribution of juvenile crust (both continental and oceanic) can be constrained [4,5,18,24]. Juvenile crust includes crust that has been extracted from the mantle with a relatively short crustal residence time (≤200 Myr), most of which resides in remnants of continental arcs [21]. Hawkesworth et al. [11] have suggested that peak igneous activity preserved in orogens is reached early in the collisional stage. To test this idea, we have calculated the preservation ratio of rocks in the subduction stage (ocean-basin closing) to those in the subduction + collision stage in Proterozoic orogens (Appendix 1). The ratio is calculated from a combination of igneous zircon ages and areal distributions of igneous rocks as estimated from geologic maps. Note that this estimate does not include possible contributions from mafic underplating. As shown in Figure 11, there is a considerable range in preservation ratio (expressed as percent of subduction stage), with igneous rocks formed during the subduction stage ranging from <10% to 90% of the total (mean = 63%; median = 67%). Clearly, a large fraction of the non-reworked rocks preserved in orogens date to the pre-collisional, ocean-closing stage and not to the actual collisional stage.
Of the 50 Proterozoic orogens studied, 36 have whole-rock Nd isotopic data available (Appendix 1). Only 10 of these have chiefly positive εNd(T) values (Birimian-Transamazonian, Baltica, Wopmay, NE Greenland, New Quebec, Volga-Don, Yapungku, and Olarian), whereas the others have negative or mixed values (Figure 12). Hence, most orogens contain significant volumes of reworked older crust. A similar conclusion is reached using the Hf isotope data from detrital zircons and has also been suggested for Phanerozoic orogens [5,13,24]. From the combined Nd and Hf isotope databases, together with geologic maps of varying scales (for outcrop samples), the distribution of juvenile crust preserved in Proterozoic orogens has been estimated (Appendix 1). On average, 50% of continental arcs comprise juvenile input, whereas oceanic terranes (including oceanic arcs, crust, islands, and plateaus) comprise about 90% juvenile input. The amount of preserved juvenile crust in collisional orogens ranges over two orders of magnitude, with most orogens between 10⁶ and 10⁷ km³. As expected, the tightest grouping of preserved juvenile crust correlates with the assembly of Nuna and Rodinia. Preservation rate, defined as juvenile crust volume preserved per year of subduction, is shown in Figure 13. Although the rate of preservation ranges from close to zero to over 0.7 km³/year, most orogens have preservation rates in the range of 0.02-0.1 km³/year, with an average of about 0.14 km³/year. Neither the fraction of juvenile crust preserved nor the rate of preservation is related to the collision age or to the duration of subduction. Preservation rates of juvenile crust are ≥0.4 km³/year in only a few orogens. The fraction of juvenile crust preserved in orogens ranges from about 10% to 60% (mean = 36%) and the amount of reworked crust from 40% to 75% (mean = 64%). The Limpopo orogen contains the smallest amount of juvenile crust (about 5%), with the Eburnean, Usagaran-Tanzania, Ubendian, Angara, Nagssugtoqidian, and Arctic orogens all containing ≤20%. At the other extreme, there are 10 orogens that contain ≥50% juvenile input (Birimian-Transamazonian, Wopmay, Trans-Hudson, Volyn Central, Halls Creek, Penokean-Yavapai-Mazatzal, Baltica, Xiong'er, Makkovikian-Labradorian, and Amazonia). Of the juvenile crust preserved, most dates to the subduction stage rather than the collision stage (Figure 14). An excellent example of this is presented by Dickin et al. [25] for the Grenville orogen. Although Hawkesworth et al. [6,11] have made a case for poor preservational potential for igneous rocks formed at convergent margins and high potential for syn-collisional igneous rocks, the results of this study do not completely support this model. Although the preservational potential is certainly high during the collisional phase, the results indicate that the igneous rocks preserved date chiefly to the pre-collisional, accretionary convergent margin stage, not to the syn-collisional stage. Thus, the preservation potential during the accretionary stage must also be moderate to high for these rocks to survive to a continent-continent collision, which we can think of as the final "capture stage" that makes it possible for rocks to eventually become part of a craton.
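As a simple illustration of the preservation-rate definition above, the following sketch uses made-up values chosen to fall in the quoted ranges (10⁶-10⁷ km³ preserved; rates mostly 0.02-0.1 km³/year), not data from Appendix 1:

```python
# Preservation rate of juvenile crust, defined in the text as the volume of
# juvenile crust preserved per year of subduction.  The inputs below are
# illustrative placeholders, not values from Appendix 1.

def preservation_rate(juvenile_volume_km3, subduction_duration_myr):
    """Return km^3 of juvenile crust preserved per year of subduction."""
    years = subduction_duration_myr * 1e6
    return juvenile_volume_km3 / years

# Example: 5 x 10^6 km^3 of juvenile crust preserved over 100 Myr of subduction
rate = preservation_rate(5e6, 100)
print(rate)  # 0.05 km^3/year, within the 0.02-0.1 km^3/year range cited
```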
Although the six orogens with subduction durations of >500 Myr all contain around 50% juvenile components, many of the short-duration subduction orogens also contain similar amounts of juvenile crust (Figure 15). This seems to indicate that the duration of subduction is not a major control on the fraction of juvenile crust preserved in orogens; regardless of subduction duration, the fraction of juvenile crust preserved reaches a maximum of around 50%. It is important to note that the six orogens with subduction durations >500 Myr all have long-lived accretionary phases, with four of the six comprising part of the Great Proterozoic Accretionary Orogen (GPAO). Because the GPAO had both a long duration and a large area, it contains the largest volume of juvenile crust produced between 2 and 1 Ga.
Recycling of Crust in Orogens
Now that we have seen how and where crust is preserved in orogens, the next question is when and how much crust is recycled into the mantle during orogen evolution. Pronounced minima occur in the ages of rocks preserved in orogens, as for example the long-recognized sparsity of ages between 1600 and 1500 Ma in the Laurentian portion of the GPAO [26,27] (Figures 16 and 17). This minimum also shows up in the detrital zircon age spectra from North American river sediments (Figure 18). Similar age minima occur in both the igneous and detrital age spectra of the Amazonia and Baltica portions of the GPAO at 1700-1600 Ma and 1750-1700 Ma, respectively. These minima may arise because (1) rocks of this age never formed, (2) they are covered by younger rocks, or (3) they have been recycled into the mantle [13]. Accretionary orogens go through retreating and advancing phases [12]. Retreating orogens grow by addition of new continental crust in forearc and back-arc basins, whereas advancing orogens may lose significant volumes of crust by recycling into the mantle at subduction zones by a combination of subduction erosion and delamination [12,[28][29][30][31][32][33]. Scholl and von Huene [29,30] have estimated that crust around the Pacific basin is being recycled back into the mantle at a rate of 3.2 km³/year. At this rate, the entire width of an accretionary orogen can be destroyed in a few hundred million years, and if this rate is typical of the last 3 Gyr, a volume equal to that of the current continental crust would have been recycled into the mantle during this time interval.
Although magmatic activity along subduction zones tends to be episodic on 10-50 Myr time scales and hence may be missing for short distances and short time intervals along an arc system [4], young subduction zones rarely if ever shut down over lengthy arc segments for >50 Myr. The sparsity of zircon ages for the 100 Myr period between 1600 and 1500 Ma in the Laurentian segment of the GPAO (Figures 16 and 17) probably does not reflect shutdown of the continental arc system for such a long period of time. There are three lines of evidence for the former existence of rocks in the 1600-1500 Ma age range in Laurentia: (1) two segments of continental arc rocks of this age are preserved in the Central Metamorphic Belt and adjacent areas in the eastern Grenville orogen [25,34,35], which could be remnants of a much more extensive arc system; (2) Nd and Hf model ages indicate sources for orogenic granitoids in this age window (Figures 4 and 5); and (3) detrital zircon ages from North America show evidence of rocks of this age range (Figure 18). The problem with detrital zircons is that we really do not know where they come from. Although detrital zircon age distributions from the Belt Supergroup and Hess Canyon Group in the western United States have been interpreted by some investigators to reflect sources in Australia when Laurentia was part of the Rodinian supercontinent [36,37], it is also possible that at least some of these zircons come from unexposed Laurentian sources or from Laurentian sources that have been recycled into the mantle. Numerous Nd model ages between 1600 and 1500 Ma, obtained from 1.5 to 1.0 Ga granitoids and volcanics in this orogen (Figures 4 and 19) [25,35], indicate that arc-related rocks formed in the Laurentian segment of the GPAO. Hf model ages show much the same distributions (Figure 5). Studies of both the Japanese and Andean orogens have shown that large volumes of juvenile and reworked crust have been recycled into the mantle during the Phanerozoic, chiefly by subduction erosion, but probably also involving delamination [28][29][30][31][38]. Calculated rates of subduction erosion in the central Andes are 250 km/Myr/km. If we apply the same rate of crustal removal to the 1600-1500 Ma missing crust in the Laurentian segment of the GPAO (Figure 17), for a width of 250 km and a strike length of 7000 km, 1.75 × 10⁶ km² of Proterozoic crust may have been recycled into the mantle. This could represent as much as 35% of the original crustal volume in the orogen. The enhanced subduction erosion in the Andes in the last 250 Myr may reflect opening of the Atlantic basin, during which the Andean arc changed from a retreating to an advancing arc. If a similar situation is applicable to the missing crust in the GPAO, Nuna may have begun to fragment by rifting of Siberia from Laurentia at 1500-1300 Ma, which changed the frontal arc system to an erosive advancing arc like the Andes (Figures 17 and 20). Likewise, the missing Proterozoic crust in the Baltica and Amazonia segments of the GPAO may reflect rifting of West Africa from Amazonia at 1600-1500 Ma, and of Baltica from West Africa at 1700-1600 Ma. One other interesting feature of the GPAO is the widespread occurrence of A-type granites and associated rhyolites along this belt, ranging from 1500 to 1350 Ma in the Laurentian Southwest and mid-continent regions, 1650-1500 Ma in the eastern Grenville, to 1850-1750 Ma in Baltica (Figure 21) [39][40][41][42][43]. In effect, this felsic province within the GPAO is bracketed in time between the two supercontinents Rodinia and Nuna. If, indeed, a large volume of continental crust in the age range of 1600-1500 Ma was recycled into the mantle by subduction erosion and delamination between 1500 and 1300 Ma in Laurentia, could this be related to the distribution of A-type granites of this age? As shown in Figure 20, subduction of significant volumes of continental crust may have enriched the mantle wedge in
incompatible elements and water, and partial melting of this wedge would produce enriched mafic magmas. Fractional crystallization of these magmas or partial melting of an enriched mafic underplate [44] may have given rise to A-type felsic magmas, some of which were erupted and many of which were intruded into the overlying crust. A model involving an enriched mantle source has been proposed for A-type granites in southeastern Scandinavia [45]. Because the Nd isotopic data from the granite-rhyolite province (Figure 17) indicate a source with a typical model age of 1600-1500 Ma, rocks of this age must have underlain the overlying crust (with an age of 1700-1600 Ma), the latter of which hosts most of the A-type granites (Figure 20).
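The recycled-area estimate quoted above (a belt 250 km wide and 7000 km long) is simple arithmetic and can be checked directly:

```python
# Back-of-the-envelope check of the recycled-crust estimate for the
# Laurentian segment of the GPAO: an eroded belt 250 km wide with a
# strike length of 7000 km.
width_km = 250.0
strike_length_km = 7000.0
recycled_area_km2 = width_km * strike_length_km
print(recycled_area_km2)  # 1750000.0, i.e., 1.75 x 10^6 km^2 as stated
```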
Discussion and Conclusions
Information from the 50 Paleo- and Mesoproterozoic orogens that are the targets of this study clearly shows that juvenile crust is preserved during both the accretionary and collisional phases of orogens. It is probable that both juvenile and recycled crust control zircon age distributions in detrital populations, as pointed out by Hawkesworth et al. [11] and Condie et al. [13]. Although single age peaks (or peak clusters) differ between igneous and detrital age spectra, as discussed extensively in previous studies [3,4], both igneous and detrital zircons show broad age maxima during the formation phases of Nuna and Rodinia, yet a sparsity of ages in between these times. Somehow, geologists and rivers are sampling the same rocks, although at different scales. Geologists are sampling older orogens, and this is not surprising since cratons are really amalgamations of older orogens. During collision of cratons, the crust in collisional belts thickens, rises, and is preferentially eroded, delivering detrital zircons to basins that are later sampled for detrital zircon dating. Because geologists sample the root zones of these orogens that are left behind, it is not surprising that the large-scale age distributions in both igneous (geologist-sampled) and detrital (river-sampled) zircons are similar. However, during some time segments of the accretionary phases of orogens, recycling of crust into the mantle may outweigh extraction of crust from the mantle, as for instance during the last 250 Myr in the Andes [30]. It is not until the collisional phase of an orogen, as represented by the Himalayas today, that significant volumes of juvenile crust and derivative zircons can be preserved. A few hundred million years in the future, the Himalayas will be eroded away, some zircons will be preserved in sedimentary basins, and the root zones of the orogen with in situ zircons will be exposed at the surface. Although the overall age spectra should be the same in both sets of zircons, mixing and sedimentary recycling of the detrital zircons with zircons from other sources and incomplete sampling of igneous zircons will complicate detailed correlations.
Why less frequent ages are found in both igneous and detrital zircons between the assembly times of Nuna and Rodinia is less obvious. Although modern river systems do not really do a good job of sampling the accretionary phases of Proterozoic orogens that dominated the continents between the assemblies of the two supercontinents, this cannot be the entire answer, because most detrital zircons come from recycled crustal sources, not from exposed older orogens. Could it be that the less frequent zircon ages between 1600 and 1250 Ma reflect losses of juvenile crust by recycling into the mantle at subduction zones? Younger examples of subduction erosion and delamination are chiefly from accretionary orogens like the Andes. If similar recycling occurred in the Great Proterozoic Accretionary Orogen (GPAO), significant volumes of juvenile Proterozoic crust could have been recycled into the mantle before the Grenvillian collisions that preserved these rocks. Supporting this interpretation is the distribution of Hf model ages of detrital zircons [5,13,24]. Hf model ages fall off very rapidly between about 1600 and 1250 Ma. Although this decrease has been interpreted to reflect a decrease in the production rate of continental crust, it could alternatively represent an increase in the recycling rate of juvenile crust into the mantle. In both dry and wet mantle, Korenaga [46][47][48] has made a convincing case for average plate velocities increasing during this time interval, which could lead to an increase in crustal production rate, and also perhaps to an increase in the recycling rate of crust into the mantle during periods of supercontinent breakup [32,49]. The decrease in frequency of Hf model ages between 1600 and 1250 Ma could mean that crustal recycling rates increased faster than crustal production rates.
So what might cause such an increase in recycling? Most of the action in the GPAO and related accretionary orogens occurred during this time interval, and perhaps it was in these orogens that significant volumes of juvenile crust were recycled back into the mantle. This possibility re-emphasizes the fact that continental growth is a two-way process: extraction from the mantle and recycling back into the mantle [50]. A major question that must be addressed in the future is what mantle processes control the balance of extraction and recycling of crust from or into the mantle, and why one or the other should be more important, especially during accretionary phases of orogens.
Figure 1. Frequency of onset of subduction and collision in Proterozoic orogens (data from Appendix 1, peak ages given in Ma).
Figure 4. Nd model ages of granites from the Southwestern United States and the Mid-continent region of Laurentia. Data compiled in Appendix 1.
Figure 5. Hf model ages from detrital zircons from modern rivers in Laurentia. Data from Belousova et al. [5] and Condie et al. [13] and references therein.
Figure 6. Graph of the time between orogenic collisions and the onset of collision (data from Appendix 1).
Figure 9. Frequency of arc and non-arc type greenstones preserved in Proterozoic orogens. Data from the unpublished database of the author; earlier versions of the database can be found in [22,23].
Figure 11. Frequency of preservation of rocks formed during the subduction stage of Proterozoic orogens (data from Appendix 1, col AG). The ratio is calculated from a combination of igneous zircon ages and areal distributions of igneous rocks as estimated from geologic maps.
Figure 13. Preservation rate of juvenile crust per year of duration of subduction for Proterozoic orogens (data from Appendix 1, col K).
Figure 16. Frequency of zircon ages from igneous rocks preserved in the Laurentian portion of the Great Proterozoic Accretionary Orogen (data from Appendix 1). Peak ages in Ma.
Figure 17. Paleomagnetic reconstruction of Nuna at about 1270 Ma showing age provinces in the Great Proterozoic Accretionary Orogen. Craton configurations modified after Evans and Mitchell [17], courtesy of Dave Evans. Red double lines: incipient rifting of cratons.
Figure 20. Diagrammatic cross section of the Great Proterozoic Accretionary Orogen along the southern margin of Laurentia showing an advancing plate margin possibly caused by subduction erosion.
Figure 21. Frequency of Proterozoic A-type granites in Laurentia compared to the distribution of accretionary and collisional phases of Proterozoic orogens in Laurentia (data from Appendix 1 and from Van Schmus et al. [40] and Goodge and Vervoort [42]).
X-Ray Computed Tomography Instrument Performance Evaluation, Part III: Sensitivity to Detector Geometry and Rotation Stage Errors at Different Magnifications
With steadily increasing use in dimensional metrology applications, especially for delicate parts and those with complex internal features, X-ray computed tomography (XCT) has transitioned from a medical imaging tool to an inspection tool in industrial metrology. This has resulted in the demand for standardized test procedures and performance evaluation standards to enable reliable comparison of different instruments and support claims of metrological traceability. To meet these emerging needs, the American Society of Mechanical Engineers (ASME) recently released the B89.4.23 standard for performance evaluation of XCT systems. There are also ongoing efforts within the International Organization for Standardization (ISO) to develop performance evaluation documentary standards that would allow users to compare measurement performance across instruments and verify manufacturer’s performance specifications. Designing these documentary standards involves identifying test procedures that are sensitive to known error sources. This paper, which is the third in a series, focuses on geometric errors associated with the detector and rotation stage of XCT instruments. Part I recommended positions of spheres in the measurement volume such that the sphere center-to-center distance error and sphere form errors are sensitive to the detector geometry errors. Part II reported similar studies on the errors associated with the rotation stage. The studies in Parts I and II only considered one position of the rotation stage and detector; i.e., the studies were conducted for a fixed measurement volume. Here, we extend these studies to include varying positions of the detector and rotation stage to study the effect of magnification. We report on the optimal placement of the stage and detector that can bring about the highest sensitivity to each error.
Introduction
The use of X-ray computed tomography (XCT) in industrial metrology has been steadily increasing in recent years as a means for dimensional inspection [1][2]. This is especially true for complex parts where traditional inspection procedures would prove time-consuming or impossible. For example, fragile parts or parts with internal features, such as those manufactured through additive manufacturing, cannot be probed or accessed by contact-based measurement methods or line-of-sight optical methods. These distinct advantages of XCT have brought about the transition from its predominant use in defect inspection and medical imaging into dimensional measurements for engineering applications [3]. As a result of such increasing demand and widespread use, there is a need for standardized test procedures to evaluate XCT instrument performance and to support claims of metrological traceability. This need was articulated in several publications in the early and mid-2000s [4][5][6].
To meet this need, the International Organization for Standardization (ISO) and the American Society of Mechanical Engineers (ASME) began working independently to develop XCT performance evaluation standards [7][8]. ASME recently completed their work, resulting in the publication of the B89.4.23 standard in early 2021 [8]. There are also published guidelines from the Association of German Engineers and the Association of German Electrical Engineers, the VDI/VDE, for specifying and testing the accuracy of industrial XCT systems [9][10][11]. Note that there are other documentary standards published by ASTM International [12][13] and ISO [14], but they address imaging issues and are primarily intended for nondestructive evaluation, not dimensional measurements. There is currently ongoing discussion within ASTM to develop a documentary standard for dimensional inspection, but those discussions are at a very early stage.
In the writing of performance evaluation standards, one of the primary goals is to design test procedures that are sensitive to as many known error sources as possible. The first step is to identify all possible error sources and understand the effect of each individual error source on dimensional measurements. VDI/VDE 2630-1.2 [10] lists and discusses several error sources associated with XCT systems. We have described the influence of uncorrected instrument geometry errors in cone-beam XCT instruments in a series of papers, of which this is the third part. In Part I of this series [15], we described a new simulation method, referred to as the single-point ray tracing method (SPRT), that enables rapid simulation of the effect of instrument geometry errors without the need for generating radiographs and performing tomographic reconstruction. We then utilized this technique to describe the effect of detector geometry errors on the sphere center-to-center distance error and sphere form error. In Part II of this series [16], we focused on the effect of rotation stage errors on the sphere center-to-center distance error and sphere form error, again using SPRT. Those studies identified the placement of spheres in the measurement volume so that each of the error sources could be captured most effectively, i.e., at their maximum sensitivities. Those studies also provided recommendations for documentary standards committees to consider as they developed performance evaluation procedures. However, those studies were limited to a fixed measurement volume, i.e., for chosen fixed positions of the detector (1177 mm from source) and the rotation stage (400 mm from the source), based on prior work reported by Ferrucci et al. [17].
Many XCT systems allow both the detector and the rotation stage to move, enabling numerous imaging magnifications. In fact, the same magnification can sometimes be realized through different combinations of source-stage (d) and source-detector (D) distances. To capture the effect of magnification, the ASME B89.4.23 standard and the current draft of the ISO standard advocate testing at different measurement volumes (which can be realized by changing the positions of the rotation stage and/or detector), but they do not provide comprehensive guidance as to how the user should select these measurement volumes or all of the measurement lines within these volumes. Thus, we identified the need to extend the studies in Parts I and II to cover the working range of an XCT system with a movable detector, so that we may provide more specific guidance to documentary standards committees developing these documents and to users of such systems that want to establish whether their instrument meets the manufacturer's specifications. In this Part III, we extend the work done in the first two parts [15][16] by repeating the SPRT simulations for several combinations of d and D. We discussed some early results in Ref. [18]; here, we present a more comprehensive description of the results and conclusions.
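For reference, the geometric magnification of a cone-beam system is simply the ratio of the source-detector distance D to the source-stage distance d. The sketch below (with illustrative distances, not values from this study) shows how two different (d, D) pairs can yield the same magnification:

```python
# Geometric magnification of a cone-beam XCT system: M = D / d, where d is
# the source-to-rotation-axis distance and D the source-to-detector distance.
# The (d, D) pairs below are illustrative only.

def magnification(d_mm, D_mm):
    """Return the imaging magnification for a given source-stage/source-detector pair."""
    return D_mm / d_mm

pairs = [(200.0, 600.0), (400.0, 1200.0)]  # two different instrument geometries
mags = [magnification(d, D) for d, D in pairs]
print(mags)  # [3.0, 3.0] -- the same magnification from different geometries
```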
First, we briefly review the literature in the area of performance evaluation of XCT systems using calibrated reference objects, particularly using spheres as metrological elements, since that is the method adopted by documentary standards committees. The use of a calibrated reference object consisting of ruby spheres mounted on shafts has been reported by a number of researchers [4,[19][20][21]. This reference object, often referred to as a sphere forest, is currently under consideration by ISO as a reference object for evaluating the performance of XCT systems. The use of calibrated reference objects such as a ball-bar [22], a tetrahedron of spheres [23,24], a ball-plate or a hole-plate [25][26][27], and spheres on a cylinder [28] has also been reported for performance evaluation of XCT systems. A summary of different reference objects used for evaluating performance of XCT systems can be found in Ref. [29]. The effect of XCT geometry errors on cone-beam XCT systems has been discussed in Refs. [30][31][32]. For a more general review of related literature on geometry errors in XCT systems, see Ferrucci et al. [33].
The rest of this paper is organized as follows. We describe the reference object used in the simulations and the different geometric error sources in Sec. 2. We discuss the simulation technique employed in Sec. 3. We present our results in Sec. 4 and conclusions in Sec. 5.
Reference Object and Error Sources
All simulations in the present work were carried out using a simulated reference object consisting of 125 spheres evenly distributed into five horizontal planes, each containing one centrally located sphere, 16 spheres arranged in an outer circle, and 8 spheres arranged in a smaller circle of half the diameter of the outer circle, as shown in Fig. 1. This is the same sphere arrangement described in Ref. [15]. In Fig. 1, the source and detector positions are drawn to scale, but the detector size is not. The coordinate system used in this model has its origin at the source, and it has its axes directed as shown in Fig. 1. Detailed descriptions on establishing the coordinate system by defining each axis can be found in Ref. [17]. While previous studies [15][16] considered the object to be of a single fixed size, in this study the height and diameter of the object's cylindrical shape are functions of d and D. In other words, the simulated object is scaled for each combination of source-stage and source-detector distance so that 98 % of the area of a 250 mm × 250 mm continuous (i.e., nonpixelated) detector is filled. This scaling is done while ensuring that all the spheres on the boundaries of the reference object are fully within the detector field of view (FOV). The purpose of such scaling is to obtain the largest possible magnitudes of distance errors for each combination of stage and detector positions. The sphere diameters are also scaled accordingly while ensuring that their projected images are within the detector FOV. The geometric error sources associated with the detector and stage are described in detail in Parts I and II [15][16] and in Ferrucci et al.
[32][33]. We present a brief summary here in the interest of completeness. The detector errors include the three location errors along each Cartesian axis of the instrument coordinate frame, and three angular positioning errors about the same axes. These are described in Table 1, where the assumed magnitudes of the geometric errors are also included.
The angular positioning errors of the detector (from Table 1) and their assumed magnitudes are:

θx: detector rotation error about an axis parallel to the X axis, 0.2°
θy: detector rotation error about an axis parallel to the Y axis, 1°
θz: detector rotation error about an axis parallel to the Z axis, 0.2°

The stage errors include a Z location error (along the axis connecting the source and rotation axis), and the error motions of the stage itself. These error motions, including encoder scale, axial, radial, and wobble errors (often referred to as tilt errors in documentary standards), are all assumed to have harmonic components and therefore are represented as sine and cosine functions of the rotation stage indexing angle, of the form a sin(nθ) and a cos(nθ), where a represents the amplitude or magnitude of the error, n is the order of the harmonic, and θ is the table indexing angle. This study included harmonics of orders one through ten. These stage errors are described in Table 2, where the corresponding magnitudes are also included. In the case of the error motions, the magnitudes shown in Table 2 refer to the amplitude a. The complete value of the geometric error in these cases will depend on a, n, and θ as described above. The X and Y positioning errors of the stage have no physical meaning due to the way the coordinate system is defined [15]. For any given geometry error parameter described in Tables 1 and 2, we performed (but do not report here) simulations for different magnitudes of the introduced geometry error to ensure that the center-to-center distance error and sphere form error do in fact have a linear relationship with the introduced geometry error.
Methodology
The SPRT method, introduced and described in detail in Ref. [15], is the simulation method adopted in this work. This method has been experimentally validated and has proven to be a faster and more practical alternative to full XCT tomographic reconstruction methods for the purposes of estimating the effects of geometric errors using sphere-based objects. In the SPRT method, only the sphere centers are projected onto the detector, as opposed to the more traditional radiograph-based reconstruction of the entire object. As the stage makes a full rotation, the projection of the center of each sphere on the detector traces a locus. These loci are used to determine the location of each sphere in the measurement volume through a least-squares-minimization-based back-projection algorithm. From the determined sphere centers, the center-to-center distances for each pair of the 125 sphere centers are determined. To estimate form error, circles consisting of 120 equally spaced points are constructed normal to each ray connecting the source and the detector, with their centers located on the previously identified least-squares centers. This is performed for each angular position of the stage as the stage rotates, and therefore the circles at different rotations form the spherical surface [15]. The diameters of these circles correspond to the scaled diameters of the spheres in the reference object. The points lying in the interior of a convex hull generated from the resulting point cloud are truncated, and only the outer points are used for the form error calculation. Here, we define form error as the difference between the maximum and minimum residuals from a least-squares best-fit sphere to the point-cloud data. While such a peak-to-valley approach is generally sensitive to outliers, the likelihood of outliers from the SPRT is small to none by construction of the method.
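A least-squares sphere fit and peak-to-valley form error of the kind described above can be sketched as follows; this is an illustrative algebraic fit, not the paper's implementation:

```python
# Sketch of a linear least-squares sphere fit and a peak-to-valley form
# error (max residual minus min residual), as defined in the text.
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit; returns (center, radius)."""
    pts = np.asarray(points, dtype=float)
    # Solve [2x 2y 2z 1] . [cx cy cz k] = x^2 + y^2 + z^2 in least squares
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius

def form_error(points):
    """Peak-to-valley of radial residuals from the best-fit sphere."""
    pts = np.asarray(points, dtype=float)
    center, radius = fit_sphere(pts)
    resid = np.linalg.norm(pts - center, axis=1) - radius
    return float(resid.max() - resid.min())

# Demo: 72 points sampled exactly on a sphere of radius 2 about (1, 2, 3)
center_true = np.array([1.0, 2.0, 3.0])
phis = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
thetas = np.linspace(0.3, 2.8, 6)
pts = np.array([center_true + 2.0 * np.array([np.sin(t) * np.cos(p),
                                              np.sin(t) * np.sin(p),
                                              np.cos(t)])
                for t in thetas for p in phis])
c, r = fit_sphere(pts)
print(c, r, form_error(pts))  # center ~ (1, 2, 3), radius ~ 2, form error ~ 0
```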
When a particular error source is analyzed, the assumed magnitude of the geometric error is used in the forward projection; for example, the actual value of the error parameter is used to generate the simulated locus for each sphere. However, in the back-projection algorithm, an errorless instrument is assumed. This discrepancy between actual and assumed geometry parameters represents the magnitude of simulated errors and results in sphere center-to-center distance errors and sphere form errors. In this way, the effects of all geometry errors associated with the detector and stage on the center-to-center distance errors and form errors of the spheres on the reference object are studied. In this work, such simulations were performed at several stage and detector positions to study the effect of magnification.
For each error source under consideration, the pair of spheres that produced the highest center-to-center distance error was previously identified in Parts I and II for a specific combination of d and D. The line joining this pair of spheres constitutes the line of highest sensitivity, i.e., center-to-center distance error (in mm) per millimeter or degree of geometric error. In this study, we tracked this pair of spheres across all stage and detector positions to identify the specific combination of d and D that resulted in the largest distance error. Similarly, the sphere producing the highest form error was previously identified in Parts I and II for one combination of d and D. In this study, this sphere was tracked through all combinations of stage and detector positions to identify the specific combination of d and D that resulted in the largest form error. The pairs of spheres producing the highest distance error as identified in Parts I and II, or the spheres producing the highest form error also identified therein, may not always be the same at all combinations of d and D.
In this study, we considered a detector that can travel up to a distance of 1200 mm from the source. In the simulations, d was varied from 200 mm to 1100 mm in steps of 100 mm. For each position of the stage, D was varied from d + 100 mm to 1200 mm, in steps of 100 mm. The diameter of the spheres in the appropriately scaled reference object ranged from 3.40 mm to 18.68 mm at the smallest and largest measurement volume, respectively. Figure 2 shows an overview of this approach.
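The simulation grid described above can be enumerated directly; this sketch (variable names are ours) reproduces the simulation counts stated in the Results section:

```python
# Stage position d runs from 200 mm to 1100 mm and, for each d, the
# detector position D runs from d + 100 mm to 1200 mm, both in 100 mm steps.
combos = [(d, D)
          for d in range(200, 1101, 100)
          for D in range(d + 100, 1201, 100)]

n_combos = len(combos)                 # 55 (d, D) combinations
n_simulations = n_combos * (6 + 121)   # 6 detector + 121 stage errors -> 6985
```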
Results and Discussion
In this paper, we have intentionally chosen a different approach for the presentation of the results from the previous parts. In Parts I and II, we discussed each error source separately, highlighting the spheres that produced the maximum distance error and the sphere that produced the largest form error. The results in those studies were based on just one simulation (for the specific case of d and D considered there) for each of the six detector errors and 121 rotation stage errors; thus, in total, 127 simulations were performed. In this study, we perform 55 simulations (corresponding to all combinations of d and D considered here) for each of the six detector errors and the 121 rotation stage errors, resulting in a total of 6985 simulations. Given the large number of simulations, we intentionally chose to distill the information and present key findings instead of discussing each error source independently, as we did in Parts I and II.
Detector Errors
The lines of maximum sensitivity for each of the six error sources associated with the detector are shown in Fig. 3. Cases where multiple lines are shown for a single error source indicate that all of the lines shown are equally sensitive. The values of d and D for which this highest sensitivity is observed are also given in each case.
For each error source, we plotted the distance error of the corresponding sensitive center-to-center length segment from Part I as a function of d and D. These plots should simply be treated as trends, not as absolute in terms of locations or magnitudes of the highest sensitivities. They represent how the sensitivity of a given length varies when d and D are varied. They do not necessarily represent the highest sensitivity that can be achieved for a given combination of d and D; for example, there may be some values of d and D at which a different line produces a higher sensitivity than that shown in these plots for that combination.
We noticed that the highest distance error sensitivity generally occurred at one of two configurations. In the first configuration, the detector is positioned closest to the source, and the stage is positioned closest to the detector. This configuration, corresponding to d = 200 mm and D = 300 mm, shall henceforth be referred to as the "near" configuration. The second configuration is the one in which the stage is positioned as far from the source as possible, and the detector is positioned as close to the stage as possible. This configuration, corresponding to d = 1100 mm and D = 1200 mm, shall be called the "far" configuration. We discuss these configurations through a few examples as follows. Consider next the case of detector rotation about the Z axis. We tracked one of the pairs of spheres identified in Fig. 3 for each of the 55 combinations of d and D. The results are plotted as a function of d and D in Fig. 5, where each curve in the plot represents the sensitivity for a fixed source-stage distance d and varying source-detector distances D. Clearly, the largest sensitivity occurs at the far (d = 1100 mm, D = 1200 mm) configuration. A summary of the results for all the detector errors can be found in Table 3.
Rotation Stage Errors
Here, we present the lines of maximum sensitivity for the stage errors. The sensitive lines for the Z location error of the stage are shown in Fig. 6. Figures 7 through 10 show similar illustrations of sensitive lines for the remainder of the stage errors, namely, the error motions of the stage. These errors are represented by sine and cosine components of orders 1 to 10; however, the magnitudes of sensitivities corresponding to the first four orders were found to be the most significant and are therefore shown here. Further, only the cosine components of the error sources are shown. The sensitive lines for the sine components were observed to have similar orientations but rotated about the rotation axis.
Fig. 7. Lines of maximum sensitivity for first-order cosine components of stage error motions. From left to right: axial error, radial error along X, radial error along Z, wobble about X, wobble about Z, and scale errors in the encoder. These lines represent the optimal placement of a reference length that will produce the largest distance error for unit magnitude of each of these geometric error sources.
Consider the case of the rotation stage Z location error. In Part II, we showed that this error source is sensitive to the long body diagonal in a given volume. We then tracked this length across different combinations of d and D. The results are shown in Fig. 11. Clearly, the largest sensitivity occurs at the near configuration, i.e., with the stage located as close to the source as possible and the detector located as close to the stage as possible. The trends shown in Fig. 11 are similar to those shown in Fig. 4, except that the distances between the curves for different stage positions are so small that the curves appear to overlap. Consider next the case of the first-order cosine component of wobble along the X axis. The sensitive line is shown in Fig. 7. We tracked this line through each of the 55 combinations of d and D. The results are plotted as a function of d and D in Fig. 12, which clearly indicates that the largest sensitivity occurs at the far configuration. A summary of the results for all the stage errors can be found in Table 4.
Observations and Results
Since the magnification for a given XCT measurement setup is determined by the ratio of D to d, it is possible to achieve the same magnification with various combinations of d and D. For example, in Fig. 4, the configuration corresponding to d = 200 mm and D = 300 mm and the configuration corresponding to d = 400 mm and D = 600 mm both have the same magnification of 1.5, but the sensitivities in the corresponding cases are significantly different. Therefore, it is clear that even when d and D are increased proportionately, i.e., maintaining the same magnification, some combinations of stage and detector distances are more sensitive to, and therefore would more clearly reveal, certain error sources.
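As a quick check of the relation above (names are ours), the geometric magnification is simply D/d, so distinct (d, D) pairs can share a magnification while exercising the geometry errors differently:

```python
def magnification(d, D):
    """Geometric magnification of an XCT setup: source-detector distance D
    divided by source-stage distance d (both in mm)."""
    return D / d

# Two different (d, D) pairs from the text with the same magnification of 1.5
near = magnification(200, 300)
scaled = magnification(400, 600)
```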
Trends similar to those shown in Figs. 4, 5, 11, and 12 were observed for all error sources, with the highest sensitivity occurring in the previously described near (d = 200 mm, D = 300 mm) or far (d = 1100 mm, D = 1200 mm) configuration. In both of these configurations, the magnification is the lowest possible for a given stage position. This was found to be true of all error sources studied here.
The configurations that captured the highest distance error sensitivity for each error source are given in Table 3 and Table 4. The latter tabulates these configurations for the cosine components of the error motions of the stage, where N represents the order of the harmonic. The sine components of these errors are not shown here but exhibited similar trends.
Note that not all commercially available XCT systems have the benefit of being able to move the detector as freely as discussed here. In instances where the detector is fixed, the highest sensitivities possible for that case can be realized by simply moving the stage as close to the fixed detector as possible. This case is shown by the points corresponding to a single detector position, or a vertical line, in Figs. 4, 5, 11, and 12.
Detector Errors
For most of the error sources in this study, sensitivities for sphere form errors also showed trends similar to those of center-to-center distance errors, in that maximum sensitivity was found at the near (d = 200 mm, D = 300 mm) and far (d = 1100 mm, D = 1200 mm) configurations previously described. In some cases, the peak sensitivity occurred at other configurations; however, in these rare cases, the sensitivity differed negligibly across the different configurations.
Further, for almost all error sources and for almost all d and D values, the spheres furthest away from the axis of rotation (i.e., spheres in the outer ring) produced the highest form error. Even in cases where the spheres that produced the highest form error were located in other positions (described by "Other" in Table 6), their form error often differed from those of the outer ring of spheres by very little. Here, we present examples of trends in form error sensitivities where peak sensitivities were found at the near (d = 200 mm, D = 300 mm) and far (d = 1100 mm, D = 1200 mm) configurations. Note that the actual magnitude of the form error is somewhat influenced by the implementation of the SPRT method when generating the form error point cloud. We noticed differences in form error of about 10 % between different implementations, especially between algorithms that retain and algorithms that discard data near the poles of the sphere.
Consider the case of detector Y location error. One sphere in the outer ring of spheres was tracked through the 55 combinations of stage and detector positions, and the form error sensitivities are plotted as a function of d and D in Fig. 13. Clearly, the largest sensitivity occurs at the near (d = 200 mm, D = 300 mm) configuration. Consider next the case of detector X location error. As before, a sphere in the outer ring was tracked through the 55 combinations of stage and detector positions, and the resulting form error sensitivities are plotted as a function of d and D in Fig. 14. Clearly, the largest sensitivity occurs at the far (d = 1100 mm, D = 1200 mm) configuration. A summary of the results for all the detector errors can be found in Table 5.
Rotation Stage Errors
The form error sensitivities for stage errors showed trends similar to those of detector errors, in terms of the location of the spheres that produced the highest sensitivity as well as the tendency to peak at the near (d = 200 mm, D = 300 mm) or far (d = 1100 mm, D = 1200 mm) configurations. We present some examples of stage errors where peak sensitivities occurred at these near and far configurations.
Consider the case of the first-order cosine component of the stage radial error in X. One sphere in the outer ring of spheres was tracked through the 55 combinations of stage and detector positions, and the results are plotted as a function of d and D in Fig. 15. Clearly, the largest sensitivity occurs at the near configuration. Similar to Fig. 11, the distances between the curves in Fig. 15 corresponding to different stage positions are so small that the curves appear to overlap to a certain extent. Consider next the case of the first-order cosine component of the stage wobble in Z. A sphere in the outer ring of spheres was tracked through the 55 combinations of stage and detector positions, and the results are plotted as a function of d and D in Fig. 16. Clearly, the largest sensitivity occurs at the far configuration. A summary of the results for all the stage errors can be found in Table 6.
Observations and Results
Table 5 and Table 6 show the configurations that captured the highest form error sensitivities for each of the error sources. Table 6 contains the configurations that best captured the cosine components of the error motions of the stage, where N denotes the order of the harmonic. The sine components of these errors are not reported here but exhibited similar trends.
Conclusions
By means of simulation studies, this work has reported the stage and detector positions that result in high center-to-center distance error and sphere form error sensitivities for all geometric errors associated with the stage and detector in a given XCT instrument, and it has identified the measurement lines and spheres within the corresponding measurement volume that produce these highest distance errors and form errors, respectively.
The key contributions of Part III in this series are as follows:
• In Parts I and II, we identified the location of a sphere in a given measurement volume that resulted in the largest form error for a given geometry error source. Here, we tracked that sphere for different combinations of d and D and noted the stage and detector positions where the largest sensitivity occurred.
• In Parts I and II, we identified the pair of spheres that produced the largest distance error for a given error source. Here, we tracked that pair of spheres for different combinations of d and D and noted the stage and detector positions for which this largest error occurred.
• We abstracted the information from these simulation studies, resulting in the following main observations:
  o We identified two combinations of rotation stage and detector locations that result in large sensitivities; we refer to these combinations as the near position and the far position.
  o In the near position, the rotation stage is closest to the source, and the detector is closest to the rotation stage, i.e., d = 200 mm and D = 300 mm in our simulations. In the far position, the detector is farthest from the source, and the rotation stage is closest to the detector, i.e., d = 1100 mm and D = 1200 mm in our simulations. Note that both the near and far positions are low-magnification positions, i.e., with the detector as close as possible to the rotation stage.
  o We observed that, for the purposes of performance testing, specifying magnification is not sufficient; the positions of the detector and rotation stage (or the d and D values) need to be explicitly stated.
Table 1. Description of detector errors.
Symbol/error source, description, and assumed magnitude:
- Detector location error along the X axis: 1.27 mm
- Detector location error along the Y axis: 1.27 mm
- Detector location error along the Z axis: 0.1 mm
- Error in the location of the rotation axis along the Z axis: 0.1 mm
- Axial error motion (along the Y axis): 0.1 mm (amplitude)
- Radial error motion component along the X axis: 0.1 mm (amplitude)
- Radial error motion component along the Z axis: 0.1 mm (amplitude)
- X component of the wobble error: 0.2° (amplitude)
- Z component of the wobble error: 0.2° (amplitude)
- Error in the angular position of the stage (i.e., encoder scale error): 0.05° (amplitude)
Fig. 2. Schematic showing the different rotation stage and detector positions considered in the simulation.
Fig. 4. Distance error sensitivity to detector Y location error.
Fig. 6. Lines of maximum sensitivity for the Z location error of the stage.
Fig. 8. Lines of maximum sensitivity for second-order cosine components of stage error motions.
Fig. 9. Lines of maximum sensitivity for third-order cosine components of stage error motions.
Fig. 10. Lines of maximum sensitivity for fourth-order cosine components of stage error motions.
Fig. 11. Distance error sensitivity to stage Z location error.
Fig. 12. Distance error sensitivity to first-order cosine component of stage wobble along X.
Fig. 13. Form error sensitivity to detector Y location error.
Fig. 14. Form error sensitivity to detector X location error.
Fig. 15. Form error sensitivity to first-order cosine component of stage radial error along X.
Fig. 16. Form error sensitivity to first-order cosine component of stage wobble along Z.
Table 2. Description of stage errors.
Table 4. Configuration capturing highest center-to-center distance error sensitivity for stage error motions.
a N: order of harmonic.
b NS: not significant (i.e., sensitivities smaller than 0.001 mm/mm or 0.001 mm/°), where all configurations show similar sensitivity and negligible magnitude.
Homomorphic Filtering and Phase-Based Matching for Cross-Spectral Cross-Distance Face Recognition
Facial recognition has significant applications for security, especially in surveillance technologies. In surveillance systems, recognizing faces captured far away from the camera under various lighting conditions, such as in the daytime and nighttime, is a challenging task. A system capable of recognizing face images in both daytime and nighttime and at various distances is called Cross-Spectral Cross-Distance (CSCD) face recognition. In this paper, we propose a phase-based CSCD face recognition approach. We employ Homomorphic filtering as photometric normalization and Band-Limited Phase-Only Correlation (BLPOC) for image matching. Different from the state-of-the-art methods, we directly utilize the phase component of an image, without the need for a feature extraction process. The experiment was conducted using the Long-Distance Heterogeneous Face Database (LDHF-DB). The proposed method was evaluated in three scenarios: (i) cross-spectral face verification at 1 m, (ii) cross-spectral face verification at 60 m, and (iii) cross-spectral face verification where the probe images (near-infrared (NIR) face images) were captured at 1 m and the gallery data (visible-light (VIS) face images) were captured at 60 m. The proposed CSCD method resulted in the best recognition performance among the CSCD baseline approaches, with an Equal Error Rate (EER) of 5.34% and a Genuine Acceptance Rate (GAR) of 93%.
One of the factors that contributes to its popularity is the extensive use of surveillance cameras in various applications [9]. In the past two decades, numerous face recognition methods have been developed to recognize a person for various purposes, such as criminal detection, law enforcement, image spoofing, and other security applications [10][11][12][13][14][15][16][17]. Pioneering face recognition methods utilize either visible-light images or infrared images to identify a person [10][11][12]; the recognition of these two types of images is done within the same spectral band. In addition, some efforts to apply deep learning to face image recognition have been demonstrated in [14][15][16][17]. These works also considered recognition between images in the same spectral band, i.e., the visible images and their various versions.
1. We introduce CSCD face recognition based on Homomorphic filtering and a phase-based matching method, which achieves a higher recognition rate than the state-of-the-art methods in the field.
2. We propose a simpler CSCD face recognition method, which eliminates the effort required to select an appropriate feature and distance measurement.
3. We confirm that Homomorphic filtering is the most robust filter to distance change in the CSCD framework.
The remainder of this paper is organized as follows. Section 2 briefly reviews the CS and CSCD frameworks, as well as some related works. Section 3 explains our proposed method, while Section 4 describes the experimental setting. Section 5 presents the results and discussion, and Section 6 concludes the study. Figure 1 illustrates the face recognition scheme in (a) the CS and (b) the CSCD frameworks. CS matching refers to the matching of two face images captured under different spectra to provide a more accurate facial description [30,31]. In the CS system, face images captured under the NIR spectral band are matched with face images captured under the VIS spectral band. In the VIS spectral band, facial descriptions of people from different races show different characteristics [25]. In contrast, the NIR spectral band utilizes a calibrated IR sensor to overcome race-dependent factors such as skin color and facial characteristics. For this reason, CS matching scenarios provide more accurate facial recognition, because they utilize complementary facial descriptions at different wavelengths; a complementary facial description can reveal facial features in a certain spectrum that may not be observable in another spectrum. The main concern in CS is to eliminate the uneven illumination of images occurring in both spectra. We refer to CSCD when the probe and gallery images are captured under different spectra and at different distances. When the images are captured at a long distance, another major issue arises in the CSCD in addition to uneven illumination, namely, deteriorated image quality. Zuo et al. [32] evaluated cross-spectral face matching between face images captured under the VIS spectral band and those captured under the SWIR spectral band. Local Binary Pattern (LBP) and Generalized Local Binary Pattern (GLBP) features were used for the encoding process. An adaptive score normalization method was used to improve the recognition performance.
The approach resulted in better recognition performance. However, the improvement depends heavily on a score-level fusion scenario.
Related Works
Klare and Jain [33] performed the cross-spectral face matching between NIR and VIS face images using the Local Binary Pattern (LBP) and Histogram of Oriented Gradient (HoG) features. The encoding process relied on the Linear Discriminant Analysis (LDA). Moreover, the previous methods [32,33] explored the cross-spectral face matching with relatively similar (close) distances, where the probe images and gallery data used for matching were captured at the same distance (either short-range or long-range).
Kalka et al. [23] pioneered the work in cross-spectral face image matching under various scenarios. The face images were captured under various conditions and scenarios, such as at a close distance, steady standoff distance (2 m), with frontal faces and facial expressions, and in indoors and outdoors. Kalka used the VIS spectral band as a gallery data and the SWIR spectral band (at 1550 nm) as a probe image.
In 2013, Maeng et al. [24] explored the Gaussian filtering and Scale Invariant Feature Transform (SIFT) for CSCD face matching. The noise at high frequency was reduced by using the Gaussian filtering. The facial features were then extracted using the SIFT feature extraction method.
Kang et al. [9] proposed the CSCD method, in which the Heterogeneous Face Recognition system (HFR) was employed. In the study, the HFR algorithm utilized three kinds of filters and two kinds of descriptors for the encoding process. The filters used in the HFR algorithm were Difference of Gaussian (DoG), Center-Surround Divisive Normalization (CSDN), and Gaussian, while the descriptors used were Scale Invariant Feature Transform (SIFT) and Multiscale Local Binary Pattern (MLBP). The facial representation was achieved by combining all of the features from the overlapped patches.
Shamia and Chandy [25] examined the use of a combination of wavelet transform, Histogram-Oriented Gradient (HOG), and Local Binary Pattern (LBP) for CSCD face matching. In their study, the VIS spectral band was used as a gallery image, while the NIR spectral band was used as a probe image. To reduce the gap between the NIR and VIS images, the VIS image's contrast was enhanced using the Difference of Gaussian filtering (DoG), while the NIR image's contrast was enhanced using median filtering. Note that the earlier works require a three-step recognition procedure: preprocessing, feature extraction, and distance calculation.
To the best of our knowledge, CSCD face recognition has not been addressed by deep learning techniques. The work proposed by Pini et al. [34] used deep learning for cross-distance and cross-device face recognition. The cross-distance experiments were carried out using the same device, while the cross-device tests were run at a fixed distance. The aim was to identify the best combination of data representation, preprocessing/normalization technique, and deep learning model that obtains the highest recognition accuracy. In addition, Pini et al. proposed an image dataset named MultiSFace, which contains visible (VIS) and infrared images, high- and low-resolution depth images, and high- and low-resolution thermal images, captured from two different distances: near (1 m) and far (2.5 m). Instead of presenting recognition results between images of different spectra (VIS and infrared), the work only discussed recognition results between several depth-map representations of the face, namely, normal images, point clouds, and voxels, generated by different devices; note that normal images, point clouds, and voxels are all derivations of the VIS images. Figure 2 illustrates the overview of the proposed approach. Here, the NIR images, captured at a short distance, were used as probe images, while the VIS images, captured at a longer distance, were used as gallery images. Each block in the diagram is explained as follows:
Face Detection
Viola-Jones face detection [35] was used to detect the facial area. Viola-Jones face detection is computationally simpler than recent convolutional network approaches, such as Multi-Task Cascaded Convolutional Networks (MTCNN) [36], because it does not require the expensive annotation that a CNN (MTCNN) does for image training.
There are two steps in the Viola-Jones face detection system: training and detection. The detection step consists of two sub-steps: selecting the Haar features and creating an integral image. The training step also has two sub-steps: training the classifiers and applying AdaBoost. Here, the Viola-Jones face detection steps are implemented as follows [37]:
1. Convert the NIR and VIS images to grayscale, as the Viola-Jones algorithm detects the facial area within the grayscale image and then searches for the corresponding location in the colored image.
2. Divide the NIR and VIS images into block windows. Every block is scanned from left to right.
3. Compute the facial features using the Haar-like features of each block. A Haar feature is obtained by subtracting the sum of pixels in the black area from the sum of pixels in the white area.
4. Convert the input image into an integral image. Then, apply AdaBoost to select features and train the classifiers via a cascading process.
5. Concatenate all the Haar-like features in each block window to determine the location of the facial area.
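The integral-image and Haar-feature computations in steps 3 and 4 can be sketched as follows; this is a minimal illustration with our own function names, not the detector implementation:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum in both axes, so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

# Two-rectangle Haar-like feature on a toy 4x4 image:
# (sum over the "black" left half) minus (sum over the "white" right half)
img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
left = rect_sum(ii, 0, 0, 4, 2)
right = rect_sum(ii, 0, 2, 4, 4)
haar = left - right
```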
Homomorphic Filtering
The Homomorphic filtering technique aims to reduce the illumination variation resulting from different lighting conditions [38][39][40]. The illumination variations can be normalized using the filter function. The filter is set to decrease the effects of illumination (primarily in the low-frequency components) and amplify the reflectance (in most of the high-frequency components). Previous work has shown that Homomorphic filtering is suitable for reducing cross-spectral appearance differences [27]. Therefore, in this paper, Homomorphic filtering was used to address the modality issue between two images captured at different spectral bands.
After the face detection stage, Homomorphic filtering is applied to the NIR and VIS face images to enhance the facial features. Both the NIR and VIS face images are processed through similar steps; for simplicity, both are annotated as I(x, y). The Homomorphic filtering steps applied to both the NIR and VIS images are as follows [27]:
1. The face images I(x, y) are transformed into logarithmic form: Z(x, y) = log(I(x, y)).
2. The logarithmic images Z(x, y) are then transformed into the frequency domain using the Fourier transform: Z(u, v) = F{Z(x, y)} = F_I(u, v). Here, Z(u, v) represents the image in the frequency domain, while F_I(u, v) represents the Fourier transform of log(I(x, y)).
3. The images are multiplied by a high-pass filter H(u, v), which corresponds to a convolution operation in the spatial domain: C(u, v) = H(u, v) Z(u, v). Here, C(u, v) denotes the filtered image in the frequency domain.
4. The filtered images in the spatial domain, C(x, y), are obtained by taking the inverse Fourier transform of C(u, v).
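The four steps above can be sketched as follows. The Gaussian high-emphasis transfer function and its parameters (gamma_l, gamma_h, c, d0) are common choices for H(u, v) and are our assumptions, not values from the paper; the final exponentiation, which undoes the initial log transform, is likewise the standard completion of the pipeline:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=10.0):
    """Homomorphic filtering sketch: log -> FFT -> high-emphasis filter ->
    inverse FFT -> exp.  gamma_l < 1 attenuates illumination (low
    frequencies); gamma_h > 1 boosts reflectance (high frequencies)."""
    z = np.log1p(img.astype(float))               # step 1: log transform
    Z = np.fft.fftshift(np.fft.fft2(z))           # step 2: frequency domain
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2        # squared distance from DC
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / d0 ** 2)) + gamma_l
    C = H * Z                                     # step 3: apply filter
    c_spatial = np.real(np.fft.ifft2(np.fft.ifftshift(C)))  # step 4: inverse
    return np.expm1(c_spatial)                    # undo the log transform

rng = np.random.default_rng(1)
img = rng.uniform(50, 200, size=(64, 64))         # stand-in face crop
out = homomorphic_filter(img)
```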
Band Limited Phase Only Correlation
After the Homomorphic filtering step, the resulting Homomorphic-filtered VIS and NIR images are transformed using the discrete Fourier transform (DFT). Here, VIS(n1, n2) and NIR(n1, n2) are the VIS and NIR images in the spatial domain, while A_VIS(k1, k2) and A_NIR(k1, k2) represent the amplitude components of the VIS and NIR images, respectively; θ_VIS(k1, k2) is the phase component of the VIS images, while θ_NIR(k1, k2) is the phase component of the NIR images. The normalized cross-power spectrum is then used to compute the phase differences between the VIS and NIR images, as described in [28].
Here, the complex conjugate of the NIR spectrum, NIR*(k1, k2), is used, and R_VISNIR(k1, k2) represents the normalized cross-power spectrum between the VIS and NIR images. The frequency band is limited so as to keep only the most important phase spectrum information; therefore, Band-Limited Phase-Only Correlation (BLPOC) can produce a maximum correlation peak between the two images. If the two images are similar, the BLPOC will result in a high correlation peak score; if the two images are different, the BLPOC will result in a low correlation peak score.
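A minimal BLPOC sketch is given below. The kept-band fraction, the small normalization guard, and all names are our assumptions; the paper's exact band limits are not reproduced here:

```python
import numpy as np

def blpoc(f, g, band=0.5):
    """Band-Limited Phase-Only Correlation peak between two same-size
    grayscale images; `band` is the kept fraction of the centered spectrum."""
    F = np.fft.fftshift(np.fft.fft2(f))
    G = np.fft.fftshift(np.fft.fft2(g))
    # Normalized cross-power spectrum: keep only the phase difference
    R = F * np.conj(G)
    R /= np.abs(R) + 1e-12
    # Band limiting: retain the central (low-frequency) portion
    rows, cols = f.shape
    kr, kc = int(rows * band / 2), int(cols * band / 2)
    cr, cc = rows // 2, cols // 2
    Rb = R[cr - kr:cr + kr, cc - kc:cc + kc]
    # Correlation surface; its peak height is the matching score
    r = np.real(np.fft.ifft2(np.fft.ifftshift(Rb)))
    return r.max()

rng = np.random.default_rng(2)
a = rng.uniform(size=(64, 64))
b = rng.uniform(size=(64, 64))
self_score = blpoc(a, a)    # identical images -> peak near 1.0
cross_score = blpoc(a, b)   # unrelated images -> much smaller peak
```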
A threshold is determined to assess the peak score values, and a decision is made based on this assessment. The decision rules are as follows:
• For an authentic user (the probe image is a member of the gallery data):
1. If the peak score > threshold, the probe matches the gallery image; the probe is verified/recognized.
2. If the peak score < threshold, the result is a false rejection.
• For a non-authentic user (the probe image is not a member of the gallery data):
1. If the peak score > threshold, the result is a false acceptance.
2. If the peak score < threshold, the probe does not match the gallery image; consequently, the probe is not verified/recognized.
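The decision rule above reduces to a single threshold comparison; the function name and example scores below are ours:

```python
def verify(peak_score, threshold):
    """Accept the probe when the BLPOC peak score exceeds the threshold."""
    return peak_score > threshold

# A genuine comparison scoring above the threshold is accepted;
# an impostor comparison scoring below it is rejected.
accept = verify(0.82, 0.5)
reject = verify(0.12, 0.5)
```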
Experimental Setting
The experiment was conducted using the Long-Distance Heterogeneous Face database (LDHF-DB) [9]. The whole database was collected over a period of one month at Korea University, Seoul. The LDHF-DB database consists of both frontal VIS and frontal NIR face images of 100 different subjects (70 males and 30 females), captured at 1 m, 60 m, 100 m, and 150 m standoff in an outdoor environment. The resolution of the images is 5184 × 3456 pixels, and the images are stored in both JPEG and RAW formats. Figure 3 shows samples of cross-spectral face images captured at 1 m and 60 m. Finally, we compared the performance of the proposed method with the existing CSCD baseline methods [9,24,25].
In each scenario, we also applied other photometric normalization filters, namely, TanTriggs filter and DCT filter, for comparison purposes. These filters were employed to filter the NIR and VIS images, replacing the Homomorphic filters (see Figure 2). We also evaluated a condition in which the face detection step is directly followed by BLPOC. We refer to this condition as "No-filter".
The experiment was performed using 100 NIR images as the probe images and 100 VIS images as the gallery images for both 1 m and 60 m distance. The total number of matching comparisons was 10,000 for each distance, while the total number of the genuine comparisons was 100, and that of impostor comparisons was 9900.
The recognition performance was evaluated using the Equal Error Rate (EER) and Receiver Operating Characteristic (ROC) curve. The EER is a single value at which the False Acceptance Rate (FAR) equals the False Rejection Rate (FRR), while the ROC curve plots the recognition rate (Genuine Acceptance Rate, GAR) against the FAR at different reference thresholds. The reference thresholds were calculated as in [28]. Table 1 presents the EER values and recognition rates of the proposed method and all comparison methods, calculated at six different BLPOC FBs, i.e., 10, 20, 30, 40, 50, and 60. From Table 1, we extracted the EER values of CS and CSCD and plotted them as a function of the BLPOC FB. Figures 4 and 5 show the effect of BLPOC FB variation on the filtering operations in CS (scenarios (i) and (ii)) and CSCD (scenario (iii)) face recognition, respectively.
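The EER can be computed from raw genuine and impostor score lists with a simple threshold sweep; this is a generic sketch (the paper's reference thresholds follow [28], which may differ in detail):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: the operating point where the False Acceptance Rate
    (impostor scores above threshold) meets the False Rejection Rate
    (genuine scores below threshold)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, best_eer = np.inf, 1.0
    for t in np.unique(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)     # impostors wrongly accepted
        frr = np.mean(genuine < t)       # genuine pairs wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return float(best_eer)
```

Perfectly separated score distributions give an EER of 0; fully overlapping ones approach 0.5.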
CS Face Recognition
In Figure 4, CS face verification at 1 m (scenario (i)) is plotted with solid lines, and that at 60 m (scenario (ii)) with dashed lines. In both scenarios, the combination of BLPOC and the Homomorphic filter generally resulted in the smallest EER values (i.e., the best performance) as the frequency band increased (see the solid blue line and the dashed pink line). In scenario (i), the EER values of the proposed method were 40.9%, 5.2%, 11%, 11%, 29%, and 31% as the frequency band increased from 10 to 60: the EER dropped sharply at frequency band 20 and then rose steadily. We therefore consider frequency band 20 the breakdown point, where the method yielded its smallest EER value. In scenario (ii), the EER values of the proposed method were 9.2%, 10%, 13%, 13%, 38%, and 40% as the frequency band increased; here, the EER rose steadily from frequency band 10 to 60, and there was no breakdown point.
On the other hand, the EER values of the comparison methods showed either irregular fluctuations or no particular breakdown point as the BLPOC FB increased. For example, the EER values of the combination of BLPOC and the TanTriggs filter (solid dark yellow line) in scenario (i) were 44.14%, 34.3%, 28.5%, 54%, 35.6%, and 80.6%. The EER values of BLPOC combined with the DCT filter (solid bright yellow line) in scenario (i) also fluctuated. In scenario (ii), the EER values of BLPOC combined with both the TanTriggs and DCT filters increased steadily as the BLPOC FB increased; in these cases, there were no breakdown points.
CSCD Face Recognition
As shown in Figure 5, the EER value of the proposed method was 24.12% at frequency band 10, decreased to 10.2% at frequency band 20, and then increased over the subsequent frequency bands. At the breakdown point, the proposed method achieved a 10.2% EER and a recognition rate of 93%, the best trade-off between EER value and recognition rate (see Table 1).
By contrast, the EER values of the comparison methods either fluctuated more or had breakdown points at larger BLPOC FBs. For example, the combination of No-filter and BLPOC had breakdown points at frequency bands 20 and 50. The EER of the DCT filter combined with BLPOC dipped slightly at frequency band 40 (the EER was 67.78% at frequency band 30, declined to 67% at frequency band 40, and increased to 77.4% at frequency band 50). The breakdown point of the TanTriggs filter combined with BLPOC was at frequency band 50, with a 76% EER. Wherever the breakdown points lay, the EER values of the comparison methods were far greater than those of the proposed method, meaning that their performance was poorer.
In Table 1, an anomaly is observed in the EER values at BLPOC-FB 10. The primary assumptions for the EER values in each FB are that the EER of scenario (i) should be the smallest, and the EER of scenario (ii) should be higher than that of scenario (i) but lower than that of scenario (iii). In other words, image recognition at 1 m should be easier than at 60 m, and image recognition in the CS scenario should be easier than in the CSCD scenario. However, at frequency band 10, most of the EER values of scenario (ii) were lower than those of scenarios (i) and (iii). In our proposed method, the filtering operation is followed by BLPOC. The filters attenuate the illumination (low-frequency) component and enhance the reflectance (high-frequency) component, which contains the detail of the images. BLPOC-FB 10 then restricts matching to only the 10 lowest frequencies, excluding the enhanced reflectance component. This exclusion may explain the anomaly.
As shown in this section, the combination of BLPOC and the Homomorphic filter at frequency band 20 achieved the best trade-off between EER value and recognition rate (scenarios (i) and (iii)). In scenario (ii), the proposed method achieved its best performance (the smallest EER) at BLPOC-FB 10, with a 9.2% EER; even at BLPOC FB 20, the proposed method still had the smallest EER among all methods. Thus, a further investigation of the performance of the photometric normalization schemes at BLPOC FB 20 is warranted.
Performance of Photometric Normalization
In this section, we present the performance (ROC curves) of each photometric normalization filter at BLPOC FB 20 in CS and CSCD face recognition. We plotted those ROC curves in Figures 6 and 7 for CS face recognition (scenario (i) and (ii)), and Figure 8 for CSCD face recognition (scenario (iii)). We also present the ideal ROC curve calculated according to the work in [1] to further analyze the performance of the proposed method.
CS Face Recognition
As shown in Figures 6 and 7, the proposed method achieved the highest performance in both short- and long-distance face recognition compared to the other methods, with 97% GAR at 1% FAR and 98% GAR at 1% FAR, respectively. Moreover, the GAR values at the long distance were steady, and the ROC curves remained above the ideal ROC curve. On the other hand, the GAR values of the comparison methods decreased, and their ROC curves moved closer to the ideal ROC curve when recognition was conducted at a long distance. For instance, in scenario (i), the overall ROC curve of the DCT filter was positioned at the second rank (with 97% GAR at 1% FAR, the same value as the proposed method). However, in scenario (ii), although the ROC curve of the DCT filter remained at the second rank, its GAR at 1% FAR fell below 90%. Moreover, in both scenarios, the ROC curve of the TanTriggs filter was mostly positioned below that of No-filter, implying that recognition without any filter was better than recognition using the TanTriggs filter. With the ideal ROC curve as a benchmark, the proposed method maintained its best performance while the comparison methods declined when recognition was performed at a long distance, indicating that the proposed method is more robust for long-distance CS face recognition.

Figure 8 shows that in CSCD face recognition, the ROC curve of the proposed method remained above the ideal ROC curve, with 90% GAR at 1% FAR. By contrast, the ROC curves of all other photometric normalization filters fell below the ideal ROC curve. This demonstrates that the combination of Homomorphic filtering and BLPOC is effective in cross-spectral cross-distance matching scenarios. Table 2 summarizes the matching performance using Homomorphic filtering in CSCD matching scenarios.
Homomorphic filtering provides better recognition performance in the CS and CSCD scenarios, with 5.2% EER at 97% GAR at a 1 m stand-off, 5.25% EER at 94% GAR at a 60 m stand-off, and 5.34% EER at 93% GAR, respectively. The recognition rates in the CS scenario decreased slightly as the distance became longer. Furthermore, the recognition rate of CSCD decreased compared to those of CS, but only by a small margin. Overall, the proposed method has shown steady performance for CS, and it is robust for the CSCD framework.

Table 3 presents the comparison of the proposed method with the baseline CSCD face recognition methods [9,24,25]. All baseline methods used the Long-Distance Heterogeneous Face Database (LDHF-DB). The method of Kang et al. resulted in 73.7% GAR at 1% FAR and an EER of 8.6%. The GAR of Maeng's method was 81% at 1% FAR, while the method of Shamia and Chandy resulted in 72% GAR at 1% FAR. The proposed CSCD method, which integrates Homomorphic filtering and BLPOC, outperformed the baseline approaches with 93% GAR at 1% FAR and an EER of 5.34%. As mentioned earlier, the baseline methods follow a three-step recognition procedure: preprocessing, feature extraction, and threshold/distance calculation to determine whether a face can be recognized/verified. In the preprocessing stage, a normalized image was obtained by employing histogram equalization and a smoothing operation [24], while photometric normalization was applied in [9] using Difference of Gaussians (DoG) and Centre Surround Divisive Normalization (CSDN), and in [25] using wavelets and DoG. Maeng extracted Scale Invariant Feature Transform (SIFT) features and applied a threshold to Euclidean distance values as a basis for verification [24]. Kang et al. used SIFT and Multi-Scale Local Binary Pattern (MLBP) features and a threshold on a cosine-based similarity measure for verification [9].
Meanwhile, Shamia and Chandy applied histograms of oriented gradients (HoG) and local binary patterns (LBP) with the Euclidean distance [25].
Comparison with Baseline CSCD and Other Methods
The state-of-the-art methods incorporating the three-step recognition procedure have shown insufficient performance. However, we hypothesized that Homomorphic filtering can increase the recognition performance of face recognition in CSCD frameworks. Thus, we conducted an additional study by integrating Homomorphic filtering with Local Binary Pattern (LBP) features and the Hamming distance. This approach resulted in 89.23% GAR at 1% FAR and an EER of 6.9%, which is better than those of the baseline methods. These results confirmed our hypothesis that the Homomorphic filter, as a means of photometric normalization, is more suitable for cross-spectral cross-distance face matching than other methods, such as DoG and a combination of wavelets and DoG. The proposed method combines the Homomorphic filter as a means of photometric normalization and BLPOC as a means of matching/recognition. Our experiments showed that this combination achieves the best performance for CSCD face recognition. In feature-based methods, the selection and representation of the most appropriate features are difficult. By contrast, the BLPOC method does not depend on a feature representation: all information from the phase component of an image is used directly in generating the BLPOC correlation peak between the two images.
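The additional study above pairs LBP features with a Hamming distance. A minimal sketch of that pairing is given below, assuming a basic 3×3 LBP and a normalized Hamming distance over the code maps; the exact configuration used in the study (neighbourhood, blocking) is not specified in the text:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code recording
    which of its 8 neighbours are >= the centre pixel."""
    img = np.asarray(img, dtype=np.float64)
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.int64)
    # clockwise neighbour offsets within each 3x3 window (centre excluded)
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for bit, (dr, dc) in enumerate(shifts):
        nb = img[dr:dr + c.shape[0], dc:dc + c.shape[1]]
        code |= (nb >= c).astype(np.int64) << bit
    return code.astype(np.uint8)

def lbp_hamming(a, b):
    """Normalized Hamming distance between two LBP code maps (0 = identical)."""
    diff = np.bitwise_xor(lbp_codes(a), lbp_codes(b))
    bits = int(np.unpackbits(diff.reshape(-1, 1), axis=1).sum())
    return bits / (diff.size * 8)
```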
In addition to feature- and phase-based methods, there has been an effort to use a deep learning approach to solve the face recognition challenge across different modalities [34]. However, the work in [34] employed deep learning methods for cross-modality environments that differ from those of the CSCD: it uses deep learning for either cross-device or cross-distance face recognition. The cross-distance experiments were conducted with the same device, while the cross-device tests were run at the same distance. We did not find any results related to infrared images in that study. Instead, the study reported recognition accuracy rates when the CNNs were tested using image depths, point clouds, and voxels derived from the visible-light domain. Thus, despite the work in [34] using several devices, recognition was performed in the same spectrum, namely the VIS spectrum. The reported highest accuracy rates for either cross-device or cross-distance recognition were lower than 20%.
The simulation results confirmed that Homomorphic filtering can reduce the illumination variation between images generated in the VIS and NIR spectra and, at the same time, increase the images' detail in both spectra. Thus, Homomorphic filtering can produce images with smaller variance between the two spectra and enrich the images' content. We argue that if these images are fed to a deep network, more distinctive features can be learned, which may eventually increase the performance of deep-learning-based CSCD recognition.
Conclusions
In this research, a cross-spectral cross-distance (CSCD) face recognition method using Homomorphic filtering and phase-based matching is proposed. Homomorphic filtering is able to produce photometrically normalized images across the cross-spectral dimension: the visual (VIS) spectrum and the near-infrared (NIR) spectrum. The Band-Limited Phase-Only Correlation (BLPOC) method is applied as a means of phase matching. The proposed CSCD method outperforms some standard approaches when evaluated in short-distance and long-distance cross-spectral face matching. It also outperforms some baseline methods in the CSCD framework. Homomorphic filtering is able to suppress uneven illumination in cross-spectral images; therefore, BLPOC can improve the recognition of cross-spectral faces at various distances. The proposed CSCD method resulted in the highest GAR of 93% at 1% FAR, with an EER of 5.34%. Applying deep learning approaches to further enhance the CSCD face recognition performance will be our future work.

Data Availability Statement: The study did not report any data.
Bit Error Rate Performance Improvement for Orthogonal Time Frequency Space Modulation with a Selective Decode-and-Forward Cooperative Communication Scenario in an Internet of Vehicles System
Orthogonal time frequency space (OTFS) modulation has recently found its place in the literature as a much more effective waveform in time-varying channels. It is anticipated that OTFS will be widely used in the communications of smart vehicles, especially those considered within the scope of the Internet of Things (IoT). There are efforts in the literature on customized traditional point-to-point single-input single-output (SISO)-OTFS systems, but their BER performance seems rather low. It is possible to use cooperative communications in order to improve BER performance, but there are very few OTFS studies in the area of cooperative communications. In this study, to the best of the authors' knowledge, it is shown for the first time in the literature that better performance is achieved for OTFS waveform transmission in a selective decode-and-forward (SDF) cooperative communication scenario. In this context, by establishing a cooperative communication model consisting of a base station/source, a traffic sign/relay and a smart vehicle/destination moving at a constant speed, an end-to-end BER expression is derived. SNR-BER analysis is performed with this SDF-OTFS scheme, and it is shown that a superior BER performance is achieved compared to the traditional point-to-point SISO-OTFS structure.
Introduction
Internet of Things (IoT) aims to connect objects, including drones, high-speed trains, mobile users and the like, at any time and place [1]. The growing spread of IoT, Internet of Vehicles (IoV), Vehicle-to-Vehicle (V2V) and Vehicle-to-Everything (V2X) communication technologies has received widespread attention. IoV, V2I, V2V and V2X technologies also have great potential as one of the main scenarios of next-generation mobile communication technologies. Multimedia transmission and transmission for autonomous driving can be considered as application examples within these technologies in intelligent vehicles, known as smart vehicles. In this context, significant improvements are needed in terms of ultra-high reliability and high quality of service. Formal methods used in the IoT environment for connected vehicle protocols can be found in [2][3][4].
IoT systems have to be able to operate in unpredictable mobile environments, such as moving vehicles [5]. Mobile communication technologies are developing according to growing demands. The demands for mobile IoT-integrated device communications are seen as important because it is predicted that the number of mobile IoT-integrated devices will increase, and a large portion of these devices are predicted to be smart vehicles, smart transport systems and smart traffic systems. Mobility-related issues are important in this scope [6].
Orthogonal frequency division multiplexing (OFDM) is a waveform commonly used in fourth-generation (4G) and fifth-generation (5G) cellular communication systems. OFDM has found its place in practice as a good solution against the inter-symbol interference (ISI) effects that usually appear in wideband communication channels. However, the Doppler effects caused by high mobility cause performance degradation in existing systems that use OFDM waveform transmission in mobile communications. Time-varying channels arising from mobile environments create inter-carrier interference (ICI) effects on OFDM waveforms. For this reason, a new, appropriate waveform known as orthogonal time frequency space (OTFS) modulation has been developed, especially for time-varying channels. OTFS modulation has been developed to deal with time-varying fading as well as Doppler spread in the channel under high-mobility conditions. OTFS can transform the time-varying channel into one that is stable and separable in the delay-Doppler domain, and it can bypass the fast-fading channel characteristic. At the same time, OTFS places the information in the delay-Doppler domain instead of the time-frequency domain to achieve full time and frequency diversity. Additionally, channel sparsity in the delay-Doppler domain can also be utilized by OTFS to enhance system performance. OTFS is convenient for mobile communications, especially under high-mobility conditions, as well as for Internet of Things (IoT), ultra-reliable low-latency communications and advanced mobile broadband scenarios. Information symbols placed in the delay-Doppler domain can be converted to the standard time-frequency domain used in conventional waveforms such as OFDM [2,7,8]. OTFS for vehicular networks (such as drones, high-speed trains and automobiles), OTFS for underwater acoustic communications, OTFS for non-terrestrial networks (such as LEO satellite communication networks), and OTFS for high-frequency-band (mmWave and THz bands) communications can be considered as practical examples or case studies of OTFS modulation [1].
One of the most important distortion effects on transmitted signals in wireless channels is fading caused by multipath propagation. One of the techniques to overcome fading effects is to use multi-antenna systems, such as multi-input multi-output (MIMO) systems. This technique falls within the scope of antenna diversity. In recent years, research on MIMO structures and code design suitable for these structures has found wide coverage in the literature. Although transmitter antenna diversity methods are particularly convenient for the base stations of cellular systems, they are not very suitable for mobile terminals in terms of size, cost and hardware complexity. In addition, the powerful shadowing effect caused by crowded vehicles and buildings can lead to serious performance loss, especially in V2V and V2X communications. As a solution to this problem, the "cooperative diversity or cooperative communication" technique has been used. In cooperative communication, the source-transmitter provides transmitter diversity by sending its information to the destination-receiver both directly and through one or more cooperators (relays) [9]. Cooperative communication technology achieves cooperative diversity through relays, which can cope with the impact of the shadowing effect and improve communication reliability. In this way, the effect of a less costly MIMO system with high efficiency is achieved [2].
One of the cooperative diversity methods in which users can act as relays is the "amplify and forward (AF)" method. It is a cooperative communication method first proposed by the authors of [10]. In accordance with this method, the other cooperating user amplifies and retransmits the signal coming from the source user.
Another method, the "decode and forward (DF)" method, was first proposed by the authors of [11]. In accordance with this scenario, the other cooperative user decodes the signal coming from the source user and transmits it digitally again.
Another method, "coded cooperation", was first proposed by the authors of [12]. In accordance with this proposed method, the signal coming from the source user is re-encoded by another auxiliary user and sent to the destination.
In the "selection relaying" method mentioned in [13], the focus is on the amplifier gain, which depends on the fading coefficient between the source and the relay. If the measured gain is below a specific threshold, the source continues its transmission to the destination. If the measured gain is above the threshold, the relay transmits using AF or DF to achieve diversity gain.
In [14], the relay remains silent when the S-R link is in outage and assists only when the mutual information, accounting for the relay self-interference, is greater than R bit/s/Hz, i.e., when an outage does not occur.
Previous works such as [15,16] have emphasized the performance degradation caused by terminal mobility in cooperative communication. On the other hand, this problem can be reduced by exploiting the robustness of OTFS in cooperative communication [2,17-21]. For example, in the study [2], the proposed cooperative OTFS system with AF and DF protocols achieved gains of approximately 15.1 dB and 12.0 dB, respectively, compared to the non-cooperative OTFS system. The proposed cooperative OTFS system beats the traditional non-cooperative OTFS system, as better diversity gains and end-to-end SNR can be obtained with the help of relaying. In [17], the authors state that the natural robustness of OTFS can mitigate the node mobility problem in collaborative communications; therefore, they present an analysis of the end-to-end performance of MIMO-OTFS in decode-and-forward (DaF) cooperative systems. In [18], the authors highlight cooperative communication as an attractive model for overcoming the restrictions of conventional point-to-point OTFS systems; in this context, they present an end-to-end performance analysis of OTFS index modulation in the DaF cooperative system. In [20], it is stated that OFDM systems have limitations, such as sensitivity to the Doppler effect, in high-speed mobile conditions. To solve this problem, OTFS-assisted cooperative transmission (CT) for UAV swarms is proposed, which can overcome the above-mentioned limitations. In the study [21], the presence of a cooperative UAV transmitting jamming signals using OTFS modulation was shown to enhance the security of the legitimate LEO SatCom link. The study [22] emphasizes that UAV cooperation can perform significantly better than a non-cooperative case with respect to the outage probability of OTFS-based LEO-SatCom transmission. Therefore, there is growing attention on understanding the performance of OTFS in relay systems under high-mobility circumstances.
Selective DF protocol-based cooperative communication, wherein the relay retransmits the symbol to the destination on the condition that the current SNR at the relay is greater than a threshold, is considered in [12]. That paper presents the end-to-end performance of a selective DF-based MIMO-OSTBC (Orthogonal Space Time Block Code) collaborative wireless system over time-varying fading channels. In [23], better end-to-end error performance was obtained using MIMO and STBC together with the SDF protocol, compared to the AF and DF protocols. In [24], it is expressed that the SDF-based collaboration protocol can be used with MIMO techniques; to further improve the PEP performance of the cooperative system, STBC and maximal ratio combining (MRC) are used. In the study [25], it was stated that, compared to the DF protocol, SDF-supported relays can prevent the transmission of incorrect information to destinations, thereby increasing system performance. It has been emphasized that the SDF protocol is advantageous over its classical DF counterparts. As mentioned above, the SDF protocol is a recently used method in cooperative communication [15,23-25]. In our work, we have also improved system performance by using OTFS modulation in an SDF protocol or scheme in which the receiver is mobile.
There are efforts in the literature on customized traditional point-to-point single-input single-output (SISO)-OTFS systems, but their BER performance seems rather low. It is possible to use cooperative communications in order to improve BER performance, but there are very few OTFS studies in the area of cooperative communications. To the best of our knowledge, we also address for the first time in the literature the improvement of the bit error rate (BER) performance of a mobile receiver by using the selective decode-and-forward (SDF) scheme in combination with OTFS transmissions within the scope of cooperative communications.
System Model
In this study, a single-relay structure is taken into account. In our system model, there is a base station as the source, a traffic sign as the relay and a moving vehicle as the destination, as seen in Figure 1. The direction of the moving vehicle is perpendicular to the source. This system model is very practical, because base stations and traffic signs are frequently encountered on roads today within the scope of 4G/5G cellular communication systems. In this scenario, only a base station and a traffic sign with a receiver/transmitter feature are sufficient; therefore, this simple model is considered. The focus is on improving the performance of the data (internet) connection when a moving vehicle communicates with the base station. When the bit error rate (BER) of the transmissions increases, interruptions or outages in the data (internet) connection also increase. This system model is proposed in order to minimize the BER.
In this system model, transmissions are performed in two time slots in a downlink scenario. These time slots are expressed as transmission phases. Different protocols can be used when the relay processes the signal. In the proposed method, an SDF protocol or scheme is used, and this SDF protocol runs in two phases. In the first phase, the source broadcasts the symbol, and the symbol is received by the destination and the relay. In the second phase, only if the symbol transmitted by the source is correctly decoded at the relay does the relay retransmit it to the receiver/destination.
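The two-phase selection rule can be sketched as follows. This is a simplified, noiseless illustration in which clearing an SNR threshold on the source-relay link stands in for "symbol correctly decoded"; the function and parameter names are ours, not the paper's:

```python
def sdf_phase_outputs(symbol, h_sd, h_rd, snr_sr_db, threshold_db=10.0):
    """Two-phase SDF: the destination always receives the direct copy;
    the relay forwards only when its source-relay SNR clears the threshold."""
    phase1 = h_sd * symbol                       # Phase 1: direct S -> D link
    relay_forwards = snr_sr_db > threshold_db    # SDF selection rule
    phase2 = h_rd * symbol if relay_forwards else 0.0  # Phase 2: R -> D, if decoded
    return phase1, phase2, relay_forwards
```

When the relay stays silent, the destination falls back on the direct copy alone, which is exactly how the SDF protocol avoids forwarding erroneous symbols.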
Phase 1: Source Broadcasts Symbol to Destination and Relay (for Destination)
The source broadcasts to the receiver (destination) and relay in this phase. It is assumed that there are two paths in each transmission in this system model. One of them is the line-of-sight (LOS) path and the other is the reflected path. In this phase, the direct path from the source to the destination (s,d) is considered the LOS path, and the path from the source to the relay and then to the destination is considered the reflected path (s,r,d).
For the destination, the received time-domain signal is expressed with the point-to-point single-input single-output (SISO) model as follows:

r_d(t) = h_s,d s(t − τ_s,d) + h_s,r,d e^{j2π ν_s,r,d t} s(t − τ_s,r,d) + n(t),   (1)

where h_s,d is the channel gain of the LoS path, τ_s,d is the delay shift of the LoS path, s(t) is the transmitted signal from the source, h_s,r,d is the channel gain of the NLoS (reflected) path, ν_s,r,d = (f_c v cos θ)/c = υ_r,d is the Doppler frequency (shift) of the reflected path, and τ_s,r,d is the delay shift of the reflected path.
For the two propagation paths, a specific 2-tap doubly selective time-variant channel impulse response (CIR) for this transmission is

h(τ, t) = h_s,d δ(τ − τ_s,d) + h_s,r,d e^{j2π ν_s,r,d t} δ(τ − τ_s,r,d);

when s(t) passes through this channel, Equation (1) is obtained.
Sensors 2024, 24, 5324
To make the mathematical expressions for the time-varying channel above easier to understand, we present below the received signal from the source to the destination, in the time domain, in vector and matrix forms in the context of OTFS modulation:

r = H_s,d s + n,

where s is the transmitted signal vector in the time domain and n is the additive Gaussian noise vector.
The general mathematical expression of the channel matrix H_s,d is

H_s,d = Σ_{i=1}^{P} h_i Π^{l_i} Δ^{k_i},   (4)

where P is the number of channel paths, l_i = (M∆f)τ_i represents the delay taps (bins as integer numbers), k_i = (N/∆f)ν_i represents the Doppler taps (bins as integer numbers), ∆f is the subcarrier spacing, M is the number of subcarriers and N is the number of time slots. Π denotes the MN × MN cyclic-shift matrix, while Δ^{k_i} denotes the MN × MN diagonal Doppler matrix, with Δ = diag(w^0, w^1, …, w^{MN−1}) and w = e^{j2π/(MN)}.
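A minimal NumPy sketch of this channel construction; the cyclic-shift direction and the Doppler-ramp convention follow the common OTFS formulation and may differ in sign or ordering from the paper's exact definitions:

```python
import numpy as np

def otfs_channel_matrix(h, l, k, M, N):
    """Build the MN x MN time-domain channel H = sum_i h_i * Pi^l_i * Delta^k_i,
    with Pi the cyclic (delay) shift and Delta a diagonal Doppler phase ramp."""
    MN = M * N
    Pi = np.roll(np.eye(MN), 1, axis=0)              # cyclic-shift matrix
    w = np.exp(2j * np.pi / MN)
    Delta = np.diag(w ** np.arange(MN))              # diag(w^0, ..., w^(MN-1))
    H = np.zeros((MN, MN), dtype=complex)
    for hi, li, ki in zip(h, l, k):
        H += hi * np.linalg.matrix_power(Pi, li) @ np.linalg.matrix_power(Delta, ki)
    return H
```

With the Phase 1 parameters of this model (P = 2, l = [0, 1], k = [0, 1], M = N = 2), the first tap contributes h_1·I_4 (since Π^0 = Δ^0 = I) and the second contributes h_2·ΠΔ.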
In vectorized form, the received signal from the source to the destination in the delay-Doppler domain can be written as

y = H_eff,s,d x + ñ,   (7)

where H_eff,s,d = (F_N ⊗ G_rx) H_s,d (F_N^H ⊗ G_tx) is the effective channel matrix, x is the transmitted signal in the delay-Doppler domain, and ñ = (F_N ⊗ G_rx)n is the noise vector. F_N and F_N^H denote the N-point FFT matrix and the N-point IFFT matrix, respectively. For rectangular waveforms, G_tx = G_rx = I_M, so H_eff,s,d^rect = (F_N ⊗ I_M) H_s,d (F_N^H ⊗ I_M); Equation (7) can then be transformed accordingly.
First Path (LoS Path): Since the vehicle moves in the V direction, the receiver moves perpendicular to the source. The first path (LoS path) is the shortest path, for i = 1. There is no delay or Doppler shift in that channel. Therefore, the channel corresponds to the zero-delay, zero-Doppler first channel case presented in [24], and l_1 = 0, k_1 = 0 are obtained.
Since the movement direction of our vehicle is in the V direction, it moves perpendicularly to the source, as stated before. For this reason, the velocity vector component along the LoS path is zero. Since there is no velocity component in the source direction, the Doppler effect for the LoS path is zero, and therefore the υ_i value will be zero (very low). Therefore, the k_i value is taken as 0.
Since the LoS path is the shortest path, the delay will be minimal and the τ_i value will be low. Therefore, the l_i value can be accepted as 0.
Second Path (NLoS Path-Reflected Path): Since the vehicle moves in the V direction, the receiver moves perpendicular to the source.However, there is a velocity component in the relay direction.Additionally, path 2 (NLoS path) is the longest path for i = 2.There are delay and Doppler shifts in that channel.Therefore, the channel with both delay and Doppler in the fourth channel case, presented in [22], and l 2 = 1, k 2 = 1 are obtained.
Since the movement direction of our vehicle is in the V direction, it moves perpendicularly to the source, as we stated before.For this reason, the velocity vector component occurs along the NLoS path.Since there is a velocity component in the relay direction, the Doppler effect also occurs for the NLoS path, and therefore, the υ i value will be greater than zero.Therefore, the k i value can be accepted as 1.
Since the NLoS path is the longest, both the delay value and τ_i will be high. Therefore, l_i can be accepted as 1.
According to our model, as expressed above, P = 2, and k_i = {0, 1} and l_i = {0, 1} are defined for Phase 1 for the destination.
When M = 2 and N = 2 are taken into account, Equation (4) is transformed accordingly. Substituting Π^0 and Δ^0 into the equation, the channel matrix H^1_s,d is obtained; when i = 2, H^2_s,d = h_2 Π^1 Δ^1 is obtained, from which the effective channel matrix H^rect_eff,s,d for rectangular waveforms follows (Sensors 2024, 24, 5324).

2.2. Phase 1: Source Broadcasts Symbol to Destination and Relay (for Relay)

The source broadcasts to the receiver (destination) and relay. For the relay, the received time-domain signal is expressed with the point-to-point SISO model as follows: where h_s,r is the channel gain of the LoS path, the Doppler shift of the LoS path ν_s,r = 0, τ_s,r is the delay shift of the LoS path, s(t) is the transmitted signal from the source, h_s,d,r is the channel gain of the NLoS (reflected) path, ν_s,d,r is the Doppler frequency of the NLoS path, and τ_s,d,r is the delay of the NLoS path.
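As a numerical sanity check, the per-path construction used above (H_i = h_i Π^{l_i} Δ^{k_i}, with P = 2, M = N = 2, so MN = 4) can be sketched in pure Python. Here Π is the MN × MN forward cyclic-shift (delay) matrix and Δ = diag(z^0, …, z^{MN−1}) with z = e^{j2π/MN} models Doppler; the helper names and the path gains (1.0 and 0.5) are illustrative assumptions, not values from the paper:

```python
import cmath

MN = 4  # M = N = 2, so the matrices are MN x MN = 4 x 4

def matmul(A, B):
    """Naive complex matrix multiply for MN x MN matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(MN)) for j in range(MN)]
            for i in range(MN)]

def matpow(A, p):
    """A raised to integer power p, with A**0 = identity."""
    R = [[1.0 if i == j else 0.0 for j in range(MN)] for i in range(MN)]
    for _ in range(p):
        R = matmul(R, A)
    return R

# Pi: forward cyclic-shift (delay) permutation matrix.
Pi = [[1.0 if i == (j + 1) % MN else 0.0 for j in range(MN)] for i in range(MN)]

# Delta: diagonal Doppler matrix with z = exp(j*2*pi/MN).
z = cmath.exp(2j * cmath.pi / MN)
Delta = [[z ** i if i == j else 0.0 for j in range(MN)] for i in range(MN)]

def channel_matrix(gains, delays, dopplers):
    """H = sum_i h_i * Pi**l_i * Delta**k_i (time-domain OTFS channel)."""
    H = [[0.0] * MN for _ in range(MN)]
    for h, l, k in zip(gains, delays, dopplers):
        T = matmul(matpow(Pi, l), matpow(Delta, k))
        for i in range(MN):
            for j in range(MN):
                H[i][j] += h * T[i][j]
    return H

# Phase 1, source-to-destination: P = 2 paths with (l, k) = (0, 0) and (1, 1).
H_sd = channel_matrix(gains=[1.0, 0.5], delays=[0, 1], dopplers=[0, 1])
```

The LoS path (l = k = 0) contributes a scaled identity, while the NLoS path places its gain on the first cyclic sub-diagonal with Doppler phase rotations, matching the H^1_s,d + H^2_s,d structure described in the text.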
For two propagation paths, the specific 2-tap doubly selective time-variant channel impulse response (CIR) for this transmission is given below. When s(t) passes through this channel, Equation (24) is obtained.
To make the mathematical expressions for the time-varying channel above easier to understand, we present below the received signal from the source to the relay in the time domain in vector and matrix forms, in the context of OTFS modulation. The general mathematical expression of the channel matrix H_s,r, the effective channel matrix H^rect_eff,s,r and the vectorized form of the received signal in the delay-Doppler domain are obtained similarly, using Equations (4)-(8).
First Path (LoS Path): Path 1 (LoS path) is the shortest path, for i = 1. Since the relay is stationary, the Doppler shift for the LoS path is zero (very low), so ν_i is zero. Therefore, k_1 = 0 is obtained.
Since the LoS path is the shortest path, the delay value will be minimal and τ_i will be low. Therefore, l_1 = 0 is obtained.
Second Path (NLoS Path): Since the vehicle moves in the V direction, there is a velocity component in the relay direction, as mentioned before. Additionally, path 2 (NLoS path) is the longest path, for i = 2. Therefore, this corresponds to the channel with both delay and Doppler (the fourth channel case, presented in [22]), and l_2 = 1, k_2 = 1 are obtained. Because of the velocity component in the relay direction, a velocity-vector component occurs along the NLoS path and a Doppler shift arises on it, so ν_i will be greater than zero and k_i can be considered as 1.
Since the NLoS path is the longest path, both the delay value and τ_i will be high. Therefore, l_i can be considered as 1.
For the LoS and NLoS paths, the values of k_i and l_i are 0 for i = 1 and 1 for i = 2, the same as in the source-destination scenario. Therefore, the channel matrix H_s,r and the effective channel matrix H^rect_eff,s,r can be obtained from the source-destination Equations (9)-(23) written above.
In vectorized form, the received signal from the source to the relay in the delay-Doppler domain can be written for rectangular waveforms:
Phase 2: Relay Retransmits the Symbol to Destination
The relay afterwards retransmits to the destination only if it is able to correctly decode within the SDF scheme. For the destination, the received time-domain signal is expressed with a point-to-point SISO model as follows: where h_r,d is the channel gain of the LoS path, ν_r,d is the Doppler shift of the LoS path, s(t) is the transmitted signal from the source, τ_r,d is the delay shift of the LoS path, h_r,s,d is the channel gain of the NLoS (reflected) path, and τ_r,s,d is the delay shift of the NLoS path. The time-variant channel impulse response (CIR) for this transmission is given below. To make the mathematical expressions for the time-varying channel easier to understand, we present below the received signal from the relay to the destination in the time domain in vector and matrix forms, in the context of OTFS modulation. First Path (LoS Path): Since the vehicle moves in the V direction, the receiver has a velocity component in the direction of the relay. Path 1 (LoS path) is the shortest path, for i = 1. Therefore, this corresponds to the channel with zero delay and one Doppler (the third channel case, as presented in [22]), and l_1 = 0, k_1 = 1 are obtained.
Since the direction of motion of our vehicle is V, there is a velocity component in the direction of the relay, as mentioned before. The Doppler shift for this path is therefore greater than zero, so ν_i > 0 and k_i is taken as 1.
Since the LoS path is the shortest path, the delay value will be minimal and τ_i will be low. Therefore, l_i can be taken as 0.
Second Path (NLoS Path): In this scenario, the source acts as a reflector, since it reflects the signal sent by the relay, which retransmits the incoming signal under the SDF protocol.
Since the vehicle moves in the V direction, the receiver moves perpendicularly to the source. Path 2 (NLoS path) is the longest path, for i = 2. Therefore, this corresponds to the channel with one delay and zero Doppler (the second channel case, as presented in [24]), and l_2 = 1, k_2 = 0 are obtained. Because the vehicle moves perpendicularly to the source, as mentioned before, the velocity-vector component along the NLoS path is 0; with no velocity component in the source direction, the Doppler shift for the NLoS path is zero (very low), so ν_i is zero and k_i is taken as 0.
Since the NLoS path is the longest path, the delay value will be maximal and τ_i will be high. Therefore, l_i can be taken as 1.
According to our model, as expressed above, P = 2, and k_i = {1, 0} and l_i = {0, 1} are defined for Phase 2 for the destination.
When M = 2 and N = 2 are taken into account, Equation (4) is transformed accordingly. In vectorized form, the received signal from the relay to the destination in the delay-Doppler domain can be written for the rectangular waveform:
End-to-End BER Calculation for the Proposed SDF Scheme
The system model expressions for the three channels in the delay-Doppler domain are given below. In our proposed method, we calculate the end-to-end bit error rate (BER) at the destination with OTFS waveform transmissions by applying the SDF scheme, as presented below.
Using the total probability formula, the end-to-end BER at the destination can be expressed as

Pr(e) = Pr(e ∩ ∅) + Pr(e ∩ ∅̄),

where the event ∅ indicates an error at the relay, the event ∅̄ indicates no error at the relay, and the event e indicates an end-to-end error at the destination. The total probability formula with conditional probabilities is then

Pr(e) = Pr(e|∅)Pr(∅) + Pr(e|∅̄)Pr(∅̄), (42)

where Pr(e|∅), related to Equation (38), is the probability of error at the destination given an error at the relay; Pr(∅), related to Equation (39), is the probability of error at the relay; Pr(e|∅̄) is the probability of error at the destination when the relay decodes accurately; and Pr(∅̄) is the probability of no error at the relay. At high SNR, the probability of error at the relay is assumed to approach zero, and hence the probability of no error at the relay is assumed to be one, Pr(∅̄) ≈ 1. Then, Equation (42) is transformed to

Pr(e) ≈ Pr(e|∅)Pr(∅) + Pr(e|∅̄). (43)

Each model in Equations (38)-(40) can be treated the same as the classical multiple-input multiple-output (MIMO) system model structure with vector-matrix notation, because the channel matrices are square MN × MN matrices in all structures.
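The high-SNR combination of Equation (43) can be sketched as a small helper; the function name and the example link BERs below are illustrative assumptions, not values from the paper:

```python
def end_to_end_ber(p_e_given_relay_error, p_relay_error, p_e_given_relay_ok):
    """
    High-SNR approximation of the SDF total-probability expression:
      Pr(e) ~= Pr(e|relay errs) * Pr(relay errs) + Pr(e | relay decodes),
    where Pr(no error at relay) ~= 1 has already been substituted.
    """
    return p_e_given_relay_error * p_relay_error + p_e_given_relay_ok

# Illustrative link BERs: when both error-path terms are small, the
# cooperative term dominated by the combined reception sets the floor.
ber = end_to_end_ber(1e-2, 1e-2, 1e-4)
```

Here `p_e_given_relay_error` plays the role of the direct S-D link BER (the relay stays silent on a decoding failure) and `p_e_given_relay_ok` that of the combined S-D plus R-D reception.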
The approximate average BER expression for a zero-forcing (ZF) MIMO system with BPSK modulation in a Rayleigh fading channel is given as in [26], where SNR is the signal-to-noise ratio at the receiver and L = r − t + 1 (with r ≥ t) is the diversity order of ZF, r being the number of receive antennas and t the number of transmit antennas in the MIMO system.
The approximate average BER of a t × t MIMO channel (r = t, hence L = 1) in Rayleigh fading with ZF equalization is the same as the BER derived for the 1 × 1 (SISO) Rayleigh fading channel, given in Equation (45). Because the channel seen by the symbol transmitted from each spatial dimension is similar to a 1 × 1 (SISO) Rayleigh fading channel, Equation (44) reduces to Equation (45) when L = 1 is taken into account. Theoretical approximate average BER expressions for Pr(e|∅) and Pr(∅) can be obtained from the source-to-destination transmission given in Equation (38) and the source-to-relay transmission given in Equation (39), respectively. Furthermore, the theoretical approximate average BER expression for Pr(e|∅̄) can be obtained using the concatenated single-input multiple-output (SIMO) model given below, because the transmitted signal is the same but the received signal differs in each transmission.
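Equation (45) for the 1 × 1 Rayleigh channel is, presumably, the standard closed form Pb = ½(1 − √(SNR/(1 + SNR))) for average BPSK BER over Rayleigh fading; a minimal sketch (function names are ours) evaluates it:

```python
import math

def ber_bpsk_rayleigh_siso(snr_linear):
    """Average BPSK BER over a 1x1 Rayleigh fading channel (diversity L = 1)."""
    return 0.5 * (1.0 - math.sqrt(snr_linear / (1.0 + snr_linear)))

def db_to_linear(snr_db):
    """Convert an SNR in dB to its linear value."""
    return 10.0 ** (snr_db / 10.0)
```

At SNR = 0 the expression gives the worst-case 0.5, and at high SNR it falls off as roughly 1/(4·SNR), the slope-one behaviour expected for L = 1.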
This model can be regarded as the same as the MIMO system model structure in which r > t.Therefore, the approximate average BER expression for this model can be obtained using Equation (47).
After obtaining Pr(e|∅), Pr(∅) and Pr(e|∅̄), the end-to-end BER at the destination is obtained, as given below, using Equation (43):
Performance Evaluation
In this section, the BER performance of the proposed SDF-OTFS scheme is evaluated and presented. Evaluations and simulations are performed in MATLAB R2018a. P = M = N = 2 and BPSK modulation are used in all simulations. The H and H^rect_eff matrices obtained for the source-to-destination and source-to-relay channels in Section 2 are used exactly as derived. Maximum likelihood (ML) detection is used at the relay and the destination, since [1] showed that ML detection is convenient and easy to implement for small values of MN in OTFS receivers.
Figure 2 shows the BER performance of different SISO-OTFS input/output models as a function of SNR with the abovementioned parameters. Simulations were completed with 10^6 iterations; in each iteration, bit errors are counted after regenerating a new random transmitted bit sequence, new channel coefficient gains and new noise at the receiver. The first model is the OTFS time-domain model, in which there is direct transmission between the transmitter and receiver antennas as given in Equation (3); the H channel matrix is used, and the simulated SNR-BER curve is obtained from this model. The second is the OTFS delay-Doppler domain model; the H^rect_eff channel matrix given in Equation (8) is used, and the simulated SNR-BER curve is obtained from this model. From Figure 2, it can be seen that when these two channel models and the theoretical approximate average BER formula in Equation (45) are used, all of the SNR-BER performances overlap exactly. Furthermore, the same SNR-BER performances are obtained for similar parameters of SISO-OTFS in the literature (Figure 4 of [27] and Figure 11 of [1]).
In Figure 3, the BER performance of the proposed SDF-OTFS scheme is presented. Simulations were completed according to the end-to-end probability calculations given in the previous section. As shown in Figure 3, the proposed SDF-OTFS scheme outperforms the point-to-point SISO-OTFS models considered in non-cooperative cases. For the non-cooperative case, we can consider source-to-destination (S-D) transmission; this is the case where there is no relay and only point-to-point transmission occurs, and the performance result obtained is the same as in Figure 2. For verification purposes, different point-to-point SISO transmissions, such as source-to-relay (S-R) and relay-to-destination (R-D) transmissions, together with the theoretical approximate average BER obtained from the SISO channel, are also included; their performance results match those in Figure 2.
It was shown that the proposed SDF-OTFS model offers a better quality of communication service than the non-cooperative case. For example, while a 10^−5 BER is obtained at 22 dB SNR with the proposed SDF-OTFS scheme, nearly 4 × 10^−3 BER is obtained at 22 dB SNR with one of the SISO-OTFS models. To achieve 10^−4 BER, a 37 dB SNR is required in one of the SISO-OTFS models, while a 17 dB SNR is sufficient for the proposed SDF-OTFS scheme. At the 10^−4 BER level, approximately 100-times power efficiency is therefore achieved when the proposed SDF-OTFS scheme is used instead of the point-to-point SISO-OTFS model.
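The 100-times figure follows directly from the 20 dB gap (37 dB versus 17 dB), since a dB gap maps to a linear power ratio of 10^(ΔdB/10); a one-line sketch (names are ours):

```python
def power_ratio(db_difference):
    """Linear power ratio corresponding to a gap expressed in dB."""
    return 10.0 ** (db_difference / 10.0)

# SNR required by SISO-OTFS vs. the SDF-OTFS scheme at the 1e-4 BER level.
gap_db = 37.0 - 17.0
ratio = power_ratio(gap_db)  # 20 dB gap -> 100x power saving
```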
Conclusions
In this study, to the best of the authors' knowledge, it is addressed for the first time in the literature that high performance is achieved for OTFS waveform transmission with an SDF cooperative communication scenario. In this context, by establishing a cooperative communication model consisting of a base station/source and a traffic sign/relay, end-to-end BER expressions for a smart vehicle/destination moving at a constant speed have been derived. BER analysis was performed with this SDF scheme, and it was shown that a superior SNR-BER performance was achieved compared to the point-to-point SISO-OTFS model. With the proposed SDF-OTFS model, the door to better communication quality, a low probability of outage and high energy efficiency has been opened for the IoV applications that will be widely used in the future.
As future work, in order to reduce the total power consumption, which is a challenge for our method, an optimum power allocation method for the base station/source and the traffic sign/relay can be developed to minimize the end-to-end BER in the data (internet) communication of a smart vehicle moving at a constant speed. In this way, we can indirectly contribute to many areas, such as reducing traffic congestion and accidents and saving fuel/energy, by increasing the communication quality of smart vehicles as IoT or IoV elements.
Figure 1. System model of the SDF-OTFS system.
Figure 2. BER performances of the OTFS time-domain model, the OTFS delay-Doppler domain model and the theoretical approximate average BER in the SISO channel.
Since H_s,d = H^1_s,d + H^2_s,d, we sum the matrices H^1_s,d and H^2_s,d to obtain the H_s,d matrix.
Human Salivary Protein Histatin 5 Has Potent Bactericidal Activity against ESKAPE Pathogens
ESKAPE (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species) pathogens are characterized by multiple-drug resistance and cause an increasing number of nosocomial infections worldwide. Peptide-based therapeutics to treat ESKAPE infections might be an alternative to conventional antibiotics. Histatin 5 (Hst 5) is a salivary cationic histidine-rich peptide produced only in humans and higher primates. It has high antifungal activity against Candida albicans through an energy-dependent, non-lytic process, but its bactericidal effects are less well known. We found that Hst 5 has bactericidal activity against S. aureus (60–70% killing) and A. baumannii (85–90% killing) in 10 and 100 mM sodium phosphate buffer (NaPB), while killing of >99% of P. aeruginosa, 60–80% of E. cloacae and 20–60% of E. faecium was found in 10 mM NaPB. Hst 5 killed 60% of biofilm cells of P. aeruginosa but had reduced activity against biofilms of S. aureus and A. baumannii. Hst 5 killed 20% of K. pneumoniae biofilm cells but not planktonic cells. Binding and uptake studies using FITC-labeled Hst 5 showed that killing of E. faecium and E. cloacae required Hst 5 internalization and was energy dependent, while bactericidal activity was rapid against P. aeruginosa and A. baumannii, suggesting membrane disruption. Hst 5-mediated killing of S. aureus was both non-lytic and energy independent. Additionally, we found that spermidine-conjugated Hst 5 (Hst5-Spd) had improved killing activity against E. faecium, E. cloacae, and A. baumannii. Hst 5 or its derivative has antibacterial activity against five of the six ESKAPE pathogens and may be an alternative treatment for these infections.
INTRODUCTION
Bacteria causing nosocomial infections are increasingly becoming drug resistant, posing a serious health concern, especially in intensive care units (ICUs) and surgical wards. A recent report estimated a total of 722,000 such drug-resistant infections, with an astonishing 75,000 resulting in deaths (Magill et al., 2014). A group of bacterial pathogens including Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species (referred to as the ESKAPE pathogens) has been of particular concern in this regard (Rice, 2010). Two-thirds of all healthcare-associated infections are ESKAPE related (Boucher et al., 2009), and many of these bacteria use multiple drug-resistance mechanisms to "eskape" killing by both conventional and some newer-generation antibiotics (Rice, 2008).
Most nosocomial infections result from an exogenous inoculum, including the hospital environment and medical personnel, resulting in colonization of ESKAPE pathogens within various patient niches. E. faecium, K. pneumoniae, P. aeruginosa, and Enterobacter species are common residents of mucosal surfaces such as the oral cavity and the gastrointestinal tract (Keller et al., 1998; Podschun and Ullmann, 1998; Rice, 2010; Vu and Carvalho, 2011). S. aureus is a skin commensal, and A. baumannii is known to survive comparatively long on epithelial surfaces such as skin (Houang et al., 1998; Coates et al., 2014). Also, P. aeruginosa and K. pneumoniae were detected in 50 and 31%, respectively, of saliva samples from HIV-positive patients in hospital settings (Lopes et al., 2015). Thus, colonization of the oral cavity with K. pneumoniae, P. aeruginosa, and Enterobacter species can serve as a potential inoculum source for pneumonia (Sands et al., 2016). Furthermore, Candida albicans, a commensal fungus also present in the oral cavity, can contribute to a more robust P. aeruginosa lung infection when the two are present together (Lindsay and Hogan, 2014).
Salivary innate immunity is the first line of defense against transient microbes and pathobionts in the oral cavity (Salvatori et al., 2016). This is illustrated by the example of salivary Histatin 5 (Hst 5), a cationic protein that has strong fungicidal activity against C. albicans. Changes in the levels of Hst 5 associated with immunodeficiency may increase susceptibility to oral candidiasis caused by C. albicans (Khan et al., 2013). However, Hst 5 has been shown to be largely ineffective against oral commensal bacteria as well as cariogenic or periodontal pathogens (Devine and Hancock, 2002; Groenink et al., 2003; Dale and Fredericks, 2005).
However, the antimicrobial activity of Hst 5 against the ESKAPE pathogens has never been examined. Hst 5-mediated candidacidal activity is an energy-dependent, non-lytic process in which multiple intracellular targets are affected after Hst 5 has been transported to the cytosol in an energy-dependent manner. Given that most ESKAPE pathogens are not present in large numbers in the oral cavity of healthy humans, and do not cause any known oral diseases, we hypothesized that salivary Hst 5 may play a substantial role in keeping at least some of the ESKAPE pathogens in check within the oral environment. Here we show for the first time that Hst 5 demonstrates killing activity against all ESKAPE pathogens except K. pneumoniae. Killing of some ESKAPE pathogens was lytic in nature; however, as for C. albicans, more than one mechanism of killing seems to be involved in Hst 5 activity against ESKAPE bacteria.
Strains, Culture Conditions, and Peptides
All ESKAPE strains used in this study are clinical isolates and are listed in Table 1. These isolates were collected over the past
Bactericidal Assay
Bactericidal assays were performed by the microdilution plate method as described for candidacidal assays (Jang et al., 2010), with some modifications. Briefly, a single colony of each strain was inoculated into 10 mL of media and grown for 16 h at 37°C; cultures were then diluted to an OD600 of 0.1 in fresh media and incubated with shaking at 37°C to mid-log phase (OD600 ≈ 1.0). Cultures were centrifuged at 1800 × g for 3 min and washed three times with 10 mM sodium phosphate buffer (NaPB; pH 7.4). Cells were re-suspended (10^7 cells/mL) in 200 µL NaPB (control) or in NaPB containing 30 µM peptide, then incubated at 37°C (except for S. aureus, which was incubated at room temperature for optimal growth). Aliquots were removed after 1 min, 1 h and 5 h of incubation, serially diluted with phosphate-buffered saline (PBS; 137 mM NaCl, 10 mM sodium phosphate, 2.7 mM KCl, pH 7.4), plated and incubated for 24 h at 37°C to count surviving CFUs. Assays were performed in triplicate.
Percentage killing was calculated as [1 − (number of colonies from peptide-treated cells / mean number of colonies from control cells)] × 100%. Bactericidal assays were also performed in 100 mM NaPB and in PBS.
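The killing formula above can be written directly as a small helper (the function and variable names are ours, for illustration):

```python
def percent_killing(treated_cfu, control_cfus):
    """
    [1 - (treated CFU / mean control CFU)] * 100, as in the
    bactericidal assay; control_cfus holds the replicate control counts.
    """
    mean_control = sum(control_cfus) / len(control_cfus)
    return (1.0 - treated_cfu / mean_control) * 100.0

# Illustrative example: 30 surviving colonies vs. triplicate controls of ~100.
killing = percent_killing(30.0, [98.0, 100.0, 102.0])  # ~70% killing
```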
Live Cell Imaging of F-Hst 5 and PI Uptake by Time-Lapse Confocal Microscopy
Cells in mid-log phase were collected, washed three times and re-suspended in 10 mM NaPB. Cells (2 × 10^7 in 200 µL NaPB) were added to each well of an 8-chamber slide (Lab-Tek #155411) and allowed to settle for 15 min; then propidium iodide (PI; 2 µg/mL; Sigma) and F-Hst 5 (30 µM) were added. Time-lapse confocal images were recorded immediately after the addition of peptide and PI. Confocal images were acquired with a Zeiss LSM510 Meta confocal microscope (Carl Zeiss, Germany) using a Plan-Apochromat 63X/1.4 oil immersion objective. For E. faecium, S. aureus, K. pneumoniae, and E. cloacae, images were collected every 10 min; for A. baumannii and P. aeruginosa, every 2 min. In order to detect F-Hst 5 and PI simultaneously, the 488 nm line of the argon-ion laser and a 561 nm DPSS laser were directed over an HFT UV/488/561 beam splitter, and fluorescence was detected using a mirror or NFT 565 beam splitter in combination with a BP 500-550 filter for F-Hst 5 and an LP 575 or BP 650-710 filter for PI detection. ImageJ software was used for image acquisition and analysis.
Biofilm Killing Assays
To evaluate killing of bacterial biofilms by Hst 5, cells were inoculated at 1 × 10^7 cells/mL into 96-well plates and grown at 37°C and 90 RPM for 12 h to form biofilms. After 12 h, spent media was replaced with fresh media, and biofilms were grown for another 4 h under the same conditions. Media was then removed, and biofilms were incubated with 30 µM Hst 5 suspended in 10 mM NaPB with 10 µM propidium iodide (PI) for 1 h at 37°C, except for S. aureus, which was incubated at room temperature. After 1 h, biofilms were gently scraped from each well, diluted, and placed on microscope slides for image acquisition using a Zeiss Axio Observer.Z1 inverted fluorescence microscope and an Axiocam 503M camera. Percent killing was calculated as the number of PI-positive cells divided by the total number of cells, from at least 25 fields in two independent wells.
Hst 5 Killing Activity Determination in Energy Deprived Cells
Overnight cultures of E. faecium, S. aureus, A. baumannii, and E. cloacae were diluted and pre-incubated in 10% media with or without 10 mM NaN₃ at 37°C. P. aeruginosa cells were pre-incubated in 10 mM NaPB with 10 mM NaN₃ at 37°C because of altered cell viability in 10% media with NaN₃. After 3 h of incubation, cells were collected by centrifugation and washed twice with NaPB; cells were then used for the Hst 5 bactericidal assay in NaPB as described above.
Susceptibility Assay
Minimum inhibitory concentration (MIC) was determined by the broth microdilution method based on the guidelines of the Clinical and Laboratory Standards Institute (Clinical Laboratory Standards Institute, 2012), with some modifications. Briefly, all bacterial strains except K. pneumoniae were cultured in their respective media and grown overnight at 37°C. Cells were washed with NaPB, diluted in 10% Mueller-Hinton (MH) broth (Sigma) to a concentration of 5 × 10^6 CFU/mL and used as the inoculum. Hst 5 peptide was serially diluted in 10% MH broth (since it is inactive in undiluted broth) in 96-well flat-bottomed plates (Falcon). After adding an equal volume of the target bacterial suspension to the peptide solution, the 96-well plates were incubated at 37°C for 24 h. MIC values were determined visually as the lowest concentration of Hst 5 that inhibited growth.
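The visual MIC read-out described above amounts to picking the lowest tested concentration at which no growth is seen; a minimal sketch with a hypothetical helper and illustrative two-fold dilution values:

```python
def mic(concentrations_uM, visible_growth):
    """
    Lowest tested concentration (uM) with no visible growth after 24 h.
    concentrations_uM and visible_growth are parallel lists; returns None
    if growth was observed at every tested concentration.
    """
    inhibitory = [c for c, grew in zip(concentrations_uM, visible_growth)
                  if not grew]
    return min(inhibitory) if inhibitory else None

# Illustrative two-fold serial dilution: growth appears below 45 uM,
# so the MIC read-out for this hypothetical plate row is 45 uM.
row_mic = mic([90.0, 45.0, 22.5, 11.25], [False, False, True, True])
```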
Statistics
Statistical analyses were performed using GraphPad Prism software version 5.0 (GraphPad Software, San Diego, CA, USA) using unpaired Student's t-tests. Differences of P < 0.05 were considered significant. All experiments were performed at least thrice.
Hst 5 Has Strong Bactericidal Activity against Four ESKAPE Pathogens
In order to determine the bactericidal activity of Hst 5 against ESKAPE pathogens, six clinical isolates (Table 1) were tested using bactericidal assays (performed in 10 and 100 mM NaPB) to determine percent bacterial killing following incubation for 1 min, 1 h, and 5 h with 30 µM Hst 5. E. faecium showed time-dependent killing, since only 18.1 ± 8.9 percent of cells were killed by Hst 5 at 1 h, while killing was enhanced to 63.5 ± 3.5 percent after 5 h of incubation (Figure 1A). S. aureus cells incubated for only 1 min with 30 µM Hst 5 showed 28.3 ± 2.3 percent killing, which increased to 60.3 ± 5.7 and 69.7 ± 0.2 percent after 1 and 5 h, respectively (Figure 1B). An MRSA strain of S. aureus had lower but still significant killing by Hst 5 after 1 h (33.8 ± 3.5%). Interestingly, Hst 5 did not show any killing activity against K. pneumoniae for up to 5 h of incubation (Figure 1C). A. baumannii AB307-0294 had substantial sensitivity to Hst 5, with 31.4 ± 3.7, 90.0 ± 1.1, and 95. ± 2.3 percent killing at 1 min, 1 h, and 5 h, respectively (Figure 1D), while the A. baumannii HUMC1 strain had even higher sensitivity, with 100% killing at 1 h. Similarly, Hst 5 showed strong killing activity against P. aeruginosa 94-323-0635, with a faster rate of killing: at just 1 min of incubation, 83.2 ± 1.3 percent killing was observed, while over 99.9 percent of cells were killed after 1 h of treatment (Figure 1E). P. aeruginosa PAO1 had identical sensitivity to Hst 5, with 100% of cells killed at 1 h. E. cloacae sensitivity was similar to that of A. baumannii, with 20.3 ± 1.5, 66.1 ± 0.6, and 78.1 ± 1.7 percent of cells killed by Hst 5 at 1 min, 1 h, and 5 h, respectively (Figure 1F).
Since the activity of Hst 5 against C. albicans is abolished at higher phosphate buffer concentrations (Helmerhorst et al., 1999), Hst 5 killing activity against ESKAPE strains was also tested in a buffer of higher ionic strength. The killing activities of Hst 5 against E. faecium, P. aeruginosa, and E. cloacae were greatly reduced (Figures 1A,E,F) when assays were carried out in 100 mM NaPB. However, bactericidal activity against S. aureus in 100 mM NaPB was similar to that observed in 10 mM NaPB (Figure 1B), while killing efficiency for A. baumannii decreased by about 30 percent in 100 mM NaPB after 1 or 5 h of incubation (Figure 1D). The effects of PBS on killing were similar to those seen in 100 mM NaPB (data not shown).

Figure 1. E. faecium (A), S. aureus (B), K. pneumoniae (C), A. baumannii (D), P. aeruginosa (E), and E. cloacae (F) cells in exponential growth were exposed to 30 µM Hst 5 in 10 mM NaPB or 100 mM NaPB for 1 min, 1, and 5 h. Aliquots taken at different time points were diluted and plated. CFU were determined after 24 h. Error bars represent the standard errors from at least three independent replicates of each strain.
Based on these bactericidal results, we next tested growth inhibition by Hst 5 for these ESKAPE pathogens by determining MIC values in 10% MH broth (since Hst 5 is inactive in undiluted broth). Hst 5 MIC values of 38, 47, and 90 µM were observed for A. baumannii, P. aeruginosa, and E. cloacae, respectively, while MIC values for E. faecium and S. aureus could not be determined because media components inactivate Hst 5 (Table 2).
The activity of Hst 5 against ESKAPE pathogens grown as biofilms was determined after 16 h growth of each bacterium in 96-well plates (Table 2). Biofilms were treated with 30 µM Hst 5 for 1 h, dead cells were stained with PI, and percent killing was calculated. Neither E. faecium nor E. cloacae formed robust biofilms in our hands, so that Hst 5 killing could not be reliably determined for these two pathogens. As with planktonic cells, both strains of P. aeruginosa in biofilms were effectively killed by Hst 5 after 1 h (59.5%); however, Hst 5 killing of A. baumannii biofilm cells was reduced to 15.1%, while killing of S. aureus biofilm cells (both strains) was negligible (5.1%). Surprisingly, we observed that Hst 5 killed 19.5% of K. pneumoniae cells when grown in biofilms, although it had no activity against planktonic cells.
Overall, our results show that Hst 5 has strong bactericidal activity against P. aeruginosa and E. cloacae (and weaker killing of E. faecium) in lower ionic strength environments, while exerting killing against A. baumannii and S. aureus over a range of ionic strengths. However, biofilm cells of A. baumannii and S. aureus were more resistant to Hst 5 killing. To confirm these findings in real time and to assess whether ionic strength may influence metabolic or membrane-lytic mechanisms of killing, we next measured Hst 5 binding, uptake, and time of killing for these ESKAPE pathogens in 10 mM NaPB.
Bactericidal Activity of Hst 5 against E. faecium and E. cloacae Requires Internalization and Is Energy Dependent
Time-lapse confocal microscopy showed that Hst 5 (30 µM) rapidly (<1 min) bound to the surface of E. faecium cells, although no internalization was evident until after 10 min of incubation (Figure 2A). This slow internalization resulted in only 15% of cells taking up F-Hst 5 by 120 min; but in each instance, Hst 5 internalization was accompanied by PI uptake (indicative of cell death; Figure 2B). This close correlation between intracellular Hst 5 and PI uptake suggested that the targets of Hst 5 are intracellular, rather than its effects being membrane lytic. However, the percentage of killing in the bactericidal assay was higher than the apparent PI staining, suggesting that some portion of killing was delayed. To evaluate this, we pretreated cells with an inhibitor of energy metabolism (NaN3) to determine whether the killing activity of Hst 5 against E. faecium might depend on cell metabolism. As expected, pretreatment of cells with NaN3 significantly reduced the killing efficiency of Hst 5 at 1 and 5 h (Figure 2C), showing that some portion of Hst 5 killing of E. faecium requires target cell energy, likely for Hst 5 internalization or intracellular organelle localization. E. cloacae cells had a very similar response when treated with F-Hst 5: the peptide associated with the surface of all cells almost immediately, but entry of F-Hst 5 (and accompanying PI staining) was slow and occurred in only 20% of cells within 30 min (Figure 3A). As for E. faecium, there was a close correlation between intracellular Hst 5 and PI uptake at all time points (Figure 3B), also pointing toward intracellular targets for bactericidal activity. E. cloacae cells pretreated with 10 mM NaN3 prior to exposure to 30 µM Hst 5 were more resistant to the killing action of F-Hst 5 (Figure 3C), similar to the energy-dependent mechanism found with E. faecium.
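The quantitative uptake analyses referenced here (Figures 2B, 3B) report the percentage of positive cells with error bars giving the standard error across four image fields. A small sketch of that summary statistic, with invented per-field counts:

```python
from statistics import mean, stdev

def percent_positive(n_positive: int, n_total: int) -> float:
    """Percent of cells in one microscope field scoring positive."""
    return 100.0 * n_positive / n_total

def mean_sem(values):
    """Mean and standard error of the mean across image fields."""
    m = mean(values)
    sem = stdev(values) / len(values) ** 0.5
    return m, sem

# Hypothetical per-field counts (positive, total) at one time point
fields = [(3, 20), (4, 19), (2, 21), (3, 18)]
per_field = [percent_positive(p, t) for p, t in fields]
m, sem = mean_sem(per_field)
print(f"{m:.1f} ± {sem:.1f} % positive")
```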
Membrane Disruption Is Involved in Bactericidal Activity of Hst 5 against P. aeruginosa and A. baumannii
In agreement with its high bactericidal activity, F-Hst 5 completely covered the cell surface of P. aeruginosa within 1 min, and PI-positive cells could be visualized within 2 min (Figure 4A). Quantitative analysis showed that 75% of cells contained F-Hst 5 and were PI positive within 5 min of addition of F-Hst 5 (Figure 4B). Both intracellular localization of Hst 5 and PI staining were rapid and simultaneous, pointing toward membrane-lytic activity in Hst 5-mediated killing of P. aeruginosa. Pretreatment of P. aeruginosa cells with 10 mM NaN3 did not reduce the killing activity of Hst 5 (Figure 4C), also suggesting energy-independent membrane disruption as the main pathway for Hst 5 killing activity against P. aeruginosa.
A. baumannii showed a very similar profile to P. aeruginosa after exposure to F-Hst 5, in that F-Hst 5 was associated with the surface of all cells within 2 min and PI-positive cells were visualized within 2 min (Figure 5A). Although intracellular uptake of F-Hst 5 was very rapid, with 70% of cells containing F-Hst 5 in just 2 min, PI staining did not occur simultaneously, so that 70% PI-positive cells were seen only after 25 min (Figure 5B). Interestingly, the portion of A. baumannii cells that showed slower PI staining already contained F-Hst 5; we therefore questioned whether bactericidal activity in these cells was energy dependent. However, NaN3-pretreated A. baumannii cells showed no difference in Hst 5 killing (Figure 5C), indicating that Hst 5-mediated killing of A. baumannii is partially due to membrane lysis and is energy independent, although some portion of killing might also result from its effect on intracellular targets.
Hst 5-Mediated Killing of S. aureus Is Delayed and Energy Independent
F-Hst 5 was visualized binding to the cell surface of all S. aureus cells within 1 min after addition; however, intracellular localization and PI uptake occurred slowly (Figure 6A), so that by 120 min only 7% of cells contained F-Hst 5 and were PI positive (Figure 6B). This was surprising since the bactericidal assay showed 60% killing by Hst 5 after 1 h of incubation (Figure 1). Furthermore, pretreatment of S. aureus with 10 mM NaN3 did not decrease the killing efficiency in the bactericidal assay (Figure 6C). These results, combined with the salt-insensitive killing, suggest that there may be multiple targets for Hst 5-mediated killing of S. aureus that are non-lytic and energy independent.
FIGURE 3 | Antibacterial activity of Hst 5 against E. cloacae requires internalization and is energy dependent. (A) E. cloacae cells were exposed to F-Hst 5 (30 µM) and PI (2 µg/mL). F-Hst 5 (green) and PI (red) uptake were measured in parallel by time-lapse confocal microscopy. Images were recorded every 10 min and selected images at the indicated time points are shown. (Scale bar: 5 µm). (B) Quantitative analysis of F-Hst 5 uptake (green line) and PI uptake. Error bars represent the standard errors from four different image fields. (C) Cells pretreated with 10 mM NaN3 at 37 °C for 3 h showed more resistance to Hst 5 (***P < 0.001, Student's t-test).
Frontiers in Cellular and Infection Microbiology | www.frontiersin.org
FIGURE 4 | Hst 5 bactericidal activity against P. aeruginosa cells is primarily mediated by membrane disruption. (A) P. aeruginosa cells were exposed to F-Hst 5 (30 µM) and PI (2 µg/mL). F-Hst 5 (green) and PI (red) uptake were measured in parallel by time-lapse confocal microscopy. Images were recorded every 10 min and selected images at the indicated time points are shown. (Scale bar: 5 µm). (B) Quantitative analysis of F-Hst 5 uptake (green line) and PI uptake. Error bars represent the standard errors from four different image fields. (C) Cells pretreated with 10 mM NaN3 at 37 °C for 3 h did not show a significant difference in susceptibility to Hst 5.
FIGURE 5 | Activity of Hst 5 against A. baumannii is mediated in part by membrane disruption. (A) A. baumannii cells in exponential phase were exposed to F-Hst 5 (30 µM) and PI (2 µg/mL). F-Hst 5 (green) and PI (red) uptake were measured in parallel by time-lapse confocal microscopy. Images were recorded every 2 min and selected images at the indicated time points are shown. (Arrow, cells positive for F-Hst 5 uptake but without PI uptake; Scale bar: 5 µm). (B) Quantitative analysis of F-Hst 5 uptake (green line) and PI uptake. Error bars represent the standard errors from four different image fields. (C) Cells pretreated with 10 mM NaN3 at 37 °C for 3 h did not show a significant difference in susceptibility to Hst 5.
FIGURE 6 | The activity of Hst 5 against S. aureus is mediated by energy-independent mechanisms. (A) S. aureus cells were exposed to F-Hst 5 (30 µM) and PI (2 µg/mL). F-Hst 5 (green) and PI (red) uptake were measured in parallel by time-lapse confocal microscopy. Images were recorded every 10 min and selected images at the indicated time points are shown. (Scale bar: 5 µm). (B) Quantitative analysis of F-Hst 5 uptake (green line) and PI uptake. Error bars represent the standard errors from four different image fields. (C) Cells pretreated with 10 mM NaN3 at 37 °C for 3 h did not show a significant difference in susceptibility to Hst 5.
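The figure legends above report significance of the NaN3-pretreatment comparisons by Student's t-test. A self-contained sketch of a two-sample t statistic, shown here in Welch's unequal-variance form (a common variant; the replicate killing percentages below are invented for illustration, not data from the study):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and degrees of freedom.

    A common substitute for Student's t-test when the two groups may
    have unequal variances; here the groups would be percent killing
    with and without NaN3 pretreatment.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented replicate values (percent killing, three replicates each)
untreated = [62.1, 60.3, 64.0]
nan3 = [30.5, 28.8, 33.1]
t, df = welch_t(untreated, nan3)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting t value would then be compared against the t distribution with `df` degrees of freedom to obtain a P value.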
Hst 5 Is Ineffective in Killing K. pneumoniae due to Lack of Sustained Binding
Since Hst 5 showed negligible killing against K. pneumoniae (Figure 1C), we examined the reason for this using confocal microscopy. F-Hst 5 bound to most K. pneumoniae cells within 2 min; however, by 10 min most binding was lost, suggesting detachment of Hst 5 from the surface capsule (Figure 7A). As expected, F-Hst 5 and PI uptake were extremely limited (only 2% of cells), even up to 150 min (Figure 7B). These results suggest that F-Hst 5 is released from the K. pneumoniae surface after initial binding (perhaps due to lower binding efficacy in the presence of its capsule), and thus Hst 5 is unable to gain entry or lyse the cells and is therefore ineffective against K. pneumoniae.
Hst 5-Spd Has Improved Bactericidal Efficiency against ESKAPE Pathogens That Take Up Hst 5
We have previously reported that spermidine-conjugated Hst 5 (Hst 5-Spd) has greater activity against C. albicans because the spermidine conjugate translocates more efficiently into the yeast cells (Tati et al., 2014). Therefore, to determine whether Hst 5-Spd also has improved bactericidal activity against ESKAPE pathogens, we tested the killing activity of Hst 5-Spd (30 µM) against all six strains (Figure 8). Interestingly, Hst 5-Spd had significantly higher killing activity than Hst 5 against E. faecium, E. cloacae, and A. baumannii (Figure 8). Hst 5-Spd bactericidal effects were most improved against E. faecium, increased three-fold after 1 h of incubation and 1.5-fold after 5 h, while spermidine (Spd) alone had no killing activity even after 5 h of treatment (Figure 8A). Hst 5-Spd bactericidal activity was also increased significantly against E. cloacae compared with Hst 5, although by only 15% at 1 h and at 5 h. Hst 5-Spd also showed a small but significant increase in killing activity against A. baumannii compared with Hst 5 (23% at 1 min and 10% at 1 h). However, after 5 h there was no significant difference, perhaps due to the high killing that occurred as a result of Spd (30 µM) itself (Figure 8D). These three species of ESKAPE pathogens were also the ones for which we found that Hst 5 killing involves uptake and intracellular targets (Figures 2, 3, 5), suggesting that the improved Hst 5-Spd activity in these strains is a result of improved uptake.
FIGURE 7 | Hst 5 is minimally active against K. pneumoniae. (A) K. pneumoniae cells in exponential phase were exposed to F-Hst 5 (30 µM) and PI (2 µg/mL). F-Hst 5 (green) and PI (red) uptake were measured in parallel by time-lapse confocal microscopy. Images were recorded every 10 min and selected images at the indicated time points are shown. (Scale bar: 5 µm). (B) Quantitative analysis of F-Hst 5 uptake (green line) and PI uptake. Error bars represent the standard errors from four different image fields.
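The fold-change comparisons reported for Hst 5-Spd (e.g. a three-fold improvement against E. faecium at 1 h) are simply ratios of percent-killing values; a trivial sketch with invented numbers:

```python
def fold_change(killing_conjugate: float, killing_parent: float) -> float:
    """Ratio of percent killing by the Hst 5-Spd conjugate to that by Hst 5."""
    if killing_parent <= 0:
        raise ValueError("parent-peptide killing must be positive")
    return killing_conjugate / killing_parent

# Invented values: 54% killing by Hst 5-Spd vs. 18% by Hst 5 at 1 h
print(fold_change(54.0, 18.0))  # 3.0
```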
This is supported by our data showing that the killing ability of Hst 5-Spd against P. aeruginosa was not improved, in agreement with a membrane lytic mechanism for this organism. Furthermore, the Hst 5 resistant pathogen K. pneumoniae was not affected by Hst 5-Spd showing that this conjugate does not improve binding to this encapsulated ESKAPE pathogen.
DISCUSSION
Here we report the remarkable finding that, although Hst 5 activity has been believed to be limited to fungi, Hst 5 also demonstrates very high bactericidal activity against several ESKAPE pathogens. To date, only a few reports have described interactions between Hst 5 and bacteria. Hst 5 was found to have low killing activity against Streptococcus gordonii (a Gram-positive commensal bacterium of the human oral cavity; Andrian et al., 2012). Zinc-mediated killing of E. faecalis by histidine-rich histatin analogs has been reported (Rydengard et al., 2006), as has killing activity of another histatin derivative (P113) against P. aeruginosa and S. aureus in vitro (Giacometti et al., 2005). Hst 5 may have other, non-bactericidal activities, in that it attenuates chemokine responses by binding to Porphyromonas gingivalis hemagglutinin B (Borgwardt et al., 2014). Here we expand the scope of Hst 5 activity by showing that full-length Hst 5 and an Hst 5-spermidine conjugate both exert very significant killing of all but one ESKAPE pathogen, by multiple mechanisms.
The mechanisms by which Hst 5, a naturally occurring salivary protein, kills C. albicans have been well studied and suggest the involvement of multiple intracellular targets. The effects on fungal targets range from non-lytic leakage of ATP and K+ ions to mitochondrial damage and oxidative stress generation and, more recently, potential metal scavenging causing nutritional stress (Puri et al., 2015). However, it has been clearly shown that the killing is non-lytic and energy dependent. Interestingly, such ambiguity in the mechanisms of antimicrobial peptides seems universal. Many antibacterial peptides that were considered to be purely lytic are now believed to involve other killing mechanisms, ranging from bacterial cell wall disruption to effects on protein and nucleic acid synthesis (Brogden, 2005). We found that Hst 5 can efficiently kill ESKAPE pathogens using a unique blend of lytic and energy-dependent mechanisms.
Antibiotic resistance has been a challenge for treating drug-resistant bacterial infections, and ESKAPE pathogens are no exception (Rice, 2010). One classic drug resistance mechanism entails efflux of the antimicrobial molecules by bacterial cells (Sun et al., 2014). In C. albicans, Hst 5 resistance is mediated by efflux through Flu1 transporters (Li et al., 2013), but this is unlikely to occur in bacteria, as no similar bacterial transporters are known. In contrast, C. albicans Dur3 transporters (used in polyamine uptake in fungi) are needed for Hst 5 uptake and candidacidal activity (Kumar et al., 2011). Interestingly, all ESKAPE pathogens have spermidine/polyamine transporter homologs (Palmer et al., 2010; Ren et al., 2010; Park et al., 2011; Liu et al., 2012; Yao and Lu, 2014; Wang et al., 2016) that might mediate Hst 5 uptake in pathogens that require internalization for killing. Also, the improved bactericidal activity of the Hst 5-spermidine conjugate against E. faecium (whose spermidine transporter has the highest homology to the C. albicans Dur3 transporter; our unpublished data), and to some extent against E. cloacae and A. baumannii, may be a result of better uptake due to the similarity of the polyamine transporters expressed in these cells. Biofilm formation is another resistance mechanism that leads to poor drug penetration and accessibility (del Pozo and Patel, 2007; Hoiby et al., 2010). It has previously been shown that Hst 5 is effective against C. albicans biofilms (Konopka et al., 2010), and thus Hst 5 can potentially make its way through the polysaccharide extracellular biofilm matrix of ESKAPE bacteria as well.
Prophylactic antimicrobial therapy, especially when applied topically, can be of great advantage in preventing surgical, wound, and burn infections. However, depending on the specificity of the agent used, this may also lead to killing of the healthy flora at the site of application that normally prevents colonization by pathogens. Here we show that salivary Hst 5, which has limited antibacterial activity against most human oral commensal organisms, including Streptococcal sp. (found on the human skin; Dale and Fredericks, 2005; Belkaid and Segre, 2014), is extremely effective against P. aeruginosa and A. baumannii. This provides a novel potential therapeutic application for Hst 5, since P. aeruginosa is one of the most important causative agents of burn infections (Tredget et al., 1992). Furthermore, P. aeruginosa is responsible for a majority of bacterial eye infections related to contact lens use (Cope et al., 2016). While Hst 5 is intrinsically absent from human tears, its exogenous application to treat such infections seems plausible, given the high killing activity of Hst 5 against P. aeruginosa. Potential therapeutic use of Hst 5 has some limitations that are inherent to its cationic nature. Since high salt conditions negatively affect Hst 5 microbicidal activity (Helmerhorst et al., 1999; Jang et al., 2010), this may restrict the use of Hst 5 for treatment of systemic disease, although the Hst 5 derivative P113 was found to be effective for systemic use in a rat model of P. aeruginosa sepsis (Cirioni et al., 2004). However, Hst 5 has great potential for topical application, both on human skin and in the eye, especially when carried in hypotonic gels and solutions.
FIGURE 8 | …, and E. cloacae (F) cells in exponential growth were exposed to 30 µM of Hst 5, F-Hst 5 and spermidine in 10 mM NaPB for 1 min, 1, and 5 h. Aliquots taken at different time points were diluted and plated. CFU were determined after 24 h. Error bars represent the standard errors from at least three independent replicates of each strain. Hst 5-Spd conjugate showed more killing efficiency against E. faecium (A), A. baumannii (D), and E. cloacae (F) (**P < 0.01, ***P < 0.001, Student's t-test).
Multiple drugs are sometimes required to completely eradicate drug-resistant infections. Although further testing of additional strains of ESKAPE pathogens needs to be done, the killing activity of Hst 5 against ESKAPE pathogens shown here, taken together with its potential to affect multiple intracellular targets, presents the possibility of using this protein in synergy with other existing antibiotics.
AUTHOR CONTRIBUTIONS
Conceived and designed the experiments: ME, HD, SP. Performed the experiments: HD, AM, HN. Analyzed the data: HD, AM, HN. Prepared the paper: ME, HD, SP, TR.
FUNDING
This work was supported by NIDCR grants DE10641 and DE022720 to ME.
Systematic Mapping Literature Review of Mobile Robotics Competitions
This paper presents a systematic mapping literature review of the mobile robotics competitions that took place over the last few decades, in order to obtain an overview of their main objectives, target public, challenges, technologies used and final application areas, and to show how these competitions have been contributing to education. The review retrieved 673 papers from 5 different databases; at the end of the process, 75 papers were classified for extraction of the relevant information using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. More than 50 mobile robotics competitions were found, and it was possible to analyze most of them in detail in order to answer the research questions, identifying the main goals, target public, challenges, technologies and application areas, mainly in education.
Introduction
Robotics technology is increasingly present in our daily life, and even more in industry. In this context, the emergence of robotics competitions around the world has provided great benefits for society. Robotics competitions are an excellent tool for developing new solutions and innovations, pushing the state of the art in several fields, benchmarking, and even motivating students to participate in science, technology, engineering and mathematics (STEM) areas and encouraging them to pursue engineering careers [1,2].
The concept of robotics competitions began in 1977, when IEEE Spectrum magazine announced an Amazing Micromouse Competition. The first event took place in New York in 1979, where the goal was for a mobile robot to complete a maze as fast as possible. Micromouse later became very popular in Europe, Japan and the USA, and remains so today. Due to the success of Micromouse, Dean Kamen founded the FIRST (For Inspiration and Recognition of Science and Technology) Association in 1989, and the first FIRST Robotics Competition season took place in 1992, in which high school students had to build and program a robot to complete a challenge [3,4].
Alongside the growth of the digital world, other robotics competitions were created, involving different types of robots, themes, challenges and scenarios. Currently, it is possible to find many competitions related to mobile robots, such as humanoid robots, automated guided vehicles (AGVs), unmanned aerial vehicles (UAVs) and even underwater robots. The themes and scenarios vary from rescue, dance, domestic service, logistics and manufacturing, marine services and virtual robots to soccer games. The challenges range from the simplest to the most complex, and the goals address industry, domestic tasks, education, natural disasters and benchmarking [4].
Among the types of robots and robotics competitions, the most common robots found are mobile robots, whose use has grown in recent years. Mobile robot applications have been widely implemented in industry and even in the domestic context. For some industrial tasks, such as transporting goods from one place to another, AGVs can be useful, since they are able to move in a dynamic environment with unexpected obstacles [5]. Service robots, which are also mobile robots, are designed for domestic tasks and are useful for assisting people with disabilities. A famous competition related to this theme is RoboCup@Home, which started in 2006 as a new league of the RoboCup competition, which includes many other leagues [6]. The autonomous navigation of mobile robots also contributes to applications such as autonomous cars [7].
Given the advances in robotics and the contributions that robotics competitions have been providing, these competitions have been gaining attention in the education area as a way to introduce students to STEM concepts, attract them to careers in technology, promote the values of the engineering profession, and also assist in teaching several multidisciplinary engineering topics and disciplines at universities [8,9]. Some of the most popular robotics competitions with a focus on education are the FIRST Robotics Competition, BotBall and RoboCupJunior [10].
The objective of this work is to present a systematic mapping literature review of the mobile robotics competitions that took place over the last few decades. The intention is to identify, for each competition, topics such as the target public and its age range, the main places where the competitions take place, the different types of challenges, the technologies applied and the final application area. Finally, it is intended to give an overview of all the types of mobile robotics competitions, with detailed descriptions, the different goals, the results that have been found and how the competitions can contribute positively to education. This paper is structured as follows: Section 2 explains the systematic mapping literature review process and describes the planning done for this theme and how the review was conducted. Section 3 shows all the numbers related to the papers found and details the conducting process of the review. Section 4 presents, in detail, all the mobile robot competitions found and discusses the answers to the research questions. Finally, Section 5 concludes the review.
Methodology
This paper followed the systematic mapping literature review methodology, also called literature mapping, which is useful at the beginning of research for the contextualization of ideas. Literature mapping aims to gather all the knowledge available about an idea and to find, at the end of the survey, the most relevant papers according to the research questions. It is also used to complement a systematic literature review (SLR), another methodology for evaluating all available research and identifying the relevant papers related to the main idea. The difference between an SLR and literature mapping is that the latter is broad while the SLR is more specific, but the best results are found when both are used together. Commonly, a literature mapping is developed first to obtain an overview of the theme, and an SLR is then carried out in order to obtain more specific and detailed results [11][12][13][14][15].
The literature mapping process follows steps that are practically the same as the SLR steps described below. Before starting any SLR or literature mapping, the first step is to search the Internet to verify whether a literature mapping about the intended topic already exists. If a literature mapping or even an SLR already exists, it is not necessary to conduct another one; however, if no results are found for the specific idea selected, the literature mapping or SLR can be carried out following the planning and conducting steps described below [13,14].
Planning the Research Questions
The first step in a systematic mapping literature review is to elaborate the research questions, which are focused on a theme; the points to be discovered, understood or studied must be found in their answers. These questions must clearly define the problem to be solved. It is important to emphasize that the research questions of a mapping are broader than those created for an SLR [11]. Taking into account that the main context of this work is mobile robotics competitions, the research questions are:

Once the research questions are made, the next step is to apply the PICOC method proposed by Petticrew and Roberts [12], which assists in the article analysis process. The description of each topic is presented below:
Selecting the Keywords and Synonyms
The keywords and synonyms help to build the search string, which is discussed in the next section, and are related to the PICOC items. According to the theme of this work, the chosen keywords and their synonyms are presented in Table 1.
Inclusion and Exclusion Criteria
The inclusion and exclusion criteria help to define the relevant papers for the study and which might answer the research questions. A paper which presents all the inclusion topics can be relevant, but if it includes one or more exclusion topics this paper must be excluded.
Inclusion criteria:
• The work is written in English;
• The work was published after 2001;
• The work must have information about one or more robotics competitions;
• The work must include the "robotic competition" term.
Exclusion criteria:
• The paper is not accessible;
• The work is not written in English;
• The work was published before 2001;
• The work does not involve a robotic competition context;
• The work includes the term "robotic competition" but does not answer any research question.
These criteria were chosen based on the fact that the first robotics competitions started to gain prominence in the 1990s, and even though some competitions had already been created before, we chose to start some years later in order to ensure that concrete research and results could be collected [4].
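The inclusion and exclusion criteria above amount to a simple predicate over paper metadata. A sketch of how such a filter could be automated (the Paper fields and the handling of the boundary year are our assumptions, not part of the review protocol):

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    language: str
    accessible: bool
    text: str  # full text or abstract

def passes_criteria(p: Paper) -> bool:
    """Apply the review's inclusion/exclusion criteria to one paper."""
    if not p.accessible:
        return False
    if p.language != "English":
        return False
    if p.year <= 2001:  # "published after 2001"; boundary year is our reading
        return False
    return "robotic competition" in p.text.lower()

ok = Paper("Robot soccer study", 2015, "English", True,
           "We describe a robotic competition for education.")
old = Paper("Early maze robots", 1999, "English", True,
            "A robotic competition from the 1990s.")
print(passes_criteria(ok), passes_criteria(old))  # True False
```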
Creating the Search String and Choosing the Sources
After all the previous steps, it is possible to select the databases and create the search string. The sources chosen for the paper search were the ACM Digital Library, IEEE, Scopus, Springer Link and Web of Science, because they are important repositories for research about technology.
The search string, also called a query, is an equation that represents all the main terms of the search. This string needs to be entered into each chosen database to search for papers related to the theme, but the search string varies depending on the website and may need some specific characters.
The search string created for this work was:

("robotics competitions" OR "robotic competition") AND ("benchmark" OR "challenges" OR "challenge" OR "evaluation" OR "performance" OR "robotics application" OR "technologies" OR "technology" OR "validation")

• ACM Digital Library: on the website we used the advanced search with the term "All" for each term, in order to find it in any part of the paper;
• IEEE: we used the same main equation shown at the beginning and added terms like "Abstract", "Author Keywords" and "Title" (used in the same way as "All" above). This way we searched for the terms only in these parts of the paper;
• Scopus: on the website we used the same main equation in the advanced search tab and just added the term "TITLE-ABS-KEY" to the query, indicating that the search for the words is done only on the title, abstract and keywords of the paper. The modified query was: TITLE-ABS-KEY (("robotic competition" OR "robotics competitions")) AND TITLE-ABS-KEY (("performance" OR "challenges" OR "challenge" OR "robotics application" OR "technologies" OR "technology" OR "validation" OR "evaluation" OR "benchmark"));
• Springer Link: exactly the same query cited at the beginning was used in the website's simple search;
• Web of Science: the query was entered in the search tab of the website and we added some terms at the beginning of the equation, like "TI", "AB" and "AK", indicating a specific search as explained before:
TI = (("robotics competitions" OR "robotic competition") AND ("benchmark" OR "challenges" OR "challenge" OR "evaluation" OR "performance" OR "robotics application" OR "technologies" OR "technology" OR "validation")) OR AB = (("robotics competitions" OR "robotic competition") AND ("benchmark" OR "challenges" OR "challenge" OR "evaluation" OR "performance" OR "robotics application" OR "technologies" OR "technology" OR "validation")) OR AK = (("robotics competitions" OR "robotic competition") AND ("benchmark" OR "challenges" OR "challenge" OR "evaluation" OR "performance" OR "robotics application" OR "technologies" OR "technology" OR "validation")).
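All of the database-specific strings above combine the same two OR-groups of terms, so the base query can be rebuilt programmatically. A sketch (the helper name `or_group` is ours):

```python
def or_group(terms):
    """Quote each term and join with OR, wrapped in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

population = ["robotics competitions", "robotic competition"]
outcomes = ["benchmark", "challenges", "challenge", "evaluation",
            "performance", "robotics application", "technologies",
            "technology", "validation"]

base_query = or_group(population) + " AND " + or_group(outcomes)
# Scopus variant: restrict the search to title, abstract and keywords
scopus_query = (f"TITLE-ABS-KEY ({or_group(population)}) AND "
                f"TITLE-ABS-KEY ({or_group(outcomes)})")
print(base_query)
```

The same two groups can then be wrapped with TI/AB/AK for Web of Science, following the pattern shown above.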
Quality Assessment Checklist
New questions are defined in this phase in order to verify the quality of each paper when it is read completely, before including it in the final review. These questions can be more specific and each one has a weight; the quality questions elaborated for this work are presented below. Each answer can take one of three values: 1.0 (if the paper answers the question fully), 0.5 (if it answers the question partially) or 0 (if it does not answer the question). Each paper can be evaluated with a maximum score of 9.0, and the cutoff score selected was 6.0, based on the most important questions of the list, which need to be answered fully: questions 4, 5, 6, 7, 8 and 9. These questions are more important than the first three because they focus on the topics that we want to discover and are based on the research questions. Therefore, all the papers that exceed the score of 6.0 are included in the final review.
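The scoring scheme described here — nine answers valued 1.0, 0.5 or 0, summed against a cutoff of 6.0 — can be sketched as:

```python
CUTOFF = 6.0
ALLOWED = {1.0, 0.5, 0.0}

def quality_score(answers):
    """Sum the nine quality-question answers (each 1.0, 0.5 or 0)."""
    if len(answers) != 9 or any(a not in ALLOWED for a in answers):
        raise ValueError("expected nine answers valued 1.0, 0.5 or 0")
    return sum(answers)

def include_in_review(answers) -> bool:
    """Papers scoring above the cutoff enter the final review."""
    return quality_score(answers) > CUTOFF

# A hypothetical paper scoring 7.0 passes the 6.0 cutoff
print(include_in_review([1.0, 0.5, 0.0, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0]))  # True
```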
Data Extraction Form
Once we have all the relevant papers for the research, the last step is to apply the data extraction form. In this phase a new set of questions is created to extract all the important information from the final articles, in order to help answer the research questions set at the beginning of the mapping. The data extraction questions selected for this work are:
Conducting
After the planning stage, the next step is the conducting stage, which was performed following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method, which describes the phases of the conducting process [11]. The process is illustrated and exemplified in Figure 1.
• Identification: the papers found in each source using the query are saved and the duplicate studies are removed.
• Screening: only the title, abstract and keywords are read, applying the inclusion and exclusion criteria; the papers not approved by the criteria are also removed.
• Eligibility: for the remaining articles, we apply the quality questions, so the papers need to be read fully in order to answer those questions and obtain a score. The papers that do not score above the limit are deleted.
• Included: the papers with a high enough score are classified for the final review, and we perform the data extraction using the data extraction form questions [11].

The tool used to perform this systematic mapping literature review was Parsifal [16]. It is useful for organizing the steps, planning the review, importing the papers, answering the questions and, at the end, generating a report about the review.
Results
This section presents the results obtained in the conducting process. In the identification stage we searched the databases using the query string and found 673 papers in total: 63 from the ACM Digital Library, 28 from IEEE, 300 from Scopus, 222 from Springer Link and 60 from Web of Science. There were 104 duplicate reports; removing them left 569 papers. In the screening stage we applied the inclusion and exclusion criteria: 242 of the remaining articles passed to the next phase and 327 were excluded because they failed one or more criteria.
In the eligibility stage, all the articles must be read in full in order to apply the quality assessment. After this process, 168 papers were removed, leaving 74 papers in the included stage, plus one recommended paper describing a mobile robotics competition that was not found in the chosen data sources but was relevant to the research, totalling 75 papers. The articles classified at this last stage are those used in the final review and on which we performed the data extraction. The whole conducting process is illustrated in the flowchart in Figure 2. Table 2 shows all 75 papers that passed the quality assessment and were classified for the data extraction stage, with their answers to the quality questions and their final scores, all above 6.0. For question QQ4 most of the answers were yes, with just two partial answers, indicating that most of the articles were based on some robotics competition. For question QQ9 all of the answers were yes, because every paper answers at least one of the research questions above. The first two quality questions obtained good answers because the papers were well elaborated, with clear objectives, and based on sound research. Most of the QQ7 answers were no, because few articles really described the technologies used in the robotics competitions. For questions QQ3, QQ5, QQ6 and QQ8 the distribution of answers varied, since some papers do not discuss their results well, do not describe in detail the robotics competition on which they are based, or do not present a competition application area. Figure 3 shows the number of papers selected per database on the left and the number of papers classified per source and year on the right.
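The funnel of counts reported above can be checked for internal consistency with a few lines of arithmetic. The following sketch simply restates the numbers from this section; the variable names are ours.

```python
# Consistency check of the PRISMA-style paper counts reported above.
# All numbers are taken directly from this section.
per_source = {"ACM Digital Library": 63, "IEEE": 28, "Scopus": 300,
              "Springer Link": 222, "Web of Science": 60}

identified = sum(per_source.values())   # 673 papers found in total
after_dedup = identified - 104          # 104 duplicates removed -> 569
screened_in, screened_out = 242, 327    # inclusion/exclusion criteria
assert screened_in + screened_out == after_dedup
eligible = screened_in - 168            # quality assessment removes 168 -> 74
included = eligible + 1                 # plus one recommended paper -> 75
```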
The selected articles are those obtained at the beginning of the review; the classified articles are those that remain at the end, after the quality assessment, and on which data extraction is performed. Scopus and Springer Link were the databases in which the largest numbers of articles were found and classified. IEEE yielded the fewest articles overall, while few of the classified articles came from the ACM Digital Library. Most of the articles classified for the included stage were published in 2020, 2019, 2016 and 2015, representing recent data and results that have accumulated over years of competitions.
Discussion
After applying the data extraction phase to all 75 papers, we obtained enough information about all the robotics competitions found. Table 3 lists the 67 robotics competitions found in total, with their respective references. The competitions found in most of the papers were RoboCup and the FIRST Robotics Competition, which are famous competitions focused mainly on education. The DARPA Robotics Challenge, which includes autonomous vehicles and complex tasks, was also cited in many articles [3,35,49,53,81]. Figure 4 illustrates Table 3; the numbers next to the competition names represent the number of articles in which each competition was cited, and the spaces where no name is given are competitions cited in only one article. The competitions most cited by the papers are RoboCup, FIRST and DARPA. The reason is probably that RoboCup is the biggest competition and one of the most famous, including many leagues in several domains, even football games with robots [49,81].
FIRST is one of the oldest robotics competitions with a strong focus on education, attracting young students to careers in engineering and technology; in addition, it uses a well-known tool, LEGO kits [9,52]. DARPA is a more professional, industry-focused robotics competition that seeks innovative solutions to problems and offers prize money. Its first editions focused on autonomous vehicles, and later editions included humanoid robots [3,35,53]. Table 4 presents the relevant information about each robotics competition extracted in the last review stage, based on the data extraction questions presented in Section 2.1.7; each topic (second column) associated with each competition (first column) relates to those questions. Note that not all competitions listed in Table 3 are described in Table 4, because some competitions were only cited in the papers without detailed information. Therefore, the table below contains only the robotics competitions for which it was possible to obtain data through the selected papers, 38 competitions in total. Table 5 presents all the mobile robotics competitions linked to the research concepts in which they are involved in each related paper. The purpose of this table is to link the key research topics to the competitions' challenges and the papers. Table 4. Description of robotics competitions taken from the data extraction phase.
Robotics Competitions
Agile Robotics for Industrial Applications Competition (ARIAC)
Description
It is an annual competition organized by the NIST (National Institute of Standards and Technology) since 2017. The main goal is to test the agility of industrial robot systems and to enable industrial robots on shop floors to be more productive, autonomous and to require less time from shop floor workers [77].
Where it takes place This topic was not found in the papers.
Target public Researches to practitioners.
Challenges and activities
Participants needs to implement a robot control system for a robot to overcome agility challenges in a simulated environment.
The robot needs to perform kitting tasks, building the kits by picking up all the required items, which can be found on shelves, on the conveyor belt or in bins.
Eurathlon
Target public Teams from academia, companies and industry.
Challenges and activities
Challenges for autonomous robots of different domains (air, sea and land) in scenarios inspired by the 2011 Fukushima accident.
In 2013 Eurathlon coordinated a land-based robotics competition, and the next year a sea-based one. The third year is the Grand Challenge, where robots of the three domains (land, sea and air) must cooperate to achieve objectives in a scenario set up to simulate a nuclear power plant ravaged by a tsunami, based on the Fukushima disaster. The Grand Challenge comprises three missions: localizing two missing workers in the disaster area, surveying the disaster area to identify dangerous leaks, and closing valves inside the building and underwater to stem the leaks. Three days of the competition are for practice, and the Grand Challenge occupies the last two days.
Technologies applied This topic is not discussed in the articles.
Application area This kind of competition can help advance the state of the art in air, land and sea autonomous robots that assist in natural disasters.
Robotics Competitions
Balam Robot
Description It is a local robotics competition in Guatemala, started in 2015. The main objective is to show students that technology is not complicated and that mathematics and science are not boring [61].
Where it takes place Outreach Department of Universidad Galileo.
Target public Students.
Challenges and activities
They prepare for six weeks, with four-hour workshops each week. The main challenge of BRC 1.0 was to build a sumo robot; teams competed in rounds against other sumo robots, and those whose robots remained inside the tatami went through as finalists. After various rounds a winner was determined.
Application area This competition can contribute to education.
Brazilian Robotics Olympiad (BRO) Description It was started in 2007, created by a team of several university professors with the mission of promoting robotics among Brazilian students with or without previous knowledge of robotics, fostering their interest in science, technology and engineering studies and careers. The olympiad is completely free for participants and is organized annually by volunteers from several Brazilian universities [42].
Where it takes place Brazil.
Target public Students.
Challenges and activities
The activities are divided into two modalities, practical and theoretical. The theoretical exams are designed to impart knowledge and context about robotics; six levels of written tests are prepared by the organizers, based on the age of the students. This model allows students to realize that what they learn at school can be applied to solve real-world problems. The practical exams are based on the RoboCup Junior Rescue mission: in a simulated disaster environment, teams of four participants must build a fully autonomous robot to rescue victims. The robot must follow a safe path, avoid debris, overcome gaps, climb over a mountain, identify victims and rescue them, taking them to a safe place. The best teams are selected by the Brazilian RoboCup Committee to represent Brazil in the RoboCup Junior international competition.
Technologies applied Arduino kits.
Application area Robotics competitions have been an exciting and motivating tool for helping students learn to solve real problems in a practical way, making a valuable contribution to education.
Where it takes place The competition took place remotely.
Target public This topic was not found in the papers.
Challenges and activities The simulation system creates a virtual arena with obstacles, a starting grid, a target area and the bodies of the robots. The bodies have a circular base and are equipped with sensors, actuators and command buttons. The participants must create software that controls the movements of a team of five virtual robots.
Technologies applied Robot simulators.
Application area This topic was not found in the papers, but it can be concluded that this kind of competition can contribute to the development of virtual solutions, as well as to education and the dissemination of technology among students.
Cybertech Description
It is a robotics competition organized annually by the Universidad Politécnica de Madrid (UPM), started in 2001 [21].
Where it takes place Madrid, Spain.
Target public Undergraduate students from universities all around the world.
Challenges and activities
The students have to design and build a robot that participates in different events: the maze event (the robot has to get out of a maze in minimum time), the line-following event (the robot must follow a black line on a white background), the solar cars event (participants build an autonomous device able to move around a circuit propelled only by solar energy), the simulated robots event (participants develop a computer program to control a virtual robot moving in a simulated maze) and the bullfighting event (each team builds a bullfighter robot that fights in the arena against a bull robot provided by the organization).
Technologies applied
This topic was not discussed in the papers.
Application area This competition can contribute to the field of education, increasing students' motivation towards engineering domains.
DARPA Robotics Challenge
Where it takes place This topic was not found in the papers.
Target public This topic was not found in the papers.
Challenges and activities
Includes several manipulation tasks. The first editions aimed to promote autonomous driving of road vehicles; later editions promoted humanoid robots able to execute complex tasks, and the most recent editions focused on the development of adaptive vehicles for military purposes. It started as a competition for autonomous cars and recently added a simulated challenge focusing on humanoid robotics using Gazebo.
Technologies applied
Gazebo.
Application area
Robotics competitions are important in the learning process of young people and have become more and more common in schools and universities in recent years. They can contribute to several areas, such as industry, society and research, but one of the areas that has benefited most is education.
Robotics Competitions
e-Yantra Robotics Competition
Description It is an annual competition organized by e-Yantra and hosted at IIT Bombay. The objective is to teach robotics concepts to college students using Project-Based Learning (PBL). The competition is entirely online [34,45,69,71,78].
Where it takes place Indian Institute of Technology Bombay-India.
Target public College students of Indian Institute of Technology Bombay.
Challenges and activities
The competition is divided into different stages. Firstly, there is a preliminary test where participants answer questions on aptitude, programming and electronics. In Stage 2 the participants combine the software and hardware parts to find the best solution; this also involves hardware testing, video and code submission. Each stage is subdivided into small tasks. In 2018 a theme called "Thirsty Crow" was introduced for the first time, aiming to teach marker-based augmented reality. The teams need to build a robot (called Crow) capable of autonomously following a line, picking up the magnetic pebbles and dropping them at the water pitcher marker. They also have to design and construct 3D models of the pebbles, the water pitcher and the Crow in Blender, and to write a Python script for the augmented reality part.
Technologies applied Marker-based augmented reality using open-source Python libraries such as OpenCV and OpenGL; 3D modeling using Blender; ROS; machine learning; image processing; microcontroller programming.
Application area This kind of competition can contribute to education, increasing the students' interest in STEAM areas and robotics.
EUROBOT competition Description
It is an international amateur robotics contest, organized by the Eurobot Association founded in May 2004, although the contest itself was introduced in 1998 [20,30].
Where it takes place Annually somewhere in Europe.
Target public Young engineering students.
Challenges and activities
During a match, two opposing robots compete on the table for 90 seconds, each performing tasks defined in the rules. The robots must be autonomous, and a robot must not collide with its opponent; if this happens the team is disqualified. The winner is the robot that collects more points. In the Eurobot 2010 edition the robots had to collect fruits and vegetables, represented by balls and cylinders. In the Eurobot 2011 edition two mobile robots played a "chess up" game on a playing table of the usual Eurobot size.
Application area
The main application area is education.
Robotics Competitions
European Land Robot Trial (ELROB)
Description
It was founded in 2006 by the European Robotics Group and organized by the Fraunhofer Institute for Communication, Information Processing and Ergonomics [27,33,39].
Where it takes place Annually at changing locations throughout Europe.
Target public This topic was not found in the papers.
Challenges and activities
ELROB alternates between military and civilian editions and defines a variety of scenarios instead of a single mission. The tasks include security missions, convoying and reconnaissance by day and night. Teams can choose between the alternative scenarios. The scenarios also include detection of objects and transportation, which can be carried out with a single vehicle or a convoy of at least two vehicles.
Technologies applied 2D and 3D laser scanner, 3D Lidar sensor, cameras, GPS and inertial sensors.
Application area
Provides an opportunity to exchange ideas and create solutions, as well as a venue to evaluate and encourage state-of-the-art research. The competition can also be helpful for team members, who must work together within a set time, contributing to the field of education.
European Robotics League (ERL)/ ERL Emergency Description
It is a multidomain robotics competition funded by the European Union Horizon 2020 Programme, comprising two indoor robotics competitions (ERL Industrial and ERL Service Robots) and one outdoor robotics competition (ERL Emergency Robots). The 2017 ERL Emergency competition required flying, land and marine robots to act together to survey a disaster [53,54,66,74].
Where it takes place Many countries over Europe.
Target public This topic is not discussed in the papers.
Challenges and activities
The competition lasts 9 days, and the robots have to perform tasks in the land, air and sea domains that emulate real-world situations inspired by the 2011 Fukushima accident. The missions include: Mission A: search for missing workers. Mission B: reconnaissance and environmental survey. Mission C: pipe inspection and stemming the leak. Robots have to work in a catastrophic scenario. From a starting point, the vehicle had to submerge, pass through a gate and then perform the different tasks without resurfacing. The tasks include inspecting and mapping the area and the objects of interest and identifying mission targets, such as the leaking pipe and the missing worker.
Technologies applied Cloud resources and 4G connection.
Application area This kind of competition can help advance the state of the art in air, land and sea autonomous robots that assist in natural disasters.
FIRA HuroCup
Description It is a multi-event robot athletic competition intended to encourage breadth in humanoid performance [24,65].
Where it takes place First edition in Seoul, Korea.
Target public This topic was not found in the papers.
Challenges and activities
HuroCup is part of the FIRA international robotics competition and consists of robot dash, penalty kicks, lift-and-carry, basketball, weightlifting, wall climbing and obstacle run; the robot with the best score over all events is the winner.
Technologies applied This topic was not found in the papers.
Application area This topic was not found in the papers.
FIRST Robotics Competition/ FIRST Lego League (FLL)
Description
It is an international competition which began in 1998 as a joint effort between the FIRST (For Inspiration and Recognition of Science and Technology) organization and the LEGO Group to introduce robotics to students. The competition lasts six weeks. The Lego League is designed for younger ages [3,4,9,18,22,29,50,52,58].
Where it takes place In different countries around the world.
Target public High school and university students, engineers, technicians, business, leaders and concerned citizens. Lego League: students from 9 to 14 years old.
Challenges and activities
Teams design and build tele-operated mobile robots to achieve a variety of tasks. In the Lego League they use LEGO kits to work on an authentic scientific-themed challenge; themes have included climate change, senior solutions, food safety and medicine, with tasks such as moving across a field, climbing ramps, hanging from bars and placing objects in goals. Each year there is a new theme. The tasks first allow students to connect what they learn about robotics with what they could do in the face of real-world challenges; second, authentic tasks and plausible scenarios are structured to motivate students to overcome potential difficulties in learning robotics.
Application area
This competition can connect students with professionals, enabling them to solve real-world problems and develop 21st-century skills. Robotics competitions have been a good tool for education because they help universities teach a variety of multidisciplinary engineering topics, including design, programming and mechatronics.
Humanitarian Robotics and Automation Technology Challenge (HRATC)
Description
It is a humanitarian demining international robotics competition whose goal is to push the boundaries of what technology can accomplish in this field. The first edition took place in 2014 [35].
Where it takes place The entire competition is performed remotely.
Target public This topic was not found in the papers.
Challenges and activities
The competition is divided into three stages. The simulation stage: teams focus on their ideas and develop their algorithms, concentrating on the actual problem, humanitarian demining. The field trials stage: each team can run the software developed in the simulator on the actual robot; each team has 3 field trials over 3 weeks. Competition day: each team is given two runs on a minefield using surrogate mines and false positives.
Application area
The main contribution of this competition is to increase the state of the art in the area of humanitarian demining.
IEEE Humanoid application challenge Description
The 2019 theme was robot magic [65].
Where it takes place This topic was not found in the papers.
Target public This topic was not found in the papers.
Challenges and activities
In the robot magic theme a humanoid robot can take on any role in a magic show.
Application area Provides an opportunity to improve work in robotics and a wide range of areas of artificial intelligence (vision, speech understanding, interaction with humans).
Indoor Aerial Robot Competition Description
It was inaugurated in May 2005 with the objective of identifying best design practices and gaining insight into the technical challenges facing the development of unmanned air vehicles [19].
Where it takes place Swarthmore College.
Target public This topic was not found in the paper.
Challenges and activities
The tasks are based on line following and teleoperation. The teams have to implement a real-time line-following algorithm that is invariant to changing lighting conditions. Points are based on how far the robots are able to travel.
Technologies applied
This topic was not found in the paper.
Application area It has been a means of discovering the best practices to solve real world problems.
International Aerial Robotics Competition (IARC) Description
It is an international competition focused on aerial robots [49,56].
Where it takes place This topic is not found in the papers.
Target public This topic is not found in the papers.
Challenges and activities
In this competition the agent (an aerial robot) is required to contact targets (ground vehicles) sequentially and drive them over a certain boundary to earn points. The agent must be fully autonomous and the game lasts 10 min. In IARC mission 7, called the "Shepherd mission", there is a drone, 10 ground mobile robots and 4 mobile obstacles. First, the drone should be able to avoid collisions with the four mobile obstacles. Second, there are two ways to change the moving direction of each ground mobile robot. The target for winning the competition is to drive at least 4 of the 10 ground mobile robots across the green edge of the square arena within 10 min.
Technologies applied This topic is not found in the papers.
Application area This topic is not found in the papers.
Latin American IEEE Robotics Competition
Description It is an annual competition organized by the Department of Electrical Engineering of the Universidad de Chile and by IEEE Region 9 [17].
Where it takes place The first was held in Santiago-Chile.
Target public Engineering students.
Challenges and activities
The first competition ("beginners") was aimed at students starting to work in robotics and was based on LEGO MindStorms building blocks; the proposed challenge was to design and program a robot to cross a simulated minefield. The second competition ("advanced") is designed for experienced student groups and consists of crossing a soccer field with obstacles using any kind of legged robot; the robots could be designed by the participants, bought, or even adapted.
Technologies applied LEGO MindStorms.
Application area
The main contribution is for the area of education.
MicroFactory Description
It is a robotic competition designed to be low-cost and easily implementable in a small space and it is based on the Portuguese competition called Robot@Factory [41,46].
Where it takes place
This topic was not found in the papers.
Target public High school students and university undergraduate students.
Challenges and activities
The challenges are similar to the Robot@factory challenges but the ground area and complexity is reduced and the scenario material were simplified. In MicroFactory there are just 3 rounds.
Application area
The main contribution of this competition is for education.
Micromouse Description
It is one of the most popular competitions in the field of mobile robots, started in the 1970s as the first competition promoted by the IEEE. An edition is organized at the University of Trás-os-Montes e Alto Douro [3,60,70].
Where it takes place Editions are held worldwide.
Target public Students, researches and the general public.
Challenges and activities
A small autonomous mobile robot placed in an unknown labyrinth must be able to map it, find the best possible route between the starting point and the goal, and travel it in the shortest time. The challenge is not solving the maze but how fast the robot can do it.
Technologies applied
Scanning and path-planning algorithms, flood-fill procedure, HIL simulator, self-localization using odometry and distance sensors.
Application area
The Micromouse competition is an important tool for education, increasing young students' interest in STEAM and introducing other people to the field of robotics.
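As a brief technical aside, the flood-fill procedure listed among the Micromouse technologies is the classic maze-distance algorithm: a breadth-first search from the goal assigns every reachable cell its distance to the goal, and the mouse then always moves to a neighbouring cell with a smaller value. The sketch below is our own illustration, not taken from the cited papers.

```python
from collections import deque

def flood_fill(walls, goal, size):
    """Distance-to-goal map for a size x size maze.
    walls: set of blocked (cell, neighbour) pairs; goal: (row, col)."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        cell = queue.popleft()
        r, c = cell
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nb[0] < size and 0 <= nb[1] < size
                    and nb not in dist
                    and (cell, nb) not in walls
                    and (nb, cell) not in walls):
                dist[nb] = dist[cell] + 1
                queue.append(nb)
    return dist

# On a wall-free 3x3 maze, the corner (0, 0) is 4 steps from goal (2, 2);
# the mouse re-runs the fill whenever it discovers a new wall.
```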
Mississippi BEST (MSBEST) robotics
Description It is a competition whose mission is to inspire students to pursue careers in STEM areas through robotic design and competition [50].
Where it takes place Mississippi, USA.
Target public Middle and high school students.
Challenges and activities
The challenge lasts six weeks. The participants are supplied with kits of materials, which they must put together to build a robot. Participants have to research the competition theme for that particular year and brainstorm ideas on how to design the robot to perform tasks related to the theme. All students are required to submit their notebook, team demographics and consent forms.
Technologies applied This topic is not discussed in the papers.
Application area
The main application area is education.
Mohamed Bin Zayed International Robotics Challenge (MBZIRC)
Description
It is an international robotics competition [1].
Where it takes place This topic was not found in the papers.
Target public This topic was not found in the papers.
Challenges and activities
Challenge 1 of the MBZIRC competition consists of an aerial drone interception scenario. First, balloons are fixed randomly around the arena, and the autonomous aerial system must automatically detect them, approach them and pop them. Second, another autonomous aerial system must capture a ball suspended from another drone flying at high speed along a varying trajectory. All these tasks must be performed autonomously.
Technologies applied Time-of-flight cameras, machine learning, computer vision and Kalman filters.
Application area This competition can contribute to increase the state of the art in autonomous vehicles and drones, which has been attracting a lot of attention, for example, for urban air mobility (UAM).
National Instruments Autonomous Robotics Competition (NIARC)
Description
It is a competition started in 2012, focused on fully autonomous robots completing a given themed challenge. Themes have included search and rescue, mining and agriculture [32].
Where it takes place Universities across Australia and New Zealand.
Target public Students.
Challenges and activities
In NIARC 2012 the theme was search and rescue: teams had to develop a robot to navigate a grid-based maze environment, with the objective of navigating an unknown maze and identifying the victims. NIARC's 2013 theme was the mining industry, where the objective was for the robot to navigate to the mining area through unknown entrances and distinguish the desired gold cubes from the undesired grey rubble cubes. NIARC's 2014 theme was the agriculture industry, with the objective of navigating accurately to known but unmarked seeding areas to plant seeds.
Technologies applied
Real-time control, FPGA, LabVIEW.
Application area
Studies have shown the benefits of using robotics competitions to generate interest and motivation in studying engineering among high school students and the general public. In addition, they can help develop the teams' ability to work together on multidisciplinary projects.
RoboCup
Where it takes place The first edition took place in Nagoya, Japan; now the competition takes place annually in countries all over the world.
Target public
From senior participants such as researchers and university students to hobbyists and high school, primary and secondary students.
Challenges and activities
There are different leagues. Junior: for young students, with three age categories and three leagues (soccer, rescue and dance); this league keeps the same activities over the years to help students improve their solutions. Soccer: teams of autonomous robots compete against each other in a soccer game. Search and rescue: robots that can assist first responders in mitigating a disaster such as an earthquake or an accident in an industrial environment. Home: service robots performing household activities. Work: nine challenges inspired by industrial mobile manipulation and transport tasks. Logistics League: groups of three robots have to plan, execute and optimize the material flow and deliver products according to dynamic orders in a simplified factory. Humanoid Challenge: humanoid robots compete in three events: walking, penalty kicks and a free demonstration. Simulation 2D and 3D: two teams of eleven software agents compete against each other on a simulated soccer pitch. Small size: semi-autonomous soccer robots (diameter of 18 cm and height of up to 15 cm). Middle size: robots that drive at up to 4 m/s on small soccer fields enclosed by walls. Standard Platform: a soccer game in which all teams compete with identical robots.
Technologies applied
Sensors, actuators, AI solutions, machine learning, multi-agent coordination, ROS, SLAM, image processing, wireless communication interfaces, LEGO Mindstorms, object recognition, speech recognition and gesture recognition.
Application area
Because the competition includes several leagues, the application areas are many: advancing the state of the art in each league's area, attracting more students to STEM concepts (contributing to education through RoboCup Junior), and driving the development of solutions for industry, daily life and natural disasters through RoboCup Home, the Logistics League and Search and Rescue.
Robomagellan Description
It is an outdoor navigation competition hosted by RoboGames [7].
Where it takes place This topic is not discussed in the papers.
Target public This topic is not discussed in the papers.
Challenges and activities
The competition requires the robot to move in an unconstrained, unstructured real-world outdoor environment with different obstacles.
Application area
The competition contributes to advances in the field of robotics.
Roboparty Description
It is an educational robotic event with a duration of three non-stop days [3,70].
Where it takes place Universidade do Minho, Guimarães, Portugal.
Target public School-age children.
Challenges and activities
The students learn by experience how to build the Bot'n Roll robotic platform (mechanics, soldering electronic components and assembling the parts). Then, three challenges are run to test their robots and the developed algorithms.
Technologies applied This topic is not discussed in the papers.
Application area
The main application is education.
Description
It is the first AUV competition; its first edition was in 1997. Currently it is the most popular competition in the AUV world, and every year it has a different theme [73].
Where it takes place USA.
Target public Students (high school and university)
Challenges and activities The AUV mission consists of passing a gate, touching a buoy, dropping and retrieving objects and launching a plastic marker inside a target hole.
Technologies applied This topic was not found in the papers.
Application area
The main application area is education.
Description
It is the first robot competition in the area of assistive robotics, held since 2009 in conjunction with the annual international Trinity College Fire-Fighting Home Robot Contest (TCFFHRC). The vision of the competition is to bring people with disabilities in as clients of RoboWaiter design, and to integrate the RoboWaiter project into a robotics course [25,31].
Where it takes place Hartford, Connecticut.
Target public Traditional participants are students, hobbyists and engineers.
Challenges and activities
Each robot has three runs and must navigate autonomously from its home position to a scale-model refrigerator, pick up a plate of food from a shelf, navigate to the table where a person with mobility impairment is sitting, place the plate on it, and return to its home position. Robots must avoid collisions with obstacles (sink, chair and elderly person).
Technologies applied This topic was not found in the paper.
Application area Development of solutions based on assistive robots to help people with disabilities perform activities more easily. Another application area is education, since the competition encourages students towards STEAM areas.
RoCKIn Description
It aims to provide tools for benchmarking to the robotics community by designing and setting up competitions that increase scientific and technological knowledge. It is inspired by RoboCup [44,48,59].
Where it takes place The first was held in Toulouse, France in 2014. The final was held in Lisbon, Portugal in 2015.
Target public This topic was not found in the papers.
Challenges and activities
RoCKIn@Work: the scenario is a medium-sized factory which produces small to medium-sized lots of mechanical parts and assembled mechanical products; the robots must try to optimize its production process to meet the increasing demands of customers. RoCKIn@Home: the robots must assist a person and support quality of life; the scenario is an apartment with all common household items such as windows, doors, furniture and decorations.
Technologies applied This topic was not found in the papers.
Application area Robotics competitions have been a good way to promote the comparison of different algorithms and systems, allowing for the replication of their results. They also contribute to promoting education and research, pushing the field forward.
Robotic Day Line Follower Competition Description
It is an annual competition which has been held for the last 15 years and is growing every year. Its tasks can be used as benchmarks for comparing different performances [68].
Where it takes place Prague, Czech Republic.
Target public This topic is not discussed in the article.
Challenges and activities
The participants' robots must follow a black line along a course and pass obstacles; the robots that complete the route in the shortest time qualify for the final. In the final round the races are held on a knock-out basis.
Technologies applied
Time of flight distance sensor and computer vision.
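The line-following task above can be sketched as a simple proportional controller over an array of reflectance sensors. This is an illustrative sketch only; the sensor layout, gains and function names are assumptions, not taken from the competition rules:

```python
def line_position(sensors):
    """Estimate the line position as the weighted centroid of reflectance
    readings; higher values mean darker (on the line). Returns an offset
    from the sensor array centre: negative = line to the left."""
    total = sum(sensors)
    if total == 0:
        return 0.0  # line lost: pretend it is centred
    centre = (len(sensors) - 1) / 2
    pos = sum(i * v for i, v in enumerate(sensors)) / total
    return pos - centre

def motor_speeds(sensors, base=0.6, kp=0.3):
    """Proportional steering: slow the wheel on the side of the line so
    the robot turns back toward it. Returns (left, right) speeds."""
    error = line_position(sensors)
    return base + kp * error, base - kp * error
```

With the line centred the wheels run at equal speed; as the line drifts left, the left wheel slows and the right speeds up, steering the robot back onto the line. The time-of-flight distance sensor mentioned above would add a separate obstacle-avoidance branch on top of this loop.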
Application area
In this competition context it is possible to apply the activities in a multidisciplinary approach, contributing to education. It can also be important for research and development, because the outcomes can be applied to solve real-world problems, for example in manufacturing and service robots.
Robot@Factory Robot@Factory Lite
Description It is an annual competition, started in 2011 and recently included in Robotica, the main Portuguese robotics competition; Robot@Factory Lite is a simplified version. The goal is to stimulate students and researchers to develop solutions to the proposed challenges [5,40,41,46,63,67].
Where it takes place Portugal.
Target public Secondary school and universities students.
Challenges and activities
The competition deals with the transportation of materials inside a factory. The main idea is that an AGV organizes the materials among warehouses and processing machines: there are four warehouses and two machines, and the robot must deliver the parts to their correct locations. The AGV must be fully autonomous. The competition is divided into three days, with one round per day.
Technologies applied The SimTwo simulator, provided by the competition, and hardware-in-the-loop (HIL), in which the competitors insert their microcontroller into the simulation loop.
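The HIL idea described above, in which the simulator exchanges sensor readings and actuator commands with the competitors' board each control cycle, can be sketched as follows. The text-based message format here is an assumption for illustration; the real exchange happens over a serial link between SimTwo and the microcontroller:

```python
def encode_sensors(readings):
    """Pack simulated sensor readings into a message for the board,
    e.g. [0.1, 0.9] -> 'S;0.100;0.900' (format is hypothetical)."""
    return "S;" + ";".join(f"{r:.3f}" for r in readings)

def decode_command(message):
    """Unpack a motor command returned by the board,
    e.g. 'M;0.500;-0.500' -> (0.5, -0.5)."""
    kind, left, right = message.split(";")
    if kind != "M":
        raise ValueError(f"unexpected message type: {kind}")
    return float(left), float(right)
```

Each simulation step, the simulator would encode the sensor state, write it to the serial port, read the reply, and apply the decoded motor command to the simulated robot, so the control code under test runs on the physical microcontroller.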
Application area
The main application area is education.
Robotics Competitions
"Schüler bauen Roboter" program Description "Schüler bauen Roboter" is a German project that brings together schools and universities [57].
Where it takes place Technical University of Munich, Germany.
Target public Target group is 14 to 18 years old high school students.
Challenges and activities
In the first school year the students build a robot that solves a given task, and at the end of the year the different groups compete against each other. Usually the competition starts in September, when the school year begins.
Technologies applied This topic was not found in the paper.
Application area
The main application area is education, since the competition was created to take place inside a university to help encourage students towards the STEM areas and to gain skills in programming, electronics, robotics, etc.
SICK robot day Description
It is a bi-annual competition hosted by SICK AG, a producer of sensor systems [38].
Where it takes place Waldkirch, Germany.
Target public This topic was not found in the papers.
Challenges and activities
The robots must navigate autonomously and avoid obstacles and collisions with other robots. The goal is to deliver as many objects as possible: each correctly delivered object is awarded one point and each erroneous delivery incurs one penalty point. Within a time limit of 10 min, each robot must alternately collect labelled objects at filling stations and transport them to delivery stations based on the object label.
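The scoring rule above (one point per correct delivery, one penalty point per erroneous one, within the 10-minute limit) can be stated directly as code. This is an illustrative sketch, not an official implementation; treating deliveries after the time limit as not counting is our assumption:

```python
def run_score(deliveries, time_limit_s=600):
    """Score one run: `deliveries` is a list of (timestamp_s, correct)
    pairs. A correct delivery within the time limit scores +1, an
    erroneous one -1; anything after the limit is ignored (assumed)."""
    score = 0
    for t, correct in deliveries:
        if t <= time_limit_s:
            score += 1 if correct else -1
    return score
```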
Technologies applied This topic was not found in the papers.
Application area This topic was not found in the papers.
Description
It is a German robotics competition started in 2013; the second edition occurred in 2015. The main goal of the competition is to accomplish the proposed activities as autonomously as possible by means of unmanned vehicles. The focus is mobile manipulation for planetary exploration [44].
Where it takes place Germany.
Target public Universities, research institutes and subject matter experts (SMEs).
Challenges and activities
SpaceBot has only one scenario, which involves typical exploration tasks carried out on a planetary surface after landing on a planet. The robots have to locate and identify objects in a complex terrain. The target objects need to be conveyed to a base station. The robots need to be autonomous; communication is only allowed through a shaped network connection that imposes restrictions on the ports used. The tasks to be accomplished are: explore and map the terrain, find artificial objects, collect the two objects and bring them to a third, and finally return to the landing site.
Technologies applied This topic was not found in the paper.
Application area This topic was not found in the paper.
SAUC-E Description
It is the first underwater robotics competition in Europe. SAUC-E started in 2006 in the UK and has been hosted by CMRE since 2010. Its main goals are: advancing the state of the art of AUVs, promoting a creative environment among researchers, and fostering closer contact between the university teams and the companies invited to participate [2,53,62,73].
Where it takes place In many countries around Europe.
Target public University students.
Challenges and activities
The typical tasks include passing through a gate, mapping and inspecting an underwater pipeline structure, localizing on the seafloor a pinger that emits an acoustic wave, and localizing underwater buoys and objects. The tasks must be performed fully autonomously by the robot.
Technologies applied This topic was not found in the papers.
Application area
The main application area is education.
VEX Robotics Competition
Description It is a competition which aims to engage participants, from elementary school through university students, in learning about STEM concepts. It was launched in 2005 and today is one of the largest extracurricular robotics programs in the world [32,39,64].
Where it takes place The place is not discussed in the articles.
Target public Middle school, high school and university students.
Challenges and activities
This topic was not found in the papers.
Technologies applied This topic was not found in the papers.
Application area
The main application area is education.
World Robot Olympiad (WRO) Description
It was founded in 2004 with the mission of bringing together young people from all over the world to develop their creativity, design and problem-solving skills through challenging and educational robot competitions and activities [4,55].
Where it takes place
This topic is not discussed in the papers.
Target public Students.
Challenges and activities
There are three categories: the regular category (the robots complete tasks; open for ages 13-15), the open category (build a robot model) and the football category (teams build two robots that compete against another team's robots in a robot football match). The theme of the 2018 edition, "Precision Farming", required the students to design a robot able to plant different coloured seedlings in the corresponding farm areas.
Technologies applied LEGO Mindstorms.
Application area
The main application area is education.
World Robot Summit (WRS)
Description
It is an international competition started in October 2018, organized by the Japanese government to accelerate research and development of robots in the areas of daily life, society and industry in order to promote a world where humans and robots successfully live and work together [59,75].
Where it takes place Tokyo, Japan.
Target public This topic was not found in the papers.
Challenges and activities
The leagues are: rescue, service and assembly. Service: the "tidy up here" task consists of moving objects from incorrect positions to the right positions; there are four rooms (a children's room, dining room, kitchen and living room) and two types of objects, 45 known units and 10 unknown units. Assembly: aims to develop robots that allow the assembly of complex systems with varied products; there are tasks such as task board, kitting, assembly and surprise assembly.
Application area
The competition can contribute to the development of novel solutions, providing a benchmark for robot assistance, not only for disabled people but also for elderly and healthy people, by assisting with daily housekeeping tasks. It can also contribute to the manufacturing industry by developing solutions to assemble products.
At the end of the review, after all the information is extracted, the research questions proposed in Section 2.1.1, at the beginning of the review, are finally answered. As Table 4 shows, there are many robotics competitions around the world, ranging from big international competitions with several challenges to simple competitions often held in a specific region or school. The following discussion answers the research questions.
RQ1: What type of mobile robotics competitions exist in the last few decades and with what aim?
A theme very commonly found in robotics competitions is industry, as in RoboCup Work and the RoboCup Logistics League, the WRS Assembly task, ARIAC, RoCKIn@Work and SICK robot day [37,38,48,59,75-77]. Robot@Factory and MicroFactory have a focus on education, but their themes are focused on industry and logistics too [41,46,63,67]. Other competitions found face the domestic field, like RoboCup@Home, the WRS Service Challenge, RoCKIn@Home and RoboWaiter, whose challenges usually involve performing household tasks to help elderly people or those with disabilities [6,25,28,37,75].
Some competitions found have sports as a theme and often feature soccer games, like RoboCup@Soccer, FIRA HuroCup and the WRO football category [4,24,81]. Besides that, there are competitions that include magic and dance, like the IEEE Humanoid Application Challenge [65], in which the robot needs to perform a magic show, and the RoboCup@Junior Dance Challenge [10,29], in which the robot needs to perform a dance and whose focus is more on education. There are also entirely online competitions, for example CiberMouse@RTSS08 [23].
Many of the robotics competitions found have a focus on search and rescue based on natural disaster scenarios, including outdoor, indoor, rough-terrain and even underwater environments. Some examples are the RoboCup Search and Rescue League, the NIARC 2012 edition, EURATHLON, ERL and ERL Emergency, the last three being motivated by the 2011 Fukushima accident. NIARC is a competition that changes its theme every year: in the 2013 edition the theme was the mining industry and in 2014 it was the agriculture industry. The EUROBOT competition follows the same approach, changing its theme every year: in the 2010 edition the theme was collecting fruits and vegetables, and in the 2011 edition it was a chess game [20,32,53,54,66,81].
There are also robotics competitions that are important tools for pushing the state of the art in a specific field, for example in the area of drones, like MBZIRC and IARC [1,56]. HRATC contributes to the area of humanitarian demining and DARPA to autonomous vehicles [35]. SAUC-E, which can also be cited, was the first underwater robotics competition in Europe and promotes advances in the state of the art of Autonomous Underwater Vehicles (AUVs) [73].
RQ2: Where do the mobile robotics competitions take place currently and who is their target public?
Most of the international robotics competitions take place in many countries all over the world because they have an international scope [3,6,43,58]. Some educational competitions take place in universities or in a specific region, for example the "Schüler bauen Roboter" program, which takes place at the Technical University of Munich, Germany, or Cybertech, which takes place at the Universidad Politécnica de Madrid (UPM), Spain [21,57].
The target public of the competitions can vary a lot, but we can conclude that, for the most part, it is students. The competitions focusing on education always include students, and some other big, famous international competitions have a larger target public including young students and senior participants, like engineers and business people [3,6,43,58].
RQ3: What type of robotics challenges are addressed by the mobile robotics competitions?
The challenges addressed by the educational robotics competitions usually include tasks in which the participants have to build a robot to perform some activity, allowing them to apply the knowledge learned in class to real-world problems and put it into practice. Sometimes there are several stages, as in eYRC, in which the students first answer questions, take a preliminary test or learn concepts about programming and electronics, and then use tools provided by the competition to find the best solution to a problem, involving both hardware and software approaches [21,29,42,45,50,57,71,78].
This kind of competition usually proposes simple tasks such as dance and games, with a focus on students working in teams and developing soft skills [3,10,29]. However, there are also education-focused competitions that include industry challenges, for example Robot@Factory and MicroFactory [41,46,63,67].
In the robotics competitions focused on industry, the challenges range over product transport and logistics tasks, in which the robots must navigate, avoid obstacles and deliver products to the correct positions, assembly tasks and kitting processes, building kits by picking up items, organizing materials in warehouses, among others [37,38,41,46,48,59,63,67,75-77]. There are also challenges designed to evaluate robot speed, such as maze activities (Micromouse) or delivering as many objects as possible (SICK robot day); in these kinds of tasks the robots need to be as fast as possible [3,38,60,70].
Home challenges are usually focused on the robot helping with household activities inside an environment that simulates a house, including rooms. The tasks found in this kind of competition were moving objects from an incorrect place to a correct place, assisting people in opening or closing windows and doors, and kitchen activities like opening the refrigerator, picking up a plate of food from a shelf and putting it on a table [6,25,28,37,75].
The challenges included in search and rescue competitions can be exploration tasks, climbing ramps, walking on dirty terrain with poor visibility, looking for missing people, pipe inspection and stemming leaks, reconnaissance and environmental survey; usually the scenarios simulate a natural disaster and can vary among underwater settings, buildings, confined spaces, and indoor or outdoor environments [19,20,32,53,54,66,81].
In soccer games, the challenges usually involve teams of autonomous robots competing against each other. FIRA HuroCup also includes other sports activities for the robots, for example basketball, wall climbing, penalty kicks, lift and carry, weightlifting and obstacle run [4,24,81].
RQ4: What type of technologies are used in the mobile robotics competitions?
RQ5: What is the final application area of the mobile robotics competitions?
In general, mobile robotics competitions can contribute to advances in many fields, like industry, daily life, and search and rescue in disaster scenarios, but one of the fields that has benefited most has been education.
Most of the competitions have been contributing to education recently: increasing students' interest in STEM concepts, introducing other people to the field of robotics, connecting students with professionals, enabling them to solve real-world problems, encouraging them to join engineering careers and developing 21st-century skills. This has become more and more usual in schools and universities in the last few years because it helps them teach a variety of multidisciplinary engineering topics including design, programming and mechatronics. Besides that, it can also contribute to developing the teams' ability to work together on multidisciplinary projects [3,4,8-10,17,20-22,29,30,32,41-43,45,46,50,52,55,57,61,63,64,67,70,71,73,78].
Another application area very common in robotics competitions is industry, contributing to the development of new solutions related to manufacturing and logistics, assisting people in repetitive tasks and reducing human errors. Besides that, the competitions have been a good way to promote the comparison of different algorithms and systems and a means of discovering best practices to solve real-world problems [37,38,41,46,48,59,63,67,75-77].
The competitions focused on home environments contribute to the development of solutions based on assistive robots, providing a benchmark for robot assistance, not only for disabled people but also for elderly and healthy people, by assisting with daily housekeeping tasks [6,25,28,37,75].
The search and rescue area can benefit from the robotics competitions focused on this theme because they encourage the development of solutions that allow autonomous robots to work and help in areas inaccessible to humans or in natural disaster areas [20,32,53,54,66,81].
There are some competitions that help push the state of the art in growing domains, in the air, on land and at sea, providing an opportunity to exchange ideas and create solutions as well as a venue to evaluate and encourage state-of-the-art research. Autonomous vehicles and drones are fields that have been attracting a lot of attention, largely through robotics competitions [4,53,54,65,66].
RQ6: How have these competitions been contributing positively to education? As we can see in Table 4, a large part of the competitions have their final application area in education. Combined with the last data extraction question, DQ7 (Which robotics competitions contribute positively to education?), we listed most of the robotics competitions with a focus on education. Most of them aim at disseminating technology and STEM concepts among young students, encouraging them to pursue careers in these fields, developing skills in programming, electronics, robotics and teamwork, and increasing motivation towards engineering, besides assisting universities to teach multidisciplinary domains [3,4,8-10,17,20-22,29,30,32,41-43,45,46,50,52,55,57,61,63,64,67,70,71,73,78]. SAUC-E, for example, has among its objectives fostering closer contact between the university teams and the companies invited to participate [53].
The target public involves young students of primary and secondary schools, high school students, and university and engineering students. A famous robotics competition called FIRST Lego League accepts students from 9 to 14 years old, a lower age range than, for example, the e-Yantra Robotics Competition, which includes college students of the Indian Institute of Technology Bombay [4,58,78].
Some competitions take place directly at universities, like NIARC, which occurs in universities across Australia and New Zealand, Cybertech, at the Universidad Politécnica de Madrid, the Balam Robot Competition (BRC), at Universidad Galileo, and the "Schüler bauen Roboter" program, at the Technical University of Munich [21,32,57,61]. Other robotics competitions can take place elsewhere, or in more than one place, like SAUC-E and the EUROBOT competition, which occur in many countries around Europe [20,30,62,73], or even RoboCup and FIRST, which take place all over the world [3,6,43,58].
The FIRST Robotics Competition is one of the oldest and most famous competitions focused on education; it began in 1998 with the FIRST organization and the LEGO Group [52]. The most recent VEX Robotics World Championship, run by the Robotics Education and Competition (REC) Foundation in April 2018, became the "largest robot competition" in the world according to Guinness World Records [64]. Table 6 summarizes the answers to the research questions: on the left side are the six research questions that this work aims to answer, and on the right side are the main topics of each answer, already discussed above but presented with fewer details in order to provide better insight.
Research Questions Answers
RQ1: What type of mobile robotics competitions exist in the last few decades and with what aim?
Educational: attracting students to STEM areas and encouraging them to enter technology careers. Industry: manufacturing and logistics. Domestic: assisting people in household activities, especially those with disabilities. Sports: amusement for young students with a focus on education. Search and rescue: creation of technologies to assist in natural disasters. State of the art: pushing the state of the art in a specific field like autonomous vehicles, drones and underwater robots.
RQ2: Where do the mobile robotics competitions take place currently and who is their target public?
Most take place in many countries all over the world. Some educational competitions usually take place inside a university or school in a specific region. The target public varies a lot, from young students to professionals and engineers.
RQ3: What type of robotics challenges are addressed by the mobile robotics competitions?
Education: tasks in which the participants have to build a robot to perform some activity, or sometimes just program the robot to carry out a specific task, for example following a line, dancing, games, etc. Industry: the challenges vary between product transport and logistics tasks, in which the robot must navigate, avoid obstacles, pick up items, etc. Domestic: usually the robots have to help people with household tasks inside an environment that simulates a house; some tasks include organizing items, opening the refrigerator, picking up a plate of food, etc. Search and rescue: tasks like climbing ramps, walking on dirty terrain with poor visibility, looking for missing people, pipe inspection, etc.; besides that, this kind of competition can include underwater settings, buildings, confined spaces, and indoor and outdoor environments. Soccer: the soccer challenges usually include teams of autonomous robots competing against each other.
RQ4: What type of technologies are used in mobile robotics competitions?
RQ5: What is the final application area of the mobile robotics competitions?
Education, industry, daily life, household tasks, search and rescue in disaster scenarios and amusement.
RQ6: How have these competitions been contributing positively to education?
Dissemination of technology and STEM concepts among young students, encouraging them to pursue a career in these fields, developing skills in programming, electronics, robotics and teamwork, increasing motivation and, besides that, assisting universities to teach multidisciplinary domains.
Conclusions
In this paper, a systematic mapping literature review was carried out, and a large number of articles were found that cited and/or described many mobile robotics competitions that took place over the last few decades. It was possible to analyze most of the competitions in detail and to conclude that these competitions are growing and becoming more common in several domains with diverse objectives, mainly in education. It was observed that the number of competitions has been gradually increasing each year since 2001.
Among the robotics competitions most cited in the articles are RoboCup, FIRST and DARPA. RoboCup is the biggest robotics competition, with more than 10 leagues, covering several areas in a single competition but with different challenges. FIRST is one of the oldest and most famous robotics competitions, has a focus on education, and each year a new theme is chosen. DARPA is a more professional competition, with prize money, focused on autonomous vehicles.
It is possible to conclude that education is the area that benefits most from mobile robotics competitions. The number of competitions focused on contributing to education is growing because they have proved how powerful they can be in attracting students to technological areas, and positive results have been observed. The robotics competitions focused on education usually aim to encourage young students to pursue careers in STEM areas, develop skills, teach teamwork, assist teachers and universities in multidisciplinary domains, and expose students to real problem solving and the practical application of their knowledge. Therefore, robotics competitions have been a good contribution tool not only for education but for different areas, helping people, engineers, researchers, businesses and students to solve real problems through the use of robots and to create innovative solutions, showing that robots can assist us in building a better quality of life and, consequently, a better world for all.
As future work, we intend to perform a systematic literature review, which is more specific than a mapping, of the mobile robotics competitions all over the world; the research questions could be more focused on education, industry or benchmarking.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
"Engineering",
"Education",
"Computer Science"
] |
Gauge theories of Partial Compositeness: Scenarios for Run-II of the LHC
We continue our investigation of gauge theories in which the Higgs boson arises as a pseudo-Nambu-Goldstone boson (pNGB) and top-partners arise as bound states of three hyperfermions. All models have additional pNGBs in their spectrum that should be accessible at the LHC. We analyze the patterns of symmetry breaking and present all relevant couplings of the pNGBs with the gauge fields. We discuss how vacuum misalignment and a mass for the pNGBs is generated by a loop-induced potential. Finally, we paint a very broad, qualitative, picture of the kind of experimental signatures these models give rise to, setting the stage for further analysis.
Motivation
The Higgs mechanism [1] in the Standard Model [2] (SM) does an excellent job at parameterizing the mass spectrum of elementary particles in a consistent way, but leaves many questions unanswered.
We would like to understand why the Higgs mass is so low and to explain the huge disparity among fermion masses.
One possible explanation of the lightness of the Higgs boson is to realize it as a (pseudo) Nambu-Goldstone Boson (pNGB) of a broken global symmetry. This approach was pioneered in [3] and goes under the name of "Composite Higgs". One way to deal with the disparity of fermionic masses and, in particular, to explain the origin of the top quark mass without reintroducing fine-tuning is to also have additional "partners" mixing with SM fermions. This new ingredient was introduced in [4] and goes under the name of "Partial Compositeness".
Much work has been done in this area using the effective field theory description based on the CCWZ formalism [5]. There was also a huge effort to realize these constructions using extra dimensions.
There are by now exhaustive reviews such as [6] providing all the necessary background to these subjects.
A much less studied approach is that of constructing UV completions for these models using a strongly coupled "hypercolor" gauge theory with purely fermionic matter ("hyperquarks"). The philosophy behind this proposal is so old fashioned that it almost appears new! Fermionic models of BSM go all the way back to the old technicolor idea and were also tried in the context of composite Higgs and partial compositeness. The recent model building activities try to combine the two. Few explicit proposals have been made so far: [7], [8] and [9], and a partial classification of the available options was made in [10]. (For earlier attempts using supersymmetry, see [11]. Alternative avenues being explored are found in [12].) The LHC is now entering a phase where the potential for discovery is at its highest point, due to the increase in luminosity and energy. It is thus timely to chart the various scenarios implied by the above class of models. In this work we are particularly interested in presenting the underlying theories in detail and in identifying the broad features that may allow one to discern one class of models from the others. We leave instead a detailed phenomenological analysis for future work. For recent phenomenological work in the area a surely incomplete list is [13].
Overview of the results
In a nutshell, the models we are considering are based on an asymptotically free gauge theory with simple hypercolor group G HC and fermionic matter in two inequivalent irreducible representations (irreps) 1 . The requirement of two different irreps arises from the need to construct top-partners carrying both color and EW quantum numbers. With the notable exception of a model by L. Vecchi [9], this requires at least two separate irreps; one, generically denoted by ψ, carrying EW quantum numbers in addition to hypercolor, the other, χ, carrying ordinary color as well as hypercolor.
At low energies, the theory is expected to confine after having spent a part of the RG evolution in or near the conformal window, somewhat in the spirit of [14,15]. This is the main dynamical assumption needed for some of the operators in the theory to develop the large anomalous dimensions required to solve the hierarchy problem. However, contrary to the above-mentioned proposal, here we use fermionic operators [4] to generate the mass of the top quark, eluding the potential problems with fine-tuning pointed out in [16].
Here we are only interested in the behavior of the theory below the dynamically generated scale Λ (expected to be of the order of 10 TeV, to fix ideas). The conformal behavior occurs above this scale, up to the "flavor" scale Λ UV > 10 4 TeV. In this range the theory could have additional d.o.f./operators driving the conformal behavior and ultimately responsible for its ending at the scale Λ.
Below Λ, the strong IR dynamics of one of the two types of hyperquarks (ψ) induces the symmetry breaking needed to realize the composite Higgs scenario. The three minimal cosets preserving custodial symmetry are SU (5)/SO (5), SU (4)/Sp (4), and SU (4) × SU (4) /SU (4) D . The SM EW group is embedded into the unbroken symmetry. The vacuum is misaligned, inducing a Higgs v.e.v., by the combined action of the one loop potential induced by the SM gauge bosons and the top quark as well as possible hyperquark bare masses of UV origin.
The second irrep (χ) is needed to realize the QCD color group. Its dynamics may or may not lead to additional pNGBs 2 . Top partners arise as G HC invariant trilinear combinations of the two types of hyperquarks. The top quark acquires a mass via a linear coupling of these partners to the SM fields Q 3 L ≡ (t L , b L ) and u 3 R ≡ t R . The remaining SM fields may instead be coupled bilinearly and acquire a mass via the more standard mechanism. This hybrid solution, proposed in [8,18,19], has the extra advantage of suppressing unwanted contributions to dipole moments or flavor violating operators and could be realized at low energies via the mechanism explained in [20].
With the exception of the Wess-Zumino-Witten (WZW) term, we consider only SM tree level couplings that preserve a parity symmetry, P π , changing the sign of all the pNGBs except for the Higgs itself. Heavier pNGBs thus decay into lighter ones plus a SM gauge boson or, if the decay into a gauge boson is not kinematically allowed, a pair of SM fermions.
This parity symmetry is, however, broken in some cases by the anomaly encoded in the WZW term, and this allows the lightest pNGBs to decay via di-bosons with a very narrow, but still prompt, decay width. (Footnote 1: We work with Weyl fermions and count a complex irrep and its conjugate as one. Footnote 2: Note that the condensate ψχ would break the hypercolor group and cannot arise in vector-like theories such as these [17].) It is interesting to note that, as shown in [21], for the coset SU (4) × SU (4) /SU (4) D the decay of some of the pNGBs is forbidden by the existence of another symmetry, G π , thus providing a possible Dark Matter candidate. For the scope of this paper we only assume that in the SU (4) × SU (4) /SU (4) D scenario the lightest pNGB odd under this additional symmetry is collider stable, leading to the usual signatures, missing E T or highly ionizing tracks, depending on the charge. (The requirement that this pNGB be neutral is necessary only in order to have a DM candidate, not simply a collider-stable particle.) The leading production modes for the pNGBs associated with the EW coset are Drell-Yan production and vector boson fusion.
If the dynamics in the color sector also leads to symmetry breaking (as we assume throughout the paper for illustration purposes, since this case leads to additional interesting phenomena), there will be additional colored pNGBs with masses higher than the EW ones, since they receive contributions from gluon loops.
All models have a neutral pNGB in the octet of color that can be singly produced and decays via an anomalous coupling. Some models also include additional charged and colored pNGBs in the triplet or sextet that, under the assumption of P π -parity, decay to two jets and a lighter EW pNGB. Their charges are fixed by the structure of the top partners.
A universal feature of all of these models is the presence of two additional scalars arising from the two spontaneously broken U (1) axial symmetries associated with the two fermionic irreps. One of these bosons is associated with a G HC -anomalous current and is thus expected to acquire a large mass, just like the η′ in QCD. The remaining one is instead naturally light in the absence of additional UV mechanisms such as bare hyperquark masses. Both couple to gluons via the anomaly and could provide an explanation of the current 750 GeV di-photon excess [22]. Indeed, such an interpretation has already been put forward in [23] for the case of the light U (1) boson. (More details about the role of pNGBs in explaining the excess are given in [24].)
Organization of the paper
The paper is organized as follows: In Section 2 we present the class of models of interest. We then turn to study their different sectors beginning in Section 3 with the pNGBs associated to the EW coset. We study the generation of the potential, its symmetries, present a couple of prototypical spectra, work out all the couplings of relevance for LHC physics and briefly comment on the main phenomenological aspects. In Section 4 we discuss the colored objects in the different theories, pNGBs and top partners, show how their quantum numbers are related and how this affects the phenomenology. In Section 5 we comment on the remaining two pNGBs universally present in this class of models.
Technical details are collected in the appendices. Appendix A lists all the gauge theories having a composite Higgs and a top partner under the requirements discussed in Sections 2 and 3 and discusses their IR properties. Appendix B contains the conventions for the explicit construction of the EW cosets. Appendix C lists additional couplings (anomalous and not) that did not find a place in the main text.
2 The models, streamlined classification
In this section we summarize the models of interest in this paper. We take the opportunity to slightly expand and streamline the classification presented in [10].
We want to realize the "composite Higgs" coset by condensation of a set of fermionic hyperquarks ψ transforming in some irrep of a simple hypercolor gauge group G HC . Recall that the three basic cosets one can realize with fermionic matter depend on the type of irrep to which the fermions belong.
One possibility is to mimic ordinary QCD. Working with left-handed (LH) fermions only, a set of n pairs of LH fermions (ψ i , ψ̄ i ) in a (R, R̄) irrep of G HC , with R complex (C) and R̄ its conjugate, breaks the global symmetry SU (n) × SU (n) → SU (n) D after condensation ⟨ψ i ψ̄ j ⟩ ∝ δ i j . (The U (1) factors will be studied separately because of possible ABJ anomalies. Here we concentrate on the non-abelian factors.) If, on the other hand, we consider just a single set of n LH fermions ψ i in a real (R) (respectively pseudo-real (PR)) irrep, the symmetry breaking is SU (n) → SO(n) (resp. SU (n) → Sp(n)), since the condensate ⟨ψ i ψ j ⟩ turns out to be symmetric (resp. anti-symmetric).
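The number of pNGBs implied by each breaking pattern follows directly from the coset dimension, dim(G) − dim(H). A minimal sketch (standard group-theory dimension formulas; the function names are ours, not the paper's):

```python
# Number of pNGBs = dim(G) - dim(H) for the three fermionic condensation patterns.

def dim_su(n):
    return n * n - 1            # dimension of SU(n)

def dim_so(n):
    return n * (n - 1) // 2     # dimension of SO(n)

def dim_sp(n):
    return n * (n + 1) // 2     # dimension of Sp(n), n even

def pngb_count(pattern, n):
    """Broken generators for complex (C), real (R) and pseudo-real (PR) irreps."""
    if pattern == "C":   # SU(n) x SU(n) -> SU(n)_D
        return 2 * dim_su(n) - dim_su(n)
    if pattern == "R":   # SU(n) -> SO(n)
        return dim_su(n) - dim_so(n)
    if pattern == "PR":  # SU(n) -> Sp(n)
        return dim_su(n) - dim_sp(n)
    raise ValueError(pattern)

# The three minimal EW cosets of the text:
print(pngb_count("R", 5))   # SU(5)/SO(5): 14 pNGBs
print(pngb_count("PR", 4))  # SU(4)/Sp(4): 5 pNGBs
print(pngb_count("C", 4))   # SU(4)xSU(4)/SU(4)_D: 15 pNGBs
```

The same counting applied to the color cosets gives 20 pNGBs for SU (6)/SO(6) and 14 for SU (6)/Sp(6).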
Since we want to obtain the top partners as fermionic trilinears, we also need to embed the color group SU (3) c into the global symmetry of the composite theory. For this purpose we introduce a second fermionic irrep χ coupling to color as well as hypercolor. In all of these cases the minimal field content requires 6 LH fermions altogether, to be divided into three pairs (χ, χ̄) in the case of a complex irrep. Top-partners are constructed by G HC invariant trilinears of type ψχψ or χψχ depending on the model, as shown in Appendix A.
All combinations of R, PR and C irreps are in principle possible. The minimal cosets are shown in Table 1. The three cases crossed out are those that do not give rise to top partners, because the nature of their congruency classes prevents the formation of singlets. This can be easily seen e.g. for the case in which both irreps are pseudo-real, since the product of three pseudo-real irreps cannot contain a singlet. For each remaining case one can look for possible hypercolor gauge groups and irreps that satisfy the remaining constraint of asymptotic freedom. These are listed in Appendix A for completeness. More details can be found in [10]. Table 1 also shows a "ubiquitous" non-anomalous U (1) u factor arising from the spontaneous breaking of the G HC -anomaly-free abelian chiral symmetry. This symmetry is obtained by constructing the anomaly-free linear combination of the two axial symmetries U (1) ψ A and U (1) χ A rotating, respectively, all the ψ (or ψ, ψ̄) and χ (or χ, χ̄) by the same phase. For each pair of complex irreps there is also one vector-like U (1) ψ V or U (1) χ V factor which is both anomaly free and unbroken. To understand the type of pNGBs arising in the various cases, we look at the decomposition under SU (2) L × U (1) Y of the irrep of H under which the pNGBs transform 3 . The decomposition is shown in Table 2.
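The anomaly-free combination U (1) u can be made explicit: the G HC anomaly of a U (1) acting with charge q on N Weyl fermions in an irrep R is proportional to q · N · T(R), with T(R) the Dynkin index, so the two axial charges must be chosen in inverse proportion. A sketch (the numerical inputs are illustrative placeholders, not values from the paper):

```python
def anomaly_free_charges(n_psi, t_psi, n_chi, t_chi):
    """Charges (q_psi, q_chi) of U(1)_u such that the total G_HC anomaly,
    q_psi * n_psi * T(R_psi) + q_chi * n_chi * T(R_chi), vanishes."""
    return n_chi * t_chi, -n_psi * t_psi

# Illustrative placeholder values for flavor multiplicities and Dynkin indices:
n_psi, t_psi = 5, 1
n_chi, t_chi = 6, 2

q_psi, q_chi = anomaly_free_charges(n_psi, t_psi, n_chi, t_chi)
assert q_psi * n_psi * t_psi + q_chi * n_chi * t_chi == 0  # anomaly cancels
print(q_psi, q_chi)
```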
General non-minimal cosets are discussed in [35].
As for the color cosets, arising when the χ also condense, a generic prediction is the existence of an electrically neutral color octet pNGB. In addition, we have a pair of electrically charged pNGBs in the 3 and 3̄ of SU (3) c for the SU (6)/Sp(6) case or in the 6 and 6̄ for the SU (6)/SO(6) case. The charges are discussed in Section 4.
Top partners can be broadly divided into two separate groups: those of type ψχψ and those of type χψχ. (We are being schematic here, and only indicate the relative number of ψ or χ-type hyperquarks, without indicating the specific Lorentz and hypercolor contractions.) Top partners of the first type require coupling to top quark spurions in a two index irrep, while partners of the second type give rise to single index irreps.
There is a sense in which models of type ψχψ are more promising than the others. Top-partners of type χψχ force one to choose the fundamental irrep for the spurions. For the SU (5) case this leads to the 5 that, although being compatible with the Z → b L b̄ L custodial symmetry [36,8], gives rise to effective potentials that tend to break the usual custodial symmetry [37]. The case of SU (4) × SU (4) leads to problems already at the Z → b L b̄ L level, and we exclude these models from the list in Appendix A.
3 The Electro-Weak sector

3.1 The potential
The pNGBs acquire a mass from a loop-induced [38] potential that breaks the shift symmetry explicitly.
We consider three kinds of contribution to the potential. The first one is the contribution from the loop of gauge bosons, which is uniquely determined by the gauge structure up to an overall dimensionless positive constant B. It can be written in one form 4 for the SU (4)/Sp(4) case and in another for SU (4) × SU (4) /SU (4) D . For the SU (5)/SO(5) coset both expressions are equivalent in our conventions from Appendix B. Actually, for all three cases the formula could be written in a uniform notation using the matrix Σ defined in Appendix B instead of U , but we choose to work with U because of its easier transformation properties under the full symmetry group.
The constant B and related ones are the so-called low-energy coefficients (LECs) (in units of f ) that encode the information about the spectrum of the strongly interacting theory. (Footnote 4: We chose to use the pNGB decay constant f as the only dimensionful parameter. This simplifies the notation but hides the scaling properties of the formulas. See Appendix B for the conventions on the generators and the non-linear pNGB matrix U .) Lacking direct experimental information, they could be estimated on the lattice. Some work in this direction has already been done in the context of a specific model [39]. (For more general results on the lattice, see the review [40].) These models necessarily involve representations of the hypercolor group other than the fundamental and pose additional challenges. In the context of phenomenology they have also been used in e.g. [41]. For a clear discussion of how they are generated and can be computed in the context of effective theories of partial compositeness, we refer to [42] and references therein.
We also have the option of adding bare hyperquark masses, with µ a dimensionless matrix preserving the custodial symmetry and B some other dimensionless constant. For definiteness we take µ = 1 if needed.
Lastly, we need to take into account the effect of the top quark, which leads to vacuum misalignment [43]. This can be done by introducing spurionic fields transforming under a particular irrep of the unbroken flavor group. Here is one instance where having a candidate UV completion helps in picking the particular irreps to consider.
We restrict to the case where only the Higgs acquires a v.e.v., since we want to preserve the SM-like properties of the Higgs boson as well as the tree level mass relation m 2 W = cos 2 θ W m 2 Z . Since we are only allowing the Higgs direction to be turned on, the matrix of v.e.v.s is easily exponentiated, and we find it convenient to introduce a matrix Ω(ζ) for all three cases, denoting the vacuum misalignment and depending on v = 246 GeV through sin ζ = v/f . In terms of the original Higgs field ĥ gaining a v.e.v. we have ζ = ⟨ĥ⟩/f , in other words v = f sin(⟨ĥ⟩/f ). The fields appearing in the effective Lagrangian are always the canonically normalized fields with zero v.e.v. The expressions for Ω and U are found in Appendix B.
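The relation v = f sin(⟨ĥ⟩/f ) can be inverted numerically; the quantity ξ = sin²ζ = v²/f² is the standard measure of misalignment (and hence of fine-tuning). A small sketch for the two benchmark values of f used later in the text:

```python
import math

V_EW = 246.0  # GeV, electroweak scale v

def misalignment(f):
    """Return the angle zeta = asin(v/f) and xi = sin^2(zeta) = v^2/f^2."""
    zeta = math.asin(V_EW / f)
    return zeta, math.sin(zeta) ** 2

for f in (800.0, 1600.0):  # GeV, the two benchmark decay constants
    zeta, xi = misalignment(f)
    print(f"f = {f:6.0f} GeV: zeta = {zeta:.4f}, xi = {xi:.4f}")
```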
It is then a straightforward matter to check which, among the irreps of G = SU (5), SU (4), SU (4)× SU (4) with up to two indices, contains spurions for the third generation quarks that preserve the custodial symmetry in the sense specified above. The decomposition of G → SU (2) L × SU (2) R is useful at this point and it is reproduced in Table 3 for convenience.
The spurion irrep should be matched with the type of baryon arising in the UV completion. If, in a particular model, the composite top partners arise from bound states of type χψχ, then the spurions to be used are those in the one-index irrep (the fundamental). Vice versa, if the top partners in a model are of type ψχψ, one should use two-index irreps, to be further restricted to symmetric, anti-symmetric, adjoint or bi-fundamental if required by the symmetries of the particular model. From Table 5 in Appendix A one can reconstruct the requirements case by case.
A spurion S in a two-index irrep of SU (n) transforms as S → gSg T if in the S 2 or A 2 irrep and S → gSg † if in the Ad. In the SU (4) × SU (4) /SU (4) D case one should instead talk about bi-fundamentals (F, F̄). (Table 3: Decompositions of the irreps of G to be used to identify candidate spurions.) The transformation properties of the pNGB field U are U → gU g T for the SU (5)/SO(5) and SU (4)/Sp(4) cosets and U → gU g † for SU (4) × SU (4) /SU (4) D . Thus, we see that, to leading order, the potential for two-index representations is proportional to the expressions in Table 4. Spurions like (F, F̄) must couple to top partners containing one ψ and one ψ̄. Spurions of the type (R, 1) or (1, R′) such as (F, 1), (A 2 , 1) etc., do not give rise to a non-trivial invariant, since we need to multiply directly U and U † .
In the case of SU (5)/SO(5) one could also consider spurions in the fundamental F of SU (5). In this case the leading contribution to the potential is of fourth order 5 . (Footnote 5: The F for the coset SU (5)/SO(5) runs into trouble with the desire to have a vacuum that preserves custodial symmetry. In this case, coupling the pNGBs generically to spurions in the fundamental will induce a tadpole for the field φ − + − φ + − , which should be suppressed in order to avoid tree level corrections to the ρ-parameter. If we were to take this fact as a strict guideline, we would be led to exclude all the cases in Appendix A giving top partners of type χψχ, although this may be a bit too drastic at this stage.)
In the above formulas S could carry an SU (2) L index in case it corresponds to Q 3 L . This index is then also summed over in the obvious way. Notice that terms proportional to tr(SU * ) + tr(S * U ) or (S † U S * ) + (S T U * S) are not allowed, due to the need to preserve the spurionic U (1).
3.2 The parity transformations P π and G π
We are now in the position of defining more concretely the parity symmetries of relevance for these models, starting with P π . For the scope of this paper we will think of P π as an accidental symmetry of the non-anomalous pNGB Lagrangian coupled to the SM. Its action changes the sign of all the pNGBs except the Higgs doublet(s) and can be realized in all three cases as U → P̂ π U † P̂ π , with the matrix P̂ π defined case by case for the three cosets SU (5)/SO (5), SU (4) × SU (4) /SU (4) D and SU (4)/Sp(4).
To see that the transformation accomplishes its task, note first that P̂ π Ω * = Ω P̂ π for SU (5)/SO (5) and SU (4)/Sp(4), and P̂ π Ω † = Ω P̂ π for SU (4) × SU (4) /SU (4) D . This allows one to move the action of P̂ π past the vacuum misalignment matrix directly onto the pNGB matrix Π (cf. Appendix B), where its effect is to reverse the sign of the Higgs doublet(s). This, together with the hermitian conjugation on U that reverses the sign of all pNGBs, has the desired combined effect. In all three cases P π leaves the vacuum invariant and preserves the custodial symmetry group. Note that the hermitian conjugation is necessary in all three cases. But it is known that the WZW term breaks precisely this last transformation, and thus P π can never be an exact symmetry at the quantum level. Still, it is desirable for the Yukawa couplings to be left invariant by such a transformation, since this prevents the generation of custodial-symmetry-breaking v.e.v.s from the induced potential and greatly alleviates the constraints from flavor physics, e.d.m.s etc. This condition can be realized by imposing the invariance of the spurion fields. In particular, for the two-index irreps in Table 4 we require S = ±P̂ π S † P̂ π (either sign) for the S 2 , A 2 or (F, F̄), or S = ±P̂ π S T P̂ π (either sign) for the Ad or (F, F̄). Some, but not all, spurions obey these requirements. The spurions used in the next section to generate an example of potential have been chosen to satisfy these invariance requirements.
The second transformation of interest, G π , is realized as U → Ĝ π U T Ĝ † π and gives non-trivial results only for SU (4) × SU (4) /SU (4) D , since in the other two cases U T = ±U (see Appendix B). For the SU (4) × SU (4) /SU (4) D case we choose Ĝ π following [21]. This transformation is interesting because it is also a symmetry of the WZW term and may be preserved at the quantum level in the UV theory. If so, the lightest neutral pNGB odd under it (a linear combination of φ 0 , N 0 , h and A) could be a Dark Matter candidate.
3.3 Mass spectrum
Now that we have seen what the main contributions to the potential are and how to compute them, we present a couple of examples of the mass spectrum based on a particular choice of spurions. This is not in any way a prediction of the models; it is merely presented to make the previous discussion more concrete and to show qualitatively what a mass spectrum could look like. We consider potentials that depend on three of the dimensionless constants B i , to be specified below. We trade one linear combination for the misalignment angle sin ζ = v/f , measuring the amount of fine-tuning in the model.
A second combination is fixed by imposing that the mass of the Higgs boson be at its measured value [44] of 125 GeV. The third combination is left free, and varying it gives possible examples for the mass spectrum.
As a first example, consider the SU (5)/SO(5) model with a potential where we have chosen the spurion for t R to be in the (1, 1) component of the decomposition of the Ad irrep and the spurion for (t L , b L ) to be in one of the two (2, 2) components with T 3 R = −1/2. Setting f = 800 GeV and f = 1600 GeV, solving the constraints and varying B 1 , we obtain the spectra in Figures 1 and 2 respectively.
Moving on to SU (4) × SU (4) /SU (4) D , we choose to present the mass spectrum induced by the following potential, consisting of the contributions from the gauge fields, some bare masses and a LH third family, assumed to give the dominant contribution.
The spurions for the LH quarks are chosen to belong to one of the (2, 2) of SU (2) L × SU (2) R found in the decomposition of (4, 4̄). The representative spectra for f = 800 GeV and f = 1600 GeV are given in Figures 3 and 4 respectively.
Not much needs to be done for the remaining coset SU (4)/Sp (4). The η is the only pNGB other than the Higgs, and in our current approach its mass is essentially a free parameter. A full discussion of this case is given in [28].
3.4 Couplings involving pNGBs
The trilinear vertex ππV between two generic EW pNGBs and an EW vector boson is encoded in the structure of the currents. With the usual shorthand π 1 * ↔∂ µ π 2 = π 1 * ∂ µ π 2 − (∂ µ π 1 * ) π 2 we find, for SU (5)/SO(5), the couplings to the Z-boson and to the W ± . For SU (4) × SU (4) /SU (4) D we find instead, in agreement with the results of [21], the corresponding Z and W ± couplings. The electromagnetic coupling is of course always given by ieq π A µ π * ↔∂ µ π for any pNGB π of charge q π .
In all three cases the Higgs boson h does not mix with the other pNGBs, and its couplings to the vector bosons at tree level take a simple form. The model SU (4)/Sp(4) only contains the η as an additional pNGB. Its trilinear couplings vanish, and its quartic coupling can easily be written down. For the quartic couplings in the remaining models we refer to Appendix C.
The P π -parity-odd pNGBs can decay to the transverse part of the vector bosons via the anomaly term, yielding a vertex πV V . This can be extracted from the WZW term [45] by considering the piece containing one pNGB and two vector bosons. The relevant term is given in [46] in the elegant language of differential forms. For SU (4)/Sp(4) we set A L = A, A R = −Σ 0 A T Σ 0 and U = Ω exp(2 √ 2iΠ/f ) Σ 0 Ω T . Expanding to first order in the pNGBs and integrating by parts yields the anomalous couplings. For SU (4) × SU (4) /SU (4) D we set A L = A R = A and U = Ω exp(2 √ 2iΠ/f )Ω. Expanding to first order in the pNGBs and integrating by parts, we find exactly the same expression as (18). This was found in [21] and is due to the extra symmetry G π , defined in Section 3.2, present in this case.
In particular, no terms involving the pNGB φ and N arise in this model.
On the contrary, for the coset SU (5)/SO(5) such terms are present. There are three possible production modes to be considered for these EW pNGBs, see Figure 5.
Two of them are pair production modes, one by an off-shell vector boson in the s-channel (Drell-Yan production, DY) and the other by vector boson fusion via a renormalizable four-boson interaction (VBFr). The third one is a single production mode by vector boson fusion via the anomaly (VBFa).
Perhaps surprisingly, VBFr tends to give a larger contribution than DY. Consider the interesting case of the doubly charged pNGB φ + + present in SU (5)/SO (5). (A model in which such a particle is present as an elementary object is the Georgi-Machacek model [47].) The tree level production can be easily estimated with MadGraph and FeynRules [48], yielding, at 13 TeV for a mass of 500 GeV and f = 800 GeV: σ DY (φ + + φ − − ) = 1.3 fb and σ VBFr (φ + + φ − − ) = 3.0 fb. The single production of the doubly charged pNGB via VBFa is totally negligible in this case. This last statement is no longer true for other pNGBs. For instance, in the case of the η of SU (4)/Sp(4) (and a particle with exactly the same couplings is present in SU (4) × SU (4) /SU (4) D as well), with the same parameters as before, the double production is now negligible: σ DY (ηη) = 0 (impossible) and σ VBFr (ηη) = 2.0 × 10 −2 fb, while σ VBFa (η) is of the order of a few fb, depending on the specific value of the anomaly.
The reason for this different behavior is that the VBF diagrams that contribute the most are those where a photon is allowed to be present. For this same reason, the singly charged pNGBs have non-negligible cross sections for all processes, and the single production mode becomes relevant at higher masses. We have not tried to pin down the exact range of masses where one production mode is expected to be dominant with respect to the others, because this depends on details of the models such as mixing (not an issue for the η of SU (4)/Sp(4) or the φ + + of SU (5)/SO(5)). However, given that σ VBFr (φ + + φ − − ) and σ VBFa (η) are roughly comparable for masses of 500 GeV, we expect the cross-over region to be within the energy range of the LHC. In the SU (4) × SU (4) /SU (4) D case, the lightest pNGB odd under G π is collider stable under our assumptions and thus leads to missing energy or heavy charged tracks, depending on its charge. If its decay into SM fermions is totally forbidden, it could even be a dark matter candidate [21]. This is in the spirit of [49], although their candidate for dark matter (the η of SU (4)/Sp(4)) is not viable for our UV completions because it decays through the anomalous couplings. (For pNGB dark matter see also [50]. Additional dark matter candidates have been conjectured to arise from the topological structure of similar cosets [51].) A pictorial description of the various possibilities is given in Figure 6.
4 Top partners and colored mesons
We now turn to the discussion of objects carrying color, that is, bound states containing some of the constituents χ.
As we mentioned in the introduction, top-partners are realized via fermionic trilinears in the hyperquarks. These can be of type ψχψ or χψχ depending on the type of model under consideration, as shown in Appendix A. So far we have been somewhat sloppy in indicating the structure of these objects; now it is time to be more specific.
We need at least six new fermions "χ" in order to embed the color group into the associated global symmetry group in an anomaly-free way. In the case of a complex irrep, leading to an SU (3) × SU (3) global symmetry broken to the diagonal color group, the three pairs (χ, χ̄) realize the embedding directly. Even in the other two cases (real or pseudo-real irreps), it is still convenient to split the 6 fermions into a 3 + 3̄ of SU (3) c , and in these cases we allow ourselves a slight notational ambiguity for ease of notation. Note that these fermions must carry not only the color quantum numbers but also the additional U (1) X charge needed to obtain the proper weak hypercharge Y = X + T 3 R for the top partners. The allowed values of X can be found by looking at the construction of the top-partners as follows.
Consider the case where the top-partners are of type χψχ. Using the notation (19), we can generally construct at most three types of LH objects transforming in the 3. They are contained in the products χψχ, χ̄ψ † χ̄ † and χ † ψχ † , where we used the fact that 3 × 3 = 6 + 3̄. Identifying the T L = T R = 0 component with the partner of t R , we see that we must choose X(χ) = −1/3 and B(χ) = −1/6 (baryon number) for the constituents χ. Now, still within the χψχ case, if the G HC irrep for the χ in question is real, giving rise to the coset SU (6)/SO (6), this leads to colored pNGBs χχ ∈ 6 −2/3 of baryon number 1/3, as well as χ̄χ̄ ∈ 6̄ +2/3 of baryon number −1/3 and the ever-present χ̄χ ∈ 8 0 of baryon number 0. If the G HC irrep is pseudo-real, giving rise to the coset SU (6)/Sp (6), then the pNGB mesons are χχ ∈ 3̄ −2/3 etc., with the same baryon number assignments as before.
If instead the top partners are of type ψχψ, then the χ and χ̄ in (19) must be in the 3 +2/3 + 3̄ −2/3 of SU (3) c × U (1) Y with baryon number ±1/3, leading, for a real irrep, to mesons χχ ∈ 6 4/3 of baryon number 2/3 and its complex conjugate, plus the usual χ̄χ ∈ 8 0 . From Appendix A we see that no pseudo-real cases exist when the top-partners are of type ψχψ. The case in which the χ are in a complex irrep only leads to the neutral meson χ̄χ ∈ 8 0 without baryon number.
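The U (1) X bookkeeping of the last two paragraphs can be checked mechanically, since the hypercharge of these SU (2) L -singlet mesons is just the sum of the constituent X charges (Y = X + T 3 R with T 3 R = 0). A sketch of the check:

```python
from fractions import Fraction as F

# Constituent X charges for the chi-psi-chi case quoted in the text:
X_CHI, X_CHIBAR = F(-1, 3), F(1, 3)

assert X_CHI + X_CHI == F(-2, 3)       # chi chi       -> sextet/triplet at Y = -2/3
assert X_CHIBAR + X_CHI == F(0)        # chibar chi    -> octet at Y = 0
assert X_CHIBAR + X_CHIBAR == F(2, 3)  # chibar chibar -> conjugate at Y = +2/3

# psi-chi-psi case: chi at X = +2/3 gives the sextet at Y = 4/3 quoted above.
assert F(2, 3) + F(2, 3) == F(4, 3)
print("hypercharge sums consistent")
```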
The masses for these colored objects should be in the multi-TeV range, getting contributions from gluon loops and possibly bare masses for χ, but they could still be in the discovery range of the LHC. The octets decay mostly to two gluons via the anomaly term, but there is no such term available for the triplet or the sextet. Preserving P π -parity, we can let them cascade to the lighter EW pNGBs via interactions of type πqq′φ, where q and q′ are SM quarks and φ is an appropriate EW pNGB with the right quantum numbers. If we allow for interactions violating P π -parity, we do not need this additional pNGB. Summarizing, we have the following three possibilities, in addition to the octet:
• Case a) χ in a real irrep and top-partners of type χψχ. This gives rise to mesons π in the 6 −2/3 of SU (3) c × U (1) Y of baryon number −1/3. They can decay via ∆B = 1 couplings π * ab Q La Q Lb φ, π * ab u Ra u Rb φ, π * ab d Ra u Rb φ, π * ab d Ra d Rb φ, where we denoted explicitly only the color index. The various EW pNGBs φ appearing in the vertex must be such that the particular vertex is invariant under the full SM gauge group. In the case of the Q L Q L coupling, we have the option of coupling to an SU (2) L triplet or a singlet, making the quark flavor indices symmetric or anti-symmetric respectively. In all gory detail for the triplet: π * ab Q αf i La Q f ′ j Lbα φ ij , symmetric in the exchange of f and f ′ . In the absence of P π -parity we could also consider the term π * ab d Ra d Rb , symmetric in the flavor indices.
• Case b) χ in a real irrep and top-partners of type ψχψ. This gives rise to mesons π in the 6 4/3 of SU (3) c × U (1) Y of baryon number 2/3. They can decay via the same couplings as case a), but now these couplings are baryon number preserving. Without P π -parity one can only make the vertex π * ab u Ra u Rb , symmetric in flavor.
• Case c) χ in a pseudo-real irrep and top-partners of type χψχ. This gives rise to mesons π in the 3̄ −2/3 of SU (3) c × U (1) Y , which can decay via couplings of type ε abc π a q b q c φ with the appropriate EW pNGB. Without P π -parity one can construct ε abc π a d Rb d Rc , antisymmetric in the flavor indices.
For all EW cosets there are some pNGBs that can be used to construct some of these couplings, so all the colored sextets and triplets can decay into two jets and an EW pNGB. Note that proton stability is assured, since we preserve lepton number. However, the presence of ∆B = 1 interactions raises the interesting possibility of neutron-anti-neutron oscillations. (See [52] for a recent discussion in the context of RPV-SUSY. Similar scalar objects have been discussed in e.g. [53,54].) The situation is summarized in Figure 7.
As far as fermionic colored objects go, these models predict a slew of additional resonances, but all of them, with the possible exception of the top partners, should be out of reach at the LHC.
Exotic fermions of higher electric charge also need to be taken into consideration. For the almost ubiquitous charge-5/3 state X, the main decay mode targeted by experiments so far is X → W t [55], but the existence of possible additional charged pNGBs opens alternative channels such as X → t φ + 0 . The presence of doubly charged pNGBs in some constructions might even allow for X → b φ + + . The operator creating the fermionic resonance should acquire a large negative anomalous dimension in the running from Λ UV to Λ. This has been investigated at the perturbative level in [56] for the class of models in [8]. More recently, [57] summarized the results for the QCD case, also within perturbation theory.
5 Two more pNGBs/ALPs
A universal feature of all of these models, simply due to the fact that they are constructed out of two different types of fermions, is the existence of two additional neutral pNGBs associated with the abelian axial currents from the axial U (1) ψ and U (1) χ . One linear combination of these currents can be taken to be free of G HC anomalies. The associated pNGB, to be denoted by a, will be naturally light and, in the absence of further interactions, would essentially be a composite axion [58] coupling to both gluons and EW gauge bosons via the anomaly. Since the associated decay constant f a is much smaller than the window of values allowed by the "invisible-axion" solution, we must give this particle a mass to avoid the usual constraints. As in technicolor models [59], a mass can be obtained from e.g. four-fermi terms arising at the Λ UV scale, with coefficients c i = O (1). For typical values of the parameters, using Dashen's formula [60] we obtain an estimate, but a fairly large range of masses is possible. For instance, Naive Dimensional Analysis would lead to a lower estimate m 2 a ≈ Λ 2 f 2 /Λ 2 UV ≈ (40 MeV) 2 . This value needs to be raised at least by roughly a factor of 3 in order not to conflict with the bounds on the visible axion coming from beam dump experiments (discussed in [61]) or K → πa searches [62]. (See also [63] for cosmological bounds on ALPs at much higher scale f .) This, however, is easily achieved. In fact, in [23] the exciting possibility has been raised that this object is responsible for the 750 GeV bump in the di-photon signal recently reported by ATLAS and CMS [22]. Such a large mass could be obtained by e.g. adding bare masses for the colored hyperquarks.
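The NDA lower estimate above is simple arithmetic, m_a ≈ Λ f /Λ UV. A sketch (the value of Λ UV below is purely illustrative; the text only requires Λ UV > 10⁴ TeV):

```python
def m_a_nda(lam, f, lam_uv):
    """NDA estimate m_a ~ Lambda * f / Lambda_UV (all scales in GeV)."""
    return lam * f / lam_uv

LAM    = 1.0e4  # GeV: dynamical scale of order 10 TeV
F_PNGB = 8.0e2  # GeV: benchmark decay constant f = 800 GeV
LAM_UV = 2.0e8  # GeV: illustrative flavor scale of 2 x 10^5 TeV (an assumption)

m_a = m_a_nda(LAM, F_PNGB, LAM_UV)
print(f"m_a ~ {1e3 * m_a:.0f} MeV")  # ~ 40 MeV for these illustrative inputs
```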
The remaining linear combination, to be denoted by η, corresponds to the G_HC anomalous current, and its associated "would-be" Goldstone boson acquires a mass via the 't Hooft mechanism [64]. The η mass is given by the Veneziano-Witten formula [65,66] (N ≈ 10, Ξ the topological susceptibility) and can be naively estimated to be of the same order as that of a typical resonance. However, subtleties may arise that lower the mass of this object and bring it within reach of the LHC.
Regardless of their mass, these objects are singly produced mostly via gluons through the anomaly and decay to di-bosons, also via the anomaly (Fig. 8), with calculable branching ratios. This makes them a good window into UV physics, since the branching ratios are related to the type of UV d.o.f. of the underlying theory. It would also be interesting to investigate in detail the mixing of these scalars with the other fields in the EW coset, as done recently in [67] in the context of the model of [47]. This could lead to an enhancement of the cross-section for the EW pNGBs.
Acknowledgments
The author wishes to thank A. Belyaev.

A All models of partial compositeness satisfying the requirements in the text

In this appendix we list all models of partial compositeness satisfying the requirements in the text.
The main requirements are a simple hypercolor gauge group G HC and two irreps ψ and χ giving rise to a custodial EW coset and top partners. In addition, we require the theory to be asymptotically free and of course free of gauge anomalies.
Comparing with [10], we have removed a few models that do not seem promising. Some are based on spinorial irreps of the orthogonal group for which, as discussed in [10], the MAC hypothesis leads to the wrong symmetry-breaking pattern. Others are those having baryons of type χψχ with ψ in a complex irrep; this leads to top partners in the (2, 1), violating the custodial symmetry [36]. If the di-photon excess [22] is confirmed with properties roughly in agreement with the 2015 data, only a fraction of the models [23] will be able to fit the data. Further restrictions [68] could arise from imposing 't Hooft anomaly matching [69].
The list of models presented in Table 5 contains both conformal and confining theories.
It is unfortunately not yet possible to identify exactly the conformal region in non-supersymmetric gauge theories. However, one can use some heuristic arguments to get indications of their behavior, and it turns out that most of the models are rather clear-cut cases. Consider for instance the two-loop beta function β(α) = β_1 α^2 + β_2 α^3 (β_1 < 0 always). A formal solution α* to β(α*) = 0 exists for β_2 > 0 and, if not too large, it can be trusted and the theory can be assumed to be in the conformal regime. If β_2 < 0 or α* is outside the perturbative regime, the model is likely to be confining. In between there is a region, difficult to characterize precisely, where the theory is conformal but strongly coupled.
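As a concrete illustration of this heuristic, here is a short numerical sketch (not taken from the paper) using the standard two-loop coefficients for an SU(N) gauge theory with n_f Dirac fermions in the fundamental representation; the example values are illustrative only:

```python
import math

def two_loop_fixed_point(CA, CF, TF, nf):
    """Return the formal two-loop fixed point alpha* of
    beta(alpha) = beta1*alpha^2 + beta2*alpha^3, or None if absent.

    Conventions: beta1 = -b1/(2*pi), beta2 = -b2/(8*pi^2), with the
    standard two-loop coefficients b1 and b2.
    """
    b1 = 11.0 / 3.0 * CA - 4.0 / 3.0 * TF * nf
    b2 = 34.0 / 3.0 * CA**2 - (20.0 / 3.0 * CA + 4.0 * CF) * TF * nf
    beta1 = -b1 / (2.0 * math.pi)
    beta2 = -b2 / (8.0 * math.pi**2)
    if beta1 >= 0 or beta2 <= 0:   # need asymptotic freedom and beta2 > 0
        return None
    return -beta1 / beta2          # zero of beta1*a^2 + beta2*a^3

# SU(3) with 16 fundamental Dirac flavors: just below the loss of
# asymptotic freedom, so a weakly coupled (Banks-Zaks-like) fixed point.
alpha_star = two_loop_fixed_point(CA=3.0, CF=4.0 / 3.0, TF=0.5, nf=16)
print(alpha_star)  # small value -> the fixed point can be trusted
```

For fewer flavors (e.g. n_f = 2) the function returns `None`, matching the "likely confining" case described above.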
In Table 6 we list the subset of models that are likely to be outside of the conformal window.
These models also obey the heuristic bound 11C(G) > 4 (N ψ T (ψ) + N χ T (χ)) proposed in [70] as well as the rigorous bounds from the a-theorem [71] a UV > a IR .
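The heuristic bound is straightforward to script. The following sketch checks it for a list of Weyl-fermion species, each given as a multiplicity N_i and Dynkin index T(R_i); the SU(4) matter content below is chosen only to exercise the formula, not taken from Table 6:

```python
def outside_conformal_window(C_G, species):
    """Heuristic confinement criterion of [70]:
    11*C(G) > 4 * sum_i N_i * T(R_i),
    where C_G is the adjoint Casimir of the hypercolor group and
    species is a list of (N_i, T_i) pairs for the Weyl fermions.
    Returns True if the bound suggests the theory confines."""
    return 11.0 * C_G > 4.0 * sum(n * t for n, t in species)

# Illustrative check for SU(4) hypercolor (C(G) = 4, T(fund) = 1/2,
# T(antisymmetric) = 1): 5 Weyl fermions in the antisymmetric plus
# 6 in the fundamental.
print(outside_conformal_window(4.0, [(5, 1.0), (6, 0.5)]))
```

Adding enough extra flavors makes the right-hand side dominate, pushing a model back inside the conformal window.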
The use of these models for BSM physics depends on their IR behavior. The simplest application would be to restrict oneself to the models in Table 6. These models can easily be brought into the conformal window (Table 5).
B Group theory conventions for the three cosets
In this appendix we collect the conventions for the explicit constructions of the three EW cosets studied in the text.
B.1 Notation for the SU (5)/SO(5) coset
In this case we realize the Lie algebra of the unbroken group SO(5) as the subset of antisymmetric imaginary generators of SU(5). This is just a particular choice of basis; a more general way of doing the decomposition is to introduce a symmetric matrix δ_0 and define the broken/unbroken generators by T δ_0 ∓ δ_0 T^T = 0, respectively. We choose not to do this and set δ_0 = 1 from the outset, but we comment below on the general form of the pNGB matrix in the general case. The generators of the custodial SU(2)_L × SU(2)_R are chosen accordingly. The broken generators are the real symmetric traceless generators of SU(5). We write the pNGBs so that, with our conventions φ^{n*}_m = φ^{−n}_{−m}, the full matrix of pNGBs is real symmetric. The vacuum misalignment is described by a unitary matrix obtained by exponentiating (half of) the Higgs v.e.v.
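A quick NumPy sanity check of this basis choice (a sketch, not code accompanying the paper): imaginary antisymmetric generators close into themselves under commutation, and commutators of the real symmetric broken generators fall back into the unbroken algebra, as required for a symmetric coset:

```python
import numpy as np

rng = np.random.default_rng(0)

def unbroken(n=5):
    """Hermitian generator of SO(5) inside SU(5): imaginary antisymmetric."""
    a = rng.standard_normal((n, n))
    return 1j * (a - a.T)

def broken(n=5):
    """Broken generator: real symmetric traceless."""
    s = rng.standard_normal((n, n))
    s = s + s.T
    return s - np.trace(s) / n * np.eye(n)

def is_unbroken(m):
    return np.allclose(m.real, 0) and np.allclose(m, -m.T)

def is_broken(m):
    return np.allclose(m.imag, 0) and np.allclose(m, m.T)

comm = lambda a, b: (a @ b - b @ a) / 1j  # [T1, T2] = i f T3 convention

T1, T2, X1, X2 = unbroken(), unbroken(), broken(), broken()
print(is_unbroken(comm(T1, T2)),  # [H, H] inside H
      is_unbroken(comm(X1, X2)),  # [X, X] inside H (symmetric coset)
      is_broken(comm(T1, X1)))    # [H, X] inside X
```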
Ω preserves the custodial symmetry SU(2)_D generated by T^i_L + T^i_R, and we write the non-linear realization of the pNGBs as a symmetric and unitary matrix. All the fields in Π have zero v.e.v., and in the unitary gauge H^+ = 0 and H^0 = h/√2.
Notice that with our choice of basis, Ω = Ω^T. Had we chosen a more general δ_0, we would have obtained a generalized relation whose last identity defines Σ. The matrix Σ has the advantage of making some formulas look more uniform in all three cases, but the disadvantage of not transforming uniformly under SU(5), and we chose not to use it. The covariant derivative, in our convention, can be written in terms of commutators; the kinetic term follows in the usual way.
B.2 Notation for the SU (4)/Sp(4) coset
We pick the symplectic matrix, denoted ε in what follows. The unbroken generators satisfy T^i ε + ε T^{iT} = 0. In particular, the generators of SU(2)_L × SU(2)_R are chosen as in (39), and the pNGBs can be represented so that Π ε − ε Π^T = 0.
The matrix Ω describing the vacuum misalignment and preserving the custodial symmetry allows the non-linear realization to be expressed as an anti-symmetric and unitary matrix U. Also in this case, the fields in Π have zero v.e.v. and in the unitary gauge H^+ = 0 and H^0 = h/√2. The covariant derivative reads as in the previous case (36), but the kinetic term is normalized differently. Even in this case one has the option of using the identity relating Ω^T and Ω through the symplectic matrix, and of introducing a matrix Σ in a way analogous to the previous coset, but we do not use it for the same reasons as above.
B.3 Notation for the SU (4) × SU (4) /SU (4) D coset
In this case, the SU(2)_L × SU(2)_R subgroup is embedded in the unbroken SU(4)_D by a suitable choice of generators. The pNGBs are parameterized so that φ^*_+ = φ_− and N^*_+ = N_−. In the unitary gauge we have, as usual, H^+ = 0 and H^0 = h/√2. In this case, the non-linear realization of the pNGBs is given by a unitary matrix. The covariant derivative is obtained by the usual commutator, and the kinetic term is normalized correspondingly.

C Additional three- and four-boson couplings for the models in the text
Of course, the generation of masses by the potential introduces a mixing between these gauge eigenstates. This depends on the specific nature of the mass matrix and in many cases it could be handled by the mass insertion approximation. Throughout the paper we work with gauge eigenstates.
Also note that one could use the Clebsch-Gordan coefficients to express the gauge eigenstates as eigenstates of the diagonal custodial symmetry group SU(2)_D ⊂ SU(2)_L × SU(2)_R, as done in [32].
An even deeper difference with the model in [32] is that they used an additional U(1) gauge field to induce vacuum misalignment instead of the top coupling. (5), to be multiplied by e^2 dim(ψ)/(48π^2 f).
Simulation of Silicon Waveguide Single-Photon Avalanche Detectors for Integrated Quantum Photonics
Integrated quantum photonics, which allows for the development and implementation of chip-scale devices, is recognized as a key enabling technology on the road towards scalable quantum networking schemes. However, many state-of-the-art integrated quantum photonics demonstrations still require the coupling of light to external photodetectors. On-chip silicon single-photon avalanche diodes (SPADs) provide a viable solution as they can be seamlessly integrated with photonic components and operated with high efficiencies and low dark counts at temperatures achievable with thermoelectric cooling. Moreover, they are useful in applications such as LIDAR and low-light imaging. In this paper, we report the design and simulation of silicon waveguide-based SPADs on a silicon-on-insulator platform for visible wavelengths, focusing on two device families with different doping configurations: p-n+ and p-i-n+. We calculate the photon detection efficiency (PDE) and timing jitter at an input wavelength of 640 nm by simulating the avalanche process using a 2D Monte Carlo method, as well as the dark count rate (DCR) at 243 K and 300 K. For our simulated parameters, the optimal p-i-n+ SPADs show the best device performance, with a saturated PDE of 52.4 ± 0.6% at a reverse bias voltage of 31.5 V, a full-width-half-maximum (FWHM) timing jitter of 10 ps, and a DCR of <5 counts per second at 243 K.
I. INTRODUCTION
Quantum information technologies have been rapidly developing in recent years, and efforts are shifting from conceptual laboratory demonstrations to scalable real-world devices [1]. Chip-scale photonics devices are important candidates for implementing key features of a future quantum internet, but many recent demonstrations still require the coupling of light to external single-photon detectors [2], [3]. Major improvements in device footprint and scalability could be achieved if these photodetectors reside on the same chip and couple directly to the photonic waveguides [4].
Superconducting nanowire single-photon detectors (SNSPDs) are a state-of-the-art solution, featuring waveguide integrability, near-unity quantum efficiencies, low dark count rate of a few counts per second (cps), and low timing jitter down to < 20 ps [5], [6]. However, they require cryogenic operating temperatures of a few degrees Kelvin, which is expensive and prohibitive for large-scale deployment.
A practical alternative can be found in single-photon avalanche diodes (SPADs), which are typically reverse biased beyond the breakdown voltage. In this so-called Geiger mode, a single incident photon can trigger a macroscopic avalanche current via a cascade of impact ionization processes. In contrast to SNSPDs, SPADs typically only require thermoelectric cooling and can even operate at room temperature [7], [8]. Moreover, SPADs can be easily incorporated into silicon photonics platforms and benefit from mature complementary metal-oxide semiconductor (CMOS) fabrication technologies [9], making them a promising candidate for scalable manufacturing.
To date, reports of waveguide-coupled SPADs have been limited to operation at infrared wavelengths [10], [11]. However, many relevant quantum systems, including trapped ions [12] and color centers in diamond [13], operate in the visible spectrum, which makes efficient, low-noise SPADs for visible wavelengths highly desirable. Such devices would also find numerous applications in other important technologies, including LIDAR [14], non-line-of-sight imaging [15], and fluorescence medical imaging [16].
In this paper, we extend our recent work on the design and simulation of silicon waveguide-coupled SPADs for visible light operation, where we used a 2D Monte Carlo simulator to obtain the photon detection efficiency (PDE) and timing jitter, and studied the effect of different waveguide dimensions and doping concentrations [17]. Here we perform an in-depth study of different doping configurations, focusing on two device families: p-n+ and p-i-n+. In addition to the PDE and timing jitter, we also analyze the expected dark count rate (DCR) at room temperature and at −30 °C (243 K), which is a typical operating temperature achievable by Peltier coolers.
Many details regarding the basic SPAD geometry and simulation procedure can be found in ref. [17] and are not repeated here; instead we provide the essential points and highlight the improvements we have made on our previous work.
II. WAVEGUIDE-COUPLED SPAD DESIGNS

A. Device Geometry
The SPAD structure is shown in Fig. 1; it is based on a long silicon (Si) rib waveguide with an absorption of >99% at 640 nm. Input light is end-fire coupled from an input silicon nitride (Si3N4) rectangular waveguide, which has high transmittivity at visible wavelengths [9]. We choose this input coupling geometry over a phase-matched interlayer transition, commonly used in integrated photodetectors for infrared wavelengths [11], [18], as the latter is difficult to achieve due to the large difference in refractive indices of Si (n = 3.8) and Si3N4 (n = 2.1). An input coupling efficiency of >90% at the Si/Si3N4 interface is obtained using 3D Finite Difference Time Domain (FDTD) simulations (Lumerical). The structures are cladded with 3 µm of silicon dioxide (SiO2) above and below. In this study, we fixed the waveguide core width and height at 900 nm and 340 nm, respectively, with a shallow etch giving a rib height of 270 nm.
Electrical connections to the device would be made via metal electrodes deposited on top of heavily-doped p ++ and n ++ regions at the far ends of the device (along the x axis).
B. Doping Configurations
Our previous simulation study of p-n + SPADs [17] showed that increasing waveguide core widths (up to 900 nm) could lead to a higher PDE, as charge carriers can travel a larger distance over which avalanche multiplication can occur. Here, we vary the placement of the p-n + junction, and investigate the hypothesis that increasing the displacement ∆j of the junction beyond the edge of the waveguide core region (Fig. 1(c)) would also enhance this effective distance, and hence the PDE.
Another observation was that impact ionization was most efficient in a narrow region where the highest electric fields are concentrated (similar to Figs. 2(a)-(c)). Widening this high-field region could enhance the PDE, and is achievable by introducing an intrinsic region between the p- and n+-doped areas (Fig. 1(d)). However, doing so would also lower the peak electric field strength (Fig. 2(d)), which could in turn decrease the impact ionization efficiency. Here we explore the effectiveness of such p-i-n+ devices, and attempt to find the optimum width of the intrinsic region ∆W, centered at 300 nm from the edge of the waveguide core. For both device families, we maintain a constant geometry and doping profile along the length of the waveguide. In this study, we choose an n+ (p) doping concentration of 1×10^19 (2×10^17) dopants/cm^3, and a lightly p-doped intrinsic region with 1×10^15 dopants/cm^3.
A. DC Electrical Analysis
For each set of device dimensions and doping configurations, we perform a DC electrical analysis (ATLAS, Silvaco Inc.) by applying a reverse bias voltage V B across the device electrodes. For each device, the cathode and anode are placed equidistant from the center of the Si waveguide, with a minimum n + region width of 45 nm. We thus obtain the electric field F(r), ionization coefficients, and other parameters dependent on the 2D position vector r in the x − y plane; these are required for the Monte Carlo simulation of the avalanche process. Further details can be found in ref. [17]. The breakdown voltage is also identified as the reverse bias voltage V B at which the device current increases sharply.
B. 2D Monte Carlo Simulator
In comparison to deterministic techniques [19], Monte Carlo simulators are well-suited for analyzing SPAD performance, as they can evaluate the timing jitter by modeling the stochastic nature of the impact ionization and avalanche buildup processes. For applications such as quantum key distribution (QKD) [20] and LIDAR [21], low timing jitter is critical to the overall system performance.
In this work, we adapt the 2D Monte Carlo simulator detailed in ref. [17]. Briefly, a random path length (RPL) model is used to simulate the avalanche multiplication process [22]- [24]. Each simulation run starts with a photon absorption which creates an electron-hole pair. At each time step of interval ∆t rpl , each charge carrier is accelerated by the electric field and, depending on the ionization coefficients, probabilistically causes an impact ionization after traversing a random path length. This creates further electron-hole pairs, which can then undergo further impact ionizations and eventually lead to a self-sustaining avalanche. Charge carriers are lost when they exit the device boundaries; we note that unlike in ref. [17], the Monte Carlo simulation in this work considers the entire device area (the whole of the p, i, and n + regions) and is not restricted to the waveguide core region.
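A toy one-dimensional caricature of the RPL loop (deliberately ignoring the real field maps and ionization coefficients; the per-step probabilities, the carrier-loss probability, and the carrier-count threshold standing in for I_det are all made up for illustration):

```python
import random

def rpl_avalanche(p_ion=0.06, p_loss=0.02, max_steps=500, n_thr=200, rng=None):
    """Toy 1D random-path-length avalanche. Each carrier, per time step,
    ionizes with probability p_ion (adding an e-h pair) or is lost at a
    boundary with probability p_loss. Returns (detected, step)."""
    rng = rng or random.Random()
    carriers = 2                       # initial photo-generated e-h pair
    for step in range(max_steps):
        new = lost = 0
        for _ in range(carriers):
            r = rng.random()
            if r < p_ion:
                new += 1
            elif r < p_ion + p_loss:
                lost += 1
        carriers += 2 * new - lost
        if carriers <= 0:
            return False, None         # avalanche died out
        if carriers >= n_thr:
            return True, step          # stand-in for crossing I_det
    return False, None

rng = random.Random(42)
runs = [rpl_avalanche(rng=rng)[0] for _ in range(500)]
pde = sum(runs) / len(runs)            # fraction of successful runs
print(round(pde, 2))
```

Because each toy carrier multiplies faster than it is lost, most runs avalanche, but a finite fraction die out early, which is exactly the stochastic behavior the PDE and jitter statistics capture.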
A successful detection event results if the device current reaches a detection threshold I det . Treating the success and failure outcomes as a binomial distribution, the PDE is then the fraction of successful detection events over all simulation runs, with an uncertainty given by the standard deviation (s.d.). The distribution of avalanche times (i.e. time between photon absorption and reaching I det ) yields the timing jitter.
1) Diffusion in Quasi-Neutral Regions: The SPAD can be divided into a depletion region and quasi-neutral regions depending on the electric field strength. In the depletion region, the dominant charge carrier transport process is drift due to the strong electric fields, and the RPL model applies. However, in the quasi-neutral regions, where electric fields are weak, impact ionization can be neglected, and a diffusion model which combines random walks (driven by Brownian motion) with the electric drift force is more suitable. Similar to ref. [17], we use a threshold field to define the quasi-neutral region, i.e. |F(r)| < F_thr = 1×10^5 V/cm, which is on the same order as the breakdown field in silicon [25].
We use the fundamental (quasi-)TE mode profile ( Fig. 1(b)) as a probability density map to determine the location where the initial electron-hole pair is injected for each simulation run. If the injection occurs in the quasi-neutral regions, charge carrier transport is simulated using the diffusion model; if the charge carrier crosses over to the depletion region, the simulation continues under the RPL model.
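The drift-plus-Brownian update used in the quasi-neutral regions can be sketched as follows (the mobility, temperature, time step, and field values are illustrative, not the paper's; the Einstein relation fixes the diffusion constant from the mobility):

```python
import numpy as np

KT_Q = 0.0209            # k_B*T/q at 243 K, in volts
MU = 450.0               # illustrative hole mobility, cm^2/(V s)
D = MU * KT_Q            # Einstein relation: D = mu * k_B T / q, cm^2/s

def diffusion_step(r, F, dt, rng):
    """One update of a positively charged carrier in the quasi-neutral
    region: deterministic drift along the (weak) local field plus a
    Brownian kick. r: position (cm, 2-vector); F: field (V/cm)."""
    drift = MU * F * dt
    kick = np.sqrt(2.0 * D * dt) * rng.standard_normal(2)
    return r + drift + kick

rng = np.random.default_rng(0)
r = np.zeros(2)
F = np.array([1.0e3, 0.0])       # well below F_thr = 1e5 V/cm
for _ in range(1000):            # 1 ps steps -> 1 ns of transport
    r = diffusion_step(r, F, dt=1e-12, rng=rng)
print(r)  # net displacement dominated by drift along x
```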
2) Device Current via Shockley-Ramo's Theorem: Ref. [17] calculates the device current using a 1D approximation of Ramo's theorem, which only considers the motion of charge carriers in one direction. However, this would not be suitable here given our SPAD designs and more complex electric field profiles. As such, we use the generalized Shockley-Ramo current theorem [26], [27], where each charge carrier i at position r_i contributes to the device current I induced on the cathode via I = Σ_i q_i v_i(r_i) · F_0(r_i), where q_i is the charge, v_i(r_i) is the instantaneous velocity, and F_0(r_i) is a weighting electric field calculated in a similar way to F(r), but under these modified conditions: (i) the cathode is at unit potential, while the anode is grounded; (ii) all charges (including space charges) are removed, i.e. the waveguide is undoped [28].
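A minimal sketch of the Shockley-Ramo sum (assuming the weighting field has already been evaluated at the carrier positions; all numbers are illustrative):

```python
import numpy as np

def ramo_current(q, v, F0):
    """Induced cathode current per Shockley-Ramo:
    I = sum_i q_i * v_i . F0(r_i), with F0 the weighting field
    (cathode at unit potential, anode grounded, no space charge).
    q: (N,) charges [C]; v: (N,2) velocities [cm/s]; F0: (N,2)."""
    return float(np.sum(q * np.einsum("ij,ij->i", v, F0)))

# Illustrative: an electron and a hole drifting in opposite directions
# both induce current of the same sign on the cathode.
q = np.array([-1.602e-19, +1.602e-19])
v = np.array([[ 1.0e7, 0.0],           # electron toward the cathode
              [-1.0e7, 0.0]])          # hole toward the anode
F0 = np.array([[-1.0e4, 0.0],          # uniform weighting field
               [-1.0e4, 0.0]])
print(ramo_current(q, v, F0))
```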
C. Dark Count Rate
Even in the absence of light, free charge carriers may be generated, which can probabilistically trigger avalanche events and result in dark counts. Due to the high electric fields in SPADs, the most relevant carrier generation mechanisms are thermal excitation enhanced by trap-assisted tunneling (TAT), and band-to-band tunneling (BTBT).
We quantify the dark noise by calculating the DCR R_D(T) via [29]

R_D(T) = L ∫ P_trig(r) [G_TAT(r, T) + G_BTBT(r, T)] dA,

where T is the temperature, L = 16 µm is the SPAD length, P_trig(r) is the avalanche triggering probability, and G_TAT(r, T), G_BTBT(r, T) are the net generation rates of charge carriers (per unit volume) of their respective mechanisms.

1) Trap-Assisted Tunneling: The thermal generation rate of carriers can be obtained from the Shockley-Read-Hall (SRH) model, modified to account for TAT [30], [31]:

G_TAT(r, T) = n_i(T) / τ_g(r, T),

where n_i(T) is the intrinsic carrier concentration and τ_g(r, T) is the electron-hole pair generation lifetime, which can be expressed in terms of the recombination lifetime τ_r(r, T) [32]; in that expression, the exponential term describes the main temperature dependence in TAT, and the field effect function Γ(F(r), T) describes the effect of electric fields. E_t and E_i are the energy levels of the recombination centers (assumed to be equal to that of traps at the Si/SiO2 interface [33]) and the intrinsic Fermi level, respectively, and k_B is the Boltzmann constant. In the field effect function Γ(F(r), T), q is the electron charge and m*_t = 0.25 m_0 is the effective electron tunneling mass, with m_0 being the electron rest mass [34].

2) Band-to-Band Tunneling: The BTBT mechanism has been shown to be important at electric field strengths above 7 × 10^5 V/cm, where band-bending is sufficiently strong to allow significant tunneling of electrons from the valence band to the conduction band [34]. The corresponding generation rate is parameterized by the model constants B_A, B_B, and B_Γ; we use values based on ref. [34]. The values of the parameters used in our calculations are listed in Table I, and further details of their derivation can be found in the Appendix.
3) Avalanche Triggering Probability: To obtain the avalanche triggering probability P trig (r) for each device, we perform > 40k Monte Carlo simulation runs, with photon absorption positions distributed uniformly across the device. A representative map of P trig (r) is shown in Fig. 3.
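Numerically, the DCR integral reduces to a weighted sum over the 2D simulation grid; a minimal sketch with made-up uniform maps (the grid size, P_trig, and generation-rate values are illustrative only, not the paper's):

```python
import numpy as np

def dark_count_rate(p_trig, g_total, dx, dy, length_cm):
    """R_D = L * sum_cells P_trig * G * dx * dy  (midpoint rule).
    p_trig: dimensionless map; g_total: generation rate [cm^-3 s^-1];
    dx, dy, length_cm in cm. Returns counts per second."""
    return length_cm * float(np.sum(p_trig * g_total) * dx * dy)

# Illustrative inputs: a 0.9 um x 0.34 um cross section on a 30x30
# grid, uniform G = 1e12 cm^-3 s^-1, P_trig = 0.5, length L = 16 um.
nx = ny = 30
dx, dy = 0.9e-4 / nx, 0.34e-4 / ny
p_trig = np.full((ny, nx), 0.5)
g_tot = np.full((ny, nx), 1.0e12)
print(dark_count_rate(p_trig, g_tot, dx, dy, 16e-4))  # a few cps
```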
IV. SIMULATOR OPTIMIZATION
The Monte Carlo simulations can become computationally expensive due to the need to track and model individual charge carriers, especially when the number of charge carriers grows exponentially during the avalanche process. If we were to use the same simulation parameters as in our previous work [17] to model one SPAD at a given bias V_B, our simulator (implemented in Python) would require ∼24k CPU-hours on two sets of 12-core CPUs (Intel® Xeon® E5-2690 v3). Such a high computation cost would limit the variety of SPAD designs we can feasibly study.
Thus, we first use a representative device (p-n + SPAD with ∆j = -50 nm, at V B = 21.5 V) to perform a series of preliminary studies to optimize the simulation parameters: the detection threshold I det , RPL time step ∆t rpl , and number of simulation runs per parameter set. We aim to reduce computation time without sacrificing the simulation accuracy.
A. Detection Current Threshold
A reasonable discriminator threshold in experimental SPAD characterization setups is I det = 0.2 mA [35], a value we used previously [17]. However, it may not be necessary to simulate the multiplication of charge carriers up to that point as the avalanche process might already have passed a self-sustaining threshold at a lower current. On the other hand, a very low I det would overestimate the PDE by falsely identifying small avalanches that would not be self-sustaining, and underestimate the timing jitter by not simulating the full avalanche. By varying I det while fixing ∆t rpl = 1 fs with 2k simulation runs per I det value ( Fig. 4(a)), we conclude that we can lower I det to 20 µA without significant deviations in PDE or timing jitter.
B. RPL Time Step
A larger RPL time step ∆t rpl would speed up simulations, but reduces time resolution and hence accuracy. A suitable choice would be just short enough such that the charge carrier environment does not change too significantly between each step, even in the high-field regions with large field gradients.
We vary ∆t rpl while fixing I det = 20 µA with 2k simulation runs per ∆t rpl value ( Fig. 4(b)). We choose ∆t rpl = 10 fs as an optimal value; for larger time steps, PDE begins to deviate significantly compared to the previous value of ∆t rpl = 1 fs.
C. Number of Simulation Runs
We analyze the PDE over an increasing number of simulation runs for ∆t rpl = 10 fs and I det = 20 µA, and observe that the PDE converges to a stable value after several thousand runs. We choose to perform at least 6k runs per parameter set to reduce the relative uncertainty to ∼1%.
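The binomial error estimate behind this choice can be sketched as follows (the success count below is invented, chosen only to echo the 52.4% figure):

```python
from math import sqrt

def pde_with_uncertainty(successes, runs):
    """PDE as a binomial proportion with its 1-s.d. uncertainty."""
    p = successes / runs
    return p, sqrt(p * (1.0 - p) / runs)

# With ~6k runs and a PDE near 50%, the relative 1-s.d. uncertainty
# is at the ~1% level.
p, sd = pde_with_uncertainty(3144, 6000)
print(f"PDE = {100 * p:.1f} +/- {100 * sd:.1f} %")
```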
A. Photon Detection Efficiency and Timing Jitter
We simulate each device at increasing reverse bias voltages V B , starting from just above its breakdown voltage. For all devices, PDE increases with V B and reaches a saturation level (representative plots shown in Fig. 5(a)). We define the saturated bias voltage V sat as the lowest V B value where the obtained PDE values within a ±1 V range agree within their 1 s.d. uncertainty; the PDE at V sat is then the saturated PDE.
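One way to implement this definition of V_sat (a sketch over an invented PDE-vs-bias curve; the numbers are illustrative, not simulation output):

```python
def saturated_bias(vb, pde, sd, window=1.0):
    """Lowest bias V_B such that all PDE values within +/- window volts
    agree pairwise within their combined 1-s.d. uncertainties.
    vb, pde, sd: equal-length lists with vb ascending."""
    for i in range(len(vb)):
        idx = [j for j in range(len(vb)) if abs(vb[j] - vb[i]) <= window]
        ok = all(abs(pde[a] - pde[b]) <= sd[a] + sd[b]
                 for a in idx for b in idx)
        if ok and len(idx) > 1:
            return vb[i]
    return None

# Illustrative PDE-vs-bias curve rising to a plateau:
vb  = [29.5, 30.0, 30.5, 31.0, 31.5, 32.0, 32.5]
pde = [0.40, 0.46, 0.51, 0.515, 0.522, 0.524, 0.523]
sd  = [0.008] * len(vb)
print(saturated_bias(vb, pde, sd))
```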
The distribution of avalanche times is generally asymmetric, especially for p-i-n+ SPADs with ∆W > 600 nm (see Fig. 5(b)). Long tails in the timing distribution can adversely affect applications requiring high timing accuracies, e.g. satellite-based quantum communications [39]. Therefore, we present the full-width-half-maximum (FWHM) and full-width-tenth-maximum (FWTM) timing jitter, both extracted from timing histograms with 1 ps bin size, to better describe the timing performance of the SPADs. In general, the timing jitter does not vary significantly with V_B, except when V_B is near the breakdown voltage.
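Extracting FWHM and FWTM from a binned timing histogram can be sketched as follows (the synthetic Gaussian-core-plus-exponential-tail distribution is illustrative, not simulated data):

```python
import numpy as np

def width_at_fraction(counts, bin_ps=1.0, frac=0.5):
    """Width (ps) of a timing histogram at frac * max: frac=0.5 gives
    FWHM, frac=0.1 gives FWTM. Uses the outermost bins above the level."""
    counts = np.asarray(counts, dtype=float)
    level = frac * counts.max()
    above = np.nonzero(counts >= level)[0]
    return (above[-1] - above[0] + 1) * bin_ps

# Synthetic avalanche-time sample: Gaussian core plus exponential tail,
# binned at 1 ps as in the text.
rng = np.random.default_rng(0)
times = np.concatenate([rng.normal(50.0, 4.0, 20000),
                        50.0 + rng.exponential(30.0, 2000)])
counts, _ = np.histogram(times, bins=np.arange(0.0, 300.0, 1.0))

fwhm = width_at_fraction(counts, frac=0.5)
fwtm = width_at_fraction(counts, frac=0.1)
print(fwhm, fwtm)  # the tail inflates FWTM much more than FWHM
```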
1) p-n + SPADs: For p-n + SPADs, we observe a general trend of PDE increasing with the junction displacement ∆j (Fig. 6(a)). If the junction is placed further away from the waveguide core, charge carriers injected after a photon absorption in the core region travel a longer distance and can undergo more impact ionizations, thus increasing the likelihood of a successful avalanche. The stochastic avalanche process taking place over a larger distance would also explain the increasing timing jitter at higher ∆j. However, ∆j being too large would weaken the electric field strength in the waveguide core, which would lead to more charge carriers being lost at the waveguide boundaries due to random walk; this may explain the slight drop in PDE for ∆j > 400 nm.
The observed drop in PDE for ∆j = 100 nm is due to an "edge effect": when the junction is placed in close proximity to the waveguide rib edge, we observe a narrowing of the effective impact ionization region where ionization coefficients are high (Fig. 2(b)), which leads to a lower PDE. The highest saturated PDE obtained for p-n + SPADs is 48.4 ± 0.6% at V B = 26.5 V for ∆j = 400 nm, with a FWHM timing jitter of 9 ps.
2) p-i-n + SPADs: For p-i-n+ SPADs, the widening of the high-field region has led to a higher PDE than for p-n+ devices (Fig. 6(b)). Besides the increased efficiency of impact ionizations, this can also be explained by a lower loss rate of charge carriers under the diffusion model (<5% for p-i-n+, ∼10% for p-n+), which follows a photon absorption event in the quasi-neutral regions. We do not find an obvious dependence of the PDE on the intrinsic region width for ∆W > 400 nm, although the timing jitter increases with ∆W. Based on our analysis, we conclude that the optimum performance is obtained for ∆W = 400 nm, which gives a saturated PDE of 52.4 ± 0.6% at V_B = 31.5 V and a FWHM timing jitter of 10 ps.
B. Dark Count Rate
We also evaluate the dark noise performance of the SPADs, focusing on devices which display high saturated PDE: p-n + SPAD with ∆j = 400 nm and p-i-n + SPADs with ∆W = 400 nm and 900 nm. We calculate the DCR at 243 K, which is in a typical SPAD operating regime readily achieved with thermoelectric cooling, as well as at 300 K to explore the feasibility of room temperature operation.
For our simulated parameters, BTBT shows a greater sensitivity to peak electric field strength than TAT (Fig. 7). In p-n + SPADs, where the peak fields are high, BTBT is the dominant dark carrier generation mechanism. As the bias V B increases, the depletion region widens, leading to a decrease in the peak field strength and hence the overall DCR, while the TAT contribution stays relatively constant ( Fig. 8(a)). At an operating bias of V B = 31.5 V (which is above the saturated bias), the DCR is 11 kcps and 21 kcps at 243 K and 300 K, respectively.
In p-i-n+ SPADs, due to wider high-field regions with lower peak fields, BTBT becomes negligible compared to TAT. As such, the DCR generally increases with V_B, and shows a steeper dependence on temperature (∼1000-fold drop between 300 K and 243 K). We observe that while SPADs with wider intrinsic region widths ∆W had lower dark carrier generation rates per unit volume, this was offset by the larger device volume, and could lead to a higher DCR compared to narrower ∆W.
Overall, dark count performance for p-i-n + SPADs is significantly better compared to p-n + devices, with observed DCR of < 4 kcps at 300 K and < 5 cps at 243 K ( Fig. 8(b)), even at V B beyond the saturated bias.
VI. CONCLUSIONS
In conclusion, we have simulated waveguide-based silicon SPADs for visible wavelengths, studying both p-n + and p-i-n + doping profiles. For our simulated parameters, p-i-n + SPADs outperform p-n + devices in terms of PDE and DCR; we identify the optimum device as a p-i-n + SPAD with ∆W = 400 nm, with a saturated PDE of 52.4 ± 0.6% at a bias of V B = 31.5 V, FWHM timing jitter of 10 ps, and DCR < 5 cps at 243 K. This is an improvement over our previous study, where the highest PDE obtained was 45% [17].
The PDE is slightly lower than typical free-space SPAD modules with PDEs of up to ∼ 70% [40]; however, our waveguide devices can offer superior timing performance and dark noise compared to available commercial devices (jitter ∼ 35 ps, DCR < 25 cps). We note that even at room temperature, the DCR of a few kcps is acceptable for certain important technologies including LIDAR [41] due to the use of temporal gating, thus indicating the potential applicability of our waveguide SPADs.
Our simulation methods can also be further extended to study other device geometries (e.g. trapezoid waveguides), doping profiles (e.g. p + -i-p-n + ) and materials (e.g. Ge-on-Si SPADs for near-infrared wavelengths).
APPENDIX
A. Trap-Assisted Tunneling

1) Intrinsic carrier concentration: We calculate the intrinsic carrier concentration n_i(T) in silicon via [36]:

n_i(T) = 5.29 × 10^19 · (T/300)^2.54 · exp(−6726/T) (8)

2) Effective Recombination Lifetime: The effective recombination lifetime τ_r(T) was measured to be 7 ns at room temperature for an undoped Si rib waveguide device with similar sub-µm dimensions [37]. To obtain a suitable value at 243 K, we analyze the temperature dependence of τ_r(T): for low-level injection in p-type silicon, τ_r(T) can be approximated as the electron recombination lifetime [32], i.e.:

τ_r(T) = 1/(σ_e ν_e(T) N_t) (9)

where σ_e is the electron capture cross section, ν_e(T) is the mean thermal velocity of electrons, and N_t is the trap density. The trap density N_t is assumed to be temperature-independent, while for traps at the Si/SiO2 interface with E_t − E_i = 0.25 eV, σ_e has been shown to be relatively constant over our relevant temperature range (243–300 K) [42]. Thus, the temperature dependence comes only from ν_e(T) ∝ √T, and we obtain

τ_r(243) = τ_r(300) · √(300/243) (10)
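A quick numerical check of eqs. (8) and (10); the 300 K lifetime of 7 ns is the measured value from [37], and the lifetime scaling follows from ν_e(T) ∝ √T so that τ_r ∝ T^(−1/2):

```python
from math import exp, sqrt

def n_i(T):
    """Intrinsic carrier concentration in silicon, eq. (8) [cm^-3]."""
    return 5.29e19 * (T / 300.0) ** 2.54 * exp(-6726.0 / T)

def tau_r(T, tau_r300=7e-9):
    """Recombination lifetime scaled from its 300 K value via
    tau_r ~ 1/v_e(T) ~ T^(-1/2), cf. eq. (10)."""
    return tau_r300 * sqrt(300.0 / T)

print(f"n_i(300 K)   = {n_i(300):.2e} cm^-3")   # ~1e10, the textbook value
print(f"n_i(243 K)   = {n_i(243):.2e} cm^-3")
print(f"tau_r(243 K) = {tau_r(243) * 1e9:.2f} ns")
```

Cooling from 300 K to 243 K suppresses n_i by more than two orders of magnitude, which is the main reason the TAT-driven DCR drops so steeply with temperature.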
On the Stabilization through Linear Output Feedback of a Class of Linear Hybrid Time-Varying Systems with Coupled Continuous/Discrete and Delayed Dynamics with Eventually Unbounded Delay
This research studies a class of linear, hybrid, time-varying, continuous-time systems with time-varying delayed dynamics and a not necessarily bounded, time-varying, time-differentiable delay. The considered class of systems also involves a contribution to the whole delayed dynamics from the last preceding sampled values of the solution according to a prefixed constant sampling period. Such systems are also subject to linear output-feedback time-varying control, which picks up combined information on the output at the current time instant, the delayed one, and its discretized value at the preceding sampling instant. Closed-loop asymptotic stabilization is addressed through the analysis of two "ad hoc" Krasovskii–Lyapunov-type functional candidates, which involve quadratic forms of the state solution at the current time instant together with an integral-type contribution of the state solution along a time-varying previous time interval associated with the time-varying delay. An analytic method is proposed to synthesize the stabilizing output-feedback time-varying controller from the solution of an associated algebraic system, whose objective is to track prescribed, suitable reference closed-loop dynamics. If this is not possible (in the event that the mentioned algebraic system is not compatible), then a best approximation of the targeted closed-loop dynamics is made in an error-norm minimization sense. Sufficiency-type conditions for asymptotic stability of the closed-loop system are also derived based on the two mentioned Krasovskii–Lyapunov functional candidates, which involve evaluations of the contributions of the delay-free and delayed dynamics.
Introduction
So-called hybrid dynamic systems, which essentially consist of mixed, and in general coupled, continuous-time and either digital or discrete-time dynamics, are of undoubted interest in certain engineering control problems. Such interest arises from the fact that certain real-world problems combine continuous-time and discrete-time information, and this circumstance is reflected in the dynamics. The continuous-time information is modelled through differential equations (such as ordinary, functional or partial differential equations), while the discrete-time dynamics are modelled through difference equations. In this way, hybrid systems can sometimes be very complex to analyze, since they might involve combinations and couplings of tandems of more elementary subsystems. See, for instance, [1][2][3][4]. A major requirement in the design of control schemes is stabilization via feedback by synthesizing a stabilizing controller. Even if an open-loop system (i.e., that resulting in the absence of feedback) is stable, there is often a need to improve its stability [5][6][7][8][9][10][11][12][13][14][15][16]. A useful procedure to discuss both stability and stabilization concerns is the use of Lyapunov-type or Corduneanu-type functionals and their generalizations (for instance, Lur'e, Krasovskii, Razumikhin, Popov, etc.). See, for instance, [1][2][3][4][5][8][9][10][11][12][13][14][15][16] and references therein.
To fix basic ideas on hybrid systems, note that a well-known typical elementary example of such systems is that consisting of a continuous-time system in operation under a discrete-time controller. In this way, the controller does not need to keep information on the continuous-time signals for all times, but only at sampling instants. Other typical hybrid systems involve the combined use of neural nets and fuzzy logic to operate on the continuous-time and/or discrete-time dynamics, or electrical and mechanical drivelines. On the other hand, hybrid dynamic systems with coupled continuous-time and digital dynamics have been described in [17]. Their properties of controllability, reachability and observability have been characterized in [18][19][20][21] and some of the references therein. Adaptive control methods for such systems in the case of a partial lack of knowledge of their parametrical values have been addressed in [22,23], while optimal "ad hoc" designs have been stated and discussed in [24] and some of the references therein. In the above topics, it might be important to adapt the design to the multirate context, since sometimes the discretized states and/or the inputs can be subject to different sampling rates, either due to accommodating the design to the nature of such signals or improving the control performances. The finite-time stabilization of multirate networked control systems based on predictive control is discussed in [25]. Another more general problem which can be considered in combination with different multirate designs is the eventual use of time-varying sampling rates, again to better accommodate the expected performances by adapting the sampling rates to the rates of variations in the involved signals [26].
Dynamic systems in general, and some hybrid dynamic systems in particular, can also typically involve linear and non-linear dynamics, and they can be subject to the presence of internal delays (i.e., in the state vector) and/or external delays (i.e., in their inputs or outputs). See, for instance, [1,2,[6][7][8][9][10][11][12][13][14][15][16]], although it must be pointed out that the related background literature is extensive. Typical existing real-life systems involving delays include a number of biological models, such as epidemic models, population growth or diffusion models, the sunflower equation, war and peace models, economic models, etc. This paper formulates and describes a class of linear time-varying, continuous-time systems with time-varying, continuous-time delayed dynamics. Such a class of systems is hybrid in the sense that it can consider an added contribution of delayed dynamics to its current continuous-time dynamics with respect to previously sampled values of the solution, for a certain defined sampling period. Such dynamics contribute to the whole solution, together with both the delay-free, continuous-time dynamics and the continuous delayed dynamics. The latter is associated with a time-varying, continuously differentiable delay, which is, in general, unbounded and whose time-derivative is everywhere less than one. The class of hybrid systems under study might also be subject to linear output-feedback time-varying control under combined information of the output at the current time instant, the delayed one and the previous discrete-time value in a closed-loop configuration. The general solution is calculated in a closed explicit form. Special emphasis is paid to the closed-loop stabilization via linear output feedback through the appropriate design of the stabilizing control matrices. The stabilization process is investigated via Krasovskii-Lyapunov functionals.
Next, the paper deals with the derivation and analysis of sufficiency-type conditions for closed-loop asymptotic stability, which are obtained through the definition of two Krasovskii-Lyapunov functional candidates. One of those functional candidates has a constant, leading positive-definite matrix to define the non-integral part as a quadratic function of the solution value at each time instant, while the second candidate proposes a time-varying, time-differentiable matrix function for the same purpose. There are also some extra assumptions invoked, which focus on the maximum variation of the time-integral of the squared norms of the remaining matrices of delayed dynamics associated with both the continuous-time delay and with the memory on the sampled part of the hybrid system. These extra assumptions essentially rely on the fact that those time integrals vary more slowly than linearly with the length of any considered time interval over which the integrals are performed. The subsequent part of the manuscript is devoted to controller synthesis for the eventual achievement of closed-loop stabilization via linear output feedback, in such a way that the asymptotic stability results of the previous section are fulfilled by the feedback system. In the time-invariant, delay-free case, there are some background results available on stabilization via static linear output feedback (see, for instance, [27][28][29] and some of the references therein). The synthesized controller possesses several time-varying gain matrix functions. One is designed to stabilize the delay-free dynamics, while the remaining ones have, as their objective, minimization in some appropriate sense of the contribution of the natural and the sampled delayed dynamics to the whole closed-loop dynamics. To stabilize the delay-free matrix of dynamics, the controller gain matrix function is calculated via a Kronecker product of matrices [29,30], associated with an algebraic system.
The problem is well-posed, provided that such a system is compatible for some suitable matrix function describing the delay-free closed-loop dynamics. In case the mentioned algebraic system is not compatible, the controller gain is synthesized so as to approximate the resulting closed-loop matrix to suitable dynamics, in the sense of a best approximation minimizing the norm deviation with respect to the prefixed suitable closed-loop matrix of delay-free dynamics. This paper also discusses how to synthesize the remaining matrices, which involve natural delays, and the delayed dynamics associated with the discrete information, in such a way that the resulting matrix function of delayed dynamics has small norms in the sense of a best approximation to zero.
It can be pointed out that the previously cited literature on hybrid systems does not address the output-feedback stabilization of systems which include both discrete information on the previously sampled solution values and combinations of both delay-free, continuous dynamics and delayed, continuous, time-varying dynamics. This paper focuses on the closed-loop stabilization of the solution via linear output feedback. These concerns are the main novelty of this manuscript, and also the motivation for the study, since the class of hybrid systems under consideration is more general than those previously studied in the literature.
The paper is organized as follows. Section 2 states and describes the linear hybrid time-varying continuous time system with combined time-varying delay-free and delayed dynamics, as well as its solution in closed explicit form in both unforced and forced cases. The forced solution also considers a particular situation where the forcing control is obtained via linear feedback of combined information on the current output, the delayed output and the previously sampled value of the output. Section 3 deals with derivation of sufficiency-type conditions of closed-loop asymptotic stability, which are obtained through the definition of two Krasovskii-Lyapunov functionals for asymptotic stability analysis purposes. One involves a constant positive-definite matrix for the definition of the delay-free term, while the other involves a positive-definite time-varying continuous-time differentiable matrix. Controller synthesis for closed-loop asymptotic stabilization via linear output feedback is also discussed. Finally, conclusions end the paper.
Nomenclature
The following notation is used: R + = {r ∈ R : r > 0} is the set of positive real numbers and R 0+ = R + ∪ {0} is the set of non-negative real numbers. Similarly, the positive and non-negative integers are defined by the respective sets Z + = {z ∈ Z : z > 0} and Z 0+ = Z + ∪ {0}.
Let M, N ∈ R n×n ; then M ≻ 0 denotes that the matrix M is positive-definite; M ⪰ 0 denotes that it is positive-semidefinite; M ≺ 0 (respectively, M ⪯ 0) denotes that it is negative-definite (respectively, negative-semidefinite). A closed-loop system, in the standard terminology, is that resulting from a state or output-feedback control law. The stability is termed global if the solution is bounded for all time and any given admissible function of initial conditions. It is of global asymptotic type if, in addition, it converges asymptotically to the equilibrium state.
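As an aside, the definiteness notation above can be checked numerically. A minimal sketch (hypothetical matrices, numpy-only, not from the paper) tests positive-definiteness via a Cholesky factorization, which succeeds exactly for symmetric positive-definite matrices:

```python
import numpy as np

def is_positive_definite(M: np.ndarray) -> bool:
    """Return True iff M is symmetric and positive-definite (M > 0)."""
    if not np.allclose(M, M.T):
        return False
    try:
        # Cholesky succeeds only for symmetric positive-definite matrices.
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

# Hypothetical examples: an SPD matrix and a symmetric indefinite one.
M1 = np.array([[2.0, -1.0], [-1.0, 2.0]])  # eigenvalues 1 and 3 -> M1 is PD
M2 = np.array([[1.0, 2.0], [2.0, 1.0]])    # eigenvalues 3 and -1 -> indefinite
```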
We pay special attention in this manuscript to the synthesis of a stabilizing output linear feedback control. In the context of this manuscript, a hybrid system is one which involves mixed continuous-time and discrete-time dynamics. We consider that, in general, it also involves delayed continuous-time dynamics and discrete-time dynamics associated with a given sampling period.
The Hybrid Continuous-Time/Discrete-Time Differential System Subject to a Time-Varying Delay
Consider the following dynamic control system subject to, in general, a time-varying delay: .
∀t ∈ R 0+ under a bounded, piecewise continuous function of initial conditions ϕ : [−h(0), 0] → R n , where T > 0 is the sampling period, k = k(t) = max{z ∈ Z 0+ : zT ≤ t}, and x : [−h(0), ∞) → R n , y : [−h(0), ∞) → R p and u : [−h(0), ∞) → R m are, respectively, the state solution on [−h(0), ∞) and the output and input vector functions, with max(p, m) ≤ n and x(t) = ϕ(t); t ∈ [−h(0), 0], with x 0 = x(0) = ϕ(0) and x k = x(kT); ∀k ∈ Z 0+ . The matrix functions of dynamics A : [0, ∞) → R n×n , A a : [−h(0), ∞) → R n×n and A d : [−h(0), ∞) → R n×n , and the control B : [0, ∞) → R n×m and output C : [0, ∞) → R p×n matrix functions, are piecewise continuous and bounded. The input (or control) vector u(t) is piecewise constant with eventual finite jumps at the sampling instants t k = kT; k ∈ Z 0+ , with u(kT) = u k ; ∀k ∈ Z 0+ , and h : [0, ∞) → R 0+ is the time-varying delay, subject to h(t) ≤ t; ∀t ∈ R + , with h(0) finite. The above system is continuous/discrete hybrid in the sense that the state evolves forced by its current value at time t with a memory effect on its last preceding sampled value at the sampling instant kT, under a periodic sampling of period T, and with the control operating jointly at both instants t and t − kT. The major interest of the subsequent investigation is in output-feedback controls of the form: where K : [0, ∞) → R m×p , K d : [−h(0), ∞) → R m×p and K a : [−h(0), ∞) → R m×p are the controller gain matrices to be synthesized and k = k(t) = max{z ∈ Z 0+ : zT ≤ t}. The replacement of the output vector by the state vector in (3) leads to the less restrictive state-feedback control type. Throughout the paper, we will refer to (1) and (2) as the open-loop system, since the control via feedback is not yet selected. Its unforced solution is that corresponding to just the initial conditions, that is, when u ≡ 0.
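The displayed equations (1)-(3) did not survive extraction. From the description above, they presumably take a form along the following lines (a reconstruction from the surrounding text, not the authors' exact display; in particular, the placement of the sampled-input term B a (t)u(kT) is an assumption suggested by the matrices B a , B aa , B ad appearing later in the paper):

```latex
\begin{aligned}
\dot{x}(t) &= A(t)x(t) + A_d(t)\,x(t-h(t)) + A_a(t)\,x(kT) + B(t)u(t) + B_a(t)\,u(kT),\\
y(t) &= C(t)x(t), \qquad t \in [kT,(k+1)T),\quad k = k(t) = \max\{z \in \mathbb{Z}_{0+} : zT \le t\},\\
u(t) &= K(t)y(t) + K_d(t)\,y(t-h(t)) + K_a(t)\,y(kT).
\end{aligned}
```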
The forced solutions correspond to nonzero controls. Note that the controlled system (1) and (2) as well as the closed-loop configuration (1)-(3) resulting via feedback control are parameterized, in general, by time-varying matrices. The closed-loop system is the combination of (1) to (3), that is, that resulting after replacing the control law (3) in (1). The solution of (1) is characterized in the subsequent theorem.
Theorem 1. The solution of the unforced system (1), for any bounded, piecewise continuous function of initial conditions ϕ : [−h(0), 0] → R n , is unique and given by: where the evolution matrix function Ψ : R 0+ × R 0+ → R n×n satisfies Ψ(t, t) = I n (the n-th order identity matrix); ∀t ∈ R 0+ , and: where the dot symbol denotes the time derivative with respect to the first argument t. The whole solution of (1), including the unforced and the forced contributions, is: Proof. The uniqueness of the solution is obvious, since the matrix functions which parameterize (1) are bounded and piecewise continuous, and the expression (4), subject to (5), is the solution of the unforced (1), as can be directly verified as follows. One obtains, by replacing (5) into the time-derivative of (4), with the subsequent use of the claimed solution (4): (7) coincides with the unforced differential system (1), so that the unforced solution is (4) and the evolution matrix function with Ψ(t, t) = I n satisfies (5). As a result, the whole solution of (1) is (6).
Remark 1.
If A(t) commutes with e^{∫_0^t A(τ)dτ} for all t ∈ R 0+ , then the evolution matrix function of (1), which is the solution to (5), is: An interesting property of the evolution matrix through time is given in the subsequent result, which is useful to characterize analytically, and eventually compute, the solution: Proposition 1. Consider arbitrary time instants t 2 ≥ t 1 ≥ 0. Then, the evolution matrix function satisfies: Proof.
The first and the right-hand-side expressions of (10) have to be identical for any given function of initial conditions ϕ : [−h(0), 0] → R n so that (9) holds.
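For reference, the displayed formula of Remark 1 was lost in extraction; in the commuting case it presumably reduces to the standard matrix-exponential form for systems whose matrix commutes with its own integral (a textbook expression, stated here as an assumption about the original display):

```latex
\Psi(t,\tau) \;=\; \exp\!\left(\int_{\tau}^{t} A(s)\,ds\right), \qquad t \ge \tau \ge 0,
```

for which indeed $\dot{\Psi}(t,\tau) = A(t)\Psi(t,\tau)$ under the commutation assumption, and $\Psi(t,t) = I_n$.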
Let us define by x(t 1 ) the strip of the solution. In accordance with (4), define the interval-to-point evolution operator S : R 0+ → L(X) as follows: for any t ≥ t 0 ≥ 0, where X is the space of the unforced solutions of (1), for any given function of initial conditions ϕ : [−h(0), 0] → R n , so that the evolution operator satisfies, for t 0 : It can be noticed that the interval-to-point evolution operator is related to the evolution matrix function via the identities (12), and, under the additional assumption that the delay function is non-increasing, discussed in the subsequent result, it is also related to an interval-to-interval evolution operator.
Note that Proposition 2 also holds, in particular, if the delay is constant. The following result is closely related to Theorem 1, except that the hybrid system considers the contribution of the dynamics of the last preceding sampling instant to the current continuous-time dynamics, instead of the delay between them both. Corollary 1. Consider the differential system: .
The unforced solution, for any bounded, piecewise continuous function of initial conditions ϕ : [−h(0), 0] → R n , is unique and given by: where the evolution matrix function Ψ satisfies Ψ(t, t) = I n ; ∀t ∈ R 0+ , and: .
The proof of Corollary 1 is similar to that of Theorem 1 by noting that an auxiliary delay r(t) = t − kT for t ∈ [kT, (k + 1)T) allows us to write x(kT) = x(t − r(t)) and u(kT) = u(t − r(t)), which leads to (17) being identical to (5) for such a delay. Note that the hybrid continuous/discrete differential system (15) has a finite memory contribution of the state and control at the sampling instants on each next inter-sample time interval, which is incorporated into the continuous-time dynamics.
Remark 2. The unforced and the total solutions (16) and (18) of (1) can also be written equivalently as follows, by taking initial conditions on the interval [kT − h(kT), kT]:
The closed-loop differential system is obtained by replacing the feedback control law (3) into (1), taking into account (2), to yield: The solution of (21) and (22) is found directly by replacing the evolution matrix function of Theorem 1 by that associated with (21), subject to (22), which leads to the subsequent result: The solution of the closed-loop differential system (21) and (22), for any given bounded, piecewise continuous function of initial conditions ϕ : [−h(0), 0] → R n , is unique and given by: ∀t ∈ R 0+ , and it satisfies: Remark 3. A parallel conclusion to that of Remark 1 for the closed-loop system is that, if A(t) commutes with e^{∫_0^t A(τ)dτ} for all t ∈ R 0+ , then the evolution matrix function of (23), and the solution of (21) subject to (22), is, for t ≥ τ ≥ 0:
Proof. Property (i). Note that, in order for (4) to be bounded for all time for any given ϕ : [−h(0), 0] → R n , the evolution operator, being the solution to (5), must have a bounded norm. The converse is also true, in the sense that, if such a norm is bounded, then x(t) is bounded for all time for any given finite ϕ. This is a necessary and sufficient condition for the global Lyapunov stability of the unforced differential system (1). This condition, together with Ψ(t, τ) → 0 as |t − τ| → ∞, guarantees, in addition, that x(t) → 0 as t → ∞, and vice versa, so that the unforced differential system (1) is globally Lyapunov asymptotically stable, i.e., asymptotically stable for any bounded initial conditions. Property (i) has been proved. Properties (ii)-(iii) are proved in a similar way via Equations (15) to (17), (21), (22), (23) and (24), respectively. Property (iv) follows directly from the above properties in view of expressions (5), (15) and (24), since the parameterizing matrix functions of the differential systems (1), (15), (21) and (22) are bounded for all time. The uniform continuity of the respective evolution operators follows from the continuity of their time-derivative operators.
Asymptotic Stability
This section discusses the asymptotic stability and the stabilization via linear output feedback of the closed-loop system obtained from (1) and (2) under a feedback control law (3), whose state differential system of equations is given by (21), subject to (22), through the use of Lyapunov-Krasovskii-type functionals (see, for instance, [1,2,7-9,13]), which are defined "ad hoc" in this section for this hybrid model based on the state trajectory solution and its time derivative. Theorem 4. Assume that: 1. ḣ(t) ≤ γ < 1; ∀t ∈ R 0+ ; 2. There exist some q ∈ R + and some P = P T ∈ R n×n , P ≻ 0, such that: 3. There exist constants µ 1 , µ 2 , µ 3 , µ 4 ∈ R 0+ such that, for t 0 ≤ t 1 < t 2 , the following constraints hold: Then, all the solutions of the closed-loop differential system (21) and (22) are bounded and the zero solution is asymptotically stable for any finite function of initial conditions.
Proof. Consider the differential system (21) and (22), with the strip of its solution x t on [t − h(t), t] for each t ∈ [kT, (k + 1)T), with k = k(t) = max{z ∈ Z 0+ : zT ≤ t}, and the functional: where Assume that P is chosen to satisfy (26) for some q ∈ R + . Note that this is always possible, since A c (t) is a stability matrix for all t ∈ R 0+ and (26) is identical to the time-varying Lyapunov matrix inequality: Q(t) ≻ 0; ∀t ∈ R 0+ because q > 0, P ≻ 0; and P h (t) ⪰ 0, P hT (t) ⪰ 0 and P h (t) ⪰ 0; ∀t ∈ R 0+ . Since ḣ(t) ≤ γ < 1; ∀t ∈ R 0+ , one has, from putting (32) into (31), with ν = 2 − µ and 1 − γ ≥ µν, that each of the four additive terms of q(t, x t ) in (34), from (31), is non-negative, as can be seen concerning the first one, with suitable µ and ν = 2 − µ. Proceeding with the remaining terms of (34) in the same way, it follows that q(t, x t ) ≥ 0. On the other hand, it follows from the third theorem assumption, together with (30) and (35), that: where µ = µ 1 + µ 2 + µ 3 + µ 4 ≥ 0; note also from (30) and (33) that: where the n-square real matrix P 0 uniquely defines the factorization P 0 T P 0 = P, since P ≻ 0. Since W(0) = W i (0) = 0 for i = 1, 2, and W(x) and W i (x), i = 1, 2, are radially unbounded, positive real functions for any x > 0, and since Z(t, x t ) satisfies (36), one concludes that all the solutions of the closed-loop differential system (21) and (22) are bounded for any given finite initial conditions and the zero solution is asymptotically stable.
Remark 4. Note from (27) that A cl (t) is a stability matrix; ∀t ∈ R 0+ , since P ≻ 0 and Q(t) ≻ 0, because (26), equivalent to (32), is a Lyapunov matrix inequality whose solution is P.
Now, Theorem 4 is extended by involving a time-varying, time-differentiable matrix function P : R 0+ → R n×n and an associated matrix Lyapunov equation in the statement and solution of a Krasovskii-Lyapunov functional candidate. The relevant matrix condition to be fulfilled to guarantee asymptotic stability is a matrix Lyapunov-type identity rather than a matrix inequality.
Theorem 5. Assume that: 1. The matrix functions defined in (22) are continuous and the delay function is continuous. 2. There exist some q ∈ R + and some time-varying, symmetric, continuous-time positive-definite matrix function P : R 0+ → R n×n , which is time-differentiable for all time, such that: A T cl (t)P(t) + P(t)A cl (t) = −Q(t) ≡ −[qI n + 4P 2 (t) + P h (t) + P T (t) + P 2T (t) + P hT (t) + Ω(t)]; for some arbitrary, continuous, time-differentiable, positive-semidefinite, symmetric Ω : R 0+ → R n×n for all time, where P h (t), P T (t), P hT (t) and P 2T (t) are defined in (27). 3. The third assumption of Theorem 4 holds. Then, the following properties hold: (i) All the solutions of the closed-loop differential system (21) and (22) are bounded for any given finite initial conditions and the zero solution is asymptotically stable. (ii) The positive-definite matrix function P : R 0+ → R n×n and its time derivative are subject to the constraints: .
Proof. Since Q : R 0+ → R n×n is positive-definite, (40) is a Lyapunov matrix equation, and since P(t) is positive-definite, A cl (t) is a stability matrix for all t ≥ 0, so that for each t ≥ 0 there exist some norm-dependent real constants k t ≥ 1 and ρ t > 0 such that: Thus, one obtains from (42) that: .
which leads to .
provided that ρ > 4k 2 sup t∈R 0+ ‖P(t)‖. Additionally, one obtains from (41) and (39) that: which leads to: provided that ρ > 2k 2 sup t∈R 0+ ‖P(t)‖. Thus, the necessary condition for the joint validity of (44) and (46) follows. The zeros of p(sup t∈R 0+ ‖P(t)‖) are obtained, which is simplified in view of the calculated values of sup t∈R 0+ ‖P(t)‖ 1,2 , as follows: and then Property (ii) follows directly. By modifying (29) with a time-varying, continuously time-differentiable P(t) as: with Z(t, x t ) defined in (30), one obtains, by following the same steps as in the proof of Theorem 4, that (33) is modified as follows: .
and .
which completes the proof of Property (i).
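The Lyapunov conditions above can be checked numerically at a frozen time instant. A minimal sketch (hypothetical data, numpy-only, not from the paper) solves A^T P + P A = −Q for a given stability matrix A and Q ≻ 0, using the column-stacking vectorization identity vec(A^T P + P A) = (I ⊗ A^T + A^T ⊗ I) vec(P), and then verifies P ≻ 0:

```python
import numpy as np

def solve_lyapunov(A: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Solve A^T P + P A = -Q via Kronecker vectorization.

    Uses vec(A^T P) = (I kron A^T) vec(P) and vec(P A) = (A^T kron I) vec(P),
    with column-stacking vec (Fortran order). Unique solution exists when no
    two eigenvalues of A sum to zero (true for any stability matrix).
    """
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(I, A.T) + np.kron(A.T, I)
    vecP = np.linalg.solve(L, -Q.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

# Hypothetical frozen-time data: A is a stability matrix (eigenvalues -1, -2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
P = solve_lyapunov(A, Q)  # symmetric positive-definite if A is stable
```

Positive-definiteness of the returned P (all eigenvalues positive) then certifies, at that frozen instant, that A is a stability matrix.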
Remark 5.
Note that ‖P‖ ≤ k d 2 /(2(ρ − 2k 2 ‖P‖)) is the simplified version of the norm constraint (46) in the proof of Theorem 5, adapted "ad hoc" to the associated condition (26) in Theorem 4, by taking into account that P is constant.
Following the relations previous to (39) in the proof of Theorem 5 for the parallel constraint (26) in Theorem 4, and taking into account that P is constant under the constraint ‖P‖ ≤ k d 2 /(2(ρ − 2k 2 ‖P‖)), which is a simplified version of (46) for this case, the constraint ‖P(t)‖ ∈ [0, ρ/(4k 2 )] is weakened to ‖P‖ ∈ [0, ρ/(2k 2 )], since the stronger constraint ‖P‖ ∈ [0, ρ/(4k 2 )] of Theorem 5 is removed because P is constant. Thus, (47) becomes simplified to p(‖P‖) = 4k 2 ‖P‖ 2 − 2ρ‖P‖ + k d 2 ≥ 0, which, combined with ‖P‖ ∈ [0, ρ/(4k 2 )], results, for Theorem 4, in the subsequent parallel constraint to (49) obtained for Theorem 5, and which is a necessary condition for the existence of P satisfying (26):
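For concreteness, the quadratic constraint in Remark 5 can be solved explicitly. Assuming ρ 2 ≥ 4k 2 k d 2 (so that real roots exist; an assumption stated here, not taken from the paper), p(‖P‖) = 4k 2 ‖P‖ 2 − 2ρ‖P‖ + k d 2 ≥ 0 holds outside the open interval between its two roots:

```latex
\|P\|_{1,2} \;=\; \frac{2\rho \pm \sqrt{4\rho^{2} - 16k^{2}k_d^{2}}}{8k^{2}}
\;=\; \frac{\rho \pm \sqrt{\rho^{2} - 4k^{2}k_d^{2}}}{4k^{2}},
```

so that p(‖P‖) ≥ 0 for ‖P‖ ≤ ‖P‖ 1 or ‖P‖ ≥ ‖P‖ 2 , with the lower branch being the one compatible with the interval constraint on ‖P‖.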
Closed-Loop Asymptotic Stabilization
Note that the second conditions of Theorems 4 and 5, expressed by the Lyapunov matrix inequality (26) and the Lyapunov matrix Equation (39), respectively, rely on the fact that the matrix of delay-free, closed-loop dynamics A cl (t) is a stability matrix for all time. In view of the first identity of (22), the open-loop, delay-free dynamics can be stabilized via linear output feedback if, and only if, there exists some matrix function K : R 0+ → R m×p such that A cl (t) equals some stability matrix A m (t) for all t ∈ R 0+ . The subsequent result characterizes the linear output-feedback stabilizing gain matrix of the delay-free, closed-loop dynamics. It also discusses how to address the third stipulation of Theorems 4 and 5 through the choice of the other two controller gain matrix functions K d (t) and K a (t) in (22) for the delayed dynamics. Each of those control gain matrices is intended to be calculated so as to cancel, if possible, the corresponding delayed closed-loop dynamics if the resulting algebraic system is solvable, or to obtain the best approximation to zeroing such corresponding dynamics if the corresponding algebraic system is incompatible. Theorem 6. The following properties hold: (i) The algebraic system: is solvable in K(t) for some stability matrix A m (t); ∀t ∈ R 0+ ; equivalently, the set of algebraic linear equations: is solvable in vecK(t); ∀t ∈ R 0+ , if and only if: so that the matrix of delay-free, closed-loop dynamics A cl (t) is stable, since it is fixed to A m (t); ∀t ∈ R 0+ .
(ii) If (53) is solvable, so that the delay-free, closed-loop dynamics are stabilized by a linear output-feedback gain, then the set of solutions for such a gain is given by: and equivalently, by: where K 0 (t) ∈ R m×p ; ∀t ∈ R 0+ is arbitrary.
(iv) The subsequent choices of K d (t) and K a (t) minimize ‖A dcl (t)‖ and ‖A acl (t)‖, respectively: Proof. Note that (53) is the first identity of (22) for A cl (t) = A m (t); ∀t ∈ R 0+ , which is solvable in K(t); ∀t ∈ R 0+ , if and only if (56) holds, from the Rouché-Capelli theorem; equivalently, if and only if (55) holds, which is the necessary and sufficient condition for solvability of (53) via the Moore-Penrose pseudo-inverses [29,30]. Note that (55), and equivalently (56), is a necessary condition for the second stipulations of Theorem 4 and Theorem 5 to hold, since A cl (t) has to be a stability matrix to satisfy the respective Lyapunov matrix inequality and equation in such theorems. Note also that the solution for the delay-free controller gain K(t) is, in general, non-unique, the algebraic linear system (54) being compatible and indeterminate. This proves Property (i). Property (ii) follows directly from Property (i) by making the solution explicit in the equivalent forms (57) and (58) under the necessary and sufficient condition for its existence. Property (iii) follows since, if no solutions exist, then (58), and equivalently (57), under the choice K 0 (t) ≡ 0, minimizes the error norm with respect to all the choices of the arbitrary matrix K 0 (t) [29,30].
To prove Property (iv), note that in (28) the following relation can be written for t 2 > t 1 ≥ 0, and close equivalents apply for the remaining three conditions given in (28). Now, the values of µ 1 and µ 2 become as small as possible by reducing as much as possible ‖A acl (t)‖ and ‖B ad (t)‖ through the choices of K d (t) and K a (t), respectively. Thus, whether the corresponding equations in (22) are solvable in K d (t) and K a (t) or algebraically incompatible, the respective minimizations of ‖A acl (t)‖ and ‖B ad (t)‖ arise from the choices (59) and (61), respectively.
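A frozen-time numerical sketch of the gain synthesis in Theorem 6 (hypothetical matrices, numpy-only, not the paper's example). It vectorizes B K C = A m − A as (C^T ⊗ B) vec(K) = vec(A m − A), using column-stacking vec, and takes the Moore-Penrose least-squares solution, which coincides with an exact solution when the algebraic system is compatible:

```python
import numpy as np

def output_feedback_gain(A, B, C, Am):
    """Least-squares K for B K C = Am - A, via vec(B K C) = (C^T kron B) vec(K)."""
    m, p = B.shape[1], C.shape[0]
    M = np.kron(C.T, B)  # maps vec(K) (column-stacked) to vec(B K C)
    vecK, *_ = np.linalg.lstsq(M, (Am - A).flatten(order="F"), rcond=None)
    return vecK.reshape((m, p), order="F")

# Hypothetical frozen-time data (state-feedback special case: C = I).
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.eye(2)
C = np.eye(2)
Am = np.array([[-1.0, 0.0], [0.0, -2.0]])  # target stability matrix

K = output_feedback_gain(A, B, C, Am)
Acl = A + B @ K @ C  # closed-loop delay-free matrix; here exactly Am
```

When the system is incompatible, the same call returns the minimum-error-norm gain, mirroring the K 0 (t) ≡ 0 best-approximation choice of Property (iii).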
Remark 6.
Note that, in general, a less restrictive condition than that given in Theorem 6 for the solvability of (53) is the stabilization by linear state feedback, since the state space dimension n is usually higher than that of the output space p. In that case, the controller gain matrices are of orders m × n instead of m × p. This reduces to taking C(t) = I n in (53) and (54), so that the solvability condition (55) becomes weakened to: rank(B(t) ⊗ I n ) = n × rankB(t) = rank(B(t) ⊗ I n , vec(A m (t) − A(t))); ∀t ∈ R 0+ (64) On the other hand, in the particular case with m = p = n, the dimensions of the state, input and output are identical, and it can also be discussed as a particular case of linear state feedback with the same number of inputs as outputs, both equal to the state dimension. However, this theoretical case is not very useful in most applications, where the numbers of inputs and outputs are less than the state dimension.
In addition, note that in the case where the algebraic system is incompatible, the simplest solution (K 0 (t) ≡ 0), corresponding to the indeterminate compatible case, gives the best approximating solution in the sense that the error norm between both sides of (54) is the minimum possible error norm for any selection of K(t).
It can be pointed out that there are other generalized inverses, such as the generalized Bott-Duffin inverse, which is constrained by the use of a projection on a subspace of the solution, or the Drazin inverse; the latter does not satisfy the condition AA † A = A in general [29]. Remark 7. Note from (21) and (22) that Theorem 6 (iv) provides a way to minimize ‖A acl (t)‖ and ‖B ad (t)‖, but we still need to deal with the delayed dynamics associated with the matrices B ad (t) and B aa (t). However, the control law (3) has no extra gains to deal with those resulting contributions to the closed-loop dynamics. A modification of the control force in (1) can assist with that task. Consider the differential system: with u(t) still being generated by (3) and u 0 (t) = K 0 (t)x(t − kT); ∀t ∈ R 0+ being another supplementary control to deal with the above-mentioned drawback. Then, the former closed-loop differential system (21) and (22) becomes modified as follows: .
Now, K(t) and K d (t) are designed as in Theorem 6 to deal with A cl (t) and A dcl (t), while K a (t) and K 0 (t) are designed to deal with A bcl (t) via the following possibilities: and equivalently, leading to is the best approximation of A bcl (t) = A a (t) + (B(t) + B a (t))K a (t)C(t) to A bcl (t) = 0.
and equivalently, is the best approximation of A bcl (t) = A a (t) and equivalently, is the best approximation of A bcl (t) = A a (t) + B(t)K a (t)C(t) to A bcl (t) = 0.
Example
Consider the following time-varying, third-order linear system with two inputs and two outputs, defined by: The stabilization objective is the achievement of dynamics given by the stability matrix: which is solvable in the controller gain K(t), since (56) is fulfilled [29][30][31]. The stabilizing controller gain which satisfies the above equation is: The first condition of Theorem 4 is fulfilled with P = I 3 , since λ max (P h (t) + P T (t) + P 2T (t) + P hT (t)) ≤ −2.05 + q + 1 + (1/4) sup t∈R 0+ λ max (P h (t) + P T (t) + P 2T (t) + P hT (t)) ≤ 0 is fulfilled according to (27) for some q ∈ (0, 1.05), provided that any discrete dynamics and continuous-time dynamics satisfy the following constraint for k = max{z ∈ Z 0+ : zT ≤ t}, since this constraint guarantees that, in addition, (28) holds. The corresponding controller gain matrix in the controller (3), given by: K a (t) = [1 −0.5; −0.5t 0.25t] (81) cancels the contribution of such discrete dynamics in the closed-loop dynamics, with A acl (t) = A 0 acl (t) = 0 and B a (t) = B aa (t) = B ad (t) = 0 in (22). Thus, the whole closed-loop system with delay-free and discrete dynamics is stabilized by the controller: u(t) = K(t)y(t) + K a (t)y(t − kT); ∀t ∈ R 0+ (82) with the controller gains given by (77) and (81). It then suffices, for the continuous-time delayed contribution, if any (i.e., if A d (t) is not identically zero in (1)), that the closed-loop dynamics satisfy (79). For instance, it is sufficient for the whole controller (3) to have the gains K(t) of Equation (77) and K a (t) of Equation (81), together with an extra gain K d (t) which satisfies: in order to stabilize the continuous-time delayed dynamics subject to a time-varying, differentiable delay h(t) with a time-derivative less than unity.
In future works, it is planned to extend the results of this paper to the hyperstability and passivity theories [32][33][34][35][36] by designing the controller gains so that "ad hoc" Popov-type inequalities are satisfied by a feedback control loop under generic nonlinear, time-varying control laws.
Conclusions
This paper has studied a solution in closed form as well as the asymptotic stability and asymptotic stabilization of a linear, time-varying, hybrid continuous-time/discrete-time dynamic system subject also to delayed dynamics, whose dynamics depend not only on time but on previously sampled state values as well. The delay function is not necessarily bounded, and it is time-differentiable with a time-derivative bounded below one for all time. The asymptotic stability after injecting eventual feedback efforts is studied through two Krasovskii-Lyapunov functionals, one of them having a constant leading positive-definite matrix to define the non-integral part as a quadratic function of the solution, while the other takes a time-varying, time-differentiable matrix function for the same purpose. Those Krasovskii-Lyapunov functionals establish sufficiency-type conditions for the asymptotic stability of the closed-loop system. The system is assumed under a control law based on time-varying linear output feedback, which takes combined information of the current output value, the delayed one and its last previous sampled value, which arises from the combined continuous-time/discrete-time hybrid nature of the differential system. The associated Lyapunov matrix inequality, or equality, associated with the above-mentioned Krasovskii-Lyapunov functionals assumes that the delay-free matrix of the closed-loop system dynamics is a stability matrix for all time, achieved, under certain conditions, by one of the control gain matrix functions of the control law. There are also extra assumptions on the maximum variation of the time integral of squared norms of the remaining matrices of delayed dynamics, in the sense that those time integrals vary more slowly than linearly with any considered time interval length.
Functional Characterization and Cellular Dynamics of the CDC-42 – RAC – CDC-24 Module in Neurospora crassa
Rho-type GTPases are key regulators of eukaryotic cell polarity, but their role in fungal morphogenesis is only beginning to emerge. In this study, we investigate the role of the CDC-42 – RAC – CDC-24 module in Neurospora crassa. rac and cdc-42 deletion mutants are viable, but generate highly compact colonies with severe morphological defects. Double mutants carrying conditional and loss-of-function alleles of rac and cdc-42 are lethal, indicating that both GTPases share at least one common essential function. The defects of the GTPase mutants are phenocopied by deletion and conditional alleles of the guanine nucleotide exchange factor (GEF) cdc-24, and in vitro GDP-GTP exchange assays identify CDC-24 as a specific GEF for both CDC-42 and RAC. In vivo confocal microscopy shows that this module is organized as a membrane-associated cap that covers the hyphal apex. However, the specific localization patterns of the three proteins are distinct, indicating different functions of RAC and CDC-42 within the hyphal tip. CDC-42 localized as a confined apical membrane-associated crescent, while RAC labeled a membrane-associated ring excluding the region labeled by CDC-42. The GEF CDC-24 occupied a strategic position, localizing as a broad apical membrane-associated crescent and in the apical cytosol excluding the Spitzenkörper. RAC and CDC-42 also display distinct localization patterns during branch initiation and germ tube formation, with CDC-42 accumulating at the plasma membrane before RAC. Together with the distinct cellular defects of rac and cdc-42 mutants, these localizations suggest that CDC-42 is more important for polarity establishment, while the primary function of RAC may be polarity maintenance. In summary, this study identifies CDC-24 as an essential regulator of RAC and CDC-42, which have common and distinct functions during establishment and maintenance of cell polarity in N. crassa.
Introduction
Rho GTPases are small G proteins of the Ras superfamily and function as molecular switches that activate a variety of effector proteins when in the GTP-bound state and return to inactivity upon hydrolysis of GTP. They play key roles in multiple signal transduction pathways and regulate fundamental cellular processes, including cell migration, cell cycle progression and cell polarity [1,2]. Rho guanine nucleotide exchange factors (RhoGEFs) and Rho GTPase-activating proteins (RhoGAPs) enhance nucleotide binding and hydrolysis of Rho GTPases, respectively, and are increasingly acknowledged as crucial determinants of spatiotemporal Rho signaling activity [3].
While the unicellular yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe are the paradigms for polarized growth, many members of the fungal kingdom, among them important human pathogens, are distinguished from their well-studied yeast relatives by their ability to grow in a filamentous mode, leading to the formation of highly elongated hyphae. Knowledge about the molecular mechanisms underlying this extreme form of polarized extension is only slowly beginning to accumulate, and Rho GTPases and their regulators play an essential role in hyphal morphogenesis and development [4][5][6][7][8][9].
The most obvious distinction between the Rho repertoires in yeasts and filamentous fungi is the presence of a Rac homologue in the latter organisms only. Rac is considered the founding member of the Rho GTPase family, from which the closely related Cdc42 and the more distantly related Rho proteins descended, a process associated with concomitant specialization of function [10]. In yeasts, Rac was probably lost later and its roles taken over by Cdc42. This scenario would explain why S. cerevisiae and S. pombe cells devoid of Cdc42 are not viable [11,12], while its depletion in filamentous fungi is not lethal, but becomes so upon simultaneous disruption of the Rac-encoding gene [13,14]. In mammalian systems, Rac and Cdc42 are best known for their control of different actin-based cell projections involved in cell motility. While Rac is the main regulator of lamellipodia formation, Cdc42 is required for formation of the slender filopodia, and the two GTPases appear to regulate both unique and shared effector proteins [15,16]. A similar tendency to employ Rac and Cdc42 for both overlapping and distinct morphogenetic functions is also observed in filamentous fungi, although it is also becoming clear that the degree of specialization between the two GTPases and their relative contributions to hyphal growth can vary widely between different species [17].
For instance, in Candida albicans, a dimorphic ascomycete and opportunistic human pathogen, deletion of rac1 does not interfere with viability, but cdc42 is an essential gene. Moreover, Rac1 and Cdc42 have distinct roles in hyphal growth triggered by different stimuli and cannot substitute for each other [18,19]. Rac1 and its GEF Dck1 are required for matrix-induced filamentous growth and appear to be involved in cell wall integrity [19,20,21]. On the other hand, specific regulation of Cdc42 and its essential GEF Cdc24 allows serum-induced filament formation [18,22]. In contrast, in the basidiomycete Ustilago maydis, Rac1 plays the prominent role during hyphal growth, while deletion of cdc42 does not affect filament formation. Conversely, Cdc42, but not Rac1, is essential for cell separation of the yeast cells after cytokinesis and triggers the formation of the secondary septum [13]. Thus, the roles of Cdc42 and Rac1 have strongly diverged, and consistently the two GTPases cannot substitute for each other [23]. Nevertheless, despite the high degree of specialization, the two GTPases must have retained at least one common essential function, as evident in the synthetically lethal effect of their combined depletion [13].
RacA and Cdc42 of the filamentous ascomycete Aspergillus nidulans are proposed to share a function in establishing the primary axis of polarity. Cdc42 appears solely responsible for maintaining directed elongation and regulating subsequent polarization events for lateral branch formation while RacA appears to play a prominent role in asexual development [14]. Only recently, it has been shown that in Aspergillus niger, which is a close relative of A. nidulans, RacA has a prominent role in regulating actin polarization and hyphal growth, especially maintenance of established polarity axes, while the Cdc42-homologue CftA appears largely dispensable [24].
Initial hints for the involvement of Rho GTPases in hyphal morphogenesis of the filamentous ascomycete Neurospora crassa came from a large-scale screen for conditional mutants defective in cell polarity, which identified conditional mutants in the RHO1-specific GAP lrg-1 and the GEF cdc-24 [25,26]. In this work, we investigate the requirement of CDC-42 and RAC during polarization and growth of N. crassa and explore their common regulation through the GEF CDC-24.
Materials and Methods
Strains, media and growth conditions

N. crassa and bacterial strains used in this study are listed in Table 1. General genetic procedures and media for N. crassa are available through the Fungal Genetics Stock Center (www.fgsc.net; [27]). Fungal strains were routinely grown at 28°C on Vogel's Minimal Medium (VMM) supplemented with 1.5% (w/v) sucrose as the carbon source and solidified with 1.5% (w/v) agar. Stock solutions of cytochalasin A and benomyl (Sigma-Aldrich, St. Louis, MO) were prepared in 100% ethanol at 10 mg/ml. Working solutions of cytochalasin and benomyl were prepared according to [28] at 2.5 mg/ml and 1.0 mg/ml, respectively. A drop of the drug was placed on a coverslip, and the block of agar containing mycelium was placed in contact with the inhibitor solution and scanned under confocal microscopy after 5 min of exposure. For auxotrophic strains, 0.5 mg/ml histidine was added to VMM [29]. Transformation of N. crassa macroconidia was carried out by electroporation as previously described [30]. N. crassa crosses were carried out on synthetic crossing medium [31]. Transformants showing robust, consistent fluorescence were selected and back-crossed to obtain homokaryotic strains. Mycelium for DNA extraction was grown for 7 days in VMM liquid medium without shaking or light, filtered, submerged in liquid nitrogen and lyophilized. For genomic DNA extraction of N. crassa, we used the DNeasy Plant extraction Kit (Qiagen, Inc.). Temperature-sensitive strains of cdc-42 and rac were created by applying RIP (repeat induced point mutation) mutagenesis [32]. Briefly, N. crassa his-3 was transformed with 1.5 kb fragments covering the rac and cdc-42 coding sequences plus 350 bp and 200 bp of 5′ and 3′ flanking sequence, respectively, cloned into vector pBM61 [30]. These strains were mated with wild type and the resulting progeny screened for conditional growth defects according to the procedure described in [25].
Plasmid construction of fluorophore-Rho fusion proteins
For creation of pPgpdYFP_Rac and pPgpdYFP_Cdc42, respectively, rac (NCU02160) and cdc42 (NCU06454) were amplified from genomic DNA using primer combinations SB_rac_5_BglII/SB_rac_3_EcoRI and SB_cdc42_5_BglII/SB_cdc42_3_EcoRI, respectively, and inserted via BglII/EcoRI sites into pPgpdYFP, which was designed to allow expression of N-terminally yellow fluorescent protein (YFP)-tagged proteins under the control of the A. nidulans gpdA promoter. The promoter was amplified from plasmid pEHN1-nat [33] using primers CoS_Pgpd_3/_4, while yfp was amplified from pYFP [34] using primers CoS_YFP_1/CoS_YFPC_2MCS. The two fragments were subjected to fusion PCR with primer pair CoS_Pgpd_3/CoS_YFPC_2MCS. The resulting amplification product was cleaved with ApaI/NotI and inserted into pYFP from which the ccg-1 promoter and yfp gene had been released by digestion with the same enzymes. For creation of pCAP24.3GFP_Cdc24, cdc24 (NCU06067.4) was amplified from genomic DNA using primer combination cdc24_5_SpeI/cdc24_3_PacI and inserted via SpeI/PacI sites into pRM-12GFP (Mouriño-Pérez, unpublished). Plasmids and oligonucleotides used or generated in this study are listed in Tables S1 and S2, respectively.

GEF assays

cDNA encoding wild-type and mutant versions of the RhoGEF and PH domain regions of CDC24 (NCU06067; aa 204-544) were amplified using primers NV_CDC24_5/_6. SalI/NotI sites were used for ligation with pNV72 to produce pMalc2xL_CDC24-GEFPH and its respective mutant analogues. The N. crassa Rho GTPases and RhoGEF domain constructs were expressed as fusion proteins with an N-terminal maltose binding protein (MBP) tag. For fusion protein purification (modified from [26,35]), LB+ medium (1% NaCl, 0.8% yeast extract, 1.8% peptone, 2% glucose) was inoculated to an OD600 of 0.1 from an overnight culture of Rosetta2(DE3) E. coli cells transformed with the respective pNV72-derived plasmid.
Cultures were grown shaking at 20°C to an OD600 of 0.45, and fusion protein expression was induced by addition of isopropyl β-D-thiogalactopyranoside to 0.2 mM for 2 hours. Cells were disrupted by ultrasonication using a Sonopuls HD 2070 ultrasonicator (Bandelin GmbH & Co. KG, Germany) in lysis buffer (50 mM Tris, pH 7.4, 125 mM NaCl, 5 mM MgCl2, 10% glycerol, 0.02% NP-40, 2 mM DTT, 1 mM PMSF, 0.35 mg/ml benzamidine, 10 mM GTP), and cleared lysates were incubated on a rotating wheel at 4°C with pre-equilibrated Amylose Resin (New England Biolabs, USA) for one hour. The resin was washed twice with washing buffer (lysis buffer with 250 mM NaCl) before elution with elution buffer (50 mM Tris, pH 7.4, 200 mM NaCl, 5 mM MgCl2, 10% glycerol, 0.02% Nonidet P-40, 2 mM DTT, 20 mM maltose). Total protein concentration of the eluate was determined with bovine serum albumin standard solutions as a reference, using Roti-Quant (Carl Roth GmbH + Co. KG, Germany) and a Tecan Infinite M200 microplate reader equipped with Magellan software (version 6; both Tecan Group Ltd., Switzerland).

Figure 1. cdc-42 and rac(7-1) grown at permissive conditions and shifted to 37°C for the indicated time or germinated at restrictive temperature. (B) Higher magnification and staining with FM4-64 revealed severe morphological defects of cdc-42 and rac(7-1) grown at 37°C. Note that the dye accumulates within the apical region of the hyphal tip (white arrows), but a typical Spk (black arrow in wild type) is not formed. Arrowheads mark septa in the conditional strains. doi:10.1371/journal.pone.0027148.g001
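Returning to the protein quantification step above: reading the eluate concentration off a bovine serum albumin standard series amounts to fitting a linear standard curve and inverting it. A minimal numpy sketch; the absorbance readings below are made up for illustration, not values from the study:

```python
import numpy as np

# Hypothetical BSA standard series: concentration (mg/ml) vs. absorbance.
# These readings are illustrative, not data from the study.
bsa_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
bsa_abs  = np.array([0.02, 0.14, 0.27, 0.52, 1.01])

# Least-squares fit of absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(bsa_conc, bsa_abs, 1)

def protein_conc(absorbance):
    """Invert the standard curve to estimate protein concentration (mg/ml)."""
    return (absorbance - intercept) / slope

sample_abs = 0.40  # absorbance of the eluate (hypothetical)
print(round(protein_conc(sample_abs), 2))
```

Plate-reader software such as Magellan performs the same fit internally; the sketch just makes the standard-curve arithmetic explicit.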
Live-cell imaging
For the analysis of colonial and hyphal morphology, an Olympus SZX16 stereomicroscope (Olympus, Japan) equipped with an Olympus SDF PLAPO 1xPF objective was used; photos were captured with an Olympus ColorView III camera operated by the program Cell D analySIS Image Processing (Olympus Soft Imaging Solutions GmbH, Germany). Higher-resolution images of N. crassa hyphae were obtained using the inverted agar block method [38] on an inverted Zeiss Laser Scanning Confocal Microscope LSM-510 META equipped with an Argon-2 ion laser and a He/Ne laser, well suited to detect GFP and YFP (Abs/Em 488/515-530 nm). A Plan Apochromat ×100/1.4 oil immersion objective was used. A photomultiplier module allowed us to combine fluorescence with phase contrast to provide a simultaneous view of the fluorescently labeled proteins and the entire cell. Confocal images were captured using LSM-510 software (version 3.2; Carl Zeiss, Germany) and evaluated with the LSM-510 Image Examiner (version 3.2). Some of the image series were converted into animation movies using the same software. Samples were stained with 2.5 µM FM4-64 (Molecular Probes, Eugene, OR) for 10 min and subsequently analyzed under confocal microscopy using an Argon-2 laser (Abs/Em 514/670 nm).
Results
The coordinated activity of RAC and CDC-42 is required for cell polarization, the integrity of the Spitzenkörper and hyphal growth of N. crassa

In order to dissect the functions of RAC and CDC-42 in hyphal growth of N. crassa, we generated conditional mutants using an in vivo mutagenesis approach. RIP (repeat induced point mutation) is a unique method that allows the introduction of point mutations into N. crassa genes, exploiting a defense mechanism of N. crassa that inactivates duplicated sequences when strains carrying repeats are sent through a cross [32]. By visually screening ca. 10,000 progeny of crosses of wild type with strains carrying duplicated rac or cdc-42 genes for temperature-sensitive phenotypes, we identified one strain in the progeny of each cross that displayed conditional growth defects. Sequencing of the rac(7-1) and cdc-42(18-4) coding regions amplified from genomic DNA of these mutants revealed several silent mutations, but also RIP-specific mutations that translated to one and four amino acid substitutions at highly conserved positions, respectively (Figure S1). An alignment of N. crassa RAC and CDC-42 with homologues from other fungi revealed that the substitutions are all located at conserved positions.
When grown at permissive conditions (≤32°C), rac(7-1) and cdc-42(18-4) exhibited normal cell morphology, albeit slightly reduced growth rates. Shifting cultures to 37°C, however, quickly led to pronounced morphological aberrations (Figure 1 A). As a control, wild type was cultured under the same experimental conditions and no morphological changes were detected. Labeling these mutants with the vital dye FM4-64 revealed that some hyphae still displayed accumulation of the dye at the apex of new branches, but the typical Spitzenkörper (Spk) observed in wild type was not present in the two mutants at restrictive conditions (Figure 1 B). cdc-42(18-4) hyphae shifted to restrictive temperatures exhibited some apical branching, but their most prominent defects were the loss of apical polarity, the frequent generation of subapical branches and the swelling of most hyphal tips. Pronounced apical hyperbranching was observed in rac(7-1) within 30 min of transfer to restrictive conditions. Many of these new tips grew initially in an apolar manner, but resumed some polarity after prolonged incubation at 37°C, resulting in knobby, tree-like clusters of hyphae at the edge of the highly compact colony. Nevertheless, the strong polarity defect in both conditional mutants did not affect all hyphal tips and allowed the formation of compact colonies with highly reduced extension rates even at restrictive conditions.
We also isolated several clones with compact morphologies from the rac and cdc-42 crosses that did not show conditional defects. Sequencing the GTPase genes of these mutants revealed the repeated generation of in-frame stop codons within the rac and cdc-42 coding regions, suggesting that these mutants are loss-of-function alleles of the two GTPases (data not shown). Because clear deletion strains were available from the Neurospora genome project [39,40], we focused on the further characterization of these strains instead of the loss-of-function mutants isolated in the RIP approach. Colonies of Δcdc-42 and Δrac showed severe growth defects, resulting in a very compact colony morphology, in contrast to the typical spreading growth of wild type (Figure 2 A). They also showed irregular growth, generating distorted hyphae as a consequence of temporal loss of polarity and the periodical re-initiation of polar growth at the sites of swollen tips (Figure 2 B). When stained with FM4-64, Δcdc-42 displayed a bright accumulation of the dye at the apical-subapical area without generating a defined Spk as clearly observed in wild type (Figure 2 C).

Figure 4. cdc-24(10-19), cdc-24(19-3) and cdc-24 strains of N. crassa grown at permissive conditions and shifted to 37°C for the indicated time or germinated at restrictive temperature. Defects range from apical hyperbranching in the weak cdc-24(10-19) allele to complete failure to polarize in the strongest cdc-24 allele. The hyphal morphology of a wild type control grown under these conditions is shown in Figure 1. (C) Mature hyphae stained with FM4-64 revealed similar polarity defects and the excessive formation of multiple septa in cdc-24(10-19), cdc-24(19-3) and cdc-24. Arrows indicate FM4-64 accumulation in tips lacking a typical Spk. doi:10.1371/journal.pone.0027148.g004
Septa were also abundant and generated near the apical zone, resulting in cell compartments of reduced length (Figure 2 C). Δrac was typified by its production of profuse apical branches, resulting in ramification of the compact colony. Accumulation of FM4-64 in the apical tip region was lower when compared with hyphal tips of Δcdc-42. These observations corroborate that RAC and CDC-42 are critical components for polarity maintenance and are required for Spk assembly.
We were unable to obtain viable Δrac;Δcdc-42 strains, but the frequent occurrence of apolarly germinating ascospores obtained from Δrac × Δcdc-42 crosses suggested lethality of the double mutants (Figure 3 A). We tested this hypothesis by generating a conditional rac(7-1);cdc-42(18-4) double mutant. In accordance with the proposed requirement of at least RAC or CDC-42 function for viability, this strain displayed strong synthetic growth and polarity defects upon transfer to 37°C (Figure 3 B). We observed pronounced apical hyperbranching, concomitant swelling of apical and subapical hyphal compartments and the increased formation of septa. After 3 h of incubation cell polarity was completely lost, and the swollen compartments lysed and died after prolonged incubation at restrictive conditions. This was confirmed by confocal imaging using FM4-64, which also revealed that the typical hyphal organization was lost (Figure 3 C). Germination of cdc-42(18-4);rac(7-1) conidia at restrictive temperature did not produce viable germlings; the cells only grew isotropically before they lysed. Shifting these isotropically swollen cells back to permissive conditions resulted in the fast generation of multiple germ tubes (Figure 3 B). Interestingly, these tubes emerged primarily on one side of the spore, suggesting that signals required for polarity establishment are not confined to a single spot but distributed over a wide region of the cell. In summary, these phenotypic characteristics indicate a common, essential function of the two GTPases in the establishment and maintenance of cell polarity, in addition to individual, but non-essential, functions during hyphal morphogenesis.
CDC-24 functions as exchange factor and activator of RAC and CDC-42
The Neurospora genome project had generated a heterokaryotic deletion strain for the GEF cdc-24. However, homokaryotic knockout ascospores obtained by backcrossing with wild type germinated only rarely, did so apolarly, and ultimately lysed, indicating that cdc-24 is essential for viability (Figure 4 A). Moreover, the phenotypic defects of the conditional rac and cdc-42 mutants were highly reminiscent of conditional cdc-24 mutants, which also displayed multiple forms of apical hyperbranching and loss of polarity [25].
Of the ≥20 cdc-24 strains isolated in this screen, we analyzed three mutants that represented weak, intermediate and strong cdc-24 defects (Figure 4 B, C). After shift to restrictive conditions, cdc-24(10-19) displayed pronounced apical hyperbranching, resulting in the formation of dense, tree-like hyphal structures that looked highly similar to the rac(7-1) characteristics. Confocal imaging at higher magnification revealed distorted, but still polarly growing hyphae without the characteristic internal cell organization, i.e. hyphae lacked a typical Spk, nuclei were closer to the tips and no elongated mitochondria were observed (Figure 4 C). Severe morphological defects were observed in cdc-24(19-3) and the strongest cdc-24 allele, where hyphae branched excessively in apical and subapical regions. This was accompanied by swelling of apical tips and hyperseptated chains of cells without internal organization (Figure 4 C). Hyphal tips lost polarity altogether and ballooned spherically. At later stages, hyperseptated hyphae resembled chains of spheres as apical and subapical hyphal compartments expanded isotropically, with some lysing at an advanced stage. The defects of these strains were almost identical to those of the conditional rac(7-1);cdc-42(18-4) double mutant described above. When conidia of the three cdc-24 mutants were germinated at 37°C, polarity defects with similar characteristics were observed. cdc-24(10-19) formed hyperbranched and tight colonies, while in cdc-24(19-3) similar hyperbranching was accompanied by apical and subapical swelling of hyphal compartments. Conidia of the strongest cdc-24 allele were unable to polarize, and growth was restricted to isotropic expansion. These data underline the importance of CDC-24 not only for maintenance, but also for establishment of cell polarity in N. crassa.
Therefore, we determined the specificity of CDC-24 for its cognate N. crassa Rho GTPase(s) and performed in vitro GDP-GTP exchange assays with bacterially expressed and purified Rho proteins and a CDC-24 fragment that contained the catalytic GEF and adjacent PH domain of CDC-24 (Figure 5). CDC-24(204-544) specifically stimulated the GDP-GTP exchange activity of RAC and of CDC-42, but did not affect the exchange activity of RHO1 to RHO4. Interestingly, this fragment exhibited equal GEF activity towards RAC and CDC-42.
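GEF activity in such exchange assays is typically quantified by fitting a single-exponential to the nucleotide-exchange time course and comparing the observed rate constants with and without the GEF fragment. A sketch on simulated (not measured) curves, assuming a known fluorescence offset:

```python
import numpy as np

t = np.linspace(0, 300, 31)  # time points in seconds (assumed sampling)
offset = 0.1                 # residual fluorescence plateau (assumed known)

# Simulated single-exponential exchange curves; the rate constants are
# illustrative, not values measured in the study.
k_intrinsic, k_gef = 0.002, 0.02  # observed rates in 1/s
basal = np.exp(-k_intrinsic * t) + offset
plus_gef = np.exp(-k_gef * t) + offset

def fit_k_obs(trace):
    """Recover the observed exchange rate by log-linear regression."""
    slope, _ = np.polyfit(t, np.log(trace - offset), 1)
    return -slope

fold_stimulation = fit_k_obs(plus_gef) / fit_k_obs(basal)
print(round(fold_stimulation, 1))
```

Equal GEF activity towards two GTPases, as reported here for CDC-24(204-544), would correspond to similar fold stimulation for both substrates; noisy real data would normally call for a nonlinear least-squares fit rather than log-linearization.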
Next, we asked if this dual GTPase specificity is affected in the conditional cdc-24 mutants. Sequence analysis of the mutant cdc-24 genes revealed mutations causing substitutions of highly conserved amino acids located within the predicted RhoGEF domain (cdc-24(10-19) and the strongest allele) or the adjacent PH domain (cdc-24(19-3)) of CDC-24 (Figure S2). Partial cDNAs encoding the GEF and PH domains of CDC-24 (aa 204-544) were prepared from the three mutant strains, and the bacterially expressed proteins were used for in vitro GEF assays (Figure 6). The identified amino acid substitutions in the CDC-24 mutant constructs affected their ability to enhance nucleotide exchange in RAC and CDC-42, and the reduction in GEF competency of the mutant proteins correlated with the strength of the morphological defects observed in the corresponding mutant strains. However, the three CDC-24 variants did not exhibit significantly altered target specificity in vitro when assayed at permissive or restrictive temperature.
CDC-42, RAC and CDC-24 show distinct localization patterns during cell polarization and tip extension
N. crassa strains, in which the CDC-42 and RAC GTPases were N-terminally tagged with YFP in the respective deletion background, complemented the mutant growth defects, indicating functionality of the constructs. By confocal microscopy, we observed YFP-CDC-42 fluorescence in growing hyphal tips in the form of a plasma membrane-associated crescent (Figure 7 A, movie S1). In contrast, YFP-RAC localized as a membrane-associated ring that excluded the most apical zone occupied by the Spk, which is labeled by YFP-CDC-42 (Figure 7 B, movie S2). As expected from its dual function as GEF for RAC and CDC-42, the localization of an N-terminal GFP-CDC-24 construct overlapped with those of both GTPases in that it was distributed as a broad cap at the hyphal apex (Figure 7 C). Interestingly, GFP-CDC-24, unlike the two GTPases, was not exclusively associated with the apical membrane, but also labeled a cytosolic region surrounding the Spk in a highly dynamic manner (Figure 7 D; movie S3). Counter-staining with FM4-64 further revealed that CDC-24 was excluded from the Spk core. In summary, the three components of the RAC – CDC-42 – CDC-24 GTPase module displayed distinct localization patterns within the mature hyphal tip. To explore whether the three components of the GTPase module localize to the apical dome in a cytoskeleton-dependent manner, mature hyphae were exposed to cytochalasin A, which depolymerizes actin filaments, and benomyl, a microtubule-depolymerizing drug. When exposed to either drug, the three proteins remained associated with the apex despite the clear effects of the drugs, which provoked irregular hyphal growth and loss of polarity, respectively (Figure 8 A, B). These results indicate that the retention of the three proteins at the hyphal tip is independent of a functional F-actin and microtubule cytoskeleton.
Both GTPases were also observed at developing septa ( Figure 9), but we did not detect CDC-24 there, potentially because the localization of all three proteins at septa is very weak and close to the detection limit.
The phenotypic characteristics of rac and cdc-42 mutants indicated the involvement of these GTPases not only in hyphal growth, but also in polarity establishment of the germinating spore and during branch formation. We observed a slight accumulation of both proteins at a subapical region of the plasma membrane ca. 20-70 sec prior to the emergence of a new branch (Figure 10 A, B), further supporting the involvement of CDC-42 and RAC in polarity establishment. Interestingly, CDC-42 accumulated at future branch sites ≥1 min prior to branch emergence, while RAC localized there only ≤20 sec beforehand. Both proteins maintained their localization within the apex of newly formed branches with very dynamic behavior (movies S4, S5). Interestingly, RAC was observed as a crescent throughout the whole apical dome of newly formed branches (Figure 10 B, movie S5), a distribution pattern different from that observed in mature hyphae and more similar to the localization of RAC in the germ tube (see below), suggesting a growth-rate-dependent re-localization of RAC from an apical crescent to a subapical ring. Once the new branch reached a length of about 20-30 µm, RAC adopted the subapical distribution observed in mature hyphae. CDC-42 also displayed a wider distribution at the apex of new branches, compared to that in leading hyphae (movie S4). In contrast, CDC-24 was not detected as an accumulation close to the plasma membrane, but only as a cloud occupying the apical dome (Figure 10 C, movie S6). A slight accumulation at the tip was detected once the new branch reached approximately 10 µm in length (Figure 10 C).
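The lead times reported above (CDC-42 accumulating ≥1 min before branch emergence, RAC only ≤20 sec before) imply an onset-detection step on per-frame fluorescence traces. A sketch of a simple half-maximum onset criterion on synthetic traces; the frame interval and the criterion itself are assumptions for illustration, not the analysis used in the study:

```python
import numpy as np

frame_interval = 5.0  # seconds between confocal frames (assumed)

def onset_time(trace):
    """Time of the first frame at which a trace reaches half-maximal intensity."""
    half = (trace.min() + trace.max()) / 2.0
    return np.argmax(trace >= half) * frame_interval

# Synthetic sigmoidal recruitment traces: CDC-42 rises before RAC
# (illustrative shapes only, not measured data).
frames = np.arange(60)
cdc42 = 1.0 + 0.5 / (1.0 + np.exp(-(frames - 30)))  # rises around frame 30
rac   = 1.0 + 0.5 / (1.0 + np.exp(-(frames - 45)))  # rises around frame 45

lead = onset_time(rac) - onset_time(cdc42)  # CDC-42 lead time in seconds
```

On real traces a noise-robust threshold (e.g. baseline mean plus several standard deviations) would replace the half-maximum criterion, but the comparison of onset times is the same.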
When conidia (asexual spores) are transferred to an appropriate medium, they rehydrate and grow isotropically for 3-4 h before growth becomes polarized and a new hyphal tip is generated [4,8]. During this hydration phase, YFP-CDC-42 accumulated at a discrete zone of the conidium that marked the future site of germ tube emergence, while YFP-RAC was observed as cytosolic spots and accumulated at the membrane only after polarization had occurred, labeling the apex of an established germ tube (Figure 11 A). GFP-CDC-24 was primarily observed as cytosolic spots and less frequently accumulated at the cortex of conidia. Its accumulation at the apex of germ tubes was typically visible only after the germ tube reached a few µm in length. During extended growth of the germling, both YFP-CDC-42 and YFP-RAC accumulated as a general plasma membrane label with an increased accumulation within the apical 5-10 µm (Figure 11 B). In contrast to RAC and CDC-42, CDC-24 accumulated as a cloud occupying the apical region, before its membrane association became stronger with the increased growth rate of the hyphal tip.
Discussion
The results presented here show that the RAC – CDC-42 – CDC-24 GTPase module is required for polarized growth and hyphal morphogenesis in the ascomycete N. crassa. The phenotypic characterization of single and double mutants and a thorough microscopic analysis of the localization patterns of the three proteins indicate that the two Rho GTPases have primarily non-redundant functions. However, they must share at least one common and essential task during establishment and maintenance of cell polarity, which is illustrated by the synthetic lethality of rac;cdc-42 double mutants. Moreover, in vitro GDP-GTP exchange assays demonstrate that CDC-24 functions as a common GEF for RAC and CDC-42, and the mutant characteristics of conditional as well as loss-of-function alleles of rac, cdc-42 and cdc-24 strongly suggest that CDC-24 is the primary GEF for RAC and CDC-42.
Cells devoid of either of the two GTPases are still able to germinate, but show clear growth and polarity defects that result in the formation of small, compact colonies. This signifies that establishment of a primary axis of polarity is possible, albeit delayed, in the absence of CDC-42 or RAC, but subsequent hyphal extension is highly compromised in distinct ways. Strains deficient in RAC are characterized by dichotomous tip splitting and massive apical hyperbranching; this is also observed to a lesser extent in mutants affected in CDC-42 function, but their most prominent feature is the emergence of numerous subapical branches and the swelling of apical and subapical regions of the hypha. Thus, the two GTPases appear to function jointly in establishing polarity and in maintaining a stable axis of polarity, with a greater impact of RAC on the latter process, whereas CDC-42 is required to control the overall cell morphology and subapical branching. This notion of both overlapping and individual roles of the two GTPases in hyphal morphogenesis is corroborated by the finding that the simultaneous depletion of RAC and CDC-42 is lethal.
In budding and fission yeasts, Cdc42p activity is regulated by the RhoGEF Cdc24p or its close homologue Scd1, respectively [11,12]. Cdc24 has also been implicated in the regulation of Cdc42 in A. gossypii and C. albicans [18,41]. However, in none of these species has GEF activity of Cdc24 towards the presumed target GTPase been directly demonstrated. In contrast, U. maydis Cdc24 functions as a specific activator of Rac1 [13,42,43]. Thus, this study presents the first evidence that CDC-24 stimulates in vitro nucleotide exchange of both RAC and CDC-42 in a fungal system. Consistent with these in vitro results, the polarity defects observed in conditional cdc-24 mutants phenocopy those observed for mutants deficient in RAC and CDC-42 function.
cdc-24(10-19) and cdc-24(19-3) exhibit clear apical hyperbranching, as determined for rac(7-1) and, less pronounced, for cdc-42, while the phenotypic characteristics of cdc-24 mutants are identical to those of the conditional rac(7-1);cdc-42(18-4) double mutant. Specifically, cdc-24 and rac(7-1);cdc-42 conidia are unable to perform the isotropic-to-polar growth switch required for spore germination, and ascospores homokaryotic for deletion of cdc-24 or rac;cdc-42 fail to establish polarity. Moreover, when established colonies of the two conditional strains are transferred to restrictive conditions, the hyphae lose polarity, hyperbranch and continue growing in an isotropic manner.
The proposed functional overlap of RAC and CDC-42 in N. crassa and their common regulation by CDC-24 is also reflected in the similar localization patterns of the three proteins. Both GTPases are concentrated as membrane-associated crescents at sites of polarization during germ tube and branch formation, at the hyphal apex of mature hyphae, and at constricting septa. Accumulation of Rac and Cdc42 homologues at hyphal tips, often in crescent-like structures as observed in this study, has been reported for several filamentous fungi such as P. marneffei, A. nidulans, A. niger and C. albicans [14,22,24,44-46] and further underlines the importance of the two GTPases for fungal morphogenesis. Interestingly, the specific localization patterns of the three proteins are distinct, and support different functions of RAC and CDC-42 within the mature hyphal tip and during polarity establishment. CDC-42 localized as a confined apical membrane-associated crescent in the hyphal tip, while RAC labeled a membrane-associated ring excluding the region labeled by CDC-42 (Figure 12A). The GEF CDC-24 occupies a strategic position at the apical dome, localizing as a broad apical crescent covering the localization pattern of both GTPases. This is consistent with the in vitro GDP-GTP exchange assays that confirm equal GEF activity towards RAC and CDC-42. However, CDC-24 also displays a cytosolic accumulation surrounding the Spk, suggesting that activation of RAC and CDC-42 occurs at the plasma membrane, while cytosolic CDC-24 may serve as an activation-competent reservoir or may have additional GEF-independent functions.
The localization of the two GTPases in young germlings and in newly established branches is different from that observed in mature hyphae, potentially because the two GTPases re-localize at the hyphal tip in a growth-rate-dependent manner, similarly to what has been described for the RHO-1 GAP LRG-1 in N. crassa [26]. Specifically, both CDC-42 and RAC localize in germlings and during the formation of new branches as broad membrane-associated crescents within the apical dome, and switch to a small apical cap and a subapical ring, respectively, once tip extension has reached a certain rate. Cdc42 has already been implicated in branch formation in A. nidulans [14], although convincing data are currently lacking. In N. crassa, CDC-42 and RAC localize to future branch points prior to their emergence, implicating both proteins in the regulation of branch initiation. However, neither protein is essential for branch formation, as both deletion strains are still able to branch. Interestingly, the two GTPases have different kinetics of membrane localization prior to branch emergence, suggesting early and late functions of CDC-42 and RAC, respectively, during branch initiation. Even more pronounced is the difference in localization of the two GTPases during polarity establishment in conidiospores (Figure 12B): while CDC-42 localizes to the cortex prior to germ tube emergence, RAC accumulates there only after polarity is established. These differences correlate with a more pronounced polarity defect of Δcdc-42 compared to Δrac and may suggest a more important role for CDC-42 than RAC during polarity establishment in N. crassa.
CDC-42 and RAC also participate in septum formation, consistent with a function described for Cdc42p in budding and fission yeasts during cell division [11,12]. Likewise, septal localization has also been observed for CflA and CflB in P. marneffei, where loss of the latter leads to inappropriate septation [44,45], for Cdc42 in C. albicans [22] and, rarely, for RacA in A. niger [24]. With the exception of U. maydis Cdc42, which appears to be highly specialized for controlling cell separation of the yeast form of this basidiomycete fungus [47], the specific contributions of Rac and Cdc42 during septum formation in filamentous fungi remain to be elucidated. The increased abundance of septa in the N. crassa rac, cdc-42 and cdc-24 mutants suggests an involvement as negative regulators that may function in an antagonistic relationship with the GTPases RHO-1 and RHO-4, which are positive regulators of septum formation in N. crassa, A. nidulans and C. albicans [26,35,48-51]. While a more detailed analysis of the localization kinetics and activation patterns of the RAC-CDC-42-CDC-24 module in N. crassa is essential for their mechanistic understanding, the identification of shared and unique downstream effector proteins is also required to clarify their common and distinct roles during hyphal morphogenesis. Potential effectors include the PAK family kinases Cla4 and Ste20, which function in actin organization, MAP kinase activation and septin organization, and have been implicated as Rac/Cdc42 targets in various filamentous fungi [43,52]. Other potential effectors are components of the ROS production machinery, which have been implicated in regulating apical dominance of fungal hyphae [53,54] and shown to interact with RAC in A. niger and Epichloë festucae [24,55].
In light of the multitude of morphogenetic factors possibly acting downstream of RAC and CDC-42, much additional work is needed to elucidate the common and individual output pathways, which ultimately determine the pattern of redundancy and specialization observed for the two GTPases in N. crassa and other filamentous fungi.
BatchPrimer3: A high throughput web application for PCR and sequencing primer design
Background: Microsatellite (simple sequence repeat, SSR) and single nucleotide polymorphism (SNP) markers are two important types of genetic markers useful in genetic mapping and genotyping. Large-scale genomic research projects often require high-throughput, computer-assisted primer design. Numerous web-based or stand-alone programs for PCR primer design are available but vary in quality and functionality. In particular, most programs lack batch primer design capability. A high-throughput software tool for designing SSR flanking primers and SNP genotyping primers is increasingly in demand.
Results: A new web primer design program, BatchPrimer3, was developed based on Primer3. BatchPrimer3 adopts the Primer3 core program as its major primer design engine to choose the best primer pairs. A new score-based primer picking module is incorporated into BatchPrimer3 and used to pick position-restricted primers. BatchPrimer3 v1.0 implements several types of primer design, including generic primers, SSR primers together with SSR detection, SNP genotyping primers (including single-base extension primers, allele-specific primers, and tetra-primers for tetra-primer ARMS PCR), as well as DNA sequencing primers. DNA sequences in FASTA format can be batch read into the program. Basic information on the input sequences, useful as a reference for parameter setting in primer design, can be obtained by pre-analysis of the sequences. The input sequences can be pre-processed and masked to exclude and/or include specific regions, or to set targets for different primer design purposes, as in Primer3Web and Primer3Plus. A tab-delimited or Excel-formatted primer output also greatly facilitates the subsequent primer-ordering process. Thousands of primers, including wheat conserved intron-flanking primers, wheat genome-specific SNP genotyping primers, and Brachypodium SSR flanking primers, have been designed with the program in several genome projects and validated in several laboratories.
Conclusion: BatchPrimer3 is a comprehensive web primer design program for developing different types of primers in a high-throughput manner. Additional methods of primer design can be easily integrated into future versions of BatchPrimer3. The program with source code and thousands of PCR and sequencing primers designed for wheat and Brachypodium are accessible at .
Background
Primer design programs are crucial in optimizing the polymerase chain reaction (PCR): a poorly designed primer can result in little or no target product. Numerous web-based or stand-alone programs for PCR primer design are available but vary in quality and functionality [1,2]. Primer3 [3,4] is the most popular non-commercial primer design software because of its capabilities and free accessibility. The Primer3 core program, a command-line program written in C, has great flexibility to optimize a number of parameters, such as product size, melting temperature (Tm), GC content, primer length, 3' end stability, self-complementarity, primer-dimer possibility and position constraints, to obtain the best primer pairs, and it provides the potential to design different types of PCR primers to meet various needs. Because of the complexity of its parameter input, however, it is difficult to use the command-line program directly for primer design. Primer3Web [3] is the first web interface for Primer3, written in Perl. The interface has a powerful but complex HTML form, including all of the possible parameters and options used in the Primer3 core program. Primer3Plus [5] further reorganized and optimized Primer3Web's user interface in light of parameter categories and primer design tasks. Primer3Web provided two types of primer design, generic primers and hybridization oligos; Primer3Plus expanded further to cloning and sequencing primer design, as well as a primer management module to facilitate further primer analysis and ordering. Several other web-based or command-line pipeline programs using Primer3 as a primer design engine have also been developed [6-10]. However, most of those web-based programs lack batch primer design capability. For many large-scale primer design projects, in addition to suitable primer design methods, two additional features are necessary: batch input of DNA sequences and output ready for primer ordering.
Simple sequence repeats (SSRs) and single nucleotide polymorphisms (SNPs) are two important types of genetic markers. Large numbers of SSRs and SNPs have been detected in various species and used in genetics and breeding [11-15]. A number of different SNP genotyping technologies have been developed based on various methods of allelic discrimination and detection platforms (see review [15]). Primer extension is the most commonly used approach to SNP genotyping because it can be used in a wide variety of high-throughput detection platforms, e.g., electrophoresis, fluorescence resonance energy transfer, fluorescence polarization, arrays, mass spectrometry, and luminescence [15]. A primer extension reaction involves two types of primer design: single-base extension primers and allele-specific primers. A software tool for designing SSR flanking primers and SNP genotyping primers in a high-throughput mode is increasingly needed.
On the basis of the Primer3 core program, Primer3Web [3] and Primer3Plus [5], we developed a new web-based application, BatchPrimer3. The aims of BatchPrimer3 development were (1) to implement additional options in primer design, (2) to improve the capability of the program to process large numbers of DNA sequences, and (3) to provide convenient primer outputs for viewing primer details, printing primer lists, editing primers and finally placing primer orders. We extended Primer3Web and Primer3Plus with batch primer design capability, and integrated SSR detection and SSR-flanking primer design with flexible options for SSR search criteria and export of both SSR detection results and the SSR-flanking primer list. In addition, we implemented primer design methods for two basic types of SNP genotyping primers, single-base extension (SBE) primers and allele-specific (AS) primers, as well as tetra-primers for tetra-primer amplification refractory mutation system (ARMS) PCR [16]. DNA sequencing primer design is also re-implemented in this program. The BatchPrimer3 program is easily extendible, and additional primer design methods may be integrated in the future.
Web application design
BatchPrimer3 was designed as a web application consisting of a set of CGI programs written in Perl, which can run on different operating systems, such as Solaris, Linux, Mac OS or Windows, with an Apache HTTP server and a Perl interpreter. The open-source program Primer3Web [3] was adopted as a starting point. An interface similar to that of Primer3Plus [5] was used, with a pull-down combo-box for primer type selection and a text field together with a button for uploading a sequence file (Figure 1). This task-oriented interface [5] with a modular programming design provides the extendibility to integrate new primer design methods into the program. File uploading allows users to input a large number of target sequences for batch primer design and overcomes the sequence size limit of an HTML textarea field. A pre-analysis module for input sequences was added to calculate sequence properties, such as sequence lengths and GC contents, which helps determine parameter ranges for primer design. The parameter setting panels are customized according to the different primer types: when a user chooses a primer type, the corresponding parameter setting panels are presented directly below the sequence input box (Figure 1). An email address text field is also provided to allow a user to receive an email alert with primer design results.
The Primer3 core program [3] is used as the major primer design engine for picking the best pairs of standard PCR primers. An additional primer-picking algorithm was implemented to select position-restricted primers such as SBE primers, AS primers and sequencing primers. The best primers are selected based on the quality scores of candidate primers. The quality score is a weighted linear function of primer length, Tm, GC content, the number of single-base repeats and simple sequence repeats, the number of ambiguity codes (N), and the self-complementarity of the entire primer and of the last 10 nucleotides at the 3' end. The maximum quality score of a candidate primer is 100. If the parameter values of a candidate primer are beyond the user-specified ranges, or if a candidate primer contains simple sequence repeats, the quality score is set to 0. Within the specified parameter range, the closer a calculated primer property is to the user-specified optimum value, the higher the primer quality score. If the highest score is zero, no primer is returned for the specified criteria. The Tm value of a primer varies between Tm calculation models; this often results in different sets of primers being picked even with the same Tm parameter settings [17]. In BatchPrimer3, the Tm of generic primers, hybridization oligos and SSR flanking primers is calculated by the Primer3 core program. An additional Tm calculation module was implemented for SNP genotyping primers and sequencing primers, based on the same nearest-neighbour thermodynamic model [18,19] used in the Primer3 core program (v1.1) [4].
Figure 1. Web interface of the BatchPrimer3 v1.0 application. Various types of primer design can be selected from the primer type pull-down combo-box, and corresponding parameter setting panels are placed below the sequence input box. Pre-analysis of input sequences can be performed before batch primer design.
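As a rough sketch of this kind of score-based picking, the snippet below scores a candidate primer against length, Tm and GC targets, rejecting out-of-range or repeat-containing candidates. The weights, default ranges and the simple Wallace-rule Tm are illustrative assumptions only; BatchPrimer3's actual weighting and its nearest-neighbour Tm model are not reproduced here.

```python
def primer_score(primer,
                 len_range=(18, 27), opt_len=21,
                 tm_range=(50.0, 70.0), opt_tm=60.0,
                 gc_range=(30.0, 70.0), opt_gc=50.0):
    """Return a quality score in [0, 100]; 0 if any property is outside
    the user-specified range or the primer contains a single-base run."""
    n = len(primer)
    gc_count = sum(primer.count(b) for b in "GC")
    gc = 100.0 * gc_count / n
    # Wallace rule as a stand-in for the nearest-neighbour Tm model
    tm = 4.0 * gc_count + 2.0 * sum(primer.count(b) for b in "AT")
    if not (len_range[0] <= n <= len_range[1]):
        return 0.0
    if not (tm_range[0] <= tm <= tm_range[1]):
        return 0.0
    if not (gc_range[0] <= gc <= gc_range[1]):
        return 0.0
    if any(b * 5 in primer for b in "ACGT"):   # single-base repeat run
        return 0.0
    # Weighted linear penalty: the closer each property is to its
    # optimum, the higher the score (weights are assumptions).
    penalty = (2.0 * abs(n - opt_len) +
               1.0 * abs(tm - opt_tm) +
               0.5 * abs(gc - opt_gc))
    return max(0.0, 100.0 - penalty)

def pick_best(candidates):
    """Highest-scoring candidate, or None if the best score is 0."""
    scored = [(primer_score(p), p) for p in candidates]
    best = max(scored)
    return best[1] if best[0] > 0 else None
```

A candidate close to all optima scores near 100, while one that violates any range, or contains a run such as AAAAA, is discarded outright, mirroring the "highest score wins, zero means no primer" behaviour described above.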
Primer design strategy
Besides generic primers and hybridization oligos [3,5], sequencing primer design is re-implemented in batch mode. SSR flanking primer design and several SNP genotyping primer designs are newly implemented in BatchPrimer3 v1.0.
SSR screening and primer design
An SSR is a simple repeat of a short motif, 1 to 6 base pairs in length, with a total SSR length of at least 12 nucleotides [13]. Options for di- to hexa-nucleotide repeat motifs and minimum repeat numbers for each type of motif are provided in the web interface. An SSR detection algorithm adopted from the SSR search program [20] detects the SSR motifs, which are then masked as targets. The Primer3 core program is then used to pick the best pairs of primers that flank the targets. If more than one SSR is detected in the same sequence, a separate pair of SSR primers is designed for each SSR.
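A minimal illustration of this kind of SSR screen (not the SSR search program's actual algorithm) can be written with regular expressions. Motif canonicalization and deduplication of nested motifs (e.g., a long TG repeat also matching as a TGTG repeat), which a real tool would handle, are omitted for brevity.

```python
import re

# Minimum repeat counts per motif length (di- to hexa-nucleotide),
# mirroring the kind of options the web interface exposes.
MIN_REPEATS = {2: 6, 3: 4, 4: 3, 5: 3, 6: 3}
MIN_SSR_LEN = 12  # minimum total SSR length in nucleotides

def find_ssrs(seq):
    """Return (start, motif, repeat_count) for each SSR found."""
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # ([ACGT]{k})\1{n-1,} matches a k-mer repeated at least n times.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            if len(m.group(0)) >= MIN_SSR_LEN:
                hits.append((m.start(), m.group(1), len(m.group(0)) // motif_len))
    return hits
```

For example, `find_ssrs("AAATGTGTGTGTGTGTGCCC")` reports a TG motif repeated 7 times starting at position 3 (and, because nested motifs are not deduplicated here, also a TGTG hit over the same region).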
Design of primers flanking SNPs
In most SNP detection platforms, SNP detection requires prior PCR amplification of the genomic region that flanks the SNP site. BatchPrimer3 v1.0 provides a module to design pairs of primers that flank the SNP site.
Design of primers for SNP genotyping
In BatchPrimer3 v1.0, three types of SNP genotyping primers can be designed: (1) SBE primers, (2) AS primers, and (3) tetra-primers for the tetra-primer ARMS PCR system [16].
SBE primer design
SBE primers are widely used in high-throughput detection technology platforms, such as SNaPshot (Applied Biosystems) and fluorescence polarization detection (FP-TDI) [21,22]. An SBE primer that anneals immediately adjacent to the SNP is extended by one base using a fluorescently labeled ddNTP (Figure 2). For each SNP, two SBE primers can be designed, one for each orientation (forward and reverse). For each orientation, all primer candidates meeting the user-specified primer length range (greater than or equal to the minimum size and less than or equal to the maximum size) are picked; the Tm, GC content and quality score of each candidate are then calculated, and the primer with the highest score is chosen. A pair of SNP-flanking primers and an SBE primer can be designed in the same module.
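The positional constraint on SBE primers can be sketched as follows. The candidate enumeration and the length-only scoring are simplifying assumptions: BatchPrimer3 also scores Tm, GC content and self-complementarity when ranking candidates.

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def sbe_primers(template, snp_pos, min_len=18, max_len=25, opt_len=21):
    """Pick one forward and one reverse SBE primer for the SNP at
    snp_pos (0-based). A forward primer must end on the base
    immediately 5' of the SNP; a reverse primer is the reverse
    complement of the stretch immediately 3' of it."""
    fwd = [template[snp_pos - L:snp_pos]
           for L in range(min_len, max_len + 1) if snp_pos - L >= 0]
    rev = [revcomp(template[snp_pos + 1:snp_pos + 1 + L])
           for L in range(min_len, max_len + 1)
           if snp_pos + 1 + L <= len(template)]
    # Scoring reduced to "closest to the optimal length" for brevity.
    best = lambda cands: (min(cands, key=lambda p: abs(len(p) - opt_len))
                          if cands else None)
    return best(fwd), best(rev)
```

Either primer anneals right up against the SNP, so the single labeled ddNTP added by the polymerase reads out the allele.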
AS primer design
SNPs can be genotyped using AS primers in which the last nucleotide at the 3' end of the primer corresponds to the site of the SNP [23]. In the AS extension reaction, two primers are required, one for each allele of a SNP (Figure 3). AS extension relies on the difference in extension efficiency of DNA polymerase between primers with matched and mismatched 3' ends: DNA polymerase extends a primer only when the 3' end is perfectly complementary to the DNA template. Thus, an AS primer is specific at its 3' end to one of the two alleles of a SNP and specifically amplifies that allele. Genotyping is based on determining which primer produces the amplicon [15]. If a common reverse primer is used in the reaction, the reaction is called allele-specific PCR (AS-PCR) [24-28]. Typically, two forward AS primers are used in AS-PCR with a shared, non-specific reverse primer, and two PCR reactions are needed to detect both alleles of a SNP [25,26,28]. One variant of AS-PCR uses only one AS primer and two SNP-flanking primers in one PCR reaction (the three-primer nested system) [27]. To enhance the specificity of the AS-PCR reaction, an additional mismatch may be deliberately introduced at the third or another position from the 3' end of each AS primer [16,24-26].
Figure 2. Primer design of single base extension (SBE) primers. One SBE primer is positioned with its 3'-end base immediately upstream of the SNP.
Rules for selecting a nucleotide for the mismatch [16,24,29] are summarized in Table 1: "a 'strong' mismatch (G/A or C/T) at the 3'-end of an allele-specific primer will likely need a 'weak' second mismatch (C/A, or G/T) and vice versa, whereas a 'medium' mismatch (A/A, C/C, G/G or T/T) at the 3'-end will likely require a 'medium' second mismatch" [16]. An option is provided in the parameter setting panel for adding the additional mismatch and choosing the position of the second deliberate mismatch (the default is the third position). Two sets of AS primers, in the forward and reverse directions, can be designed in BatchPrimer3. The SNP-flanking primer pair can also be designed together with the AS primers or separately. The same primer selection algorithm is used to choose the AS primers with the highest scores.
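The quoted rule from Table 1 can be encoded directly: classify the 3'-end mismatch and map its strength to the strength required of the second deliberate mismatch. This is an illustrative encoding only; picking the concrete substitution base is left out.

```python
# Mismatch strength classes from the rule quoted above (Table 1).
STRONG = {frozenset("GA"), frozenset("CT")}
WEAK = {frozenset("CA"), frozenset("GT")}

def mismatch_strength(primer_base, template_base):
    """Classify a primer/template base pairing as a mismatch class."""
    pair = frozenset((primer_base, template_base))
    if len(pair) == 1:
        return "medium"            # identical bases: A/A, C/C, G/G, T/T
    if pair in STRONG:
        return "strong"
    if pair in WEAK:
        return "weak"
    return "none"                  # complementary pair: not a mismatch

def second_mismatch_strength(end_strength):
    """Strong 3'-end mismatch -> weak second mismatch and vice versa;
    medium -> medium."""
    return {"strong": "weak", "weak": "strong", "medium": "medium"}[end_strength]
```

So a G/A (strong) mismatch at the 3' end calls for a weak (C/A or G/T) deliberate mismatch at the second position, exactly as the rule states.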
Primer design for tetra-primer ARMS PCR
Ye et al. [16] proposed a simple, effective and economical SNP genotyping method based on AS primers, called tetra-primer ARMS-PCR [30-35]. This procedure adopts principles of the tetra-primer PCR method [36] and the amplification refractory mutation system (ARMS) [24]. Four primers are required: they amplify a larger fragment from template DNA containing the SNP and two smaller fragments representing each of the two AS products. Primers are designed in such a way that the allelic amplicons differ in size and can be resolved by agarose gel electrophoresis.
To enhance the specificity of the reaction, in addition to the first mismatch at the 3' end of AS primers, an extra mismatch is also deliberately introduced at the third position from the 3' end of each of the two inner AS primers (Table 1, Figure 4). From the primer design perspective, two sets of tetra-primers can theoretically be designed for any SNP depending on the AS primer orientation. The schematic diagram of two-set primer design is shown in Figure 4. Although the web program [37] for designing a single set of primers for a SNP is available [16], BatchPrimer3 v1.0 implemented a batch module to easily design two sets of tetra-primers for a SNP.
Program input
Sequences can be input in two ways. Sequences in FASTA format can be copied and pasted into the sequence text box (Figure 1); this approach has a maximum size limit of 256 kb. For a large volume of sequences, a FASTA file can be uploaded to the server, in which case the sequence size limit depends only on Internet speed and server memory. Whether inputting a FASTA file or a single sequence, a header line starting with ">" is mandatory for each sequence; however, empty lines or spaces within sequences are allowed (Figures 1 and 5).
For SNP flanking primer or SNP genotyping primer design, the SNPs or alleles in sequences need to be converted to IUB/IUPAC codes (G/C→S, A/T→W, G/A→R, T/C→Y, G/T→K, A/C→M), and the sequence file follows the NCBI dbSNP FASTA format. If multiple SNPs exist in one sequence, BatchPrimer3 will try to design primers for each. Because only one SNP is taken as the target and the other SNPs are converted to one of their alleles, we suggest generating a separate sequence for each SNP based on a reference sequence.
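For sequences annotated with a bracketed allele notation such as ACG[G/A]TTC (an assumed input style for illustration; dbSNP FASTA files already carry the ambiguity codes), the conversion listed above can be sketched as:

```python
import re

# IUB/IUPAC ambiguity codes for two-allele SNPs, as listed in the text:
# G/C -> S, A/T -> W, G/A -> R, T/C -> Y, G/T -> K, A/C -> M.
IUPAC = {frozenset("GC"): "S", frozenset("AT"): "W", frozenset("GA"): "R",
         frozenset("TC"): "Y", frozenset("GT"): "K", frozenset("AC"): "M"}

def snp_to_iupac(seq):
    """Replace each [X/Y] SNP annotation with its IUPAC ambiguity code."""
    def repl(m):
        return IUPAC[frozenset((m.group(1), m.group(2)))]
    return re.sub(r"\[([ACGT])/([ACGT])\]", repl, seq)
```

Using a frozenset key makes the lookup order-independent, so [G/A] and [A/G] both map to R.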
As in Primer3Web [3] and Primer3Plus [5], for any type of primer design a "{}" pair can be inserted into sequences to specify an included region (for example, to exclude vector sequence fragments on both ends), and "< >" pairs to specify excluded regions. One example is to mask all introns with "< >" so that primers are designed only in exons (Figure 5A). An alternative way to specify excluded regions is to replace the unwanted regions with Ns. The "[]" pair is adopted to specify targets; if multiple targets are set in one sequence, at least one target will be included in the PCR product [3]. Note that target masking can be used only for generic primer design in BatchPrimer3. In addition, multiple targets and excluded regions can be specified in a sequence, but only one included region is allowed.
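A minimal parser for this region markup, assuming well-nested markers, at most one included region, and returning (start, length) intervals in the coordinates of the cleaned sequence, can look like this (a sketch, not BatchPrimer3's actual parsing code):

```python
def parse_markup(marked):
    """Split a marked-up sequence into (clean_seq, included, excluded,
    targets). "{}" -> included region, "< >" -> excluded regions,
    "[]" -> targets; intervals are (start, length) in clean_seq."""
    clean = []
    opens = {}                      # open-marker char -> position in clean_seq
    included, excluded, targets = None, [], []
    pairs = {"}": "{", ">": "<", "]": "["}
    pos = 0
    for ch in marked:
        if ch in "{<[":
            opens[ch] = pos
        elif ch in pairs:
            start = opens.pop(pairs[ch])
            interval = (start, pos - start)
            if ch == "}":
                included = interval
            elif ch == ">":
                excluded.append(interval)
            else:
                targets.append(interval)
        else:
            clean.append(ch)
            pos += 1
    return "".join(clean), included, excluded, targets
```

For example, `parse_markup("AA{CC<GG>TT}AA")` yields the clean sequence AACCGGTTAA with included region (2, 6) and one excluded region (4, 2), the coordinates a primer design engine would consume.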
Program output
The BatchPrimer3 program produces four output components: (1) a main HTML page containing the primer design summary for all input sequences, (2) an HTML
Results
Using the BatchPrimer3 program, we have designed thousands of primers in several genomic research projects, including conserved intron-flanking primer pairs from EST sequences for wheat SNP discovery, SNP genotyping primers for wheat SNP mapping, primer pairs from Brachypodium bacterial artificial chromosome (BAC) end sequences for Brachypodium SNP discovery, sequencing primers from EST sequences for gene-specific sequencing, and SSR flanking primer pairs from Brachypodium EST and BAC end sequences for Brachypodium SSR genotyping.
Most of these primers have been validated in experiments from several laboratories.
Wheat conserved intron-flanking primer design
In the project with a goal to discover SNPs in wheat, deletion-mapped wheat EST contigs [38] were compared with rice genomic sequences using BlastN to detect the splice sites (exon/exon junctions) in ESTs. Intron-flanking primers (called conserved primers) were then designed for PCR amplification and sequencing of introns and the nested portions of exons [39]. A total of 6,045 deletion-mapped contigs and singletons were used to perform BlastN searches. Rice introns were inserted into the ESTs at the predicted positions and replaced with the corresponding number of ambiguity codes (Ns) (Figure 5B). PCR primer pairs anchored in neighbouring exons were designed using the "Generic primer" module in BatchPrimer3. An additional primer analysis was performed to identify candidate primer pairs that span at least one intron in rice. A total of 2,223 conserved primer pairs were generated. They were further filtered to select only those from single-copy genes. A total of 1,821 of these primer pairs were used for PCR amplification and sequencing of the amplicons from 16 DNAs comprising wheat diploid ancestors and tetraploid wheat in seven different laboratories. Of 240 conserved primer pairs used in one of the laboratories, 228 (95%) produced amplicons. All conserved primers were made publicly available and are downloadable from the wheat SNP web site [40].
Figure 3. Primer design of allele-specific (AS) primers. Two AS primers, one for each allele of a SNP, are designed. The AS primers contain one of the two polymorphic nucleotides at the primer 3' end. Two sets of primers, either forward or reverse, can be designed. If a common reverse or forward primer is used in a PCR reaction, the reaction is called allele-specific PCR (AS-PCR). Generally, two PCR reactions are needed for detection of both alleles of a SNP. A variant of AS-PCR is to use only one AS primer and two outer SNP-flanking primers in a single PCR reaction, i.e., the three-primer nested system [27]. A mismatch (represented by *) may be deliberately introduced at the third position from the 3' end of each AS primer to increase allelic specificity (see Table 1). The SNP R (G/A) is illustrated as an example; the other types of SNPs can be handled in the same way.
Figure 4 (caption excerpt): Two outer standard primers are designed in such a way that the amplicons of the two alleles differ in size and can be resolved by agarose gel electrophoresis.
Wheat SNP genotyping primer design
Sequences of amplicons produced with the 1,821 conserved primers were used to design genome-specific primers for PCR amplification of target DNA from a single genome of polyploid wheat and for sequencing of the amplicons in a panel of wheat lines and synthetic wheats. A total of 1,527 loci containing one or more SNPs were discovered [14]. The SNPs provide a large number of potential SNP markers. Using this population of SNPs, SBE primers, AS primers, and tetra-primers were designed (Table 2). Theoretically, two sets of primers for each SNP can be designed according to primer orientation (Figures 2, 3 and 4). Moreover, a gene locus may have several sets of primers for different SNPs or for forward and reverse primers. Table 2 lists the numbers of the three types of genotyping primers designed from the 1,527 gene loci and their genome and chromosome distribution in wheat. Gene loci rather than sets of primers are reported in Table 2. The primers are derived from 1,186, 1,346 and 485 gene loci for the three types of SNP genotyping primers, respectively. The success rates of picking primers based on SNPs were 48.7%, 61.6% and 12.7% for the three types of primers, respectively, whereas the success rates based on gene loci were 77.7%, 88.2% and 31.8%, respectively. Because the tetra-primer design requires similar primer properties in the two outer primers and the two inner AS primers, and differing sizes of the two inner products, fewer tetra-primers were obtained than for the other two primer types.
Figure 5. An example of DNA sequence preprocessing. Any unwanted regions for primer design in sequences can be masked using a pair of "< >" to keep the sequence unchanged (A). Alternatively, the unwanted regions can be replaced with Ns (B). The included region can be specified by one "{}" pair, and only one included region can be masked.
These primers will be a valuable resource for wheat genetics and breeding. All are accessible at [41]. In addition, 450 SBE primers and corresponding SNP-flanking primer pairs were designed for diploid Aegilops tauschii SNPs for their mapping using the SNaPshot assay (Applied Biosystems) (Luo et al., unpublished data). The default parameters for PCR primer design were used, and SBE primers were designed with lengths of 25 to 35 bases, Tm of 50 to 90°C and GC content of 20 to 80%. Success rate of SBE
Figure 6. Screenshot of a primer set in batch primer design. The picture shows the primer design results for sequence ID rs16791736 for tetra-primer ARMS PCR.
Brachypodium standard primer design
In the Brachypodium SNP discovery project, the non-redundant Brachypodium (accession Bd21) BAC end sequences [42,43] were used to design standard PCR primers for DNA amplification in the accessions Bd21 and Bd3-1, in order to find SNPs by comparing the sequences of the two lines [44]. A total of 960 pairs of primers were designed, of which 689 (71.8%) successfully amplified a single product. Approximately one quarter (28.2%) of the primer pairs failed to amplify a product in Bd3-1 while producing an amplicon in Bd21.
Brachypodium SSR detection and SSR-flanking primer design
To develop SSR markers, the 49,134 Brachypodium BAC end sequences [43] were screened for SSRs and the corresponding SSR-flanking primers were designed. Screening was performed for di-, tri-, tetra-, penta-, and hexa-nucleotide repeats. The minimum SSR length was set to 12 base pairs, and the minimum number of SSR motif repeats was 6, 4, 3, 3, and 3, respectively, for di- to hexa-nucleotide repeats. Default parameters for primer design were used: product size of 100 to 300 bases, primer size of 18 to 23 bases with an optimum of 21 bases, Tm of 50 to 70°C with an optimum of 55°C and a maximum difference of 20°C, and primer GC content of 30 to 70%. A total of 10,064 SSRs (1,123 dinucleotide, 3,928 trinucleotide, 3,818 tetranucleotide, 819 pentanucleotide and 376 hexanucleotide) were detected and 8,977 pairs of SSR primers were successfully designed. Genotyping of those SSRs is in progress. The primer list is available at [45].
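As a rough illustration, the screening criteria above (di- to hexa-nucleotide motifs, a minimum SSR length of 12 bp, and minimum repeat counts of 6, 4, 3, 3, and 3) can be expressed as a small regex-based scan. This is a sketch of the screening logic only, not the code used in BatchPrimer3, and the function name `find_ssrs` is our own:

```python
import re

# Minimum repeat counts for di- to hexa-nucleotide motifs (from the text).
MIN_REPEATS = {2: 6, 3: 4, 4: 3, 5: 3, 6: 3}
MIN_SSR_LEN = 12  # minimum total SSR length in base pairs

def _is_primitive(motif):
    """True if the motif is not itself a tandem repeat of a shorter unit."""
    n = len(motif)
    return not any(n % k == 0 and motif[:k] * (n // k) == motif
                   for k in range(1, n))

def find_ssrs(seq):
    """Return (start, motif, repeat_count) for each perfect SSR in seq."""
    seq = seq.upper()
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # (.{n}) captures a candidate motif; \1{m,} demands further copies.
        pattern = re.compile(r"(.{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            motif, total = m.group(1), len(m.group(0))
            if total >= MIN_SSR_LEN and _is_primitive(motif):
                hits.append((m.start(), motif, total // motif_len))
    return hits
```

The primitivity check prevents a dinucleotide run such as (AG)7 from also being reported as a tetranucleotide (AGAG)n repeat.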
Performance of BatchPrimer3 web application
Performance of the BatchPrimer3 program depends on the primer type, the speed of the server on which BatchPrimer3 resides, and the client's Internet speed (affecting sequence data loading to the server), as well as on the efficiency of the BatchPrimer3 and Primer3 core programs. Primer design for generic, sequencing, and SBE primers is faster than for the other primer types, with tetra-primer design taking the most time. For example, the above screening of 49,134 sequences for SSRs and the subsequent primer design took about 526 seconds, whereas tetra-primer design from 5,509 sequences took 432 seconds over an Internet connection.
Parameter setting
To obtain high quality primers, primer length, Tm, GC content, specificity, and intra- or inter-primer homology must be taken into account [2]. Primer specificity is related to primer length and to the final 8 to 10 bases of the 3' end sequence. A primer length of 18 to 30 bases is optimum [1,2]. Tm is closely correlated with primer length, GC content and primer base composition. The ideal primer Tm is in the range of 50 to 65°C, with a GC content in the range of 40 to 60%, for standard primer pairs [1,2,17]. However, the optimal primer length varies for different types of primers. For example, SNP genotyping primers need a longer primer length (25 to 35 bases) to enhance their specificity, and thus the corresponding Tm might be higher than 65°C. A suitable Tm can be obtained by setting a broader GC content range (20 to 80%). A broader GC content range can also increase the success rate of primer picking from sequences with relatively low GC content (AT-rich species or sequences). In BatchPrimer3, the entire-primer complementarity and the 3' complementarity between and within primers are calculated to assess intra- or inter-primer homology for the entire primer or the 10 bases at the 3' end. Generally, the score measuring entire-primer complementarity should be less than or equal to 8, and the score for 3' end complementarity should be less than or equal to 3 [3].
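These constraints are easy to express programmatically. The sketch below checks a candidate primer against the length, Tm, and GC ranges quoted above, using the basic Tm approximation 64.9 + 41 × (GC count − 16.4) / length for primers longer than 13 nt. Primer3 itself uses a nearest-neighbor thermodynamic model, so this illustrates only the filtering logic, not BatchPrimer3's actual Tm calculation; the function names are our own:

```python
def gc_content(seq):
    """GC content as a percentage."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def basic_tm(seq):
    """Rough Tm estimate (basic formula, for primers longer than 13 nt)."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

def check_primer(seq, length=(18, 30), tm=(50.0, 65.0), gc=(40.0, 60.0)):
    """Return the list of violated constraints (empty means the primer passes).
    Default ranges follow the standard-primer guidelines cited in the text."""
    problems = []
    if not length[0] <= len(seq) <= length[1]:
        problems.append("length")
    if not tm[0] <= basic_tm(seq) <= tm[1]:
        problems.append("Tm")
    if not gc[0] <= gc_content(seq) <= gc[1]:
        problems.append("GC")
    return problems
```

For SNP genotyping primers, the ranges would be relaxed as described in the text, e.g. `check_primer(seq, length=(25, 35), gc=(20.0, 80.0))`.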
Batch primer design
The advantage of batch primer design is its high efficiency. However, designing primers in the batch mode can result in a failure to design primers for some sequences because input sequences vary in sequence quality and properties (for example, AT rich or GC rich) and/or because the same set of primer design parameters cannot be applied to all sequences. A utility tool for pre-analysis of input sequences is therefore provided in BatchPrimer3 to help users to understand the basic properties of input sequences, such as sequence length and GC content in an entire set of input sequences. The information can be used to adjust the parameter ranges of product size and primer GC content.
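A minimal version of such a pre-analysis, computing per-sequence length and GC content from a FASTA-formatted input set, might look as follows (an illustrative sketch under our own function names, not BatchPrimer3's actual utility):

```python
def parse_fasta(text):
    """Minimal FASTA parser returning {sequence_id: sequence}."""
    seqs, name = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            name = line[1:].split()[0]
            seqs[name] = ""
        elif line and name is not None:
            seqs[name] += line.upper()
    return seqs

def summarize(seqs):
    """Per-sequence (length, GC%) summary, useful for choosing the
    product-size and primer GC-content parameter ranges."""
    summary = {}
    for name, s in seqs.items():
        gc = s.count("G") + s.count("C")
        summary[name] = (len(s), round(100.0 * gc / len(s), 1))
    return summary
```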
The success rate of picking primers in the batch primer design mode is affected by sequence quality, target polymorphism location (for SNP genotyping primers), and parameter settings. Ambiguity codes (N) in a sequence may result in a failure in picking proper primers, especially for position-specific primer design. SBE and AS primers are picked from the region adjacent to, or including the target polymorphism. If an ambiguity code exists in the region or the T m , GC content or other primer properties cannot meet the parameter settings, primer design will fail. In tetra-primer ARMS PCR, two inner AS products are amplified, which requires that the polymorphism site cannot be too close to an end of a sequence. For SSR primer design, no proper primer is available if an SSR is located at the end of a sequence.
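The position-specific checks described above can be sketched as a simple pre-filter run before primer picking. The flank width and end-distance thresholds below are illustrative placeholders, not BatchPrimer3's actual settings:

```python
def snp_target_ok(seq, snp_pos, flank=20, min_end_dist=50):
    """Pre-check a SNP position before position-specific primer design.

    flank: bases on each side of the SNP that must be free of ambiguity
    codes (N), since SBE/AS primers are picked adjacent to the target.
    min_end_dist: minimum distance from either sequence end, needed for
    example to fit the two inner products of tetra-primer ARMS PCR.
    """
    seq = seq.upper()
    if snp_pos < min_end_dist or len(seq) - snp_pos - 1 < min_end_dist:
        return False, "target too close to a sequence end"
    region = seq[max(0, snp_pos - flank): snp_pos + flank + 1]
    if "N" in region:
        return False, "ambiguity code (N) near target"
    return True, "ok"
```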
The source of the input sequences affects PCR amplification. EST sequences are often used to design SSR primers [11] or other types of primers (such as sequencing primers and conserved primers; see Results). A low amplification rate was reported for EST-derived SSR primers [11], and one of the possible reasons is that one or both primers of the EST-derived SSRs traverse a splice site [11]. Splice-site analysis of EST sequences should therefore be performed to mask splice sites or to insert ambiguity codes (Ns) into EST sequences at the splice sites (Figure 5B). The wheat conserved primer design strategy is a successful example of resolving this problem.
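Once splice-site positions have been predicted (by aligning the EST to a genomic sequence, which is outside this sketch), masking them with Ns so that primer picking avoids them is straightforward; `mask_splice_sites` is a hypothetical helper, not part of BatchPrimer3:

```python
def mask_splice_sites(seq, sites, n_len=1):
    """Replace n_len bases starting at each predicted splice-site position
    with ambiguity codes (Ns), so that primer picking avoids placing a
    primer across these positions (compare Figure 5B)."""
    bases = list(seq)
    for pos in sites:
        for i in range(pos, min(pos + n_len, len(bases))):
            bases[i] = "N"
    return "".join(bases)
```

For example, `mask_splice_sites("ACGTACGT", [4])` returns `"ACGTNCGT"`.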
Conclusion
BatchPrimer3 is a comprehensive, extendible web primer design program to design different types of PCR and sequencing primers. The batch sequence input and convenient tab-delimited primer outputs facilitate rapid primer design for a large number of sequences and primer ordering. Additional primer design methods can be easily integrated into the program in the future. Using this software program, thousands of primers for wheat and Brachypodium SNP discovery, and SNP and SSR genotyping, have been designed and validated. The program with source code and designed primers can be accessed at [41] (also see Additional file 1).
Availability and requirements
Project name: BatchPrimer3.
Operating systems: the software should run on different operating systems, such as Solaris, Linux, Mac OS or Windows. Tests were performed on Solaris and SuSE Linux systems.
Programming language: Perl
Other requirements: Apache HTTP server, Perl interpreter program
Any restrictions to use by non-academics: None
Authors' contributions
FMY developed the major modules of the BatchPrimer3 program, designed wheat conserved primers and SNP genotyping primers, and drafted the manuscript. NH and YQG validated wheat conserved primers and Brachypodium primers. M-CL and YM designed and evaluated wheat SBE primers. DH implemented part of the pre-analysis module of input sequences. GRL helped set up the BatchPrimer3 server. JD and ODA participated in the design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
RNA-Binding Proteins Impacting on Internal Initiation of Translation
RNA-binding proteins (RBPs) are pivotal regulators of all the steps of gene expression. RBPs govern gene regulation at the post-transcriptional level by virtue of their capacity to assemble ribonucleoprotein complexes on certain RNA structural elements, both in normal cells and in response to various environmental stresses. A rapid cellular response to stress conditions is triggered at the step of translation initiation. Two basic mechanisms govern translation initiation in eukaryotic mRNAs: the cap-dependent initiation mechanism, which operates in most mRNAs, and the internal ribosome entry site (IRES)-dependent mechanism, which is activated under conditions that compromise the general translation pathway. IRES elements are cis-acting RNA sequences that recruit the translation machinery using a cap-independent mechanism, often assisted by a subset of translation initiation factors and various RBPs. IRES-dependent initiation appears to use different strategies to recruit the translation machinery, depending on the RNA organization of the region and the network of RBPs interacting with the element. In this review we discuss recent advances in understanding the roles of RBPs in IRES-dependent translation initiation.
Introduction
RNA plays a central role in gene expression. Within the cell, RNA molecules are associated with RNA-binding proteins (RBPs), forming dynamic ribonucleoprotein particles (RNPs) that affect all steps of RNA metabolism [1]. RBPs assemble on nascent and processed mRNAs, governing gene regulation at the post-transcriptional level in health and disease. Indeed, mutations affecting the function of RBPs cause several diseases [2]. RBPs are major components of the cell that control transcription, splicing, catalytic processing, transport, localization, translation and RNA stability (Figure 1). These steps in the RNA lifespan are closely connected to each other, such that altering one of them will affect the others. RBPs often interact with the untranslated regions (UTRs) of mRNAs, which in most cases perform cis-acting regulatory functions and provide the landing sites for many RBPs [3]. RBPs interacting with certain UTR structural elements or specific primary sequences play a pivotal role in the response of the cell to different environmental stresses, such as virus infection, heat or osmotic stress, nutrient deprivation, and other stimuli that trigger apoptosis, the inflammatory response, the antiviral response, etc. (Figure 2) [4][5][6]. On the other hand, processes such as cell proliferation, cell death and cell differentiation occurring in healthy organisms also depend on RNA-protein interactions [7]. In response to distinct stresses, cells trigger a differential response that can displace the equilibrium towards cell survival or cell death. Key factors mediating this response are post-translational modification, relocalization, proteolysis and degradation of RBPs. A paradigmatic example of this response is observed in virus-infected cells [4].
Virus-encoded proteases produced during picornavirus infection induce the proteolysis of a large number of host factors (Table 1), including splicing factors, RNA-processing proteins, RNA helicases and nuclear pore factors [8][9][10][11][12][13][14][15][16][17][18][19][20][21], leading to a redistribution of nuclear proteins to the cell cytoplasm. In addition, proteolytic cleavage of eukaryotic initiation factors (eIFs) [22][23][24][25][26] inhibits protein synthesis and, in general, causes a shut-down of cellular gene expression. Specifically, cleavage of eIF4GI and PABP by picornavirus-encoded proteases induces the shut-off of cap-dependent translation in infected cells. Traditional RBPs consist of modular RNA-binding domains (RBDs), typically RNA recognition motifs (RRM), heterogeneous nuclear ribonucleoprotein K-homology domains (KH), cold-shock domains (CSD), zinc finger domains (ZNF), double-stranded RNA-binding domains (dsRBD), Piwi/Argonaute/Zwille (PAZ) domains, RGG (Arg-Gly-Gly) boxes, DEAD/DEAH boxes, Sm domains and Pumilio/FBF (PUF or Pum-HD) domains [27]. Representative examples of RBPs are the polypyrimidine tract-binding protein (PTB), hnRNP K, the upstream of N-ras protein (Unr), and the zinc finger protein 9 (ZNF9). Among well characterized RBPs are the hnRNPs, a large family of nuclear proteins (hnRNP A1 to hnRNP U) with RNA-binding domains and protein-protein binding motifs [28] that shuttle with the RNA from the nucleus to the cytoplasm and regulate transcription, splicing, RNA turnover and translation. PTB (also termed hnRNP I) is a protein with four RRMs that recognize U/C-rich sequences [29]. HnRNP K, PCBP1 (hnRNP E1) and PCBP2 (hnRNP E2) recognize poly(rC) regions and share the KH RNA-binding domain [30]. Unr is a cold-shock domain RBP that interacts with the poly(A)-binding protein (PABP) [31]. Zinc finger proteins, in contrast, bind both DNA and RNA.
Given the many layers of post-transcriptional control operating in the cell, the number of factors that can be involved in the various steps of RNA function is much larger than anticipated. Indeed, the recent development of methodologies aimed at searching for new RBPs has produced a catalogue of factors with RNA-binding capacity [32], including, among others, enzymes of intermediary metabolism. In the near future, in-depth characterization will indicate whether these factors are mere passengers of RNP complexes or will provide evidence for a functional role of these factors in RNA-dependent pathways.
RNA-Binding Proteins and Translation Control
Most cellular mRNAs initiate translation by a mechanism that depends on recognition of the m7G(5')ppp(5')N structure (termed cap) located at the 5' end of most mRNAs [3]. In this mechanism, the 5' cap structure of the mRNA is recognized by eIF4F, a complex composed of the cap-binding factor eIF4E, the scaffolding protein eIF4G and the RNA helicase eIF4A. The cap-binding capacity of eIF4E is regulated by the phosphorylation level of the eIF4E-binding proteins (eIF4E-BP 1-3). In turn, eIF4G interacts with eIF3 and PABP, the protein interacting with the poly(A) tail of the mRNA. In addition, the middle region of eIF4G, around the first HEAT motif, displays RNA-binding capacity [33]. The multimeric factor eIF3, composed of thirteen subunits (eIF3a-eIF3m), is organized as a five-lobed structure [34]. Several subunits harbor RNA-binding domains (eIF3b and eIF3g) or have been implicated in direct binding to mRNA (eIF3d). In addition, recent analysis of the interaction of eIF3 with a viral RNA has identified a helix-loop-helix (HLH) motif [35] in eIF3a-c as the region responsible for RNA interaction.
On the other hand, the 40S ribosomal subunit with the ternary complex (TC) (consisting of the initiator methionyl-tRNA i and eIF2-GTP) mediates the formation of the 43S complex that is recruited to the mRNA along with eIF1A, eIF1, eIF3, and possibly eIF5. Assembly of a competent 43S complex into mRNA bound to eIF4F is further stabilized by the interaction of eIF4G with PABP, and of eIF4B with eIF4A and PABP. The protein eIF4B harbors an RRM motif and a C-terminal Arginine-rich motif required for RNA-binding [36]. The 43S complex scans the 5' UTR region of the mRNA until the first initiation codon in the proper context is encountered, leading to the formation of the 48S initiation complex. At this step, eIF1 ensures fidelity of initiation codon selection, discriminating against non-AUG and AUG codons located in poor context [37]. Furthermore, scanning of highly structured 5' UTRs depends on the RNA helicase DHX29 [38]. Following eIF1 displacement, eIF5 mediates the hydrolysis of eIF2-bound GTP and eIF5B mediates joining of the 60S subunit yielding the 80S ribosome that gives rise to the start of polypeptide synthesis. For recent reviews on translation initiation see [3,6].
Cap-dependent translation initiation is inhibited under cellular stress conditions, such as viral infection or apoptosis [5,6,39]. Proteolysis of eIF4G and PABP and changes in the phosphorylation levels of eIF4E-binding proteins (eIF4E-BPs) severely compromise cap-dependent translation initiation in picornavirus infected cells [3]. These adverse situations, however, allow translation of some viral mRNAs and a subset of cellular mRNAs that evade translation shut-down, taking advantage of cis-acting regulatory elements known as internal ribosome entry site (IRES) elements. Translation of mRNAs bearing IRES elements, first reported in the genomic RNA of picornaviruses [40,41], is therefore resistant to cap-dependent shut-down.
Soon after the discovery of picornavirus IRES elements, other viral RNAs [42,43] and cellular mRNAs possessing IRES elements were identified through their capacity to remain attached to polysomes under conditions that inhibit cap-dependent translation [44]. Cellular mRNAs bearing IRES elements do contain a cap at their 5' end; although they are translated at low levels, they can switch to an IRES-dependent mechanism when cap-dependent initiation is compromised [45]. In fact, there are examples where translation is increased (vimentin) and others where translation persists despite shut-down of cap-dependent translation (myc or nucleophosmin) [46]. The internal initiation process is assisted by RBPs, which are thought to facilitate the proper folding of the IRES region, allowing recruitment of the translation machinery [47]. The list of mRNAs that can be translated using cap-independent mechanisms is growing constantly [48][49][50][51][52][53][54][55][56][57][58][59]. For other IRES reports, see recent reviews [7,60]. Most IRES elements are located within the 5' UTR of mRNAs. However, examples of IRES elements located within the coding sequence also exist [61][62][63]. In these cases, the polypeptide expressed from the internal initiation codon is a shorter protein with a different function [52,64]. This mode of translation initiation thereby opens new avenues for the control of gene expression.
Although the presence of IRES elements in viral RNAs is well established, the data on some cellular elements have been a matter of debate owing to the lack of appropriate controls to rule out the presence of cryptic promoters or alternatively spliced transcripts [65,66]. Indeed, conserved features of IRES elements in cellular mRNAs remain largely unknown, since they differ in nucleotide sequence, RNA secondary structure and IRES trans-acting factor (ITAF) requirements [67][68][69]. Moreover, the lack of conserved features hampers the prediction of novel IRES elements using computational methods.
According to the minimal requirement of factors for internal initiation, viral IRES elements can be grouped into two categories. The first category, represented by the intergenic region (IGR) of dicistroviruses, includes those that do not need proteins to assemble the initiation complex. This unique class of IRES elements adopts a compact tertiary structure that functionally substitutes the initiator met-tRNA i during internal initiation, driving protein synthesis without the help of eIFs at non-AUG codons [43, 70,71]. The second category includes those elements that do need eIFs and RBPs to recruit the ribosome, such as picornavirus or cellular IRES elements [72][73][74]. In addition, distinct groups can be made within the second category, depending on the RNA structural organization and the proteins required for activity (see below).
RNA-Binding Proteins Modulating Viral IRES Activity
RNA structure plays a fundamental role in viral IRES-dependent translation initiation [75][76][77]. Structural and functional studies have shown that RNA structure of viral IRES elements is organized in phylogenetically conserved modules [42, [78][79][80], suggesting a distribution of functions among the different domains of an IRES element [81,82]. Evidence for links between RNA structure and biological function is also supported by the conservation of structural motifs within IRES elements of highly variable genomes [83][84][85][86]. For the sake of brevity, we will discuss in this review the IRES elements located in the genome of picornavirus and hepacivirus.
RNA-Binding Proteins Modulating Picornavirus IRES Activity
Picornavirus IRES elements span from about 280 to 450 nucleotides upstream of the functional start codon [87]. Based on RNA secondary structure organization and eIF requirements, picornavirus IRES elements are grouped into four types, named I to IV [39]. Type I IRESs [present in poliovirus (PV) and human rhinovirus (HRV)] and type II IRESs [present in encephalomyocarditis virus (EMCV) and foot-and-mouth disease virus (FMDV)] require the C-terminal region of eIF4G in addition to eIF4A, eIF2, and eIF3, but not eIF4E [88][89][90][91]. Furthermore, a differential requirement for eIF1 and eIF1A is needed to initiate at the second functional AUG of the FMDV IRES [92], which is, however, the most frequently used initiation codon on the viral RNA [93]. Translation initiation driven by the type III IRES (present in hepatitis A virus) does not require full-length eIF4G [94]. On the other hand, type IV IRESs (also termed HCV-like due to their similarity to hepatitis C virus (HCV) and pestivirus IRES elements [95,96]) depend on eIF2 and eIF3, but are eIF4G-independent. Interestingly, the addition of specialized RBPs such as PTB or ITAF 45 (also termed Ebp1) strongly stimulates complex formation in reconstitution assays [92,97,98].
The observation that ITAFs are RBPs previously identified as transcription regulators, splicing factors, or RNA transport, RNA stability or translation control proteins (Table 2) suggests a complex network of interactions among gene expression pathways. PTB was the first RBP identified as an ITAF, using UV-crosslinking assays conducted with radiolabelled IRES transcripts [99,100]. PTB is expressed in the cell as several isoforms. Interestingly, the expression pattern of the neural form of PTB was proposed to mediate the cell-type IRES specificity of a neurotropic virus [115]. Many picornavirus IRES elements have two polypyrimidine tracts located at each end of the IRES region [39,116,117]. It appears that a single PTB molecule binds to the IRES, with RRM1-2 contacting the 3' end and RRM3 the 5' end of the IRES, constraining the RNA structure in a unique orientation [118]. However, subtle differences exist among IRESs located in viral genomes belonging to the same virus family. Like other picornavirus IRES elements, the Aichi virus (AV) IRES is enhanced by PTB, but this particular element also depends on the RNA helicase DHX29, owing to the sequestration of its initiation codon in a stable hairpin [97]. This example illustrates the different strategies that distinct viral IRESs can use to capture the ribosomal subunits.
Nonetheless, the large amount of data obtained over the years on the mechanism of action of picornavirus IRES reveals that these regulatory elements are more complex than initially proposed. Proteomic studies based on mass spectrometry analysis of affinity purified RNPs assembled on tagged RNAs with cytoplasmic cell extracts have allowed the identification of RBPs interacting with several picornavirus IRES elements (Table 2). In support of the reliability of these approaches, proteins reported to interact with IRES elements by biochemical methods (for example, eIF4B and eIF3) were identified exclusively bound to the specific domains that contain their recognition motifs in both FMDV and the HCV IRES [102,119]. Various hnRNPs and RNA helicases have been reported to bind to picornavirus IRES elements. Proteins associated with viral IRES elements include the poly-r(C) binding protein (PCBP2), the SR splicing factor (SRp20) and the far upstream element binding protein 2 (FBP2). PCBP2 stimulates the IRES activity of PV, HRV and coxsackievirus B3 (CBV3) [104]; SRp20 up-regulates PV IRES-mediated translation via its interaction with PCBP2 following its relocalization to the cytoplasm in infected cells [105]; the nuclear protein FBP2 is a KH protein that shuttles to the cytoplasm in infected cells, negatively regulating enterovirus 71 (EV71) IRES activity [106].
Other examples concern proteins previously reported to perform a role distinct than translation control. A protein recently revealed as a factor regulating translation is Gemin5 [107], an abundant protein predominantly located in the cell cytoplasm. Gemin5 was reported to be the RNA-binding factor of the survival of motor neurons (SMN) complex that is responsible for the assembly of the seven member (Sm) proteins on snRNAs [120], the principal components of the splicing machinery. In addition, Gemin5 binds directly to the FMDV IRES element through its C-terminal region partially competing out PTB binding [121], a result that explains its negative effect on internal initiation of translation. On the other hand, recent studies have shown that PV IRES recruits glycyl-tRNA synthetase (GARS) taking advantage of the tRNA(Gly) anticodon stem-loop mimicry to the apical part of a conserved stem-loop adjacent to the binding site of eIF4G, enhancing IRES function at the step of the 48S initiation complex formation [108].
Given their modular organization, RBPs can recognize a large number of targets. This feature raises the possibility that binding of a particular RBP to certain RNAs could facilitate different sorts of regulation depending on the other partners in the complex. One example is Ebp1 (erbB-3-binding protein 1), identified by proteomic analysis bound to domain 3 of the FMDV IRES [102]. Ebp1 cooperates with PTB to stimulate FMDV IRES activity [92,98,117], but not EMCV IRES activity [103]. Protein-protein bridges could contribute to stimulating picornavirus IRES-dependent translation, as in the case of hnRNPs, helicases and Unr [109]. Conversely, heterodimers such as that of NF45 (nuclear factor of activated T cells) with the double-stranded RNA-binding protein 76 (DRBP76, also termed NF90/NFAR-1), DRBP76:NF45, repress HRV translation in neuronal cells [110]. In other cases, protein-protein association during mRNA transport, as for hnRNP U, hnRNP A/B, YB-1, or PTB [102,111,122], can explain the identification by mass spectrometry of cytoskeleton proteins among the factors associated with viral IRES elements.
RNA-Binding Proteins Modulating Hepatitis C IRES Activity
The IRES element of HCV genome differs from picornavirus IRES elements in RNA structure and eIF4G requirement [39,123]. The HCV IRES is organized in three structural domains (designated II, III and IV) [124], although destabilization of domain IV is not detrimental to IRES function [42]. In the absence of eIFs, domain III can form a high-affinity complex with the 40S ribosomal subunit [125]. However, eIF3 and eIF2 are necessary for 48S initiation complex formation in vitro [72] despite the fact that HCV IRES activity has been shown to be partially resistant to eIF2 inactivation [126]. Recent cryoEM studies have contributed to the understanding of the interaction of eIF3 with the HCV IRES [35]. Mutations in the RNA-binding motif of eIF3a weaken eIF3 binding to the HCV IRES and the 40S ribosomal subunit, suppressing eIF2-dependent recognition of the start codon. Mutations in the eIF3c RNA-binding motif also reduce 40S ribosomal subunit binding to eIF3 and inhibit eIF5B-dependent steps downstream of start codon recognition.
Both picornavirus and HCV IRES-dependent translation are synergistically enhanced by the 3' UTR of the viral genome [127][128][129], consistent with a functional link between the 5' and 3' ends of the viral RNA. In picornavirus RNAs, the 3' UTR is composed of two stem-loops and a short poly(A) tail that are required for virus multiplication. In contrast, the HCV viral RNA possesses a poly(U) tract and a complex RNA structure located near the 3' end. Bridging 5' and 3' ends of viral RNAs involves direct RNA-RNA contacts and RNA-protein interactions [130,131]. Accordingly, riboproteomic procedures on RNAs with two distant cis-acting regions identified factors mediating the formation of functional bridges between mRNA regions. This was the case of RNPs assembled on tagged RNAs that contained both the IRES and the 3' UTR of HCV [111]. One of the identified proteins, the insulin-like growth factor II mRNA-binding protein 1 (IGF2BP1), coimmunoprecipitates with eIF3 and the 40S ribosomal subunit, suggesting that it enhances HCV IRES-dependent translation by recruiting the ribosomal subunits to a pseudo-circularized RNA. Recent studies have proposed that 3' UTR interaction with the ribosomal subunit retains ribosome complexes during translation termination, facilitating efficient initiation of subsequent rounds of translation [132].
RNA-Binding Proteins Controlling Cellular IRES Activity
Cellular IRES elements are typically present in mRNAs encoding stress response proteins, such as those needed during nutrient deprivation, temperature shock, hibernation, hypoxia, cell cycle arrest, or apoptosis [5,7]. Hence, they have evolved mechanisms to evade global repression of translation. The study of some cellular IRES elements has shown that ITAFs facilitate the binding of the mRNA to ribosomal 40S subunits, in conjunction with eIF2, eIF3, eIF4F, and PABP [133][134][135][136]. Accordingly, changes in ITAFs abundance, post-translational modifications or subcellular localization, modulate internal initiation of translation during different stress conditions [137].
Multifunctional RBPs interact with various IRES elements, suggesting a mechanism for the coordinated regulation of translation initiation of a subset of mRNAs bound by shuttling proteins, such as hnRNPs and splicing factors (Table 3). Nuclear proteins found in complexes associated with various gene expression steps involving RNA molecules (PTB, the splicing-factor related protein proline and glutamine-rich SFPQ/PSF, the non-POU domain-containing octamer binding nuclear RNA binding protein (nonO/p54nrb), PCBP2, or HuR) control the expression of lymphoid enhancer binding factor (LEF1), c-myc, CDK11, or p53 [136,138-141]. However, there are other cases where specific RBPs (such as mouse hnRNP Q and FMRP) are reported to control a reduced number of IRES elements [142,143]. This differential regulation suggests that specialized RBPs might exert their function in translation control by binding to the IRES region of specific cellular mRNAs during splicing complex assembly before nuclear export. It remains to be determined whether this is the result of a specialized activity or if it reflects the lack of sufficiently detailed studies. Below, we discuss the capacity of various RBPs to control cellular IRES elements driving the expression of proteins grouped according to their function in the cell.
ITAFs Controlling the Expression of Cell Proliferation Proteins
PTB stimulates translation driven by IRES elements located in mRNAs encoding proteins of the myc family controlling cell growth, the tumor suppressor gene p53 as well as other factors involved in apoptosis and nutrient deprivation [135,136,144]. A complex formed by Annexin A2, PSF and PTB binds and stimulates p53 IRES in the presence of calcium ions [139]. The unr, c-myc, CDK11, and serine/threonine-protein kinase PITSLREp58 IRES elements are activated during mitosis [140,146], a cell cycle phase where cap-dependent translation is compromised. Protein-protein interaction and/or coordinated RNA-proteins complex assembly influence internal initiation, as shown in the case of IRES activity of c-myc and PITSLRE mRNAs, whose function depends on the Unr-partners, hnRNP K, PCBP1-2, or hnRNP C1-2, respectively [141,147]. On the other hand, stress-dependent modifications or relocalization of hnRNP A1 mediates internal initiation of c-myc, unr, cyclin D1, or sterol-regulatory-element-binding protein 1 (SREBP-1a) mRNAs [148,149].
Translation of specific mRNAs in cells with quiescent v-akt murine thymoma viral oncogene homolog 1 (AKT) kinase maintains the levels of proteins involved in cell cycle progression when eIF4E-mediated (cap-dependent) translation is inhibited. This pathway is dependent on SAPK2/p38-mediated activation of IRES-dependent initiation of the cyclin D1 and c-myc mRNAs [152]. Inhibition of SAPK2/p38 in glioblastoma multiforme cells reduces rapamycin-induced IRES-mediated translation initiation of cyclin D1 and c-myc, resulting in G1 arrest and inhibition of tumor growth.
ITAFs Controlling Translation of Pro-apoptotic and Pro-survival mRNAs
IRES elements located in mRNAs encoding proteins synthesized during apoptosis, such as the apoptotic protease activating factor 1 (Apaf-1) and BCL2-associated athanogene (BAG-1), are also responsive to PTB [145]. In particular, the IRES activity of Apaf-1 mRNA is regulated via PTB and Unr [74]. However, during apoptosis the Apaf-1 IRES is activated while that of the X-linked inhibitor of apoptosis protein (XIAP) is inhibited [161]. It has been reported that relocalization of hnRNP A1 mediates internal initiation of Apaf-1 and XIAP [150,151]. Other proteins, such as DAP5 and HuR, exert a stimulatory role on apoptotic mRNAs [153,154].
With the exception of pyrimidine tracts, no distinctive RNA motifs that can be used to predict the binding of RBPs are apparent in cellular IRES elements. Yet, cellular IRES with high AU content, such as XIAP, depend on NF45 [157], since cells deficient in NF45 exhibit reduced IRES-mediated translation of XIAP and cellular inhibitor of apoptosis protein 1 (cIAP1) mRNAs that, in turn, leads to dysregulated expression of survivin and cyclin E.
Although most ITAFs stimulate translation, a few cases where RBPs repress translation have also been reported. IRES-dependent translation of anti-apoptotic proteins XIAP and Bcl-X is repressed by the tumor suppressor programmed cell death 4 (PDCD4), a factor that sequesters eIF4A and, thus, inhibits formation of the 48S translation initiation complex [162]. Phosphorylation of PDCD4 by activated S6K2 leads to the degradation of PDCD4, stimulating XIAP and Bcl-x(L) translation.
ITAFs Controlling Response to Nutrient Starvation, ER Stress, or Growth Factors
As in the case of IRES elements controlling the expression of cell proliferation proteins, stress-dependent relocalization of hnRNP A1 also mediates internal initiation of vascular endothelial growth factor (VEGF) and fibroblast growth factor 2 (FGF-2) mRNAs [151]. However, a negative effect of DKC1 on the VEGF IRES has recently been reported [163], likely due to a reduction in ribosome availability. HuR was found to negatively affect translation of the IGF1 receptor and the thrombomodulin (TM) endothelial cell receptor [159] through as yet unknown mechanisms. A different case of a repressor ITAF is the DEAD-box RNA helicase 6 (DDX6) [160]. DDX6 inhibits VEGF IRES-mediated translation in normoxic MCF-7 extracts. However, under hypoxia the level of DDX6 declines and its interaction with VEGF mRNA is diminished in vivo. In addition, DDX6 knockdown cells show increased secretion of VEGF.
The death-associated protein (DAP5), NF45, G-rich RNA sequence binding factor (GRSF-1), fragile-X mental retardation protein (FMRP), heterogeneous nuclear ribonucleoprotein D-like protein (JKTBP1) and zinc-finger protein ZNF9 are specific for some IRES elements such as DAP5/p97, cIAP1, ODC [155,156,158,164]. Moreover, circadian expression of mouse PERIOD1 (mPER1) is regulated by rhythmic translational control of mPer1 mRNA, together with transcriptional modulation, via an IRES element along with the mouse trans-acting factor mhnRNP Q [143]. The rate of IRES-mediated translation exhibits phase-dependent characteristics through rhythmic interactions between mPer1 mRNA and mhnRNP Q.
In summary, the data available so far indicate that individual mRNAs exploit the activity of multifunctional RBPs to overcome the global repression of protein synthesis.
Concluding Remarks and Perspectives
ITAFs are RBPs that activate or repress the expression of proteins critical in the cellular response to growth, nutritional, environmental and proliferation signals. It has been proposed that the main role of RBPs is to remodel the mRNA structure in a way that enhances its affinity for components of the translation machinery, or that they substitute for some canonical eIFs, providing functional bridges between the mRNA and the ribosomal subunits. Elucidating the function of ITAFs demands a deep understanding of their RNA targets and their protein partners, including their potential modifications. Notably, a subset of IRES elements exhibiting different structural organization interacts with the ribosomal protein RpS25 [165], consistent with the observation that two of these IRES elements (HCV and the dicistrovirus IGR) induce similar conformational changes in the 40S ribosomal subunit despite their different RNA structural organization [166]. Indeed, the IRES property of interacting with the ribosomal RNA [167,168] is an attractive idea that is also consistent with the finding that IRES activity is sensitive to changes in ribosome composition [169-171]. In the near future, the characterization at the molecular level of more IRES elements will shed new light on the strategies used to recruit the translation machinery. In turn, this will open new avenues to predict novel regulatory elements using this specialized mechanism of translation initiation.
Production prediction based on ASGA-XGBoost in shale gas reservoir
The advancement of horizontal drilling and hydraulic fracturing technologies has led to an increased significance of shale gas as a vital energy source. In the realm of oilfield development decisions, production forecast analysis stands as an essential aspect. Despite numerical simulation being a prevalent method for production prediction, its time-consuming nature is ill-suited for expeditious decision-making in oilfield development. Consequently, we present a data-driven model, ASGA-XGBoost, designed for rapid and precise forecasting of shale gas production from horizontally fractured wells. The central premise of ASGA-XGBoost entails the implementation of ASGA to optimize the hyperparameters of the XGBoost model, thereby enhancing its prediction performance. To assess the feasibility of the ASGA-XGBoost model, we employed a dataset comprising 250 samples, acquired by simulating shale gas multistage fractured horizontal well development through the use of CMG commercial numerical simulation software. Furthermore, XGBoost, GA-XGBoost, and ASGA-XGBoost models were trained using the data from the training set and employed to predict the 30-day cumulative gas production utilizing the data from the testing set. The outcomes demonstrate that the ASGA-XGBoost model yields the lowest mean absolute error and offers optimal performance in predicting the 30-day cumulative gas production. Additionally, the mean absolute error of the unoptimized XGBoost model is markedly greater than that of the optimized XGBoost model, indicating that the latter, refined through the application of intelligent optimization algorithms, exhibits superior performance. The insights gleaned from this investigation have the potential to inform the development of strategic plans for shale gas oilfields, ultimately promoting the cost-effective exploitation of this energy resource.
Introduction
Natural gas has emerged as a critical component of the global energy framework due to its high fuel value and low pollution (Barnes and Bosworth, 2015; Najibi et al., 2009; Reymond, 2007). Nevertheless, the world's burgeoning economy has rendered conventional gas reserves insufficient to satisfy societal demands (Abdul et al., 2021; Bhuiyan et al., 2022; Li et al., 2021). Consequently, an increasing number of studies are focusing on the exploration and production of unconventional gas resources (Song et al., 2017; Umbach, 2013; Wang and Lin, 2014). Shale gas, an unconventional gas with substantial global reserves (Guo et al., 2017), was once deemed unfeasible to extract due to its adsorption properties and super-low permeability and porosity (Zhang et al., 2015). Nevertheless, advancements in horizontal well fracturing technology have rendered shale gas extraction achievable (Liang et al., 2012; Lin et al., 2018; Wang et al., 2016). The horizontal wellbore facilitates the positioning of hydraulic fractures in various orientations, significantly enhancing permeability in the wellbore's vicinity and, consequently, bolstering gas production.
Predicting production is a critical aspect of oilfield development decision-making. There exist two primary types of production prediction models: those driven by physical principles (Hu et al., 2016) and those driven by data (Liu et al., 2014). Physics-driven models can be further divided into analytical models (Cossio et al., 2013) and numerical simulation models (Paul et al., 2019). Analytical models aim to derive solutions based on seepage theory, as demonstrated by Lin et al. (2022), who developed a productivity prediction model for shale gas fractured horizontal wells, taking into account factors such as the complexity of the fracture network, stress sensitivity effects, adsorption, and desorption. While analytical models offer rapid computation, their accuracy in predicting production from horizontal wells under complex conditions is limited due to the numerous assumptions involved. In contrast, numerical simulation models have the capacity to simulate intricate seepage situations and estimate production based on a wide range of reservoir data, encompassing geological and drilling data. Theoretically, the accuracy of these models improves as more comprehensive data are considered. Yuan et al. (2018) established a shale gas discrete fracture network model based on an unstructured vertical bisection grid to predict the production of shale gas fractured horizontal wells. While numerical simulation models often outperform their analytical counterparts in production prediction, their computational demand can be overwhelming for practical implementation.
In recent years, machine learning has been widely applied in the energy field, for example in investment analysis of green energy projects (Hasan et al., 2022), levelized cost analysis (Li et al., 2022), analysis of financial development and open innovation (Alexey et al., 2023), and oil production prediction (Cheng and Yang, 2021). Data-driven models, built with machine learning algorithms, can learn the functional relationship between production and its influencing parameters through a training process based on available data (Mirzaei-Paiaman and Salavati, 2012). In general, data-driven models can serve as substitutes for physics-driven models. Compared with physics-driven models, data-driven models have no presumed functional relationship; the functional relationships are obtained from the training data (Kulgaa et al., 2017). Chen et al. (2022) established a productivity prediction model for shale gas horizontal wells using the long short-term memory (LSTM) network. Wang et al. (2022) predicted the production of a shale gas horizontal well by building a deep learning network. Data-driven models offer high prediction accuracy and fast calculation speed, although ensuring optimal hyperparameters for the utilized machine learning algorithm is crucial to their accuracy.
Intelligent optimization algorithms, such as genetic algorithms (GAs) (Irani et al., 2011) and particle swarm optimization (PSO) (Nasimi et al., 2012), offer feasible approaches to hyperparameter optimization. Irani et al. (2011) established a neural network model coupled with a GA to predict permeability. David et al. (2022) predicted the optimal rate of penetration with a multilayer perceptron model optimized by a GA. Li et al. (2022) proposed a PSO-CNN-LSTM model to solve time-series prediction problems. A large body of research clearly indicates that intelligent optimization algorithms can drastically improve the performance of data-driven models in production prediction. Nonetheless, these algorithms vary in their adaptability to specific data-driven models, requiring further research to explore their suitability for different scenarios.
In this study, a prediction model named ASGA-XGBoost is proposed to predict the 30-day cumulative gas production of shale gas horizontally fractured wells. In this approach, ASGA, an improved GA, is used to search for the optimal hyperparameter combination of the XGBoost model. Compared with the GA-XGBoost model, ASGA-XGBoost shows better performance in predicting the 30-day cumulative gas production of the shale gas horizontally fractured well. The prediction results can support the formulation of shale gas oilfield development plans, enabling the economic and effective development of shale gas.
XGBoost algorithm
Extreme gradient boosting (XGBoost) is a prominent ensemble algorithm that integrates a vast array of weak learners to generate a strong learner (Chen and Guestrin, 2016). Typically, XGBoost relies on the classification and regression tree (CART) as its fundamental learner, which is well suited to solving classification and regression problems. Owing to its superior performance, XGBoost has been extensively implemented in the petroleum industry, including sweet spot searching (Tang et al., 2021), dynamometer-card classification (Chris, 2020), and water absorption prediction (Liu et al., 2020).
XGBoost is renowned for its exceptional computation speed and remarkable prediction accuracy. As a supervised machine learning method, XGBoost leverages the input and output parameters of the dataset for modeling purposes. The algorithm functions by incrementally integrating multiple base learners to consistently reduce the residual. As illustrated in Figure 1, each base learner is iteratively added to the ensemble, and the prediction after iteration t is

ŷ_i^(t) = ŷ_i^(t-1) + f_t(x_i),

where ŷ_i^(t) is the predicted value of the ith sample after iteration t and f_t(x) denotes the calculated value of the tth base learner. The objective function is

Obj^(t) = Σ_i l(y_i, ŷ_i^(t)) + Σ_k Ω(f_k),

where y_i denotes the actual value and l represents the loss function. Ω indicates the regularization term, which adjusts model complexity and reduces overfitting. XGBoost has ten hyperparameters: booster, n_estimators, max_depth, min_child_weight, eta, gamma, subsample, colsample_bytree, reg_alpha, and reg_lambda. booster determines the type of base learner, usually a decision tree, and n_estimators is the number of base learners. For a tree booster, max_depth denotes the maximum tree depth and min_child_weight is the minimum sum of leaf-node sample weights. eta represents the learning rate, which can be decreased to reduce overfitting. gamma indicates the minimum loss-function drop required for node splitting. subsample sets the proportion of randomly sampled training instances for each base learner, and colsample_bytree is the subsample ratio of columns when constructing each tree. reg_alpha represents the L1 regularization term and reg_lambda the L2 regularization term. These ten hyperparameters are key determinants of the prediction accuracy of the XGBoost model; hence, obtaining optimal hyperparameters is crucial for its optimal functioning.
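The additive scheme above, in which each new tree fits the residual of the current ensemble, can be illustrated with a minimal self-contained sketch. It uses one-dimensional regression stumps in place of full CART learners and a toy dataset, so it demonstrates the boosting idea rather than the XGBoost implementation itself:

```python
# Minimal residual-boosting sketch (illustrative, not XGBoost itself).
# Each round fits a depth-1 regression stump to the current residuals
# and adds eta * stump(x) to the ensemble prediction.

def fit_stump(x, r):
    """Best single-split stump minimizing squared error on residuals r."""
    best = None
    for thr in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= thr]
        right = [ri for xi, ri in zip(x, r) if xi > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - lm) ** 2 for ri in left) + sum((ri - rm) ** 2 for ri in right)
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda xi: lm if xi <= thr else rm

def boost(x, y, n_estimators=20, eta=0.3):
    pred = [0.0] * len(x)          # y_hat^(0)
    for _ in range(n_estimators):  # y_hat^(t) = y_hat^(t-1) + eta * f_t(x)
        resid = [yi - pi for yi, pi in zip(y, pred)]
        f = fit_stump(x, resid)
        pred = [pi + eta * f(xi) for pi, xi in zip(pred, x)]
    return pred

# Toy data: production-like values that grow roughly linearly with x.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.2, 1.9, 3.1, 3.9, 5.2, 5.8, 7.1, 8.0]
pred = boost(x, y)
mae = sum(abs(yi - pi) for yi, pi in zip(y, pred)) / len(y)
```

After 20 boosting rounds, the training error of this toy model is far below that of a constant (mean) predictor, which is exactly the residual-reduction behaviour the figure describes.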
ASGA-XGBoost model
In this study, a modified adaptive GA based on the Spearman correlative coefficient (ASGA) is proposed to obtain the optimal hyperparameters. GA, first proposed by John Holland, is a method for searching for the optimal solution by simulating the natural evolution process (John, 1992). Over the past few decades, GA has found extensive application in many optimization problems (Karen, 2005; Wathiq and Maytham, 2011; Souza et al., 2018). However, GA frequently requires numerous iterations to arrive at the best solution, leading to prolonged optimization times. In other words, GA optimization accuracy is typically difficult to sustain within a limited number of iterations. Zhou and Ran (2023) proposed a modified GA based on the Spearman correlative coefficient (SGA) to improve optimization speed and accuracy. Compared to GA, SGA modifies how the crossover and mutation rates are determined. Generally, in GA, each gene has the same crossover and mutation rates, which can prolong the search for the optimal solution. The purpose of SGA is to approach the optimal solution quickly by adjusting the crossover and mutation rates of the genes.
However, SGA necessitates a dataset containing the optimized hyperparameters and the optimization objective in order to determine the Spearman correlation coefficients between parameters and objectives, and subsequently the mutation and crossover rates. In this study, the ten hyperparameters of the XGBoost model constitute the optimized parameters, and the validation error of the XGBoost model is the optimization objective. Nonetheless, the absence of datasets containing hyperparameters and validation errors precludes the direct application of SGA to optimize XGBoost's hyperparameters. To address this, a modified SGA, referred to as ASGA, was introduced. ASGA integrates a dataset creation process to calculate the Spearman correlation coefficients. Specifically, the new individuals and their corresponding validation errors in each iteration are added to the dataset to increase the number of data samples. Accordingly, each iteration requires recalculating the Spearman correlation coefficients, and the crossover and mutation rates differ at each step. The incorporation of ASGA helps XGBoost models identify optimal hyperparameters and enhance performance in production prediction. As depicted in Figure 2, the workflow of ASGA-XGBoost is as follows: Step 1-Population Initialization. The population consists of a certain number of individuals, also called chromosomes, and each chromosome is composed of genes. In this study, each gene denotes a hyperparameter of the XGBoost model. In this step, n individuals are obtained by randomly generating the hyperparameters within their limits.
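Step 1 can be sketched in a few lines. The search bounds below are placeholders (the paper's actual ranges are not reproduced here), and the categorical booster gene is omitted, with the tree booster assumed:

```python
import random

# Illustrative search bounds (assumed for this sketch; the paper's actual
# ranges are not specified here). Each individual is a dict of genes, one
# per XGBoost hyperparameter; the categorical 'booster' gene is fixed to
# the tree booster and therefore omitted.
BOUNDS = {
    "n_estimators":     (50, 500),   # integer
    "max_depth":        (2, 10),     # integer
    "min_child_weight": (1, 10),     # integer
    "eta":              (0.01, 0.3),
    "gamma":            (0.0, 1.0),
    "subsample":        (0.5, 1.0),
    "colsample_bytree": (0.5, 1.0),
    "reg_alpha":        (0.0, 1.0),
    "reg_lambda":       (0.0, 2.0),
}
INT_GENES = {"n_estimators", "max_depth", "min_child_weight"}

def random_individual(rng):
    """Draw each gene uniformly within its bounds."""
    ind = {}
    for gene, (lo, hi) in BOUNDS.items():
        ind[gene] = rng.randint(lo, hi) if gene in INT_GENES else rng.uniform(lo, hi)
    return ind

def init_population(n, seed=0):
    rng = random.Random(seed)
    return [random_individual(rng) for _ in range(n)]

pop = init_population(20)
```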
Step 2-Fitness Calculation. Fitness serves as a crucial criterion for selecting desirable individuals for breeding in the subsequent generation. In this study, the fitness is calculated by the validation error of the XGBoost model, where smaller validation errors indicate higher fitness.
Step 3-Selection Operation. The purpose of the selection operation is to pass the excellent individuals on to the next generation. Roulette wheel selection is the most common way to select individuals from the population as parents. In this process, each individual in the population has a probability of being selected, which is associated with its fitness. In general, individuals with higher fitness have a greater probability of selection. Moreover, the roulette wheel selection is repeated n times to obtain n pairs of individuals as parents, which are used to produce the next generation via the crossover and mutation operations.
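Roulette wheel selection as described above can be sketched as follows. The fitness values are illustrative, computed as the reciprocal of a validation error as in Step 2; this is a minimal sketch, not the paper's implementation:

```python
import random

def roulette_select(fitnesses, rng):
    """Pick one index with probability proportional to its fitness."""
    total = sum(fitnesses)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1  # numerical safety net

def select_parents(fitnesses, n, rng):
    """Draw n pairs of parents for the crossover/mutation steps."""
    return [(roulette_select(fitnesses, rng), roulette_select(fitnesses, rng))
            for _ in range(n)]

# Illustrative fitness = 1 / validation error (smaller error -> fitter).
errors = [0.10, 0.05, 0.20, 0.08]
fitnesses = [1.0 / e for e in errors]
rng = random.Random(42)
pairs = select_parents(fitnesses, n=1000, rng=rng)

# Count how often each individual was chosen across all pairs.
counts = [0] * len(fitnesses)
for a, b in pairs:
    counts[a] += 1
    counts[b] += 1
```

Over many draws, the individual with the smallest validation error (index 1) is selected far more often than the one with the largest error (index 2), which is the selection pressure the step relies on.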
Step 4-Calculation of the Crossover and Mutation Rates. Firstly, a dataset including the hyperparameters and validation error of the XGBoost model is established. In the optimization process, the new population obtained in each iteration is added to this dataset. Secondly, the Spearman correlative coefficients between the hyperparameters and the validation error are calculated. Correlation coefficients, such as Pearson, Kendall, and Spearman, are primarily used to represent the correlation between two parameters. The Pearson correlation coefficient works well only for describing the linear relationship of two continuous variables with a normal distribution. Nevertheless, the relationships between the hyperparameters and the validation error might be nonlinear. Moreover, the Kendall correlation coefficient is usually applied to rank variables, while most hyperparameters are continuous. Compared with these two methods, the Spearman correlation coefficient imposes no requirement on the data distribution or variable type. Furthermore, it can indicate correlations between two variables that are linear or even partially nonlinear. Therefore, the Spearman correlation coefficient is selected to calculate the correlative coefficients of the hyperparameters and the validation error. The Spearman correlation coefficient can be calculated by

ρ_i = Σ_j (d_ij − d̄_i)(s_j − s̄) / sqrt( Σ_j (d_ij − d̄_i)² · Σ_j (s_j − s̄)² ),

where the sums run over the n samples (j = 1, …, n), ρ_i is the Spearman correlative coefficient of the ith hyperparameter, and n represents the number of samples. d_ij denotes the rank of the jth sample sorted according to the ith hyperparameter, and d̄_i is the mean rank of the samples sorted according to the ith hyperparameter. s_j denotes the rank of the jth sample sorted according to the validation error, and s̄ is the mean rank of the samples sorted according to the validation error. Generally, a higher absolute value of the Spearman correlative coefficient indicates a stronger correlation between the hyperparameter and the validation error. However, note that a smaller correlative coefficient does not necessarily imply a weak correlation, meaning that this statistic can only represent the hyperparameters' importance to the validation error up to a certain degree. Even so, the correlative coefficient remains a useful indicator for assessing correlation.
Thirdly, the crossover and mutation rates can be obtained from the calculated Spearman correlative coefficients. In this study, a gene with a high Spearman correlative coefficient is given low crossover and mutation rates, so that the excellent gene has a greater probability of being retained. To achieve this, the crossover and mutation rates are computed from the Spearman coefficients, where r_i denotes the crossover and mutation rate of the ith gene, m is the number of hyperparameters, and a is a control factor that bounds the crossover and mutation rates. In general, the crossover and mutation operations use different values of a.
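The Spearman coefficient of Step 4 can be computed directly from ranks. In the sketch below, the rank computation follows the standard definition, while the rate rule in `gene_rates` is an assumed form chosen only to reproduce the stated behaviour (genes with higher |ρ| receive lower crossover/mutation rates, bounded by the control factor a); it is not the paper's exact expression:

```python
def ranks(values):
    """1-based ranks, averaging over ties (standard Spearman convention)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def gene_rates(rhos, a):
    """Assumed rate rule: stronger |rho| -> lower rate, capped by a."""
    total = sum(abs(r) for r in rhos)
    if total == 0.0:
        return [a] * len(rhos)
    return [a * (1.0 - abs(r) / total) for r in rhos]

rho_perfect = spearman([1, 2, 3, 4], [10, 20, 30, 40])  # monotone increase
rho_inverse = spearman([1, 2, 3, 4], [4, 3, 2, 1])      # monotone decrease
rates = gene_rates([0.9, 0.1, -0.5], a=0.6)
```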
Step 5-Crossover Operation. The crossover operation crosses one or more genes of two individuals to produce a new individual. Based on the crossover rates calculated in Step 4, each gene has a probability of being crossed. In this step, n new individuals are obtained by crossing the n pairs of individuals chosen in the selection operation.
Step 6-Mutation Operation. The mutation operation is mainly applied to the n new individuals generated by the crossover operation. Its purpose is to obtain a new population by randomly changing genes of the new individuals. Similarly, each gene has a probability of being changed, which is calculated in Step 4. After the mutation operation, a population of n new individuals is obtained.
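Steps 5 and 6 can be sketched as gene-wise operations driven by the per-gene rates from Step 4. The bounds and rate values below are illustrative placeholders:

```python
import random

def crossover(parent_a, parent_b, cross_rates, rng):
    """Gene-wise crossover: gene i is taken from parent_b with prob cross_rates[i]."""
    child = list(parent_a)
    for i, rate in enumerate(cross_rates):
        if rng.random() < rate:
            child[i] = parent_b[i]
    return child

def mutate(ind, mut_rates, bounds, rng):
    """Gene-wise mutation: gene i is redrawn within its bounds with prob mut_rates[i]."""
    out = list(ind)
    for i, rate in enumerate(mut_rates):
        if rng.random() < rate:
            lo, hi = bounds[i]
            out[i] = rng.uniform(lo, hi)
    return out

rng = random.Random(7)
bounds = [(0.01, 0.3), (0.0, 1.0), (0.5, 1.0)]  # e.g. eta, gamma, subsample
a = [0.05, 0.2, 0.9]
b = [0.25, 0.8, 0.6]
child = crossover(a, b, cross_rates=[0.3, 0.3, 0.3], rng=rng)
child = mutate(child, mut_rates=[0.1, 0.1, 0.1], bounds=bounds, rng=rng)
```

Because every gene either comes from a parent or is redrawn inside its own bounds, the child is always a valid hyperparameter vector.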
Step 7-Output the Optimal Solution. Repeat Steps 2-6 until the maximum number of iterations is reached. Then, the individual with the greatest fitness is output as the optimal solution.
Data description
To assess the applicability of this approach, the CMG commercial numerical simulation software was employed to simulate multistage fracturing horizontal well development for shale gas extraction. The resultant data encompassed geological, fracturing, and production parameters. Specifically, six parameters (porosity, permeability, the number of fracturing sections, the length of the horizontal well, fracture width, and fracture half-length) served as the inputs for the XGBoost model, while the 30-day cumulative gas production constituted the output.
Figure 3 displays the shale gas reservoir model, which was established using the CMG commercial numerical simulation software, with dimensions of 3000 × 3000 × 100 m. The grid comprises 200 × 200 × 10 cells, with a spacing of 15 m in the I and J directions and 10 m in the K direction, as indicated in Table 1. The shale gas horizontal well is located in the fifth layer, and perforation produces vertical fractures.
Three steps were required to obtain the dataset for the XGBoost model. Firstly, based on the bounds of the input parameters shown in Table 2, the geological and fracturing parameters were generated randomly by a computer program. Secondly, the cumulative gas production of the horizontal well was calculated by inputting the geological and fracturing parameters into the established CMG numerical model. Thirdly, the dataset was obtained by repeating these two steps. In this study, a dataset with 250 groups of samples was obtained. Figure 4 shows the distributions of the input parameters against the output parameter.
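The dataset-building loop can be sketched as follows. The parameter bounds are placeholders (Table 2's actual values are not reproduced here), and `run_simulator` is a stand-in for the CMG run, which cannot be reproduced in a few lines:

```python
import random

# Placeholder parameter ranges (Table 2's actual bounds are not reproduced here).
PARAM_BOUNDS = {
    "porosity":        (0.02, 0.10),
    "permeability_mD": (1e-4, 1e-2),
    "n_frac_sections": (5, 20),       # integer
    "well_length_m":   (800, 2000),
    "frac_width_m":    (0.001, 0.01),
    "frac_half_len_m": (50, 200),
}

def sample_case(rng):
    """Step 1: draw one random geological/fracturing parameter combination."""
    case = {}
    for name, (lo, hi) in PARAM_BOUNDS.items():
        case[name] = rng.randint(lo, hi) if name == "n_frac_sections" else rng.uniform(lo, hi)
    return case

def run_simulator(case):
    """Step 2 stand-in: a real workflow would call the CMG reservoir
    simulator here and return the 30-day cumulative gas production."""
    return case["n_frac_sections"] * case["frac_half_len_m"] * case["porosity"]

def build_dataset(n_samples, seed=0):
    """Step 3: repeat sampling + simulation to assemble the dataset."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n_samples):
        case = sample_case(rng)
        case["cum_gas_30d"] = run_simulator(case)
        rows.append(case)
    return rows

data = build_dataset(250)
```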
Building the productivity prediction model
For the production prediction model established with the XGBoost algorithm, the input comprises six parameters (porosity, permeability, the number of fracturing sections, the length of the horizontal well, fracture width, and fracture half-length), and the output is the 30-day cumulative gas production. In this study, 80% of the samples from the dataset above were randomly selected as the training set, and the remaining 20% were used as the testing set. The training set was used to train the XGBoost model. In this process, 10% of the samples from the training set were used as the validation set in each round of 10-fold cross-validation, which is beneficial for obtaining a stable and accurate XGBoost model. The validation accuracy represents the training accuracy of the XGBoost model. To improve the performance of the XGBoost model, ASGA was used to optimize its hyperparameters. Furthermore, to test the superiority of ASGA, GA was also applied to the hyperparameter optimization, and the comparison results are shown in Figure 5. As can be seen, the number of iterations used by ASGA to reach the optimal training accuracy is smaller than that of GA, and the training accuracy of the XGBoost model optimized by ASGA is 2.28% higher than that optimized by GA. Thus, ASGA has a faster optimization speed and higher accuracy than GA. More precisely, the hyperparameters optimized by ASGA and GA are shown in Table 3.
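The 80/20 split with 10-fold cross-validation on the training portion can be sketched with plain index bookkeeping (a minimal sketch; real pipelines would typically rely on an existing cross-validation utility):

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def train_val_splits(n, k=10, seed=0):
    """Yield (train, val) index lists: each fold is the validation set once."""
    folds = kfold_indices(n, k, seed)
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# 80% of the 250-sample dataset -> 200 training samples; each validation
# fold then holds 10% of the training set, matching the text.
n_train = 200
splits = list(train_val_splits(n_train))
```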
In addition, the samples in the testing set were used to validate the prediction performance of the XGBoost model optimized by ASGA. Moreover, the unoptimized XGBoost model and the XGBoost model optimized by GA were also used to predict the 30-day cumulative gas productions of the samples in the testing set, and the results are shown in Figure 6. As can be seen, the prediction results of the GA-XGBoost and ASGA-XGBoost models are better than those of the XGBoost model. More precisely, the mean absolute error (MAE) was calculated to compare the performance of the three models, and the results are shown in Table 4. The MAEs of XGBoost, GA-XGBoost, and ASGA-XGBoost are 7.51%, 4.04%, and 3.09%, respectively. Therefore, ASGA-XGBoost performs best in predicting the 30-day cumulative gas production.
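MAE can be computed directly; since the paper reports MAE values in percent, the sketch also includes a relative (percentage) variant, which is one plausible reading of how those figures are normalized:

```python
def mae(actual, predicted):
    """Plain mean absolute error in the units of the data."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mae_percent(actual, predicted):
    """Relative MAE in percent (an assumed normalization, not confirmed
    by the paper): mean of |error| / |actual|, scaled by 100."""
    rel = [abs(a - p) / abs(a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(rel) / len(rel)

# Toy values standing in for actual vs. predicted 30-day productions.
actual = [100.0, 120.0, 80.0, 95.0]
pred = [97.0, 125.0, 78.0, 99.0]
m = mae(actual, pred)
mp = mae_percent(actual, pred)
```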
Figure 1 .
Figure 1. The workflow of the XGBoost algorithm. In this process, each tree is built to fit the residual of the previous tree. The final prediction is obtained by combining the calculation results of all trees.
Figure 2 .
Figure 2. The workflow of the ASGA-XGBoost model. The ASGA-XGBoost algorithm comprises seven steps. Its main idea is to obtain the best hyperparameters of the XGBoost model through a large number of iterative calculations.
Figure 3 .
Figure 3. Numerical simulation of the shale gas fracturing horizontal well. There are 10 layers in the reservoir, and the horizontal well is located in the fifth layer.
Figure 4 .
Figure 4. The distribution plots between the input parameters and the output parameter. The input parameters are porosity, permeability, the number of fracturing sections, the length of the horizontal well, fracture width, and fracture half-length. The output is the 30-day cumulative gas production.
Figure 5 .
Figure 5. Comparison plots of GA vs. ASGA. The horizontal axis represents the iterations in the optimization process, and the vertical axis denotes the training accuracy of the XGBoost model. The light blue line denotes the optimization process of GA, and the red line represents the optimization process of ASGA.
Figure 6 .
Figure 6. Comparison plots of actual values vs. predicted values for the validation samples. The black points denote the actual values, and the red points represent the values predicted by the XGBoost model. The blue points are the values predicted by the GA-XGBoost model, and the green points denote the values predicted by the ASGA-XGBoost model.
Table 2 .
The values of input parameters.
Table 3 .
Summary of optimal hyperparameter settings for the XGBoost model.
Microscopic-Macroscopic Approach for Binding Energies with Wigner-Kirkwood Method
The semi-classical Wigner-Kirkwood $\hbar$ expansion method is used to calculate shell corrections for spherical and deformed nuclei. The expansion is carried out up to fourth order in $\hbar$. A systematic study of Wigner-Kirkwood averaged energies is presented as a function of the deformation degrees of freedom. The shell corrections, along with the pairing energies obtained by using the Lipkin-Nogami scheme, are used in the microscopic-macroscopic approach to calculate binding energies. The macroscopic part is obtained from a liquid drop formula with six adjustable parameters. Considering a set of 367 spherical nuclei, the liquid drop parameters are adjusted to reproduce the experimental binding energies, which yields a {\it rms} deviation of 630 keV. It is shown that the proposed approach is indeed promising for the prediction of nuclear masses.
I. INTRODUCTION
Production and study of loosely bound exotic nuclei using Radioactive Ion Beam facilities is of current interest [1,2]. These experiments have given rise to a number of interesting and important discoveries in nuclear physics, like neutron and proton halos, thick skins, disappearance of magicity at the conventional numbers and appearance of new magic numbers, etc. Further, advances in detector systems, and in particular, the development of radioactive beam facilities like Spiral, REX-Isolde, FAIR, and the future FRIB may allow to investigate new features of atomic nuclei in a novel manner.
The study of nuclear masses and the systematics thereof is of immense importance in nuclear physics. With the advent of mass spectrometry, it is possible to measure masses of some of the short-lived nuclei spanning almost the entire periodic table [3,4]. For example, the ISOL (isotope separator online) based mass analyzer for superheavy atoms (MASHA) [5,6] coming up at JINR-Dubna will be able to directly measure the masses of separated atoms in the range 112 ≤ Z ≤ 120. The limitation on measurements is set by the shortest measurable half-life, T_{1/2} ∼ 1.0 s [5]. The JYFLTRAP [7] developed at the University of Jyväskylä, on the other hand, enables measurement of the masses of stable as well as highly neutron-deficient nuclei (for masses up to A = 120) with very high precision (∼50 keV) [7].
On the theoretical front as well, considerable progress has already been achieved in the accurate prediction of nuclear masses, and it is still being pursued vigorously by a number of groups around the globe. This is of great importance, since an accurate knowledge of the nuclear masses plays a decisive role in a reliable description of processes like the astrophysical r-process (see, for example, [3]). There are primarily two distinct approaches to calculate masses: a) microscopic nuclear models based on density functional theory, like Skyrme [8,9] and Gogny [10] Hartree-Fock-Bogoliubov or Relativistic Mean Field (RMF) models [11]; b) microscopic-macroscopic (Mic-Mac) models [12,13,14,15]. The Mic-Mac models are based on the well-known Strutinsky theorem. According to this, the nuclear binding energy, and hence the mass, can be written as the sum of a smooth part and an oscillatory part which has its origins in the quantum mechanical shell effects. The latter consists of the shell correction energy and the pairing correlation energy, which in the Mic-Mac models are evaluated in an external potential well. The smooth part is normally taken from liquid drop models of different degrees of sophistication. The largest uncertainties arise in the calculation of shell corrections. The shell correction is calculated by taking the difference between the total quantum mechanical energy of the given nucleus and the corresponding 'averaged' energy. Usually, the averaging is achieved by the well-established Strutinsky scheme [16,17]. This technique of calculating the averaged energies runs into practical difficulties for finite potentials, since carrying out the Strutinsky averaging requires the discrete single-particle spectrum with a cut-off well above the Fermi energy (at least 3ℏω₀, ℏω₀ being the major shell spacing). For a realistic potential, this condition is not met, since the continuum may start within ∼ℏω₀ of the Fermi energy.
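The microscopic-macroscopic decomposition described above can be summarized schematically. In the notation below, ε_i are the occupied single-particle energies and Ẽ is the smoothed (averaged) energy; this is a restatement of the Strutinsky-theorem splitting used here, not a formula specific to any one model:

```latex
E(N,Z;\beta) \;=\; E_{\mathrm{macro}}(N,Z;\beta)
              \;+\; \delta E_{\mathrm{shell}} \;+\; \delta E_{\mathrm{pair}},
\qquad
\delta E_{\mathrm{shell}} \;=\; \sum_{i\,\in\,\mathrm{occ}} \varepsilon_i \;-\; \widetilde{E},
```

where β denotes the deformation degrees of freedom. The averaged energy Ẽ may be obtained from the Strutinsky smoothing or, as proposed in this work, from the Wigner-Kirkwood ℏ expansion.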
Standard practice is to discretise the continuum by diagonalising the Hamiltonian in a basis of optimum size. A number of Mic-Mac calculations with varying degrees of success are available in the literature (see, for example, [12,13,14,15]). The Mic-Mac models typically yield better than ∼0.7 MeV rms deviation in the masses. All these models agree reasonably well with each other and with experiment near the valley of stability, but deviate widely among themselves in the regions far away from it.
The semi-classical Wigner-Kirkwood (WK) approach [18,19,20,21,22,23,24,25], on the other hand, makes no explicit reference to the single-particle spectrum, and achieves an accurate averaging of the given one-body Hamiltonian. Thus, the WK approach is a good alternative to the conventional Strutinsky smoothing scheme. The quantum mechanical energy is calculated by diagonalising the one-body Hamiltonian in the axially symmetric deformed harmonic oscillator basis with 14 shells. The difference between the total quantum mechanical energy and the WK energy in the external potential well yields the value of the shell correction for a given system. In the present work, we propose to carry out a reliable microscopic-macroscopic calculation of the nuclear binding energies (and hence the masses), employing the semi-classical WK expansion for the calculation of the shell corrections instead of the Strutinsky scheme. An exploratory study using the WK method to compute the smooth part of the energy, testing the validity of the Strutinsky scheme especially near the driplines, has been reported earlier [27].
It is known that the WK level density g_WK(ε), including the ℏ² correction term, exhibits an ε^(−1/2) divergence as ε → 0 for potentials which vanish at large distances, such as Woods-Saxon potentials (see, for example, Ref. [26]). The Strutinsky level density, on the contrary, exhibits only a prominent peak as ε → 0. It was therefore concluded in Ref. [28] that the divergence of the WK level density as ε → 0 is unphysical, and that the Strutinsky smoothed level density should be preferred. It should however be noted that the WK level densities, energy densities, etc., have to be understood in the mathematical sense of distributions and, consequently, only integrated quantities are meaningful. In fact, it has been shown [25] that integrated quantities such as the accumulated level densities are perfectly well behaved, even for ε → 0.
Pairing correlations are important for open-shell nuclei. In the present work, these are taken into account in the approximate particle-number-projected Lipkin-Nogami scheme [29,30,31]. Odd-even and odd-odd nuclei are treated in an entirely microscopic fashion (odd-nucleon blocking in the uniform filling approximation), allowing an improved determination of odd-even mass differences; see, e.g., the discussion in [32]. The majority of nuclei in the nuclear chart are deformed. In particular, it is well known that the inclusion of deformation is important for reliable predictions of nuclear masses. Therefore, here we incorporate all three deformation degrees of freedom (β2, β4, γ). To our knowledge, no such detailed and extensive calculation based on the WK method is available in the literature.
The paper is organised as follows. We review the WK expansion in Section 2. The choice of the nuclear, spin-orbit, and Coulomb potentials forms the subject matter of Section 3.
Details of the WK calculations are discussed in Section 4. A systematic study of the WK energies for neutrons and protons as a function of the deformation degrees of freedom is presented in Section 5. The shell corrections for the chains of Gd, Dy and Pb isotopes obtained by using our formalism are reported, and are compared with those calculated employing the traditional Strutinsky averaging technique, in Section 6. Section 7 contains a brief discussion on the Lipkin-Nogami pairing scheme. As an illustrative example, the calculation of the binding energies for selected 367 spherical nuclei is presented and discussed in Section 8. Section 9 contains our summary and future outlook. Supplementary material can be found in appendices A and B.
II. SEMI-CLASSICAL WIGNER-KIRKWOOD EXPANSION
Following Ref. [20], we consider a system of N non-interacting fermions at zero temperature, moving in a given one-body potential including the spin-orbit interaction. To determine the smooth part of the energy of such a system, we start with the quantal partition function of the system, Z(β) = Tr[exp(−βĤ)]. Here, Ĥ = p̂²/2m + V(r) + V_LS(r) is the Hamiltonian of the system, where V(r) is the one-body central potential and V_LS(r) is the spin-orbit interaction.
In order to average out shell effects, the simplest thing one could do is to replace the quantal partition function in the above expression by its classical counterpart. In this work, we shall use the WK expansion up to fourth order in ℏ. For brevity, we represent the potentials and form factors without mentioning the dependence on the position vector.
Ignoring the spin-orbit interaction, the WK expansion of the partition function, correct up to fourth order in ℏ, is given by [20]: The spin-orbit interaction, in general, can be written as: where σ̂ denotes the Pauli matrices, κ is the strength of the spin-orbit interaction, and f is the spin-orbit form factor. With the inclusion of such a spin-orbit interaction, the WK expansion of the full partition function splits up into two parts: Here, Z^(4)(β) is given by Eq. (3), and the spin-orbit contribution to the partition function, correct up to fourth order in ℏ, reads [20]: where The level density g_WK, particle number N, and energy E can be calculated directly from the WK partition function by Laplace inversion: and where λ is the chemical potential, fixed by demanding the right particle number, and L⁻¹ denotes the Laplace inversion. Using the identity and noting that, in order to get the inverse Laplace transforms in convergent form, one obtains the level density for each kind of nucleon, assuming spin degeneracy: the particle number: and the energy: It should be noted that we have explicitly assumed that all the derivatives of the potential V and the spin-orbit form factor f exist. The expansion defined here is therefore not valid for potentials with sharp surfaces. This automatically puts a restriction on the choice of the Coulomb potential: the conventional uniform-distribution approximation for the charge distribution cannot be used in the present case. We shall discuss this point at greater length in the next section. The integrals in the above expressions are cut off at the classical turning points, defined via the step function. The chemical potential λ appearing in these equations is determined from Eq. (16), separately for neutrons and protons. Further, it is interesting to note that the spin-orbit contributions to the particle number N as well as to the energy E appear only at second order in ℏ.
Note also that the level density and the particle number are calculated only up to order ℏ². It can be shown [20] that, for the expansion correct up to fourth order in ℏ, it is sufficient to take Z_WK up to order ℏ² in Eq. (11) to find the chemical potential (and hence the particle number), whereas one has to take the full partition function Z^(4)_WK, up to order ℏ⁴, in Eq. (12) to compute the energy in the WK approach.
The divergent terms appearing in Eq. (17) are treated by differentiation with respect to the chemical potential. Explicitly: In practice, the differentiation with respect to chemical potential is carried out after evaluation of the relevant integrals. Numerically, this approach is found to be stable. Its reliability has been checked explicitly by reproducing the values of fourth-order WK corrections quoted in Ref. [20].
The WK expansion thus defined converges very rapidly for the harmonic oscillator potential: the second-order expansion itself is enough for most practical purposes. The convergence for the Woods-Saxon potential is slower than that for the harmonic oscillator, but it is adequate [33]. For example, for ∼126 particles, the Thomas-Fermi energy is typically of the order of 10³ MeV, the second-order (ℏ²) correction contributes a few tens of MeV, and the fourth-order (ℏ⁴) correction yields a contribution of the order of 1 MeV. This point will be discussed in greater detail later. It is also important to note that the WK expansion of the density matrix has a variational character, and that a variational theory based on a strict ℏ expansion has been established [34].
The WK approach presented here should be distinguished from the extended Thomas-Fermi (ETF) approach. The divergence problems at the classical turning points (see the particle number and energy expressions above) can be eliminated by expressing the kinetic energy density as a functional of the local density. This is achieved by eliminating the chemical potential, the local potential, and the derivatives of the local potential (for further details, see Ref. [35]). It cannot be accomplished in closed form and has to be done iteratively, leading to a functional series for the kinetic energy density. The resulting model is what is often referred to as the ETF approach; the WK approach as presented here is, in this sense, the starting point for the ETF approach (further details of ETF can be found in Refs. [22,23,25,36,37,38]). The conventional ETF approach exhibits somewhat slower convergence properties, which has been attributed to a non-optimal sorting of terms of each given power in ℏ [25,35].
III. CHOICE OF THE POTENTIALS

A. Form of the Nuclear Potential
The spherically symmetric nuclear mean field is well represented by the Woods-Saxon (WS) form [39], V(r) = −V0 / [1 + exp((r − R0)/a)] (Eq. (20)), where V0 is the strength of the potential, R0 is the half-density radius, and a is the diffuseness parameter. The WS form factor defined here can be easily generalised to take deformation effects into account. Note that the distance function l(r) = r − R0 appearing in Eq. (20) can be interpreted as the minimum distance of a given point to the nuclear surface, defined by r = R0. One might thus generalise it to the case of deformed surfaces as well.
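As a quick numerical sketch (not the production code; the diffuseness a = 0.637 fm and the 7.11 fm radius quoted below for 208Pb are taken from the text, while using U0 = 53.754 MeV directly as the depth ignores the isospin term and is purely illustrative):

```python
import numpy as np

def woods_saxon(r, V0=53.754, R0=7.11, a=0.637):
    # V(r) = -V0 / (1 + exp((r - R0)/a)); illustrative parameters
    # (R0 = 7.11 fm is the 208Pb radius parameter quoted in the text)
    return -V0 / (1.0 + np.exp((r - R0) / a))
```

At r = R0 the potential is exactly half its central depth, which is the defining property of the half-density radius.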
Using the standard expansion in terms of spherical harmonics, a general deformed surface may be defined by the relation r = r_s, with r_s(θ, φ) = C R0 [1 + Σ_{λ,µ} β_{λµ} Y_{λµ}(θ, φ)] (Eq. (21)). Here, the Y_{λµ} are the usual spherical harmonics and the constant C is the volume conservation factor: the volume enclosed by the deformed surface should be equal to the volume enclosed by an equivalent spherical surface of radius R0. The distance function to be used in the WS potential would then be the minimum distance of a given point to the nuclear surface defined by r = r_s. Such a definition has been used quite extensively in the literature, with good success (see, for example, Refs. [40,41,42,43,44]).
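The volume-conservation factor C is easily obtained numerically; a minimal sketch for an axially symmetric shape (only β2 and β4 retained, Gauss-Legendre quadrature in cos θ; all function names here are ours):

```python
import numpy as np

def shape(theta, beta2, beta4):
    # axially symmetric surface factor 1 + beta2*Y20 + beta4*Y40
    c = np.cos(theta)
    y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * c**2 - 1.0)
    y40 = 3.0 / (16.0 * np.sqrt(np.pi)) * (35.0 * c**4 - 30.0 * c**2 + 3.0)
    return 1.0 + beta2 * y20 + beta4 * y40

def volume_factor(beta2, beta4, n=200):
    # C such that the deformed shape encloses the sphere's volume:
    # C^3 * integral of f^3 d(cos theta) = 2
    x, w = np.polynomial.legendre.leggauss(n)
    integral = np.sum(w * shape(np.arccos(x), beta2, beta4)**3)
    return (2.0 / integral)**(1.0 / 3.0)
```

For a spherical shape the factor is exactly 1, and it dips below 1 for finite β2 or β4, since the deformed shape would otherwise enclose a larger volume.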
In the present case, however, this definition is not convenient, since the calculation of this distance function involves minimising the length of a segment from the given point to the nuclear surface. This in turn implies that each calculation of the distance function (for given r, θ, and φ coordinates; we are assuming a spherical polar coordinate system here) involves the calculation of two surface angles θ_s and φ_s, which are implicit functions of r, θ, and φ (see Fig. (8) in Appendix A for details). Since the WK calculations involve differentiation of the WS function, one also needs to differentiate θ_s and φ_s with respect to r, θ, and φ.
Alternatively, the distance function for the deformed Woods-Saxon potential can be written down by demanding that the rate of change of the potential along the normal to the nuclear surface, evaluated at the surface, be a constant [45], which indeed is the case for the spherical Woods-Saxon form factor. Thus, where n̂ is the unit vector normal to the surface (r = r_s) and is given by: In fact, the above condition (23) is related to the observation that the second derivative of the spherical Woods-Saxon form factor vanishes at the nuclear surface, defined by r = R0.
The resulting distance function is given by [46]: where r_s is as defined in Eq. (21), and the denominator is evaluated at r = r_s. Writing the θ and φ derivatives of r_s as A and B respectively, we get: with In the present work, we use the distance function as defined in Eq. (25). The WS potential thus reads It is straightforward to check that the Woods-Saxon potential defined with the distance function given by Eq. (25) satisfies condition (23). Substituting this Woods-Saxon potential into n̂ · ∇V(r), we get Here, f(r) = [1 + exp(l(r)/a)]⁻¹ is the Woods-Saxon form factor. Clearly, at the surface defined by r = r_s, the quantity n̂ · ∇V(r) is constant.
B. Deformation Parameters
In practice, we consider three deformation degrees of freedom, namely, β 2 , β 4 and γ.
C. Woods-Saxon Parameters
The parameters [47] appearing in the Woods-Saxon potential are defined below: 1. Central potential: a. Strength: with U_0 = 53.754 MeV and U_1 = 0.791.
c. Diffuseness parameter: assumed to be the same for neutrons and protons, with the value a = 0.637 fm.
2. Spin-orbit potential: the half-density radius and diffuseness parameter are taken to be the same as those for the central potential.
The isospin dependence of the central and spin-orbit potentials is 'built into' these parameters. This potential yields a reasonably good description of charge radii (both their magnitude and isospin dependence) as well as of moments of inertia for a wide range of nuclei.
It has been used extensively in the total Routhian surface (TRS) calculations, and it has been quite successful in accurately reproducing energies of single-particle as well as collective states [48].
D. Coulomb potential
The Coulomb potential is calculated by folding the point-proton density distribution ρ(r′), assumed to be of Woods-Saxon form. For simplicity, its parameters are taken to be the same as those of the nuclear potential for protons. The reason for using a folded potential here is that, as indicated in Section II, the WK expansion is not valid for potentials with sharp surfaces.
The Coulomb potential for the extended charge distribution is given by the folding integral V_C(r) = e² ∫ ρ(r′)/|r − r′| d³r′, with the density normalised to the number of protons, as explained in Appendix A. It is instructive at this point to compare the Coulomb potential calculated from the diffuse density with the corresponding potential obtained using the conventional uniform-density (sharp surface) approximation. Such a comparison for 208Pb is plotted in Fig. 1. The radius parameter for the diffuse-density approach as well as for the sharp-surface approximation is taken to be 7.11 fm (see the discussion on the choice of the Woods-Saxon parameters in Section 3). It can be seen that in the exterior region the two potentials agree almost exactly, as expected. In the interior, however, the potential obtained from the diffuse density turns out to be somewhat less repulsive than that from the density with a sharp surface.
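For the spherically symmetric case this comparison is easy to reproduce; a sketch assuming a Woods-Saxon charge density with R0 = 7.11 fm, a = 0.637 fm, Z = 82, and e² ≈ 1.44 MeV·fm (the reduction of the folding integral to radial integrals holds only for spherical densities):

```python
import numpy as np

E2 = 1.44  # e^2 in MeV*fm (approximate)

def coulomb_diffuse(r, Z=82, R0=7.11, a=0.637, rmax=25.0, n=5000):
    # potential of a spherical Woods-Saxon charge density, normalised to Z:
    # V(r) = 4*pi*e^2 * [ (1/r) * int_0^r rho s^2 ds + int_r^inf rho s ds ]
    rp = np.linspace(1e-4, rmax, n)
    dr = rp[1] - rp[0]
    rho = 1.0 / (1.0 + np.exp((rp - R0) / a))
    rho *= Z / (np.sum(4.0 * np.pi * rho * rp**2) * dr)
    inner = np.sum(np.where(rp <= r, rho * rp**2, 0.0)) * dr
    outer = np.sum(np.where(rp > r, rho * rp, 0.0)) * dr
    return 4.0 * np.pi * E2 * (inner / r + outer)

def coulomb_sharp(r, Z=82, R0=7.11):
    # uniformly charged sphere (sharp surface), for comparison
    if r >= R0:
        return Z * E2 / r
    return Z * E2 * (3.0 - (r / R0)**2) / (2.0 * R0)
```

The two potentials agree in the exterior (where both reduce to Z e²/r) and the diffuse one is less repulsive in the interior, as stated above.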
IV. DETAILS OF THE WK CALCULATIONS
In the present work, we restrict our calculations to three deformation degrees of freedom, namely β2, β4, and the angle γ; the inclusion of γ allows us to incorporate triaxiality. Thus, the present WK calculation is genuinely three-dimensional. In principle, it would be natural to use a cylindrical coordinate system here. The spherical polar coordinates, however, turn out to be more convenient: the cylindrical coordinates involve two length variables and one angular coordinate, which means that the turning points have to be evaluated for two coordinates (ρ and z), making the calculations very complicated. The spherical polar coordinates, on the other hand, involve only one length variable, so the turning points are to be evaluated only for one coordinate (r). The numerical integrals involved are evaluated using Gaussian quadrature.
The first step in the WK calculations is the determination of the chemical potential.
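Schematically, this step can be illustrated at lowest (Thomas-Fermi) order for a spherical Woods-Saxon well, where the particle-number condition is solved for λ by root bracketing (parameter values are illustrative; the full calculation uses the deformed potential and the ℏ² and ℏ⁴ terms):

```python
import numpy as np
from scipy.optimize import brentq

HB2M = 20.736  # hbar^2/2m for a nucleon, MeV*fm^2 (approximate)

def ws(r, V0=53.754, R0=7.11, a=0.637):
    # spherical Woods-Saxon well (illustrative parameters)
    return -V0 / (1.0 + np.exp((r - R0) / a))

def tf_number(lam, rmax=20.0, n=4000):
    # Thomas-Fermi particle number; clipping at zero implements the
    # step-function cut at the classical turning point
    r = np.linspace(1e-6, rmax, n)
    kin = np.clip(lam - ws(r), 0.0, None)
    g = (kin / HB2M)**1.5 / (3.0 * np.pi**2) * 4.0 * np.pi * r**2
    return np.sum(g) * (r[1] - r[0])

# fix the chemical potential by demanding the right particle number
lam = brentq(lambda x: tf_number(x) - 126.0, -53.0, -0.1)
```

Since tf_number is monotonically increasing in λ, the root is unique within the well depth.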
This has to be done iteratively, using Eq. (16). Since the turning points are determined by the chemical potential, they have to be recalculated, using a suitable numerical technique, at each step. Once the values of the chemical potential are known, the WK energies up to second order can be calculated in a straightforward way. The fourth-order calculations are very involved, since they require higher-order derivatives of the nuclear potential, the spin-orbit form factor, and the Coulomb potential. The former can be evaluated analytically in the present case; the expressions are extremely lengthy, and we do not present them here. In comparison, the derivatives of the Coulomb potential look simple; the Laplacian and the Laplacian of the Laplacian are completely straightforward: the former is proportional to the proton density, and the latter is just the Laplacian of the WS form factor. However, the calculations also need terms like the Laplacian of the gradient squared of the total potential. In the case of protons, this involves one crossed term, ∇²(∇V_C · ∇V_N) (Eq. (42)), where V_C is the Coulomb potential and V_N is the nuclear potential. The determination of such objects is tricky. It turns out that if one uses the form of the Coulomb potential defined above, the evaluation of expression (42) becomes numerically unstable.
There exists an alternative form of the Coulomb potential: where the notation ∇²_{r′} means that the Laplacian is calculated with respect to the variables r′, θ′, and φ′. Eqs. (39) and (43) are equivalent. It turns out that the WK calculations for protons are very time consuming. This is due to the fact that the calculation of the Coulomb potential (Eq. (39)) in general involves the evaluation of a three-dimensional integral for each point (r, θ, φ). Typically, it takes a few tens of minutes to complete one such calculation. This is certainly not desirable, since our aim is to calculate the masses of nuclei spanning the entire periodic table. To speed up the calculations, we use the well-known technique of interpolation. Since we are using spherical polar coordinates, the turning points are to be evaluated only for the radial coordinate, r.
For the entire WK calculation, the θ and φ mesh points remain the same (over the domains [0, π] and [0, 2π], respectively), whereas the r mesh points change from step to step; this happens in particular during the evaluation of the chemical potential. Once convergence of the particle number equation (Eq. (16)) is achieved, the r mesh points remain fixed as well.
Motivated by the above observations, we apply the following procedure:

1. Before entering the actual WK calculations (determination of the chemical potential, etc.), for each pair of θ and φ mesh points, we calculate the Coulomb potential (Eq. (39)) on a fixed radial mesh.

2. Next, for each pair of θ and φ mesh points, we fit a polynomial of degree 9 in the radial coordinate r to the Coulomb potential calculated in the above step. The fitting procedure is thus repeated N_θ × N_φ times, N_θ (N_φ) being the total number of mesh points for the θ (φ) integration.
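The fitting step can be sketched as follows (a Chebyshev basis is used for the degree-9 polynomial purely for numerical stability, and a smooth made-up radial profile stands in for the true Coulomb potential at one (θ, φ) mesh point):

```python
import numpy as np

def fit_coulomb_radial(r_nodes, vc_nodes, deg=9):
    # degree-9 polynomial fit in r for one (theta, phi) mesh point;
    # the Chebyshev basis avoids the ill-conditioning of raw powers
    return np.polynomial.Chebyshev.fit(r_nodes, vc_nodes, deg)

# illustrative check on a smooth stand-in profile (hypothetical data)
r = np.linspace(0.1, 15.0, 40)
vc = 100.0 / (1.0 + (r / 7.0)**2)
evaluate = fit_coulomb_radial(r, vc)
err = np.max(np.abs(evaluate(r) - vc) / vc)
```

For a smooth radial profile like this, the degree-9 fit is accurate to far better than the 0.4% quoted below.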
Thus, for any given value of the radial coordinate r (and fixed θ and φ), the Coulomb potential can be easily calculated just by evaluating the 9th-degree polynomial in r. This interpolation procedure is found to be very accurate: the maximum percentage difference between the fitted and the exact Coulomb potentials is 0.4%, for a highly deformed nucleus.

V. SYSTEMATICS OF THE WK ENERGIES

The variation of the Thomas-Fermi energy and of the correction terms as a function of the quadrupole deformation parameter β2 is plotted in Fig. (2); the other two deformation parameters, β4 and γ, are set to zero in this test case. The partial contributions to the WK energy are plotted separately for protons and neutrons. It is found that all the correction terms vary smoothly as a function of deformation. As expected, the magnitude of the contributions of the ℏ² and ℏ⁴ terms to the averaged energy decreases rapidly. It is found that the proton and neutron Thomas-Fermi energies have opposite trends with respect to increasing β2; if the Coulomb potential is suppressed, the Thomas-Fermi energies for protons follow the same trend as those for the neutrons. Further, it is interesting to note that the variation of the second-order corrections with the deformation parameters is comparatively stronger than that of the Thomas-Fermi energies (∼10% for the second-order corrections versus ∼3% for the Thomas-Fermi energies).
Next, the variation of the Thomas-Fermi energy and of the correction terms as a function of the hexadecapole deformation parameter β 4 is plotted in Fig. (3). Here, β 2 is taken to be 0.2 and γ is set to zero. It is seen that again, the different energies vary smoothly as a function of β 4 . The Thomas-Fermi energy for protons is found to have very little variation with respect to the β 4 deformation parameter. In contrast, the corresponding energies for neutrons have a stronger dependence on β 4 . The same behaviour is also observed in the corresponding quantum mechanical energies. It is found that the proton and neutron Thomas-Fermi energies have a very similar behaviour if the Coulomb potential is suppressed.
Further, to check if this conclusion depends on the value of β 2 , the analysis is repeated for β 2 = 0.4, and the same conclusion is found to emerge.
The behaviour of the Thomas-Fermi energies for protons in the above cases (Figs. (2) and (3)) seems to be due to the Coulomb potential. In the case of variation with respect to β 2 , qualitatively it can be expected that with increasing quadrupole deformation, protons are pulled apart and Coulomb repulsion decreases, thereby making the system more bound.
The β 4 deformation also affects the proton distribution, but, as expected, the effect of hexadecapole deformation is less prominent in comparison with that of quadrupole deformation.
Thus, the repulsion among protons does decrease with increasing β 4 , but the decrease is not large enough to make the system more bound with larger β 4 .
If the parameter γ is varied keeping β2 and β4 fixed, the resulting energies are found to be independent of the sign of γ. Moreover, the γ dependence of the WK energies is found to be rather weak. Therefore, we do not present these results explicitly here.
The fourth-order calculation for protons is very time consuming: typically, it takes tens of minutes to do a complete WK calculation, most of the run-time being consumed by the particle number determination and the fourth-order calculations for protons. Thus, it is necessary to find an accurate approximation scheme for the fourth-order calculation for protons. Since in the nuclear interior the Coulomb potential has an approximately quadratic form (see Fig. (1)), it is expected to have a small influence on the fourth-order calculations (note that one needs higher-order derivatives in the fourth-order energy calculations). One may therefore drop the Coulomb potential completely from the fourth-order corrections; we shall refer to this approximation as the 'quadratic approximation'. This approximation has been checked explicitly by performing exact fourth-order calculations for protons. The maximum difference between the WK energies obtained using the exact calculation and the quadratic approximation is found to be of the order of 100 keV for 82 protons, and the difference decreases with decreasing charge number. The approximation can be improved by keeping the Laplacian of the Coulomb potential in the fourth-order contribution, i.e., the terms of the form (∇²V)² and ∇⁴V in Eq. (17); this means that for protons only the term ∇²(∇V)² is dropped from Eq. (17). It is found that with this modification, the value of the fourth-order correction energy for the mean-field part for protons almost coincides with the value obtained by taking all the derivatives of the Coulomb potential into account. This helps in reducing the total runtime further.
Thus, effectively, with the interpolation for Coulomb potential as discussed before (see section IV), and the approximations introduced in the fourth-order correction terms for protons in the present section, the runtime reduces from tens of minutes to just about two minutes, without affecting the desired accuracy of the calculations.
VI. WIGNER-KIRKWOOD SHELL CORRECTIONS AND COMPARISON WITH STRUTINSKY CALCULATIONS
Numerically, it has been demonstrated that the WK and Strutinsky shell corrections are close to each other [20]. This is expected, since it has recently been shown [50] that the Strutinsky level density is an approximation to the semi-classical WK level density.
For illustration, we present and discuss the WK and the corresponding Strutinsky shell corrections for the chains of Pb, Gd and Dy isotopes. For the sake of completeness, we first present and discuss the essential features of the Strutinsky smoothing scheme.
According to the Strutinsky smoothing scheme, the smooth level density for a one-body Hamiltonian is given by [49]: where the ε_i are the single-particle energies calculated by diagonalising the Hamiltonian matrix.
The smoothing constant γ is taken to be of the order of ℏω0; p is the smoothing order, assumed to be equal to 6 in the present work; the H_j are Hermite polynomials; and S_j is a constant, defined as in Ref. [49]. The Strutinsky shell correction is obtained as the difference between the sum over the occupied single-particle levels and the corresponding smoothed quantity: where N_n is the number of nucleons. This, upon substituting the expression for g_st, yields [49] an expression involving the error integral erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt. It should be noted that the Strutinsky procedure described here uses the positive-energy states generated by diagonalising the Hamiltonian matrix, and not resonances smoothed over. Further, in practice the summations defined above do not extend up to infinity, but are cut off at a suitable upper limit, chosen such that all the states up to ∼4ℏω0 are included in the sum. It has been shown that the uncertainty in the Strutinsky shell corrections obtained this way is typically of the order of 0.5 MeV [49]; for lighter nuclei, this uncertainty has been concluded to be larger [49].
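A minimal sketch of the curvature-corrected Gaussian smoothing (we assume the standard coefficients a_{2m} = (−1)^m / (4^m m!) of the order-p correction polynomial, which reproduce the familiar (3/2 − x²) weight at second order; this is not the production implementation):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def strutinsky_density(eps, levels, gamma, order=6):
    # smoothed level density: (1/gamma) * sum_i f((eps - e_i)/gamma),
    # with f(x) = exp(-x^2)/sqrt(pi) * sum_m a_{2m} H_{2m}(x)
    x = (np.asarray(eps)[:, None]
         - np.asarray(levels, dtype=float)[None, :]) / gamma
    coef = np.zeros(order + 1)
    for m in range(order // 2 + 1):
        coef[2 * m] = (-1.0)**m / (4.0**m * math.factorial(m))
    f = np.exp(-x**2) / np.sqrt(np.pi) * hermval(x, coef)
    return f.sum(axis=1) / gamma
```

The curvature-correction terms integrate to zero, so each discrete level still contributes exactly one state to the smoothed density.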
The total WK shell correction for the chain of even-even lead isotopes (178−214Pb) is plotted in Fig. (4), along with the corresponding values obtained using the Strutinsky smoothing method. Both the WK and Strutinsky results exhibit very similar trends. As expected, a prominent minimum is observed for 208Pb, indicating the occurrence of the shell closure. The WK and Strutinsky shell corrections differ slightly from each other; the difference is not constant, and is found to increase slowly towards the more neutron-deficient lead isotopes.
Next, we plot the calculated (WK) and the corresponding Strutinsky shell corrections for the chains of even-even Gd and Dy isotopes, with neutron numbers ranging from 72 to 92. Apart from 144,146,148Gd and 146,148,150Dy, the rest of the nuclei considered here are known to be deformed [12]. For this test run, we adopt the deformation parameters from the Möller-Nix compilation [12]. It is seen that the WK and the corresponding Strutinsky shell corrections agree with each other within a few hundred keV. The prominent minimum at the shell closure at neutron number 82 is clearly seen. In these cases as well, the difference between the two calculations is not constant: it is larger in the neutron-deficient region, and becomes smaller as the neutron number increases.

VII. LIPKIN-NOGAMI PAIRING SCHEME

In the Lipkin-Nogami (LN) scheme, one minimises the expectation value of Ĥ − λ1 N̂ − λ2 N̂² (Eq. (50)), by determining λ1 and λ2 using certain conditions. Here, Ĥ is the pairing Hamiltonian, and N̂ is the particle number operator. Minimisation of the expectation value of Ĥ − λ1 N̂ alone leads to the usual BCS model, with λ1 determined from the particle number condition. Thus, in Eq. (50) above, the quantity λ1 is a Lagrange multiplier, but the particle number fluctuation constant λ2 is not.
In practice, the LN calculation is carried out by assuming a constant pairing matrix element, G. For a given nucleus (assumed to be even-even for simplicity), one considers N_h doubly degenerate states below, and N_p doubly degenerate states above, the Fermi level.
These states contain N nucleons; in practice, one takes N_h = N_p = N/2 or Z/2, depending on whether the scheme is being applied to neutrons or protons. The occupation probabilities v²_k, the pairing gap ∆, the chemical potential λ (= λ1 + 2λ2(N + 1), see Ref. [31]), and the constant λ2 are determined iteratively using the conditions [13,31]: such that and where the E_k are the single-particle energies and u²_k = 1 − v²_k. The particle number fluctuation constant λ2 is given by: The pairing matrix element G is calculated using the Möller-Nix prescription [13]: Here, ρ̄_L = g_WK/2 is the Wigner-Kirkwood averaged level density, evaluated at the Fermi energy (see Eq. (15); the factor of 2 appears because each quantal level here has a degeneracy of 2); a_2 = N/(2ρ̄_L) and a_1 = −N/(2ρ̄_L); and ∆̄ is the average pairing gap, taken to be 3.3/N^{1/2} [13].
The ground-state energy within the LN model is given by: The pairing correlation energy E_pair is obtained by subtracting the ground-state energy in the absence of pairing from Eq. (57):
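In the λ2 → 0 limit, the iterative conditions above reduce to the familiar constant-G BCS equations, which are easy to sketch (doubly degenerate levels; the level set, G, and particle number below are made up for illustration):

```python
import numpy as np
from scipy.optimize import fsolve

def bcs_solve(levels, G, N):
    # constant-G BCS equations (the lambda2 -> 0 limit of the LN scheme):
    # occupation v_k^2 = (1 - (e_k - lam)/E_k)/2 with E_k = sqrt((e_k-lam)^2 + D^2),
    # solved jointly for the chemical potential lam and the gap D
    levels = np.asarray(levels, dtype=float)
    def eqs(p):
        lam, delta = p
        ek = np.sqrt((levels - lam)**2 + delta**2)
        v2 = 0.5 * (1.0 - (levels - lam) / ek)
        return [2.0 * v2.sum() - N,           # particle number condition
                G * np.sum(1.0 / ek) - 2.0]   # gap equation
    lam, delta = fsolve(eqs, [np.median(levels), 1.0])
    return lam, abs(delta)
```

For an equally spaced, half-filled spectrum, the chemical potential sits midway between the two central levels by symmetry, and the gap is set by the gap equation alone.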
VIII. CALCULATION OF BINDING ENERGIES
As an illustrative example, we now present and discuss the calculated binding energies (in this paper, we take binding energies as negative quantities) for 367 even-even, even-odd, odd-even, and odd-odd spherical nuclei. These nuclei are predicted to be spherical or nearly spherical (β2 < 0.05) in the Möller-Nix calculations [12]. Of course, it is known that the prediction of sphericity does depend to some extent on the details of the density functional employed [52]. Therefore, it may so happen that some of the nuclei assumed to be spherical here may actually turn out to be slightly deformed when the energy minimisation is carried out on a grid of deformation parameters.
Our calculation proceeds in the following steps. For each nucleus, the quantum mechanical and WK energies are calculated as described earlier; this yields the values of the shell corrections (δE) for these nuclei. The pairing energies (E_pair) are then calculated using the Lipkin-Nogami scheme [29,30,31] described previously, in the same potential well in which the shell correction is computed. These two pieces constitute the microscopic part of the binding energy. The macroscopic part of the binding energy (E_LDM) is obtained from the liquid drop formula. Thus, for a given nucleus with Z protons and N neutrons (mass number A = N + Z), the binding energy in the Mic-Mac picture is given by: The liquid drop part of the binding energy is chosen to be: where the terms respectively represent the volume energy, the surface energy, the Coulomb energy, and the correction to the Coulomb energy due to the surface diffuseness of the charge distribution. The coefficients a_v, a_s, k_v, k_s, r_0, and C_4 are free parameters; T_z is the third component of isospin, and e is the electronic charge. The free parameters are determined by minimising the χ² value with respect to the experimental energies, χ² = Σ_j [(E(N_j, Z_j) − E^(j)_expt) / ∆E^(j)_expt]², where E(N_j, Z_j) is the calculated total binding energy for the given nucleus, E^(j)_expt is the corresponding experimental value [53], and ∆E^(j)_expt is the uncertainty in E^(j)_expt. In the present fit, for simplicity, ∆E^(j)_expt is set to 1 MeV. The minimisation is achieved using the well-known Levenberg-Marquardt algorithm [54,55]. The resulting values of the liquid drop parameters are listed in Table I; clearly, the obtained values are reasonable. A detailed table containing the nuclei considered in the present fit, and the corresponding calculated and experimental [53] binding energies, may be found in Ref. [51].
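The parameter fit can be sketched with SciPy's Levenberg-Marquardt driver; the four-term drop below is a simplified Bethe-Weizsäcker-style stand-in (not the full parametrisation of the text), and the nuclei and 'experimental' energies are synthetic, generated from known parameters to show that the fit recovers them:

```python
import numpy as np
from scipy.optimize import least_squares

def ldm(p, N, Z):
    # simplified liquid-drop energy: volume, surface, Coulomb, asymmetry
    av, a_s, ac, aa = p
    A = N + Z
    return -av * A + a_s * A**(2 / 3) + ac * Z**2 / A**(1 / 3) + aa * (N - Z)**2 / A

true = np.array([15.6, 17.2, 0.70, 23.3])          # made-up 'true' parameters
N = np.array([20, 28, 50, 82, 126, 28, 50])
Z = np.array([20, 28, 50, 82, 82, 20, 40])
E_expt = ldm(true, N, Z)                           # synthetic data

fit = least_squares(lambda p: ldm(p, N, Z) - E_expt,
                    x0=[14.0, 15.0, 0.5, 20.0], method='lm')
```

Since the residual is linear in the parameters, the Levenberg-Marquardt iteration converges to the generating values essentially exactly.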
To examine the quality of the fit further, we first plot the difference between the fitted and the corresponding experimental [53] binding energies for the 367 nuclei as a function of the mass number A in Fig. (5), together with the corresponding differences obtained for the Möller-Nix masses. Next, the differences between the calculated and the corresponding experimental [53] binding energies (denoted by 'WK') for the Ca, Ti, Sn, and Pb isotopes considered in this fit are presented in Fig. (6). The differences obtained by using the Möller-Nix [13] values of the binding energies (denoted by 'MN') are also shown there for comparison. It can be seen that the present calculations agree well with experiment. The differences are found to vary smoothly as a function of mass number, the exceptions being the doubly closed shell nuclei 48Ca, 132Sn, and 208Pb, where a kink is observed. The overall behaviour of the differences is somewhat smoother than that obtained using the values of Möller and Nix. To investigate the effect of the parameters of the single-particle potential, we have made a refit of the liquid drop parameters using the Rost parameters [56] in the microscopic part. Overall, the present potential turns out to be more realistic than the Rost potential. This is reflected in the calculated binding energies as well, showing clearly that the choice of the single-particle potential (in other words, of its parameters) is indeed important for reliable predictions of the binding energies (and hence the masses).
Single and two neutron separation energies (S_1n and S_2n) are crucial observables. They are obtained by calculating binding energy differences between pairs of isotopes differing by one and two neutrons, respectively. The single neutron separation energies govern the asymptotic behaviour of the neutron density distributions [57]. They exhibit odd-even staggering along an isotopic chain, indicating that the isotopes with an even number of neutrons are more bound than the neighbouring isotopes with an odd number of neutrons. The systematics of S_2n primarily reveals the shell structure in an isotopic chain. The correct prediction of these separation energies is crucial for the determination of the neutron drip lines.
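The separation energies are simple binding-energy differences, S_1n(Z, N) = B(Z, N) − B(Z, N−1) and S_2n(Z, N) = B(Z, N) − B(Z, N−2). A minimal sketch, using a toy binding-energy table with an assumed pairing bonus for even N purely to make the odd-even staggering visible:

```python
def s1n(B, Z, N):
    # Single-neutron separation energy: S_1n = B(Z, N) - B(Z, N-1)
    return B[(Z, N)] - B[(Z, N - 1)]

def s2n(B, Z, N):
    # Two-neutron separation energy: S_2n = B(Z, N) - B(Z, N-2)
    return B[(Z, N)] - B[(Z, N - 2)]

# Toy binding-energy table (MeV): a smooth part plus an assumed pairing
# bonus for even N; not a fitted model.
B = {(50, N): 8.0 * (50 + N) + (1.0 if N % 2 == 0 else 0.0)
     for N in range(60, 86)}

print(s1n(B, 50, 70), s1n(B, 50, 71))  # 9.0 7.0 -> odd-even staggering
print(s2n(B, 50, 70))                  # 16.0 -> staggering cancels in S_2n
```

Note how the pairing bonus alternates the S_1n values while S_2n stays smooth, which is why S_2n is the cleaner probe of shell structure along a chain.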
The calculated S_1n and S_2n values for the Sc, Sn and Pb isotopes are displayed in Fig. (7). The corresponding experimental values of S_1n and S_2n [53] are also plotted for comparison. The agreement between the calculations and experiment is found to be excellent. The odd-even staggering is nicely reproduced. The shell closures at 132 Sn and 208 Pb are clearly visible in both the single and two neutron separation energies. At a finer level, however, a marginal underestimation of the shell gap at neutron number 82 (126) is observed in 132 Sn (208 Pb).
Finally, we remark that the calculated single and two proton separation energies are also found to be in close agreement with the experiment.
The results presented in this section indicate that the present calculations of binding energies, indeed, are reliable.
IX. SUMMARY AND FUTURE OUTLOOK
In the present work, we intend to carry out reliable mass calculations for the nuclei spanning the entire periodic table. For this purpose, we employ the 'microscopic-macroscopic' framework. The microscopic component has two ingredients: the shell correction energy and the pairing energy. The pairing energy is calculated by using the well-known Lipkin-Nogami scheme. To average out the given one-body Hamiltonian (and hence find the shell corrections, given the total quantum mechanical energy of the system), we use the semiclassical Wigner-Kirkwood expansion technique. This method does not use the detailed single-particle structure, as in the case of the conventional Strutinsky smoothing method.
In addition to the bound states, the Strutinsky scheme requires the contributions from the continuum as well. Treating the continuum is often tricky, and in most practical calculations the continuum is taken into account rather artificially, by generating positive energy states by means of diagonalisation of the Hamiltonian matrix. For neutron-rich and neutron-deficient nuclei, the contribution from the continuum becomes more and more important as the Fermi energy becomes smaller (less negative). The uncertainty in the conventional Strutinsky scheme thus increases as one goes away from the line of stability.
It is therefore expected that the Wigner-Kirkwood method will be a valuable and suitable option especially for nuclei lying far away from the line of stability.
We now summarise our observations and future perspectives. Before performing the large scale calculations, we intend to make a refit of the Woods-Saxon potential, with the Coulomb potential obtained from folding.
Having established the feasibility of the present approach, we now intend to extend our binding energy calculations to deformed nuclei. For this purpose, we plan to minimise the binding energy on a mesh of deformation parameters to find the absolute minimum in the deformation space. Work along these lines is in progress.
APPENDIX A: GEOMETRY OF DISTANCE FUNCTION
Consider an arbitrary surface, defined by the relation r = r_s, where r_s is given by Eq. (21) of the text. Let us fix the origin of the coordinate system at the centre of mass of the object. Let r ≡ (r, θ, φ) define an arbitrary point in space. This point could be inside or outside the surface; here, for concreteness, we assume that it is within the volume of the object. Our aim is to find the minimum distance from the point r to the surface r = r_s.
To achieve this, construct a vector R_s from the centre of mass to the surface. To find the minimum distance, one has to minimise the quantity |r − R_s|. Denoting the angle between the vectors r and R_s by Ψ, we have:

|r − R_s|² = r² + R_s² − 2 r R_s cos Ψ,

where, from Fig. (A), the cosine of the angle Ψ is given by:

cos Ψ = cos θ cos θ_s + sin θ sin θ_s cos(φ_s − φ).

The Coulomb potential for an arbitrary charge distribution ρ_c is given by:

V_C(r) = ∫ ρ_c(r′) / |r − r′| d³r′.

Let, for brevity, |r − r′| = R, and consider the divergence ∇_r′ · R̂ of the unit vector R̂ = (r − r′)/R. Here, the symbol ∇_r′ means that the differentiation is done with respect to the r′, θ′, φ′ coordinates. Considering this derivative component-wise, the contribution coming from the first component is:

∂/∂x′ [(x − x′)/R] = −1/R + (x − x′)²/R³.

Adding the contributions coming from all three components, one gets:

∇_r′ · R̂ = −3/R + R²/R³ = −2/R,

so that 1/R = −(1/2) ∇_r′ · R̂. With this, the potential becomes:

V_C(r) = −(1/2) ∫ ρ_c(r′) ∇_r′ · R̂ d³r′.

Using the identity:

∇ · (f A) = f ∇ · A + A · ∇f,

one obtains, upon integrating by parts and transferring the derivatives to the density:

V_C(r) = (1/2) ∫ R̂ · ∇_r′ ρ_c(r′) d³r′,

q.e.d.
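The minimisation of |r − R_s| described above can be sketched numerically; the grid-scan-plus-refinement strategy and the spherical test surface are assumptions of this sketch, not the method of the text.

```python
import numpy as np
from scipy.optimize import minimize

def distance_to_surface(r, theta, phi, r_s):
    # Minimum distance from the interior point (r, theta, phi) to the
    # surface r = r_s(theta_s, phi_s): minimise |r - R_s| over the
    # surface angles, with the law-of-cosines distance of Appendix A.
    def d(angles):
        th_s, ph_s = angles
        rs = r_s(th_s, ph_s)
        cos_psi = (np.cos(theta) * np.cos(th_s)
                   + np.sin(theta) * np.sin(th_s) * np.cos(ph_s - phi))
        return np.sqrt(r * r + rs * rs - 2.0 * r * rs * cos_psi)

    # Coarse grid scan, then local (Nelder-Mead) refinement
    grid = [(t, p) for t in np.linspace(0.0, np.pi, 19)
            for p in np.linspace(0.0, 2.0 * np.pi, 37)]
    best = min(grid, key=d)
    return minimize(d, best, method='Nelder-Mead').fun

# Sanity check on a sphere of radius 6: the minimum distance is 6 - r
sphere = lambda th, ph: 6.0
print(distance_to_surface(2.0, 0.7, 1.1, sphere))  # close to 4.0
```

The coarse scan guards against the local refinement getting trapped when the surface is strongly deformed and d(θ_s, φ_s) has several local minima.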
Derivatives of Coulomb Potential
The calculation of the higher-order derivatives of the Coulomb potential (third order and above), even with the form defined in Eq. (43), turns out to be numerically unstable.
For this purpose, we employ Poisson's equation, according to which the Laplacian of the Coulomb potential is proportional to the charge density:

∇² V_C(r) = −4π ρ_c(r).

The right-hand side, and hence ∇² V_C(r), is simple to compute, for all one needs to calculate are the derivatives of the density (assumed to be of Woods-Saxon form).
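The Poisson-equation route can be checked numerically; a sketch using the analytic Coulomb potential of a uniformly charged sphere (Gaussian units), whose radial Laplacian should return −4πρ inside and 0 outside. The density and radius are toy values assumed for the check.

```python
import numpy as np

RHO, R = 0.07, 6.0  # toy uniform charge density and sphere radius

def v_sphere(r):
    # Analytic Coulomb potential of a uniformly charged sphere
    # (Gaussian units): quadratic inside, Q/r outside.
    return np.where(r < R,
                    2.0 * np.pi * RHO * (R ** 2 - r ** 2 / 3.0),
                    4.0 * np.pi * RHO * R ** 3 / (3.0 * r))

def laplacian_radial(V, r, h=1e-3):
    # Radial Laplacian (1/r^2) d/dr (r^2 dV/dr) = V'' + 2 V'/r,
    # evaluated by central finite differences.
    dV = (V(r + h) - V(r - h)) / (2.0 * h)
    d2V = (V(r + h) - 2.0 * V(r) + V(r - h)) / h ** 2
    return d2V + 2.0 * dV / r

print(laplacian_radial(v_sphere, 2.5))   # close to -4*pi*rho inside
print(laplacian_radial(v_sphere, 10.0))  # close to 0 outside
```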
Thus, it is desirable to generate the required higher-order derivatives of the Coulomb potential (see expression (42) in the text) from Poisson's equation. For this purpose, we evaluate the appropriate commutators. With the resulting expressions, the required higher-order derivatives of the Coulomb potential can be generated. These are then used to evaluate the fourth-order WK energy, as described in Section 4.
InterCriteria analysis results based on different number of objects
InterCriteria Analysis (ICrA) results based on different numbers of objects are investigated in this paper. To evaluate the influence of the number of objects, data from parameter identification procedures of an E. coli fed-batch fermentation process model are used. Model parameters are estimated by applying 100 genetic algorithms with different mutation rate values. Seven different index matrices are constructed for ICrA. The results show that the number of objects in ICrA is important for the reliability of the obtained results.
Introduction
A contemporary approach to multicriteria decision making, named InterCriteria Analysis (ICrA), was proposed in [4]. This approach employs the apparatus of index matrices (IM) and intuitionistic fuzzy sets (IFS), aiming at a comparison of predefined criteria and of the objects evaluated by them.
ICrA was applied for the first time in the field of model parameter identification of fermentation processes (FP) using genetic algorithms (GAs) in [10]. A series of papers with ICrA applications in this area has been published since then, for example [1,14,15]. ICrA has proven to be an appropriate approach for establishing correlations between model and optimization algorithm parameters, when the given parameters are considered as criteria. The reported results confirm some existing dependencies that are based on the physical meaning of the FP model parameters and the stochastic nature of GAs.
There is a lack of studies in the literature on the influence of the number of objects on the ICrA results. It is important to determine the number of objects sufficient to obtain reliable results from an ICrA application. It can be assumed that the larger the number of objects used in the analysis, the more reliable the results will be. But how many objects, at a minimum, are enough for the results to be trusted?
The current research is an attempt to investigate the impact of different numbers of objects on the InterCriteria analysis results in a particular test case. This is a very important issue concerning the application of ICrA in the field of model parameter identification of FP. Any additional exploration of the FP model is valuable in the case of modelling living systems, such as FP. Moreover, the relation between the mathematical model and the optimization algorithm can thus be established. In order to improve both the mathematical modelling and the optimization algorithm performance, reliable and robust results are needed.
Data from a series of parameter identification procedures of an E. coli fed-batch fermentation model are used to construct several IMs with different numbers of objects. ICrA is applied over the so-defined IMs and the results are discussed.
The paper is organized as follows: Section 2 presents the background of ICrA. Numerical results and discussion are presented in Section 3, and concluding remarks are given in Section 4.
InterCriteria analysis
Following [4] and [2], an Intuitionistic Fuzzy Pair (IFP), expressing the degrees of "agreement" and "disagreement" between two criteria applied to different objects, will be obtained. As a reminder, an IFP is an ordered pair ⟨a, b⟩ of real non-negative numbers such that a + b ≤ 1.
For clarity, let an IM [3] be given whose index sets consist of the names of the criteria (for rows) and of the objects (for columns). The elements of this IM are further supposed to be real numbers, which is not required in the general case. An IM will be obtained whose index sets consist of the names of the criteria, and whose elements are the IFPs corresponding to the "agreement" and "disagreement" of the respective criteria.
Let O denote the set of all objects being evaluated, and let C(O) be the set of values assigned by a given criterion C (i.e., C = C_p for some fixed p) to the objects, i.e.,

C(O) = {C(O_1), C(O_2), ..., C(O_n)}.

Then the following set can be defined:

C*(O) = {⟨x, y⟩ | x, y ∈ C(O) and x ≺ y},

where, for x = C(O_i) and y = C(O_j), we write x ≺ y iff i < j. The vectors of all internal comparisons for each criterion are constructed in order to find the agreement between different criteria. The elements of the vectors fulfil exactly one of the three relations R, R̄ and R̃, where

⟨x, y⟩ ∈ R̄ ⇔ ⟨y, x⟩ ∈ R.

For example, if R is the relation "<", then R̄ is the relation ">", and vice versa; R̃ holds when neither R nor R̄ does. Hence, for the effective calculation of the vector of internal comparisons, denoted further by V(C), only the subset C*(O) of C(O) × C(O) needs to be considered. The vector with lexicographically ordered pairs as elements is then constructed for a fixed criterion C. The degree of "agreement" between two criteria, which are to be compared, is determined as the number of matching components, divided by the length of the vector for the purpose of normalization. This can be done in several ways, e.g. by counting the matches or by taking the complement of the Hamming distance. The degree of "disagreement" is the number of components of opposite relations in the two vectors, again normalized by the length; this, too, may be done in various ways. If the respective degrees of "agreement" and "disagreement" are denoted by μ_C,C′ and ν_C,C′, it is obvious (from the way of computation) that μ_C,C′ = μ_C′,C and ν_C,C′ = ν_C′,C. It is also true that ⟨μ_C,C′, ν_C,C′⟩ is an IFP.
The sum μ_C,C′ + ν_C,C′ is equal to 1 for most of the obtained pairs ⟨μ_C,C′, ν_C,C′⟩. However, there may be some pairs for which this sum is less than 1; the difference is the degree of "uncertainty".
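The counting scheme above can be sketched as follows, assuming agreement is counted as matching strict relations on all object pairs and ties are left to the degree of uncertainty (one of the several normalization choices the text mentions):

```python
from itertools import combinations

def icra_pair(c1, c2):
    # Degrees of "agreement" (mu) and "disagreement" (nu) between two
    # criteria evaluated over the same objects; ties are counted in
    # neither, so mu + nu <= 1 and the remainder is the uncertainty.
    n = len(c1)
    total = n * (n - 1) // 2
    mu = nu = 0
    for i, j in combinations(range(n), 2):
        s1 = (c1[i] > c1[j]) - (c1[i] < c1[j])   # sign of the comparison
        s2 = (c2[i] > c2[j]) - (c2[i] < c2[j])
        if s1 == s2 != 0:
            mu += 1
        elif s1 == -s2 != 0:
            nu += 1
    return mu / total, nu / total

print(icra_pair([1, 2, 3, 4], [10, 20, 30, 40]))  # (1.0, 0.0): full agreement
print(icra_pair([1, 2, 3, 4], [4, 3, 2, 1]))      # (0.0, 1.0): full disagreement
print(icra_pair([1, 1, 2], [1, 2, 3]))            # tie -> some uncertainty
```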
Numerical results and discussion
Parameter identification of a mathematical model of an E. coli fed-batch fermentation process
The mathematical model of the E. coli fed-batch FP has the form [6]:

dX/dt = μX − (F_in/V) X,
dS/dt = −q_S X + (F_in/V)(S_in − S),
dV/dt = F_in,

where X is the biomass concentration, [g/l]; S is the substrate concentration, [g/l]; F_in is the feeding rate, [l/h]; V is the bioreactor volume, [l]; S_in is the substrate concentration in the feeding solution, [g/l]; and μ and q_S are the specific rate functions, of Monod type:

μ = μ_max S / (k_S + S),   q_S = μ / Y_S/X.

Real experimental data for the biomass and glucose concentrations are used in the model parameter identification. A detailed description of the process conditions and experimental data is presented in [11,13].
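A sketch integrating the Monod-type fed-batch equations above with scipy; all parameter values and initial conditions are illustrative assumptions, not the identified estimates of the paper.

```python
from scipy.integrate import solve_ivp

# Illustrative (not identified) parameter values -- assumptions of this sketch
MU_MAX, K_S, Y_SX = 0.5, 0.01, 2.0   # 1/h, g/l, g/g
S_IN, F_IN = 100.0, 0.05             # g/l, l/h

def fed_batch(t, y):
    X, S, V = y
    mu = MU_MAX * S / (K_S + S)      # Monod specific growth rate
    q_s = mu / Y_SX                  # specific substrate uptake rate
    D = F_IN / V                     # dilution rate
    return [mu * X - D * X,
            -q_s * X + D * (S_IN - S),
            F_IN]

# Initial biomass 1 g/l, substrate 2.5 g/l, volume 1 l; 5 h of feeding
sol = solve_ivp(fed_batch, (0.0, 5.0), [1.0, 2.5, 1.0], rtol=1e-8)
X_end, S_end, V_end = sol.y[:, -1]
print(X_end, S_end, V_end)  # biomass grows; V = 1 + 0.05 t exactly
```

With a constant feed, the volume equation integrates trivially to V(t) = V(0) + F_in t, which is a convenient sanity check on the solver output.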
Optimization criterion
The objective function is considered as a mean square deviation between the experimental data trajectories and the ones predicted by the model, defined as:

J = Σ_{i=1..m} (X_exp(i) − X_mod(i))² + Σ_{i=1..n} (S_exp(i) − S_mod(i))² → min,

where m and n are the experimental data dimensions; X_exp and S_exp are the available experimental data for biomass and substrate; and X_mod and S_mod are the model predictions for biomass and substrate with a given model parameter vector.
Genetic algorithm identification
The genetic algorithm, initially presented by Goldberg [8], searches for a global optimal solution using three main genetic operators in sequence: selection, crossover and mutation. The GA starts with the creation of a randomly generated initial population. Each solution is then evaluated and assigned a fitness value. According to the fitness function, the most suitable solutions are selected. After that, crossover proceeds to form new offspring. Mutation is next applied with a specified probability, aiming to prevent all solutions from falling into a local optimum. The execution of the GA is repeated until the termination criterion (e.g. a reached number of generations, or a solution found within a specified tolerance) is satisfied. When applying a GA, there are many operators, functions, parameters, and settings that can be implemented specifically for different problems. For the parameter identification problem considered here, the GA operators and parameters are tuned as follows: crossover operator - double point; mutation operator - bit inversion; selection operator - roulette wheel selection; number of generations - maxgen = 100; crossover rate - p_c = 0.7; number of individuals - nind = 100; and generation gap - ggap = 0.97. The GA parameter mutation rate p_m is varied in the range p_m = [0.001 : 0.001 : 0.1]. The p_m values are chosen based on the results in [7,9,12]. While the mutation rate is varied using the vector p_m, all other parameters and operators are kept constant. As a result, 100 differently tuned GAs are produced, and 30 independent runs are performed for each GA. The obtained model parameter estimates (μ_max, k_S, Y_S/X), the total computation time and the objective function value are recorded. As a result, 3000 records are obtained - 30 estimates of μ_max, k_S and Y_S/X for each of the 100 GAs, where GA 1 corresponds to p_m = 0.001, GA 2 corresponds to p_m = 0.002, etc.
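A minimal sketch of the GA loop described above (roulette-wheel selection, two-point crossover, bit-inversion mutation, with elitism added for stability); a toy one-dimensional fitness function stands in for the model-identification objective, and the population size, bit length and seed are illustrative assumptions.

```python
import random

def run_ga(p_m, fitness, n_bits=16, n_ind=40, n_gen=80, p_c=0.7, seed=1):
    # Minimal binary GA; operator choices follow the text, everything
    # else (sizes, encoding, test function) is illustrative.
    rng = random.Random(seed)

    def decode(ind):  # map a bit string to a real value in [0, 10]
        return 10.0 * int(''.join(map(str, ind)), 2) / (2 ** n_bits - 1)

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_ind)]
    for _ in range(n_gen):
        fits = [fitness(decode(ind)) for ind in pop]
        total = sum(fits)

        def roulette():  # fitness-proportional selection
            r, acc = rng.uniform(0.0, total), 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]

        elite = max(pop, key=lambda ind: fitness(decode(ind)))
        new = [elite[:]]                      # keep the best (elitism)
        while len(new) < n_ind:
            a, b = roulette()[:], roulette()[:]
            if rng.random() < p_c:            # two-point crossover
                i, j = sorted(rng.sample(range(n_bits), 2))
                a[i:j], b[i:j] = b[i:j], a[i:j]
            for child in (a, b):
                for k in range(n_bits):
                    if rng.random() < p_m:    # bit-inversion mutation
                        child[k] ^= 1
            new.extend([a, b])
        pop = new[:n_ind]
    return decode(max(pop, key=lambda ind: fitness(decode(ind))))

f = lambda x: 1.0 / (1.0 + (x - 3.0) ** 2)    # toy fitness, peak at x = 3
best = run_ga(p_m=0.01, fitness=f)
print(best)  # near 3
```

Re-running with different p_m values mimics the "100 differently tuned GAs" setup: only the mutation rate changes between runs, all other operators stay fixed.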
These results are further processed in order to generate one main IM whose elements are the average values of the five criteria (J (C 1 ), T (C 2 ), μ_max (C 3 ), k_S (C 4 ), Y_S/X (C 5 )) over every 30 runs (estimates). The rows of the IM represent the five criteria and the columns are the 100 objects (the 100 differently tuned GAs GA 1 , GA 2 , ..., GA 99 , GA 100 ). To investigate the influence of the number of objects on the ICrA results, seven IMs are defined as follows:
• IM 1 with 11 objects (the results from GA 1 , GA 10 , GA 20 , GA 30 , GA 40 , GA 50 , GA 60 , GA 70 , GA 80 , GA 90 and GA 100 are included);
• IM 7 with 100 objects (the results from all 100 GAs are included).
InterCriteria analysis of results
ICrA is applied on the 7 IMs. The obtained results are presented in the form of IM (12). They are analysed based on the scale proposed in [5], which defines the consonance and dissonance between the criteria pairs (see Table 1).
Based on the values of the degree of "agreement", μ_C,C′, and the degree of "disagreement", ν_C,C′, the following conclusions can be drawn:
• For some of the criteria pairs, the existing relations and dependencies are so well established that varying the number of objects does not affect the results substantially. For example:
- Criteria pair C 1 − C 2 : the observed μ_C,C′ values show that the criteria pair is in strong dissonance, and only in the case of IM 1 is the pair in dissonance;
- Criteria pair C 1 − C 5 : in this case the pair is in dissonance for all IMs;
- Criteria pair C 3 − C 5 : the observed μ_C,C′ values show that the criteria pair is in dissonance, and in the cases of IM 1 and IM 2 the pair is in weak dissonance;
- Criteria pair C 4 − C 5 : the observed μ_C,C′ values show that the criteria pair is in dissonance, and again only in the case of IM 1 is the pair in weak dissonance.
As can be seen, an alteration in the obtained μ values is observed precisely in the cases of IM 1 and IM 2 , i.e. in the case of a small number of objects. On the contrary, the obtained results are stable when a larger number of objects is considered.
• For other criteria pairs, the existing relations are not so well established, and the variation of the number of objects has an impact on the results. For example:
- Criteria pairs C 1 − C 3 and C 1 − C 4 : the obtained μ_C,C′ values show that the criteria pairs are in weak dissonance, while in the case of IM 1 the pairs are in weak positive consonance;
- Criteria pairs C 2 − C 3 , C 2 − C 4 and C 2 − C 5 : the obtained μ_C,C′ values show that the criteria pairs are mainly in dissonance (from weak dissonance to dissonance, see Table 1), while in the case of IM 1 a weak negative consonance between the pairs is observed.
In this case, the use of a small number of objects could lead to incorrect conclusions, for example, that a criteria pair is in consonance (dependence), while in fact the criteria are independent.
• Finally, there is one criteria pair, C 3 − C 4 , for which a high degree of "agreement" is observed regardless of the number of objects used in the ICrA. For all IMs, μ values of 0.96 and 0.97, i.e. strong positive consonance, are obtained.
Conclusion
The influence of different numbers of objects on the InterCriteria analysis results is explored in this paper. The research is done on the basis of a particular test case: parameter identification procedures for an E. coli fed-batch fermentation model. Data from 100 series of parameter identification procedures are used to construct several IMs with different numbers of objects. ICrA is applied on the so-defined IMs.
The results show the importance of the number of objects for the reliability of the obtained ICrA results.
Double-Forming Mechanism of TaOx-Based Resistive Memory Device and Its Synaptic Applications
The bipolar resistive switching properties of Pt/TaOx/InOx/ITO-resistive random-access memory devices under DC and pulse measurement conditions are explored in this work. Transmission electron microscopy and X-ray photoelectron spectroscopy were used to confirm the structure and chemical compositions of the devices. A unique two-step forming process referred to as the double-forming phenomenon and self-compliance characteristics are demonstrated under a DC sweep. A model based on oxygen vacancy migration is proposed to explain its conduction mechanism. Varying reset voltages and compliance currents were applied to evaluate multilevel cell characteristics. Furthermore, pulses were applied to the devices to demonstrate the neuromorphic system’s application via testing potentiation, depression, spike-timing-dependent plasticity, and spike-rate-dependent plasticity.
Introduction
Because processing and memory components are physically separated, the conventional von Neumann architecture employed in computers encounters processing bottlenecks. Furthermore, modern CMOS-based electronic devices are reaching the scaling limits of Moore's law [1][2][3]. To overcome these problems, bioinspired neuromorphic computing is attracting great attention because of its high efficiency, low power consumption, and parallel data-processing capability [4,5]. The main focus of a neuromorphic computing system is to emulate the human brain's synapses, in which large amounts of information move from one neuron to another. Many solid-state devices have been researched to mimic this system, and emerging nonvolatile devices are applicable candidates, including ferroelectric random-access memory (RAM) [6][7][8], magnetic RAM [9,10], phase-change RAM [11,12], and resistive RAM (RRAM) [13][14][15][16][17][18]. Among these, RRAM devices have benefits such as simple fabrication, high switching speeds, outstanding scalability, and high endurance, making them one of the most promising choices [19][20][21][22]. Moreover, the simple two-terminal structure of RRAMs, comprising a switching layer sandwiched between the top and bottom electrodes, most closely emulates the structure of a biological synapse [23]. Furthermore, applying biases of different polarities causes a phenomenon termed the electroresistance effect, where the resistance switches between a low-resistance state (LRS) and a high-resistance state (HRS), in which information is stored as 0 s and 1 s, respectively [24,25]. Various transition metal oxides have been employed as resistive switching insulators, including HfO2 [26], TiO2 [27][28][29], TaOx [30], Al2O3 [31,32], and ZnO [30]. Extensive research has been conducted on TaOx, revealing it as a promising candidate for the resistive switching layer [33]. In addition, previous studies have indicated that TaOx exhibits superior memory
characteristics owing to its high endurance (>10 10 ) [34], fast switching speed (<1 ns) [35], and good scalability (<30 nm) [36].
The resistive switching phenomenon, which is the basis of RRAM, occurs due to a change in resistance states under an applied bias. For instance, in a valence change memory (VCM) device, the generation and rupture of the conducting filament is the key function in switching resistance [26][27][28]. The applied bias separates oxygen ions (O 2− ) and oxygen vacancies (Vo + ). Then, due to the migration of oxygen ions under the applied electric field, the generated oxygen vacancies create a conductive filament that connects the top and bottom electrodes. A large current flows through the filament; thus, the resistance state is switched from a high-resistance state (HRS) to a low-resistance state (LRS). On the other hand, when the opposite bias is applied, reoxidation occurs in the conductive filament; consequently, the filament ruptures and the device returns to the HRS. The RRAM device stores memory in these two states, HRS and LRS, which can be reproduced by applying sufficient bias [30]. Research has indicated that multilevel cell (MLC) characteristics are evident in resistive switching devices, and they are a key feature enabling high storage density, since multiple stable states exist between the HRS and LRS. This functionality allows devices to save data in the HRS, in the LRS, and in states between these two, by simply altering the compliance current (CC) and the reset voltage [28,29,37]. Furthermore, the modulation of the memristive device's conductance is both controllable and incremental, emulating the biological synapse. Here, the strength of the connection between the presynaptic and postsynaptic neurons is incrementally increased or decreased through input spikes, maintaining a history-dependent synaptic weight update [38,39]. Additionally, various synaptic functions can be emulated using pulse
responses to assess the application of RRAM as a neuromorphic computing device. These functions include the potentiation and depression of short- and long-term memory (STM and LTM, respectively), spike-rate-dependent plasticity, and spike-timing-dependent plasticity (STDP). Controllable conductance and synaptic weight changes can be monitored using these methods [40][41][42], while complex tests (such as pattern recognition systems based on the handwritten Modified National Institute of Standards and Technology (MNIST) dataset) are often conducted to evaluate the use of memristors as artificial synapses [43,44].
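As a rough illustration of how a drifting internal state produces the history-dependent conductance described above, the following sketch integrates the classic linear ion-drift memristor model; the model is generic (Strukov-style), not a model of the Pt/TaOx/InOx/ITO device, and all parameter values (r_on, r_off, the drift constant k, the initial state) are assumptions.

```python
import numpy as np

def simulate_memristor(v, dt=1e-4, r_on=100.0, r_off=16e3, k=1e4):
    # Linear ion-drift model: state w in [0, 1] interpolates the
    # resistance between r_on (LRS, w = 1) and r_off (HRS, w = 0);
    # dw/dt is taken proportional to the current (constant k assumed).
    w = 0.1
    current = []
    for volt in v:
        r = r_on * w + r_off * (1.0 - w)
        i = volt / r
        current.append(i)
        w = min(1.0, max(0.0, w + k * i * dt))  # state drift, clipped
    return np.array(current)

t = np.arange(0.0, 2.0 * np.pi, 1e-4)
v = 1.5 * np.sin(t)
i = simulate_memristor(v)
# Pinched hysteresis: v is the same at t = pi/4 and 3*pi/4, i is not,
# because the state w (and hence the resistance) has drifted in between.
i_up, i_down = i[len(t) // 8], i[3 * len(t) // 8]
print(i_up, i_down)
```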
In this work, we studied a Pt/TaOx/InOx/indium tin oxide (ITO) device to investigate its potential for mimicking biological synapses. Bipolar, gradual and uniform resistive switching behaviors were achieved together with the MLC characteristic. Additionally, during the RF-sputtered deposition of the TaOx layer, diffusion of oxygen toward the ITO bottom electrode occurred, creating an InOx layer. Due to this additional layer, the device exhibited a unique forming behavior (termed double forming [45]) and self-compliance. Uniform switching over cycles (>10^2) and retention (>10^4 s) were also examined, with gradual changes in potentiation and depression. The results of potentiation and depression were inserted into the PRS using MNIST handwritten figures. Finally, synaptic functions such as long-term potentiation (LTP), long-term depression (LTD), STDP, and SRDP were emulated.
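The STDP mentioned above is commonly modelled with an exponential timing window; a minimal sketch of that textbook rule (not the measured response of the Pt/TaOx/InOx/ITO device; the amplitudes a_plus, a_minus and the time constant tau are assumed values):

```python
import math

def stdp_dw(dt_ms, a_plus=0.6, a_minus=0.3, tau=20.0):
    # Exponential STDP window: pre-before-post (dt > 0) potentiates,
    # post-before-pre (dt < 0) depresses; the weight change decays
    # exponentially with the spike-timing difference |dt|.
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau)
    return 0.0

print(stdp_dw(5.0), stdp_dw(-5.0))   # potentiation, depression
print(stdp_dw(5.0) > stdp_dw(40.0))  # closer spike pairs change more
```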
Experimental Section
The Pt/TaOx/InOx/ITO RRAM device was prepared using the following procedure. The bottom electrode was commercially available ITO with a 30 nm thickness on a glass substrate (ITO/glass). Isopropyl alcohol and acetone were used to clean the surface, after which radio frequency (RF) reactive sputtering with a power of 150 W was used to deposit a 5 nm TaOx layer on the ITO/glass substrate. The Ta source target was sputtered at room temperature with Ar (20 sccm) and O2 (6 sccm) at a pressure of 5 mTorr. An oxygen-rich InOx layer of 3 nm thickness was produced by the reactive-sputtering process owing to oxygen migration from TaOx to ITO. Subsequently, a Ti adhesion layer of 1.5 nm thickness was formed by an e-beam evaporator. Finally, the e-beam evaporator was used to deposit Pt with a thickness of 100 nm, at a deposition rate of 3 Å/s and a pressure of 3.7 Torr. A liftoff process was performed to create patterned RRAM cells with a diameter of 100 µm. The electrical properties of Pt/TaOx/InOx/ITO were investigated using a Keithley 4200-SCS semiconductor parameter analyzer in the DC mode and a 4225-PMU ultrafast current-voltage (I-V) pulse module in the pulse mode. Furthermore, bias was applied to the top electrode (Pt), while the bottom electrode (ITO) was grounded at room temperature. The device properties (including cross-section analysis and elemental profiles) were determined using field emission transmission electron microscopy (TEM, JEOL JEM-F200, Tokyo, Japan) and X-ray photoelectron spectroscopy (XPS).
Results and Discussion
Figure 1a displays a schematic of the Pt/TaOx/InOx/ITO device. A TEM image was used to verify the layer thicknesses of the device, as displayed in Figure 1b. Pt (100 nm thickness) and ITO (30 nm thickness) were verified using the TEM image. Furthermore, the TaOx and InOx layers were observed between the Ti adhesion layer (1 nm thickness) and the ITO bottom electrode. Figure 1c depicts the elemental distribution of each layer, which was validated by an energy-dispersive X-ray spectroscopy line scan.
During the sputtering process, the percentage of oxygen vacancies in the TaOx layer was 23.21% compared to 45.89% in the InOx layer. Accordingly, fewer oxygen vacancies were stored in the TaOx layer compared to the InOx layer [45].

Figure 3 displays the electrical characteristics of Pt/TaOx/InOx/ITO under DC sweep conditions. In particular, Figure 3a displays the I-V curve, including the double-forming switching phenomenon, which is unlike conventional RRAM operation. The "forming" process [46] (also known as dielectric soft breakdown) has been reported to occur once under an applied bias, transforming the resistance state of the device from its initial state to the LRS [47,48]. However, the RRAM device in this work required an additional forming process at the opposite bias to switch the resistance state to the LRS. The double-forming process can be divided into six steps: (1) the first forming process, (2) the medium state, (3) the second forming process, (4) LRS, (5) the reset process, and (6) HRS, as displayed in Figure 3a [45]. By applying a voltage bias of −4 V and a CC of 100 µA to protect the device from hard breakdown, the first conducting filament was formed, and the device turned from its initial state into a medium state. Then, upon the application of a set voltage of 3 V and a reset voltage of −3 V, the device switched from the medium state to the LRS and from the LRS to the HRS, owing to the creation and rupture of a conduction path. The bipolar set and reset operations did not require a CC, demonstrating self-compliance properties that could be implemented because of the ITO electrode [49,50]. Figure 3b depicts the successful resistive switching characteristic for 275 cycles. Under
self-compliance conditions, a modest change in the current was detected, and the device maintained its original HRS and LRS without any significant degradation. As demonstrated in Figure 3a, there was an abrupt jump in the current at 2.1 V, similar to the first forming process at −3.8 V. However, a gradual change in current was exhibited in the set process shown in Figure 3b, and its window decreased from 4.55 to 2.21 at a read voltage of 0.3 V. Therefore, we applied the switching phenomena (3) and (4) as the second forming process with a forming voltage of 3 V. Figure 3c displays the endurance of the device at a read voltage of −0.3 V, where the device maintained its HRS and LRS. Additionally, we investigated the data retention capability of the device, as displayed in Figure 3d. These results indicate that the device maintained its HRS and LRS for 10^4 s without degradation, demonstrating its good nonvolatile properties.
Figure 4 depicts the conduction mechanism of the Pt/TaOx/InOx/ITO device, where the white dots in the figure represent lattice oxygen vacancies. Lin et al. reported that the physical size of the conductive filament could be determined by the existence of lattice oxygen vacancies, where the conducting filament in an oxygen-vacancy-deficient layer was narrower than that in an oxygen-vacancy-rich layer [51]. Furthermore, Huang et al. explained the double-forming mechanism of bilayer-structured resistive switching devices using an asymmetric conductive filament [45]. Based on these previous studies, it can be inferred that the different amounts of lattice oxygen vacancies in the TaOx and InOx layers produce conducting filaments of two different sizes in the two insulating layers. Through the connection of the two asymmetric filaments, the device switches to LRS, exhibiting the double-forming mechanism. A thick filament formed in the oxygen-vacancy-rich InOx layer, whereas a relatively narrow filament formed in the oxygen-vacancy-deficient TaOx layer. When a negative voltage was applied to the top electrode, the first forming process occurred. Redox reactions then caused the separation of oxygen vacancies (Vo+) and oxygen ions (O2−). These separated oxygen ions were repelled away from the top electrode by the applied bias. Thus, the generated defects (oxygen vacancies) accumulated and formed conductive filaments of different sizes in the InOx and TaOx layers, as depicted in Figure 4b. However, the CC limited the thickening of the conduction path in the TaOx layer, which explains why the current decreased in step (2) of Figure 3a. Then, under a positive voltage applied to the top electrode, oxygen ions migrated toward the bottom electrode and were repelled back toward the top electrode by the electric field. Due to the additional generation of oxygen vacancies in this process, a conductive filament in TaOx, with its thickest part formed at the interface of InOx and TaOx, could be completed. Consequently, the asymmetric shape of the conductive filament (Figure 4c) could be constructed. In this second forming process, the device finally changed to LRS (as in step (4)), and repeatable resistive switching phenomena could be observed in the device. The reset process occurred when a negative voltage was applied to Pt. Oxygen ions drifted toward the TaOx layer to recombine with the oxygen vacancies. The weakest part of the filament, at the Pt/TaOx interface, was ruptured by the recombination of oxygen ions and vacancies, assisted by local Joule heating [52,53]. The schematic of the reset process is shown in Figure 4d, where the rupture of the weakest part of the conductive filament can be observed. On the other hand, when a positive voltage was applied to the top electrode, oxygen ions migrated toward the top electrode under the applied electric field, leaving the oxygen vacancies behind. The defects thus accumulated again, and the conduction paths were reconstructed; a large current then flowed through the filament, returning the device to LRS.
Materials 2023, 16, x FOR PEER REVIEW

Further, we investigated the MLC characteristics of the device. For the application of the device as a synaptic device, MLC characteristics are important for implementing the multiple weights of each synapse in an artificial neural network [54,55]. Additionally, MLC operation offers high storage density by providing multiple data storage levels per cell. Two types of voltage bias schemes are usually applied to investigate MLC characteristics: (i) controlling the reset voltage and (ii) controlling the CC.
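As a rough illustration of these two bias schemes, the sketch below steps through a set of reset voltages and a set of compliance currents and shows which read current each scheme modulates. The device model is a toy stand-in (nothing here is a real instrument API or the paper's device physics); only the bias ranges echo the experiment.

```python
# Toy sketch of the two MLC programming schemes described above.
# Scheme (i): sweep the reset voltage -> multiple HRS levels, LRS roughly fixed.
# Scheme (ii): sweep the compliance current (CC) -> multiple LRS levels, HRS fixed.
# The mapping functions are illustrative placeholders, not fitted device physics.

def hrs_current(v_reset, i0=2e-5):
    # Larger |V_reset| -> wider filament gap -> smaller HRS read current.
    return i0 / abs(v_reset)

def lrs_current(cc):
    # Larger CC -> thicker filament -> larger LRS read current.
    return 0.1 * cc

for v_reset in (-1.5, -2.0, -2.5, -3.0):           # scheme (i)
    print(f"V_reset = {v_reset} V  ->  I_HRS = {hrs_current(v_reset):.2e} A")

for cc in (200e-6, 400e-6, 600e-6, 800e-6, 1e-3):  # scheme (ii)
    print(f"CC = {cc:.0e} A  ->  I_LRS = {lrs_current(cc):.2e} A")
```

The monotonic trends in this sketch mirror the qualitative behavior reported below: a deeper reset lowers IHRS, while a higher CC raises ILRS.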
Figure 5 presents the method of controlling the reset voltage. Figure 5a displays the I-V curve of a Pt/TaOx/InOx/ITO device at four reset voltages, which reveals that adjusting the reset voltage produces multiple HRSs while keeping the LRS constant. This effect is caused by the varying extent of filament rupture under different reset voltages [56]. As the reset voltage increases, more oxygen ions are repelled from the top electrode, changing the recombination rate. Therefore, the gap between the Pt top electrode and the conducting filament widens, resulting in a decreased IHRS (Figure 5c). Figure 5b depicts the cycling endurance at varying reset voltages. As stated previously, increasing the reset voltage raised the HRS resistance, resulting in a lower IHRS, while the LRS remained fairly constant.

Another method of obtaining MLC characteristics was controlling the CC during the set operation. Figure 6a displays the I-V curve of a Pt/TaOx/InOx/ITO device at six CC settings. Here, the CC was adjusted from 200 µA to 1 mA, and the reset voltage was maintained at −2.5 V. Evidently, increasing the CC reduced the LRS resistance while maintaining a constant HRS. This phenomenon was induced by the increase in CC, which caused an increase in the current flow in the LRS. Figure 6c presents a filament schematic of the controlled CC. As the CC increased, the width of the conducting filament also increased. More electrons could then move through the enlarged conducting path, resulting in a larger ILRS [44,57]. Consequently, ILRS increased, while IHRS remained unchanged. Figure 6b illustrates a 10-cycle endurance test under different CCs, where the resistance of the LRS decreased as the CC increased.
Next, a scheme of 100 consecutive pulses was applied to evaluate the synaptic characteristics of Pt/TaOx/InOx/ITO. The pulse train comprised 50 identical potentiation pulses and 50 identical depression pulses. For potentiation, the pulse width and amplitude were 100 µs and 2 V, respectively, compared with 50 µs and −2.7 V for depression. Figure 7a depicts the result of the applied pulse scheme, which demonstrated a linear rise and decay of conductance. Then, to test the synaptic reproducibility, LTP and LTD were explored using 10-cycle potentiation and depression pulse methods [58]. The results are depicted in Figure 7b, where identical conductance changes could be observed. In other words, the conductance levels after each pulse application favorably maintained their states under repetitive operation, proving the applicability of the device to mimic the human brain. Additionally, a pattern recognition test using handwritten digits from the MNIST dataset was conducted to check the further application of the device as a synaptic device. The training was conducted with unclear images, and the gradual and symmetric conductance changes in potentiation and depression resulted in clearer images with higher accuracy [59,60]. As shown in Figure 7c, the deep neural network (DNN) comprised 784 input neurons, 3 hidden layers, and 10 output neurons. The three hidden layers had 128, 64, and 32 neurons, respectively, and the backpropagation method was employed to improve accuracy. To determine linearity and accuracy, the potentiation and depression depicted in Figure 7a were converted into an MNIST handwritten number of 28 × 28 pixels and applied to the neural network. The result of the number recognition is illustrated in Figure 7d, where the highest obtained accuracy was 94.21%.

Finally, the Hebbian learning rules of synapses and neurons were tested on a Pt/TaOx/InOx/ITO device to investigate its ability to mimic a biological synapse [61]. Figure 8a illustrates the RRAM device mimicking the human synapse. Here, the top and bottom electrodes mimic the pre- and post-spikes, while the synaptic information between the neurons varies with the conducting filament connecting the top and bottom electrodes [62,63]. STDP is composed of two parts of synaptic variance: LTP and LTD. For example, LTP occurs when the pre-spike exceeds the post-spike, while LTD occurs when the post-spike exceeds the pre-spike. These synaptic investigations were conducted by applying the same pulse train to the pre- and post-spikes. A pulse train with a pulse width of 100 µs was applied to Pt/TaOx/InOx/ITO, as illustrated in Figure 8b. The pulses were applied with a difference in the time interval, termed the spike timing difference ∆t (∆t = tpre − tpost). During LTP (∆t > 0), a set of positive pulse trains was applied, which induced a decrease in the resistance of the devices. Further, negative pulse trains were applied during LTD (∆t < 0), inducing an increase in the resistance of the devices. The conductance acquired after the pulse application was converted into the synaptic weight change (∆W), which represented the spike connection, where Gi and Gf denote the conductance before and after applying the pulse trains, respectively. Figure 8c presents the experimental STDP data obtained from the device. For ∆t > 0, the weight change increased continuously with decreasing time intervals, and LTP was obtained. By contrast, for ∆t < 0, the weight change decreased, and LTD was obtained. This result proves the successful experimental demonstration of the STDP learning rule with synaptic weight changes at different spike times using the proposed Pt/TaOx/InOx/ITO memristor device, favorably mimicking a biological synapse [23]. Additionally, another Hebbian learning rule, SRDP, was tested to obtain the device's frequency-dependent characteristics [64]. Ten consecutive pulses were applied to the Pt/TaOx/InOx/ITO device with the same pulse width and amplitude of 100 µs and 2 V. The pulse intervals varied from 1 µs to 1 ms, as depicted in Figure 8d. The SRDP index is calculated from In and Ii, which represent the current after applying the consecutive pulse trains and the initial current, respectively. The results indicate that when the pulse interval is small, the device response rapidly increases, successfully emulating SRDP behavior.
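A convention commonly used in memristive-synapse studies defines the weight change as the relative conductance change, ∆W = (Gf − Gi)/Gi × 100% — an assumption here, since the source defines Gi and Gf but the exact expression is not reproduced above. Under that assumed form, the STDP data reduction can be sketched as:

```python
# Synaptic weight change from pre/post-pulse conductance.
# ASSUMPTION: Delta-W = (G_f - G_i) / G_i * 100%, a common convention in
# memristive-synapse work; the text defines G_i and G_f but not the formula.

def delta_w(g_i, g_f):
    """Percent synaptic weight change between initial and final conductance."""
    return (g_f - g_i) / g_i * 100.0

# Conductances in arbitrary units:
# LTP case (delta_t > 0): conductance rises, so delta_w is positive.
print(delta_w(2.0, 3.0))   # 50.0
# LTD case (delta_t < 0): conductance falls, so delta_w is negative.
print(delta_w(4.0, 3.0))   # -25.0
```

The sign of ∆W then directly encodes LTP versus LTD, matching the trends described for Figure 8c.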
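For context, the 784-128-64-32-10 network used in the MNIST recognition test above can be sketched as a forward pass. The weights here are random placeholders and the activation choices are assumptions — the paper states only the layer sizes and that backpropagation was used, not the activations or training hyperparameters.

```python
import numpy as np

# Forward pass through the DNN topology described in the text:
# 784 input neurons -> hidden layers of 128, 64, 32 -> 10 output neurons.
# Weights are random placeholders; ReLU/softmax are assumed activations.

rng = np.random.default_rng(0)
sizes = [784, 128, 64, 32, 10]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ w + b, 0.0)      # ReLU hidden activations
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())       # softmax over the 10 digit classes
    return e / e.sum()

x = rng.random(784)                         # stand-in for a 28 x 28 MNIST image
probs = forward(x)
print(probs.shape, round(probs.sum(), 6))   # (10,) 1.0
```

In the paper's simulation, the device's measured potentiation/depression conductance levels would quantize these weights; here the point is only the data flow from a flattened 28 × 28 image to 10 class probabilities.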
Figure 1. (a) Schematic illustration of the device's structure. (b) Typical cross-sectional TEM image of the Pt/TaOx/InOx/ITO structure. (c) Component distribution: Pt, Ta, O, In, and Sn.

The chemical properties of the RRAM device are displayed in Figure 2. The insulating TaOx and InOx films were investigated in the XPS depth-profile mode. Figure 2a,b display the XPS spectra of Ta 4f and O 1s, respectively, for the TaOx layer as the first insulator at an etch time of 4 s. Two peaks of Ta 4f7/2 and Ta 4f5/2 were located around binding energies of 22.52 and 25.13 eV, representing the Ta-O bonds. Additionally, the O 1s peak position of bulk TaOx was located around 530.2 eV, indicating the existence of the TaOx thin film. Furthermore, the second insulating layer (InOx) at an etch time of 20 s is depicted in Figure 2c,d. The spectral peaks of In 3d5/2 and O 1s were located at approximately 444.81 and 530.5 eV, representing In-O bonding and the existence of an oxygen-distributed InOx layer, respectively. In addition, the oxygen vacancy concentration of each insulating layer was inspected, as displayed in the insets of Figure 2b,d. The peak at 532.2 eV corresponds to the oxygen vacancies in the InOx and TaOx layers. Because of oxygen migration during the RF sputtering process, the percentage of oxygen vacancies in the TaOx layer was 23.21% compared with 45.89% in the InOx layer. Accordingly, fewer oxygen vacancies were stored in the TaOx layer than in the InOx layer [45].
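Vacancy percentages like those quoted above are typically obtained from the areas of the deconvoluted O 1s components. The sketch below shows that arithmetic; the peak areas are invented to reproduce the quoted fractions and are not the paper's actual fit results.

```python
# Oxygen-vacancy fraction from deconvoluted O 1s XPS peak areas:
# fraction = A(vacancy, ~532.2 eV) / [A(lattice O) + A(vacancy)] * 100%
# The peak areas below are illustrative, chosen to match the quoted values.

def vacancy_fraction(a_lattice, a_vacancy):
    return 100.0 * a_vacancy / (a_lattice + a_vacancy)

print(round(vacancy_fraction(76.79, 23.21), 2))  # 23.21  (TaOx-like layer)
print(round(vacancy_fraction(54.11, 45.89), 2))  # 45.89  (InOx-like layer)
```

The larger vacancy fraction in the InOx layer is what motivates the asymmetric-filament picture developed in the conduction-mechanism discussion.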
Figure 2. XPS spectra of (a) Ta 4f and (b) O 1s at an etch time of 4 s. (c) In 3d and (d) O 1s at an etch time of 20 s.
Figure 4. Schematic description of the conduction mechanism of the double-forming process of the Pt/TaOx/InOx/ITO RRAM device in the (a) Initial state, (b) First forming process, (c) Second forming process, (d) Reset process, and (e) Set process.
Figure 5. (a) MLC obtained by controlling the reset voltage and (b) DC endurance performance. (c) Filament schematics.
Figure 6. (a) MLC obtained by controlling the compliance current and (b) DC endurance performance. (c) Filament schematics.
Figure 7. (a) Potentiation and depression. (b) Potentiation and depression run for 10 cycles. (c) Schematic illustration of a DNN for numerical number recognition containing the input (784 neurons), hidden (3 layers), and output (10 neurons) layers. (d) Simulated recognition accuracy using the MNIST numerical datasets of training images, with approximately 94% recognition accuracy for the Pt/TaOx/InOx/ITO memristor.
Figure 8. (a) Schematic illustration of the human synaptic neural structure. (b) Pulse schematic. (c) Result of the STDP measurement. (d) Result of the SRDP measurement.
"Materials Science",
"Engineering",
"Physics"
] |